What are the responsibilities and job description for the AWS Data Engineer position at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, ComTec Information Systems, is seeking the following. Apply via Dice today!
Role: AWS Data Engineer
Location: Hybrid
- Newark, NJ office
- 1-2 days per week
Responsibilities:
- Design, build, and optimize ETL pipelines using Python, SQL, Lambda, Glue, Redshift, Airflow, Step Functions, and other AWS services.
- Develop and maintain ETL workflows using Python, SQL, and AWS-native tools.
- Create SQL queries to segment, manipulate, and format data.
- Provide Run/DevOps support for data services, ensuring high availability and performance.
- Collaborate with data scientists, business analysts, SAP functional SMEs and other stakeholders to resolve data-related issues and improve system reliability.
- Implement batch job scheduling and manage data dependencies using tools like Airflow or Step Functions (see the sketch after this list).
- Maintain documentation for data flows, incident resolution, and operational procedures.
- Participate in on-call rotations and incident response for production systems.
- Build automations to ingest, transfer, move, upload, and manipulate data.
- Build or maintain data ingestion pipelines that move data from source systems into Snowflake.
- Create and manage data models to ensure data integrity and facilitate efficient data analysis.
- Implement and maintain data security and compliance measures, including access controls, encryption, and data masking.
- Ensure data quality, accuracy, and consistency through data validation, cleansing, and monitoring.
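The items above revolve around orchestrating ETL steps with Airflow, Glue, and Redshift. Below is a minimal, hypothetical sketch of what such a workflow can look like; the DAG name, Glue job name, S3 bucket, table, and IAM role are illustrative assumptions, not details from this posting.

```python
# Hypothetical Airflow DAG sketching ETL orchestration of the kind described above.
# Job, bucket, table, and role names are illustrative assumptions, not from the posting.
from datetime import datetime, timedelta

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def trigger_glue_job(**context):
    """Kick off a (hypothetical) AWS Glue job that lands raw files into S3."""
    glue = boto3.client("glue")
    run = glue.start_job_run(JobName="raw_to_staging_job")  # assumed job name
    context["ti"].xcom_push(key="glue_run_id", value=run["JobRunId"])


def load_to_redshift(**context):
    """Copy the staged data into Redshift; shown as an inline COPY for brevity."""
    copy_sql = """
        COPY analytics.orders
        FROM 's3://example-staging-bucket/orders/'                        -- assumed bucket/prefix
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'      -- assumed role
        FORMAT AS PARQUET;
    """
    print("Would execute:", copy_sql)


with DAG(
    dag_id="orders_daily_etl",            # illustrative DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                    # Airflow 2.4+ style schedule argument
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = PythonOperator(task_id="trigger_glue_job", python_callable=trigger_glue_job)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)

    extract >> load  # the load runs only after the Glue job has been triggered
```

In a real deployment the Redshift load would typically go through a dedicated connection or operator rather than an inline SQL string; the sketch only shows how task dependencies and scheduling fit together.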
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3-5 years of experience in data engineering or production support roles.
- Strong proficiency in AWS services: S3, Glue, Lambda, Redshift, DMS, CloudWatch.
- Good understanding of Medallion architecture (see the sketch after this list).
- Solid programming skills in Python and SQL.
- Experience with DevOps tools: GitLab, Jenkins, Bitbucket, Maven.
- Familiarity with data processing frameworks: Spark, Hive, Kafka.
- Must have experience with Airflow and Step Functions.
- Advanced proficiency in SQL query development.
- Understanding of data modelling principles and techniques.
- Knowledge of data security best practices and compliance requirements.
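The requirements mention Medallion architecture alongside Spark. A minimal, hypothetical PySpark sketch of the bronze/silver/gold layering idea follows; the S3 paths, column names, and aggregation are illustrative assumptions only.

```python
# Hypothetical PySpark sketch of Medallion-style layering (bronze -> silver -> gold).
# Paths, schemas, and column names are illustrative assumptions, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

# Bronze: raw data landed as-is from the source system.
bronze = spark.read.json("s3://example-lake/bronze/orders/")  # assumed path

# Silver: cleansed and validated records (deduplicated, typed, nulls handled).
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.mode("overwrite").parquet("s3://example-lake/silver/orders/")

# Gold: business-level aggregates ready for analysts and BI tools.
gold = silver.groupBy("order_date").agg(F.sum("order_amount").alias("daily_revenue"))
gold.write.mode("overwrite").parquet("s3://example-lake/gold/daily_revenue/")
```

In practice the silver and gold layers would often be managed tables with data-quality checks and access controls, but the layering pattern is the same.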