Dice is the leading career destination for tech experts at every stage of their careers. Our client, Intone Networks Inc., is seeking the following. Apply via Dice today!
Hi,
Hope you are doing great!
We have a very good position with our client. Please review the job description below and, if you are comfortable with it, reply with your updated resume.
Position: AWS Data Engineer (Databricks & DBT)
Location: Lebanon, NJ (Hybrid)
Duration: 12 Months
Interview: Phone/Video
Job Description
We are seeking a skilled AWS Data Engineer with strong expertise in Databricks and DBT to design, build, and optimize scalable data pipelines and analytics solutions. The ideal candidate will have hands-on experience with modern data architectures, ETL/ELT processes, and cloud-based data platforms.
Key Responsibilities
Data Pipeline Design & Development
- Design, build, and optimize robust ETL/ELT pipelines using AWS services such as S3, Glue, and Lambda.
- Leverage the Databricks platform (Spark, Delta Lake, DLT) for scalable data processing.
- Ingest and process large volumes of structured and semi-structured data from multiple sources, including APIs, databases, and streaming platforms (Kafka/Kinesis).
- Build and maintain centralized data lake/lakehouse architectures.
Data Transformation & Modeling
- Develop and maintain data models such as star schema, snowflake schema, and medallion architecture using DBT (Data Build Tool).
- Write efficient and complex SQL queries and Python/PySpark code for data transformation and validation.
- Implement data quality checks, testing, and documentation within DBT workflows.
- Ensure adherence to data governance and security standards.
Orchestration & Automation
- Orchestrate and monitor workflows using Databricks Jobs and tools like AWS MWAA (Apache Airflow).
- Implement CI/CD pipelines and manage version control using Git.
- Automate deployment of data engineering artifacts including code, configurations, and DBT models.
Performance Optimization & Operations
- Monitor, troubleshoot, and resolve issues in production pipelines to ensure high performance and reliability.
- Optimize Spark jobs and leverage Delta Lake features such as partitioning and Z-Ordering.
- Ensure cost optimization and scalability of data solutions.
Collaboration & Stakeholder Engagement
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver insights.
- Provide guidance on data best practices, governance, and quality standards.
- Work in an Agile environment using tools like JIRA.
Required Skills
- Strong proficiency in SQL
- Hands-on experience with DBT Core and DBT Cloud
- Experience with AWS services, especially Redshift, S3, Glue, Lambda
- Strong experience with Databricks on AWS
- Experience working with SQL Server
- Familiarity with CI/CD pipelines and Git
- Experience with Stonebranch (or similar scheduling tools)
- Experience working in an Agile environment (JIRA)
Ankit Singh