What are the responsibilities and job description for the AWS Data Engineer with Databricks position at iPivot?
AWS Data Engineer with Databricks
Princeton, NJ – Hybrid
Duration: Long Term
Due to project requirements, this opportunity is restricted to U.S. citizens.
Key Responsibilities
- Design, develop, and optimize scalable data pipelines using Databricks, PySpark, and Delta Lake for batch and real-time processing (a minimal PySpark sketch follows this list).
- Implement ELT processes, data quality checks, monitoring, and governance using tools like Unity Catalog, ensuring compliance and performance.
- Collaborate with data scientists, analysts, and stakeholders to integrate data from diverse sources and support analytics/ML workflows.
- Mentor junior engineers, lead cloud migrations, and manage CI/CD pipelines with IaC tools like Terraform.
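For context, here is a minimal sketch of the kind of batch pipeline this role describes, assuming a Databricks workspace where `spark` and Delta Lake are available; the table and column names (`raw_events`, `event_id`, `silver_events`) are hypothetical placeholders, not part of the posting.

```python
# Minimal batch-pipeline sketch with PySpark and Delta Lake.
# All table and column names here are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Read a raw batch table and apply a basic data quality check.
raw = spark.read.table("raw_events")
clean = (
    raw
    .filter(F.col("event_id").isNotNull())           # drop rows failing a null check
    .withColumn("ingested_at", F.current_timestamp())  # add ingestion metadata
)

# Write to a Delta table; Delta Lake provides ACID transactions and time travel.
clean.write.format("delta").mode("append").saveAsTable("silver_events")
```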
Required Skills and Qualifications
- Bachelor's degree in Computer Science or a related field, with 5 years in data engineering, including strong hands-on Databricks experience.
- Proficiency in PySpark, Python, SQL, Azure Data Factory, Kafka for streaming, and data modeling (e.g., the medallion architecture; see the sketch after this list).
- Hands-on experience with cloud platforms (Azure/AWS/GCP), ETL/ELT, data lakes/warehouses, and performance optimization.
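As a rough illustration of the medallion architecture mentioned above (bronze for raw data, silver for cleaned records, gold for business-level aggregates), here is a hedged sketch; all table and column names (`bronze_orders`, `order_id`, `amount`, `customer_id`) are hypothetical.

```python
# Rough medallion-architecture illustration; table/column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw data landed as-is from the source system.
bronze = spark.read.format("delta").table("bronze_orders")

# Silver: deduplicated, validated records.
silver = bronze.dropDuplicates(["order_id"]).filter(F.col("amount") > 0)
silver.write.format("delta").mode("overwrite").saveAsTable("silver_orders")

# Gold: aggregates ready for analytics and ML workflows.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold_customer_ltv")
```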