What are the responsibilities and job description for the Data Engineer position at VeeRteq Solutions LLC?
Role: Data Engineer
Location: Morris Plains, NJ (Hybrid)
As a Data Engineer, you'll design, build, and maintain scalable data pipelines using AWS and Databricks.
Summary
Senior-level engineer responsible for designing, building, and supporting cloud-based data pipelines and health insurance applications using AWS and Databricks. Role involves complex system integrations, data engineering, and working closely with distributed agile teams.
Duties
- Design, build, and maintain scalable data pipelines using AWS and Databricks
- Implement ETL/ELT processes to load data into data lakes and downstream systems
- Ensure data quality, accuracy, reliability, and observability
- Build and automate unit, integration, and performance tests
- Integrate internal systems, external vendors, mobile apps, and data marts
- Support and enhance retail health insurance applications
- Monitor pipelines and proactively resolve production issues
- Participate in all SAFe/Agile ceremonies
- Collaborate with engineering, business, architecture, security, and infrastructure teams
Requirements
- 8 years of experience designing and architecting solutions from scratch
- Strong expertise in AWS Cloud Services, including:
  - Glue, ECS/EKS, Fargate
  - Redshift, Aurora, DocumentDB
  - S3, Lambda, SNS, IAM, ECR
- Databricks expertise:
  - Delta Live Tables (DLT)
  - SQL, Spark, Python/PySpark
  - Java
  - Databricks Workflows or Apache Airflow
- Strong understanding of:
  - Data integration strategies
  - ETL/ELT techniques
  - Data warehousing principles
  - Data transformation, quality, and observability
- Experience with DevOps practices, design patterns, and performance tuning
- Familiarity with BI & governance tools:
  - Tableau
  - Alation, Collibra
- Migration of large-scale on-premises data pipelines to Databricks (cloud)
- Experience with:
  - Delta Lake, Iceberg, Apache Hudi
  - Partitioning strategies & metadata management
  - JSON/XML, document databases, real-time ingestion
  - Lakehouse architecture, dimensional modeling, Data Vault
- Strong analytical and problem-solving abilities
- Excellent written and verbal communication
- Customer-facing, consultant mindset
- Ability to lead technically and influence architecture decisions
- Collaborative team player in distributed agile environments
Education
- Bachelor's degree in Information Systems or a related field
- Advanced degree (e.g., MBA) is a plus
- Equivalent experience accepted