What are the responsibilities and job description for the Data Engineer - Project Delivery Specialist position at Deloitte?
Are you an experienced, passionate pioneer in technology who wants to work in a collaborative environment? As an experienced Data Engineer - Project Delivery Specialist, you will have the ability to share new ideas and collaborate on projects as a consultant without the extensive demands of travel. If so, consider an opportunity with Deloitte under our Project Delivery Talent Model. The Project Delivery Model (PDM) is a talent model tailored specifically for long-term, onsite client service delivery.
Recruiting for this role ends on May 15th, 2026.
Work You'll Do/Responsibilities
As part of the Data & Analytics Foundry, you will support numerous business product teams in designing, building, and operating modern data products and platforms across an at-scale delivery program (onshore/offshore). Your focus will be on delivering reliable, performant, and cost-effective data pipelines and curated datasets that enable analytics and downstream applications.
Key responsibilities include:
- Architect, build, and operate scalable batch and near-real-time data pipelines on AWS.
- Design robust ingestion patterns from source systems into S3 and onward into Snowflake.
- Develop transformation layers and curated datasets in Snowflake, including dimensional/data product modeling for analytics and downstream applications.
- Implement orchestration and workflow automation on AWS with retries, backfills, and idempotency.
- Build reusable Python components for ingestion, validation, and transformations; enforce standards via code reviews and testing (a minimal sketch of such a component follows this list).
- Optimize Snowflake performance and cost: warehouse sizing, concurrency patterns, query tuning, clustering/micro-partition considerations, and workload isolation.
- Partner with stakeholders to translate requirements into well-defined datasets and data contracts.
- Communicate regularly with Engagement Managers (Directors), project team members, and representatives from various functional and/or technical teams, including escalating any matters that require additional attention and consideration from engagement management.
- Independently and collaboratively lead client engagement workstreams focused on improvement, optimization, and transformation of processes, including implementing leading-practice workflows, addressing deficits in quality, and driving operational outcomes.
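To make the reusable-component and idempotency responsibilities above concrete, here is a minimal sketch of an idempotent, retry-aware ingestion step. It uses only the Python standard library; the ManifestStore and load_batch names are hypothetical illustrations, not part of any Deloitte or client codebase, and a real pipeline would swap the local manifest and in-memory sink for Snowflake/S3-backed equivalents.

```python
"""Illustrative sketch only: an idempotent batch load with validation and retries."""
import json
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable, Iterable


@dataclass
class ManifestStore:
    """Tracks which batch keys have already been loaded (the idempotency guard)."""
    path: Path
    _seen: set[str] = field(default_factory=set, init=False)

    def __post_init__(self) -> None:
        if self.path.exists():
            self._seen = set(json.loads(self.path.read_text()))

    def already_loaded(self, key: str) -> bool:
        return key in self._seen

    def mark_loaded(self, key: str) -> None:
        self._seen.add(key)
        self.path.write_text(json.dumps(sorted(self._seen)))


def load_batch(
    key: str,
    rows: Iterable[dict],
    sink: Callable[[list[dict]], None],
    manifest: ManifestStore,
    required_fields: tuple[str, ...] = ("id", "updated_at"),
    max_attempts: int = 3,
) -> int:
    """Validate and load one batch exactly once; retry transient sink errors."""
    if manifest.already_loaded(key):
        return 0  # re-running a backfill is a no-op for completed batches

    # Simple row-level validation: drop rows missing required fields.
    valid = [r for r in rows if all(r.get(f) is not None for f in required_fields)]

    for attempt in range(1, max_attempts + 1):
        try:
            sink(valid)  # in real use, a wrapper around COPY INTO / MERGE
            manifest.mark_loaded(key)
            return len(valid)
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
    return 0
```

Calling load_batch twice with the same key is a no-op the second time, which is what makes backfill re-runs safe; in practice the manifest would live in a database or object store rather than a local JSON file.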
AI & Data - AI & Engineering leverages cutting-edge engineering capabilities to build, deploy, and operate integrated, verticalized sector solutions across software, data, AI, network, and hybrid cloud infrastructure. These solutions are powered by engineering for business advantage, transforming mission-critical operations. We enable clients to stay ahead with the latest advancements by transforming engineering teams and modernizing technology and data platforms. Our delivery models are tailored to meet each client's unique requirements.
Qualifications
Required
- 7 years of experience as a Data Engineer delivering production-grade data pipelines and curated datasets.
- 7 years of hands-on experience with SQL and Python, including Snowflake and/or PySpark for scalable data processing and ELT.
- 7 years of experience designing, building, and operating batch and near-real-time data pipelines on cloud platforms (AWS preferred; Azure/GCP acceptable).
- Experience with data integration frameworks and orchestration tools.
- Proficiency in designing and implementing Lakehouse/warehouse architectures and ELT patterns.
- Knowledge of DevOps principles: CI/CD pipelines, version control, Infrastructure-as-Code.
- Ability to optimize data storage, partitioning, file formats (Delta, Parquet), and performance.
- Understanding of data quality, data governance, and metadata management.
- Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or related IT discipline; or equivalent experience.
- Limited immigration sponsorship may be available.
- Ability to travel 10%, on average, based on the work you do and the clients and industries/sectors you serve.
- Agile delivery experience (5-10 years).
- Analytical ability to manage multiple projects and prioritize tasks into manageable work products.
- Ability to operate independently or with minimal supervision.
- Excellent written and verbal communication skills.
- Ability to deliver technical demonstrations.
You may also be eligible to participate in a discretionary annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
Salary: $102,750 - $171,250