What are the responsibilities and job description for the Databricks AWS Lead Software Engineer position at JPMorgan Chase?
We are seeking a highly skilled Lead Software Engineer with proven experience in Databricks and AWS to join our data engineering team.
As a Lead Software Engineer at JPMorganChase within the Corporate Technology Finance team, you are an integral part of an agile team that works to enhance, build, and deliver trusted, market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for developing critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities
- Architect, develop, and optimize large-scale data pipelines and analytics platforms, leveraging Databricks (Spark, Delta Lake) and AWS cloud services
- Design, build, and maintain scalable ETL/ELT data pipelines using Databricks (Spark, Delta Lake) on AWS
- Architect and implement data lake and data warehouse solutions leveraging AWS services (S3, Glue, Redshift, Lambda, EMR, etc.)
- Lead and mentor a team of data engineers, providing technical guidance and code reviews, and ensure best practices in data engineering, security, and cloud architecture
- Optimize data workflows for performance, reliability, and cost efficiency
- Collaborate with data scientists, analysts, and business teams to deliver high-quality data products
- Ensure data quality, security, and compliance with organizational and regulatory standards
- Drive adoption of best practices in data modeling, version control, CI/CD, and infrastructure-as-code (e.g., Terraform, CloudFormation)
- Troubleshoot and resolve issues in production data pipelines and analytics platforms
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience
- 10+ years of experience in data engineering
- Deep hands-on experience with Databricks (Spark, Delta Lake, notebooks, job orchestration)
- Strong expertise in AWS data ecosystem (S3, Glue, Lambda, IAM, etc.)
- Proficient in Python and/or Scala for data engineering
- Experience with SQL, data modeling, and performance tuning
- Familiarity with CI/CD, DevOps, and infrastructure-as-code in cloud environments
- Excellent communication and leadership skills
- Proven experience leading and mentoring software engineers at varying levels of seniority
- Experience with data governance, security, and compliance frameworks
- Experience with Immuta and data quality control systems