What are the responsibilities and job description for the Software Engineer III - Data, AWS, ETL, Java/Python, position at JPMorgan Chase?
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
As a Software Engineer III at JPMorgan Chase within Consumer & Community Banking, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities:
- Design, build, and maintain scalable services (APIs, microservices, batch/stream processing) with strong engineering fundamentals.
- Migrate existing ETL logic to AWS, Databricks, or equivalent cloud platforms, working with data platforms/tools such as Kafka, Spark, Flink, Airflow, and data warehouses/lakes (e.g., Snowflake).
- Implement new or enhance existing services and components using both object-oriented and functional programming.
- Establish and enforce guidelines to ensure consistency, quality, and completeness of data assets.
- Execute standard software solutions, design, development, and technical troubleshooting.
- Write secure and high-quality code using the syntax of at least one programming language with limited guidance.
- Design, develop, code, and troubleshoot with consideration of upstream and downstream systems and technical implications.
- Apply knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation.
- Apply technical troubleshooting to break down solutions and solve technical problems of basic complexity.
- Gather, analyze, and draw conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development.
- Learn and apply system processes, methodologies, and skills for the development of secure, stable code and systems.
Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 3+ years of applied experience.
- 5 years of professional software/data engineering experience, including substantial production work with Spark on Databricks.
- Proficiency in Spark fundamentals and experience implementing Spark in data lake or data mesh solutions.
- Experience with AWS, Databricks, or Snowflake.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Experience developing, debugging, and maintaining code in a large corporate environment using one or more modern programming languages and database querying languages.
- Demonstrable ability to code in one or more languages.
- Experience across the whole Software Development Life Cycle.
- Exposure to agile methodologies such as CI/CD, Application Resiliency, and Security.
- Emerging knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile).
Preferred qualifications, capabilities, and skills:
- Proficiency in one or more modern programming languages, including Python or Java (only one is required; proficiency in both is a major plus).
- Experience in AI is a plus.
- Financial Services experience is a plus.
- Experience with Infrastructure as Code (e.g., Terraform).
- Familiarity with relational and non-relational databases.
- Exposure to cloud technologies.