What are the responsibilities and job description for the Data Engineer II position at Pride Global?
Title: Data Engineer II
Location: Cupertino, CA
Duration: 12-month contract
Mode: Hybrid (3 days onsite)
Pay rate: $62/hr. - $67/hr. on W2
Role Overview
We are seeking a skilled Data Engineer (Software) with strong expertise in designing and implementing scalable data pipelines and modern data infrastructure. This role requires a blend of data engineering and infrastructure engineering capabilities, with hands-on experience managing containerized workflows in Kubernetes and Docker.
The ideal candidate will contribute to the development, deployment, and optimization of reliable data systems that support analytics, reporting, and data-driven decision-making in a cloud-based environment.
Key Technical Skills
- SQL
- Python
- Bash / Shell Scripting
- Apache Spark
- Apache Airflow
- Snowflake
- dbt
- AWS S3
- Kubernetes
- Docker
- CI/CD Pipelines
- GitHub
- DevOps Practices
Key Requirements
- 2-5 years of professional experience in Data Engineering, Software Engineering, or Analytics Engineering
- Strong proficiency in SQL and Python, with working knowledge of Bash/Shell scripting
- Hands-on experience building and maintaining data pipelines using:
  - Apache Spark
  - Apache Airflow
  - Snowflake
  - dbt
  - AWS S3
- Proven experience with Kubernetes and Docker, including deployment, management, and troubleshooting of containerized workloads
- Familiarity with CI/CD pipelines, version control (GitHub), and DevOps best practices
- Experience with monitoring, automation, and improving system reliability in cloud environments (AWS preferred)
- Strong understanding of data modeling, data warehousing, and distributed data systems
- Ability to effectively bridge data engineering and infrastructure responsibilities
Responsibilities
- Design, develop, and maintain scalable ELT/ETL pipelines using SQL and Python
- Build and optimize distributed data processing workflows using Spark and Airflow
- Deploy, manage, and monitor containerized data services using Kubernetes and Docker
- Develop and maintain data infrastructure solutions on AWS, including S3-based data storage systems
- Implement and enhance CI/CD pipelines to support automated deployment and delivery of data services
- Ensure system reliability through monitoring, logging, alerting, and performance optimization
- Collaborate with cross-functional teams to deliver robust, well-documented data solutions
- Support urgent data requests, reporting needs, and ad-hoc analytical requirements as needed
Education
- Master’s degree in Computer Science, Engineering, Data Science, or a related field preferred (or equivalent professional experience)
Salary: $62 - $67 per hour