What are the responsibilities and job description for the Senior Data Engineer – Commerce Data Pipelines-1563 position at aKube Inc?
City: Seattle, WA; Bristol, CT; or NYC
Onsite/Hybrid/Remote: Hybrid (4 days a week onsite, Fridays remote)
Duration: 10 months
Rate Range: Up to $92.50/hr on W2, depending on experience (no C2C, 1099, or sub-contracting)
Work Authorization: GC, USC, and all valid EADs except OPT, CPT, and H1B
Must Have:
• Databricks
• PySpark
• Spark
• Snowflake
• Airflow
• Python
• SQL (advanced proficiency)
• ETL pipeline design
• Data modeling
• Data Quality frameworks
• Schema Change or similar deployment tools
Responsibilities:
• Build and maintain data pipelines that process high-volume subscriber data.
• Work with upstream systems to collect raw data and prepare it for downstream consumption.
• Design table structures and develop ETL workflows in Databricks and Snowflake.
• Develop automated Data Quality checks and enforce data reliability standards (see the sketch after this list).
• Use Airflow for orchestration and schedule management.
• Tune SQL and Spark jobs for performance at large scale.
• Deploy schema changes using Schema Change or similar tools.
• Partner with analytics, infrastructure, and product teams in a fast-paced environment.
• Support both net-new development and enhancements to existing pipelines.
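For context on the Data Quality responsibility above, here is a minimal illustrative sketch of an automated check in PySpark that an orchestrator such as Airflow could run before downstream loads. The table and column names (raw.subscriber_events, subscriber_id, event_ts, event_date) are assumptions for illustration only and are not part of this posting.

```python
# Minimal sketch (illustrative only): a PySpark Data Quality check of the kind
# described in the responsibilities. Table and column names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("subscriber_dq_check").getOrCreate()

# Hypothetical daily partition of raw subscriber events.
df = spark.table("raw.subscriber_events").where(F.col("event_date") == "2024-01-01")

# Basic reliability checks: non-empty partition, no null keys, no duplicate keys.
row_count = df.count()
null_keys = df.where(F.col("subscriber_id").isNull()).count()
dup_keys = row_count - df.select("subscriber_id", "event_ts").distinct().count()

failures = []
if row_count == 0:
    failures.append("partition is empty")
if null_keys > 0:
    failures.append(f"{null_keys} rows with null subscriber_id")
if dup_keys > 0:
    failures.append(f"{dup_keys} duplicate (subscriber_id, event_ts) rows")

# Raising on failure lets an orchestrator such as Airflow mark the task as
# failed and stop downstream loads into Snowflake.
if failures:
    raise ValueError("Data quality check failed: " + "; ".join(failures))
```

In practice, checks like this are typically parameterized per table and scheduled as a task ahead of the load step, so a bad partition never reaches consumers.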
Qualifications:
• 3 years of hands-on data engineering experience.
• Strong SQL expertise with proven performance tuning skills.
• Proficient in PySpark, Spark, and Python for large-scale data processing.
• Experience with Databricks and Snowflake.
• Strong understanding of data modeling and ETL best practices.
• Experience with data orchestration tools such as Airflow.
• Ability to work with large data volumes (tens to hundreds of millions of records per day).
• Strong communication and analytical skills.
• Comfortable working in a highly collaborative Agile environment.
• Bachelor’s degree required.