What are the responsibilities and job description for the Senior Data Engineer position at Syndesus, Inc.?
About the Company
Well-established consumer software company with a large global footprint. Strong benefits: bonus, pension, medical/dental/vision, generous PTO, and paid parental leave.
About the Role
Senior individual contributor (IC) role on a data innovation team responsible for designing, building, and operating the data architecture that powers analytics, ML/AI initiatives, and business intelligence across a large-scale consumer software platform. You'll work at the intersection of data engineering, data science, and data quality, partnering closely with stakeholders, data scientists, and product teams.
Responsibilities
- Design and deploy comprehensive data architecture capturing structured and unstructured data from diverse internal and external sources
- Build resilient ETL/ELT pipelines routing data across cloud storage, local databases, and other storage systems
- Implement data quality frameworks covering validation, monitoring, and automated recovery strategies
- Collaborate with data scientists to enable advanced analytics, predictive modeling, and ML initiatives
- Develop web-enabled, self-service analytics solutions that democratize data access company-wide
- Apply AI/ML and big-data techniques to automate data cleansing, transformation, and enrichment
- Leverage MCP (Model Context Protocol) to connect enterprise applications and automate data flows
- Ensure secure, scalable, and compliant data ingestion with appropriate PII handling
- Troubleshoot pipeline issues, optimize performance, and participate in on-call rotations
- Mentor junior team members and contribute to data engineering practice growth
Qualifications
- 8+ years of hands-on ETL/ELT pipeline development experience across varied data sources
Required Skills
- Strong programming skills in Python, Scala, or Java (production-quality code)
- Experience with modern data platforms — Snowflake, Databricks, Apache Spark, Kafka, Airflow
- Cloud platform experience — AWS, Azure, or GCP and their native data services
- Experience with real-time data processing and streaming architectures
- Solid data modeling, warehousing, and dimensional modeling fundamentals
- Knowledge of containerization and orchestration (Docker, Kubernetes)
- Practical knowledge of MCP and AI-assisted development tools
- Familiarity with DataOps and MLOps practices
- Experience managing sensitive/PII data with attention to compliance and governance
- Strong communication skills across technical and non-technical stakeholders
Preferred Skills
- Background in data science or analytics
- Experience in client-facing or Professional Services roles