What are the responsibilities and job description for the Senior Data Engineer position at Publicis Groupe Holdings B.V.?
The Publicis Re:Sources Guiding Principles define who we are and what we stand for. They reflect the mindset and behaviors that shape how we work, how we support one another, and how we drive progress together.
• People First, Driving Success Together
• Problem Solving Mindset
• Respect Each Other
• Partner and Collaborate as One Team
• Commit to Quality and Standards
• Innovate and Embrace the Future
Overview
We are seeking a skilled and detail-oriented Data Engineer to join our growing data team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support analytics, reporting, and data science initiatives. You will work closely with data analysts, data scientists, and business stakeholders to ensure data is accessible, reliable, and optimized for performance.
Responsibilities
- Collaborate with software engineers, business stakeholders, and/or domain experts to translate business requirements into product features, tools, and projects.
- Develop, implement, and deploy ETL solutions.
- Preprocess and analyze large datasets to identify patterns, trends, and insights.
- Evaluate, validate, and optimize data models to ensure efficiency and generalizability.
- Monitor and maintain the performance of data pipelines and data models in production environments, identifying opportunities for improvement and updating as needed.
- Document development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.
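By way of illustration only (this sketch is not part of the role description, and all names and sample data in it are hypothetical), the extract-transform-load duties above follow a pattern like this:

```python
# Minimal ETL sketch: extract raw records, transform them, load into a store.
# The functions, sample data, and in-memory "warehouse" are all illustrative.

def extract():
    # In practice this would read from an API, file, or source database.
    return [
        {"user": "ana", "amount": "20.00"},
        {"user": "ben", "amount": "5.00"},
        {"user": "ana", "amount": "12.50"},
    ]

def transform(rows):
    # Cast amounts to float and aggregate spend per user.
    totals = {}
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0.0) + float(row["amount"])
    return totals

def load(totals, target):
    # Stand-in for writing to a warehouse table.
    target.update(totals)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # {'ana': 32.5, 'ben': 5.0}
```

Production pipelines replace each stage with real connectors and add scheduling, monitoring, and retries, but the stage boundaries stay the same.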
Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Strong programming skills in languages and frameworks such as Python, PySpark, and SQL.
- Experience building and optimizing ETL workflows using tools and technologies such as Spark, Snowflake, Airflow, Azure Data Factory, AWS Glue, and Redshift.
- Craft and optimize complex SQL queries and stored procedures for data transformation, aggregation, and analysis.
- Develop and maintain data models ensuring scalability and optimal performance.
- Utilize Snowpark for data processing within the Snowflake platform.
- Integrate Snowflake for efficient data storage and retrieval.
- Exposure to API integrations to facilitate data workflows.
- Experience implementing CI/CD pipelines through DevOps platforms.
- Good experience with cloud infrastructure such as Azure, AWS, or GCP.
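As a generic sketch of the SQL transformation and aggregation skills listed above (using Python's built-in SQLite as a stand-in for a warehouse such as Snowflake or Redshift; the table and data are hypothetical):

```python
import sqlite3

# SQLite stands in for a cloud data warehouse; the query shape
# (GROUP BY aggregation with ordering) is the same in Snowflake or Redshift.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("emea", 100.0), ("emea", 50.0), ("amer", 75.0)],
)

# Aggregate revenue per region, largest first.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
    """
).fetchall()
print(rows)  # [('emea', 150.0), ('amer', 75.0)]
```

In the role itself, such queries would typically run against warehouse tables and be orchestrated by a tool like Airflow rather than executed ad hoc.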
Additional information