What are the responsibilities and job description for the Azure Data Engineer position at ExecutivePlacements.com?
Azure cloud platform
Python
SQL
Postgres
MongoDB
Can you describe your experience with Azure cloud platform?
How do you use Python in data engineering?
How do you work with SQL databases?
Can you explain your experience with Postgres?
How do you handle data transformation and loading across systems?
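To illustrate the last question on handling data transformation and loading across systems, below is a minimal Python sketch of a simple extract-transform-load step into Postgres. The file name, connection string, table name, and columns are hypothetical assumptions made for the example; they are not details from the posting.

# Minimal extract-transform-load sketch in Python. Assumptions: a local
# orders.csv file, a reachable Postgres instance, and a stg_orders staging
# table; every name here is illustrative only.
import pandas as pd
from sqlalchemy import create_engine

def run_orders_etl(csv_path: str, pg_url: str) -> int:
    # Extract: read the raw file into a DataFrame.
    df = pd.read_csv(csv_path)

    # Transform: normalise column names, parse dates, drop unusable rows.
    df.columns = [c.strip().lower() for c in df.columns]
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df = df.dropna(subset=["order_id", "order_date"])

    # Load: append the cleaned rows into a Postgres staging table.
    engine = create_engine(pg_url)  # e.g. "postgresql+psycopg2://user:pass@host/db"
    df.to_sql("stg_orders", engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = run_orders_etl("orders.csv", "postgresql+psycopg2://user:pass@localhost/analytics")
    print(f"Loaded {rows} rows into stg_orders")

A production pipeline would add validation, incremental loading, and error handling, but the same extract-transform-load shape applies.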
Azure Data Engineer
Job Description / Tech Stack
We are looking for a proficient Azure Data Engineer to design, build, and maintain scalable data pipelines and high-quality, performant data solutions that enable data-driven decision-making, using engineering tools and technologies including Python, SQL, Postgres, MongoDB, PySpark, Databricks, and Snowflake.
Key Responsibilities
Design, develop, and optimize end-to-end data pipelines and ETL/ELT processes leveraging Azure Data services and frameworks.
Build scalable data solutions using Azure Databricks, PySpark, and Snowflake to process both batch and real-time data workloads (an illustrative PySpark sketch follows this list).
Develop and maintain data models and schemas in relational and NoSQL databases such as Postgres and MongoDB.
Write efficient, reusable, and maintainable code primarily in Python and SQL to transform and load data across various systems.
Monitor data pipeline performance and implement improvements to reliability, scalability, and efficiency.
Ensure data quality, governance, and compliance within all data engineering efforts.
Troubleshoot and resolve data-related issues, working closely with cloud infrastructure and platform teams.
Document data architecture, workflows, and processes to support ongoing maintenance and knowledge sharing.
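As referenced in the responsibilities above, below is a minimal PySpark sketch of the kind of batch transformation a Databricks job might run. The table names, columns, and aggregation are hypothetical assumptions for illustration and are not taken from the posting.

# Minimal PySpark batch-transform sketch (illustrative only: the table
# names, schema, and aggregation are assumptions, not from the posting).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_event_counts").getOrCreate()

# Extract: read a raw table registered in the metastore.
events = spark.read.table("raw.events")

# Transform: keep valid rows and count events per type per day.
daily_counts = (
    events
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Load: overwrite the curated table so the job can be rerun safely.
(
    daily_counts.write
    .mode("overwrite")
    .saveAsTable("curated.daily_event_counts")
)

The overwrite mode keeps the job idempotent; a real-time variant of the same logic would use Structured Streaming readers and writers instead of a batch read.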