What are the responsibilities and job description for the Azure Data Engineer (onsite) position at Jobs via Dice?
Job Description
Client is hiring an Azure Data Engineer to lead the build of a next-generation data platform. The engineer will be instrumental in migrating Client's data infrastructure from a legacy system to a modern Microsoft Fabric ecosystem, and will design the Medallion architecture, ensuring a seamless, scalable data flow from diverse source systems into Client's OneLake environment. This role is central to Client's mission of owning its data future and driving innovation in media sales.
What You'll Do
Pipeline Orchestration: Design, build, and deploy complex ETL/ELT pipelines using Azure Data Factory and Fabric Data Pipelines to power Client's enterprise analytics.
Compute & Automation: Develop custom Azure Functions (Python) and API integrations to bridge gaps where native connectors are insufficient, ensuring no data source is out of reach.
Hybrid Ingestion: Architect robust ingestion frameworks capable of handling high-velocity real-time APIs and structured scheduled landings (CSV/Parquet).
Storage Optimization: Manage and optimize OneLake/Delta Lake storage to ensure peak performance for downstream Analytics Engineering and AI consumption.
CI/CD & DevOps: Maintain and enhance GitHub Actions pipelines to automate deployments across Dev, Test, and Prod environments, ensuring a stable and reliable release cycle.
Governance & Security: Implement best-in-class data governance, privacy, and security practices, using Managed Identities and VNETs to protect Client's data assets.
Technical Troubleshooting: Perform root cause analysis on pipeline failures and implement automated monitoring to minimize downtime and points of failure.
Requirements
Experience: 5-7 years as a Data Engineer in an enterprise environment with a proven track record in Azure Cloud.
Cloud Architecture: Deep proficiency in Azure-native components, including Azure Data Lake Gen2, API Gateway, and Microsoft Fabric.
Coding & Scripting: Expert-level Python (including API development with FastAPI) and SQL across relational and NoSQL databases.
Big Data Tools: Strong hands-on experience with Spark (PySpark) and Delta Lake principles.
Automation: Demonstrated experience with serverless compute (Azure Functions) and YAML-based CI/CD workflows.
Certification: Microsoft Certified: Azure Data Engineer Associate is strongly preferred.
Soft Skills: Ability to lead architectural discussions and navigate complex cloud environments (Subscriptions, VNETs, Managed Identities).