What are the responsibilities and job description for the Lead Data Engineer - Azure and SAP position at Mega Cloud Lab?
We are seeking a seasoned Lead Data Engineer to architect and execute scalable data engineering and migration strategies. You will be responsible for the end-to-end migration of data from legacy systems to modern cloud platforms, ensuring data integrity, minimal downtime, and robust data governance. This role requires a technical leader who can drive excellence, mentor teams, and deliver optimized data pipelines that enable advanced analytics.
Key Responsibilities
- Architect, develop, and implement high-performance ETL solutions utilizing Azure Data Factory (ADF) and SAP Data Services.
- Lead the migration of data logic, including the analysis and conversion of stored procedures from SAP HANA to Databricks using SQL/PL-SQL.
- Utilize Fivetran to automate and manage data ingestion pipelines from source systems like SAP S/4HANA into the data lakehouse.
- Design, build, and maintain complex, scalable data pipelines using Python and Databricks (a representative sketch follows this list).
- Champion data governance, security, and compliance standards across all data engineering initiatives.
- Provide technical leadership, mentorship, and guidance to data engineering teams on best practices and architecture.
- Proactively identify, troubleshoot, and resolve performance bottlenecks and network/VPN-related data flow issues.
- Collaborate with stakeholders to translate business requirements into technical solutions and provide regular project updates.
- Document data flows, pipeline designs, and lineage to ensure clarity and maintainability.
- Actively participate in Agile/Scrum ceremonies using tools like Jira or Azure DevOps.
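To give a flavor of the day-to-day pipeline work, here is a minimal sketch of an incremental load pattern on Databricks. It is illustrative only, not the team's actual code: all table and column names (raw.sap_orders, curated.orders, order_id, _ingested_at) are hypothetical placeholders.

```python
# Minimal sketch: read the latest batch landed by the ingestion tool
# (e.g. Fivetran), derive a column, and upsert into a curated Delta table.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

# Pick up records ingested in the last day (hypothetical _ingested_at column).
updates = (
    spark.table("raw.sap_orders")
    .where(F.col("_ingested_at") >= F.date_sub(F.current_date(), 1))
    .withColumn("order_total", F.col("quantity") * F.col("unit_price"))
)

# Upsert into the curated Delta table, keyed on order_id.
target = DeltaTable.forName(spark, "curated.orders")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The MERGE-based upsert is the common Delta Lake pattern for keeping a curated table in sync with ingested source data without full reloads.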
Required Qualifications
- 5 years of professional experience in data engineering and platform development, with proven leadership/architecture responsibilities.
- Must have led a minimum of 3 end-to-end enterprise data projects utilizing the Microsoft Azure tech stack and Databricks.
- 5 years of hands-on experience building ETL/ELT pipelines with Azure Data Factory (ADF).
- Demonstrable expertise in SQL and experience migrating logic from platforms like SAP HANA (a migration sketch follows this list).
- Practical experience with Fivetran for automated data ingestion.
- Solid understanding of networking concepts and experience resolving VPN and data flow issues.
- Familiarity with data governance, security protocols, and compliance frameworks.
- Proficiency in Python for data pipeline development.
- Strong interpersonal and communication skills, with the ability to collaborate effectively with both technical teams and business stakeholders.
- Bachelor’s or Master’s degree (BS/MS) in Computer Science, Information Systems, or a related field.
- Prior experience working in an Agile/Scrum environment with tools like Jira or Azure DevOps.
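As a hypothetical illustration of the migration work above: procedural stored-procedure logic in SAP HANA (e.g. a cursor or loop that aggregates daily revenue) is typically rewritten as a single set-based statement run from a Databricks job. All object names below (sales.line_items, reporting.daily_revenue) are invented for the example.

```python
# Sketch: a HANA stored procedure's row-by-row aggregation rewritten
# as one declarative Spark SQL statement over Delta tables.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    INSERT OVERWRITE TABLE reporting.daily_revenue
    SELECT order_date,
           SUM(quantity * unit_price) AS revenue
    FROM   sales.line_items
    GROUP BY order_date
""")
```

Rewriting procedural logic as set-based SQL is generally preferred on Databricks because it lets Spark parallelize the work instead of processing rows one at a time.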