What are the responsibilities and job description for the Databricks SME position at take2it?
Databricks SME
Position Overview:
We are seeking an experienced Databricks SME to join our team, supporting migration and development efforts on cloud data platforms. This role requires a highly skilled professional with deep expertise in Databricks, data engineering, and associated security technologies to deliver seamless data transformation, security controls, and platform integration within Azure environments. The successful candidate will play a pivotal role in optimizing data workflows, implementing security policies, and supporting ongoing migration projects.
Key Responsibilities:
- Lead the development and optimization of data pipelines using Databricks, specifically focusing on processing and managing data stored in Parquet files.
- Configure and manage data access controls, implementing row-level security policies through Databricks and Immuta platforms.
- Collaborate with cross-functional teams to migrate Oracle databases to Databricks in Azure, ensuring data integrity and performance.
- Design, develop, and maintain Extract, Transform, Load (ETL) processes to facilitate data ingestion into the Summit Data Platform (SDP), the VA's data platform.
- Develop a comprehensive understanding of SAP and SDP architectures to align data engineering efforts effectively.
- Provide technical guidance, feedback, and best practices for data security and platform enhancements.
- Assist in designing governance frameworks and procedures for data access and security.
- Participate in sprint planning, code reviews, and documentation efforts to ensure high-quality deliverables.
Required Qualifications:
- Proven experience as a Data Engineer with extensive hands-on work with Databricks on Azure.
- Strong familiarity with managing data stored in Parquet format and experience with cloud-based data lake architectures.
- Experience with Immuta or similar data security platforms, including the implementation of row-level security policies.
- Demonstrated expertise in building scalable ETL pipelines for data migration and integration.
- Knowledge of relational databases, especially Oracle, and experience transitioning data to cloud platforms.
- Comfortable learning new platforms such as Summit Data Platform (SDP) and providing actionable insights.
- Excellent problem-solving, communication, and collaboration skills in a team environment.
- Master’s Degree in Computer Science, Electronics Engineering, or a related technical discipline, or equivalent professional experience.
- Minimum of 10 years of relevant technical experience, or 20 years of significant hands-on data engineering experience in lieu of a degree.
- Relevant certifications in Databricks, Azure Data Services, or security platforms are a plus.