What are the responsibilities and job description for the Azure Databricks position at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, SLK America Inc., is seeking the following. Apply via Dice today!
Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks and Apache Spark.
- Optimize data workflows for performance, scalability, and cost efficiency.
- Collaborate with data scientists, analysts, and business stakeholders to deliver data-driven solutions.
- Manage and monitor Databricks clusters, jobs, and workflows.
- Integrate Databricks with Azure/AWS/Google Cloud Platform services (depending on the organization's cloud strategy).
- Ensure data quality, governance, and security compliance across all solutions.
- Troubleshoot and resolve issues related to data ingestion, transformation, and processing.
- Mentor junior engineers and contribute to best practices and standards.
Required Skills & Qualifications
- 6-10 years of experience in data engineering or big data development.
- Strong expertise in Databricks and Apache Spark (PySpark/Scala).
- Hands-on experience with cloud platforms (e.g., Azure Data Lake, AWS S3, Google BigQuery).
- Proficiency in SQL and working with relational and NoSQL databases.
- Experience with Delta Lake, MLflow, and Databricks notebooks.
- Solid understanding of data warehousing concepts, ETL frameworks, and distributed computing.
- Familiarity with CI/CD pipelines, Git, and DevOps practices.
- Strong problem-solving, communication, and collaboration skills.
- Knowledge of data governance and regulatory compliance requirements (e.g., GDPR, HIPAA).
- Exposure to streaming technologies (e.g., Kafka, Azure Event Hubs, AWS Kinesis).
- Certification in Databricks or a major cloud platform (Azure, AWS, or Google Cloud Platform) is a plus.