What are the responsibilities and job description for the Azure databricks position at SLK America Inc.?
Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks and Apache Spark (a brief sketch follows this list).
- Optimize data workflows for performance, scalability, and cost efficiency.
- Collaborate with data scientists, analysts, and business stakeholders to deliver data-driven solutions.
- Manage and monitor Databricks clusters, jobs, and workflows.
- Integrate Databricks with Azure/AWS/Google Cloud Platform services (depending on the organization's cloud strategy).
- Ensure data quality, governance, and security compliance across all solutions.
- Troubleshoot and resolve issues related to data ingestion, transformation, and processing.
- Mentor junior engineers and contribute to best practices and standards.
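As context for the pipeline responsibility above, here is a minimal sketch of an ETL/ELT job, assuming a Databricks environment with PySpark and Delta Lake. The storage path, table name, and column names (`/mnt/raw/orders/`, `analytics.orders_clean`, `order_id`, `amount`) are hypothetical placeholders, not details from the posting.

```python
# Minimal ETL sketch for a Databricks notebook: read raw files from cloud
# storage, clean them, and write the result to a Delta Lake table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

# Extract: read raw CSV files from cloud storage (e.g., ADLS or S3 mount).
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/raw/orders/"))            # hypothetical mount point

# Transform: basic deduplication, filtering, and enrichment.
cleaned = (raw
           .dropDuplicates(["order_id"])
           .filter(F.col("amount") > 0)
           .withColumn("ingest_date", F.current_date()))

# Load: write to a partitioned Delta Lake table for downstream consumers.
(cleaned.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("ingest_date")
 .saveAsTable("analytics.orders_clean"))    # hypothetical table name
```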
Required Skills & Experience
- 6-10 years of experience in data engineering or big data development.
- Strong expertise in Databricks and Apache Spark (PySpark/Scala).
- Hands-on experience with cloud platforms (Azure Data Lake, AWS S3, Google Cloud Platform BigQuery).
- Proficiency in SQL and working with relational and NoSQL databases.
- Experience with Delta Lake, MLflow, and Databricks notebooks.
- Solid understanding of data warehousing concepts, ETL frameworks, and distributed computing.
- Familiarity with CI/CD pipelines, Git, and DevOps practices.
- Strong problem-solving, communication, and collaboration skills.
- Experience with machine learning workflows in Databricks.
- Knowledge of data governance and regulatory compliance requirements (e.g., GDPR, HIPAA).
- Exposure to streaming technologies (Kafka, Event Hub, Kinesis); a brief streaming sketch follows this list.
- Certification in Databricks or in Azure/AWS/Google Cloud Platform is a plus.
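For the streaming exposure mentioned above, a minimal sketch assuming Spark Structured Streaming on Databricks reading from Kafka into a Delta table is shown below. The broker address, topic, checkpoint path, and table name are hypothetical placeholders.

```python
# Minimal streaming sketch: consume a Kafka topic and append the events
# to a Delta table, with checkpointing so the stream can recover on restart.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
          .option("subscribe", "orders-events")               # hypothetical topic
          .load()
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)"))

query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/orders-events")  # hypothetical path
         .outputMode("append")
         .toTable("raw.orders_events"))                       # hypothetical table name
```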
Salary: $120,000