What are the responsibilities and job description for the Databricks & MLOps Engineer position at LatentView Analytics?
Job Description
LatentView Analytics is a leading global analytics and decision sciences provider, delivering solutions that help companies drive digital transformation and use data to gain a competitive advantage. With analytics solutions that provide a 360-degree view of the digital consumer, fuel machine learning capabilities, and support artificial intelligence initiatives, LatentView Analytics enables leading global brands to predict new revenue streams, anticipate product trends and popularity, improve customer retention rates, optimize investment decisions, and turn unstructured data into valuable business assets.
Travel to, and relocation at, unanticipated client locations throughout the US may be required.
Description:
We are looking for a Databricks & MLOps Engineer with 8+ years of experience and expertise in machine learning operations (MLOps), model lifecycle management, and cloud-based data platforms.
The ideal candidate will have hands-on experience in Databricks, MLflow, CI/CD, and orchestration tools, and should be comfortable working in any cloud environment (Azure, AWS, or GCP).
Responsibilities:
Databricks ML Platform Development
Design and implement scalable ML pipelines in Databricks using MLflow, Delta Lake, and Feature Store.
Optimize ML model training, versioning, and deployment using Databricks Jobs and Workflows.
Build reusable notebooks and libraries for model training, testing, and inference.
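To illustrate the "reusable libraries for model training, testing, and inference" responsibility above, here is a minimal sketch of the train/evaluate/predict separation such a library encourages. A real Databricks pipeline would log runs with MLflow and read features from the Feature Store; this stdlib-only example fits a toy linear model, and all names are illustrative rather than an actual codebase.

```python
"""Minimal sketch of a reusable train/evaluate/predict module.

A real pipeline would wrap these in MLflow runs and Databricks Jobs;
this stdlib-only version just shows the separation of concerns."""
from statistics import mean

def train(xs, ys):
    """Fit y = a*x + b by ordinary least squares; return the model as a dict."""
    x_bar, y_bar = mean(xs), mean(ys)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return {"slope": slope, "intercept": y_bar - slope * x_bar}

def predict(model, xs):
    return [model["slope"] * x + model["intercept"] for x in xs]

def evaluate(model, xs, ys):
    """Mean absolute error -- the metric a CI quality gate would threshold on."""
    return mean(abs(p - y) for p, y in zip(predict(model, xs), ys))

model = train([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly linear toy data
print(evaluate(model, [5, 6], [10, 12]))     # → 0.0
```

Keeping `train`, `predict`, and `evaluate` as separate importable functions (rather than inline notebook cells) is what lets the same code back both a training job and a serving endpoint.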
MLOps & Model Deployment
Implement CI/CD pipelines for ML models using Databricks Repos, GitHub Actions, Jenkins, or Azure DevOps.
Automate model deployment using MLflow Model Registry, REST APIs, or Databricks Model Serving.
Monitor model drift, performance, and retraining needs.
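One common trigger for the drift and retraining checks mentioned above is the Population Stability Index (PSI) between training-time and serving-time feature distributions. The sketch below is a hedged, stdlib-only illustration: the bin count, smoothing, and the conventional 0.1/0.2 thresholds are illustrative defaults, not a Databricks API.

```python
"""Sketch: input-drift detection via the Population Stability Index (PSI).
Thresholds and binning are illustrative defaults, not a product API."""
import math

def psi(expected, actual, bins=10):
    """PSI between a training-time sample and a serving-time sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Laplace-smooth so empty bins don't make the log blow up
        return [(c + 1) / (len(xs) + bins) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_sample = [0.1 * i for i in range(100)]       # what the model saw
live_sample = [0.1 * i + 5 for i in range(100)]    # shifted distribution
print(psi(train_sample, train_sample) < 0.1)       # stable → True
print(psi(train_sample, live_sample) > 0.2)        # drifted → True
```

In practice a scheduled Databricks job would compute this per feature and feed breaches into the alerting described under Monitoring & Observability.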
Cloud & Infrastructure Management
Deploy ML solutions on Azure (Databricks, AKS), AWS (SageMaker, EMR), or GCP (Vertex AI, GKE).
Set up containerized ML workloads using Docker and Kubernetes.
Manage security, IAM roles, and access policies across environments.
Orchestration & Data Pipelines
Migrate ML workflows from Airflow, Cloud Composer, or Step Functions to Databricks Jobs.
Integrate with data engineering pipelines built on Delta Lake & Spark.
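When migrating a workflow from Airflow to Databricks Jobs as described above, each Airflow task maps to a Jobs task and each dependency edge to a `depends_on` entry. The sketch below builds such a job payload as a plain Python dict and sanity-checks the task graph; the field names follow the shape of the Databricks Jobs API 2.1, but the job name and notebook paths are illustrative placeholders.

```python
"""Sketch of an Airflow-to-Databricks-Jobs migration: each Airflow task
becomes a Jobs task, each dependency edge a depends_on entry. Field names
follow the Jobs API 2.1 shape; names and paths are placeholders."""

# extract >> transform >> train, in Airflow terms
job_spec = {
    "name": "churn-model-daily",
    "tasks": [
        {"task_key": "extract",
         "notebook_task": {"notebook_path": "/Repos/ml/extract"}},
        {"task_key": "transform",
         "depends_on": [{"task_key": "extract"}],
         "notebook_task": {"notebook_path": "/Repos/ml/transform"}},
        {"task_key": "train",
         "depends_on": [{"task_key": "transform"}],
         "notebook_task": {"notebook_path": "/Repos/ml/train"}},
    ],
}

# The payload would be POSTed to the Jobs create endpoint; here we just
# verify the dependency graph is acyclic with a topological walk.
done, order = set(), []
pending = {t["task_key"]: {d["task_key"] for d in t.get("depends_on", [])}
           for t in job_spec["tasks"]}
while pending:
    ready = [k for k, deps in pending.items() if deps <= done]
    assert ready, "cycle in task graph"
    for k in ready:
        done.add(k)
        order.append(k)
        del pending[k]
print(order)  # → ['extract', 'transform', 'train']
```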
Monitoring & Observability
Track data and model lineage using Unity Catalog & MLflow.
Automate alerts for failures, performance degradation, and cost monitoring.
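The alerting rules above can be sketched as a small threshold checker: evaluate a snapshot of pipeline metrics against limits and collect the alerts that would be routed onward (say, to a webhook). The metric names and limits here are made up for the example; a real deployment would read them from job and serving telemetry.

```python
"""Illustrative sketch of threshold-based alerting for failures,
performance degradation, and cost. Metric names and limits are made up."""

RULES = {
    "job_failure_count": ("max", 0),     # any failed run alerts
    "p95_latency_ms":    ("max", 250),   # serving latency budget
    "auc":               ("min", 0.80),  # model performance floor
    "daily_cost_usd":    ("max", 400),   # cost monitoring
}

def check(metrics):
    """Return human-readable alert strings for every violated rule."""
    alerts = []
    for name, value in metrics.items():
        kind, limit = RULES[name]
        if (kind == "max" and value > limit) or (kind == "min" and value < limit):
            alerts.append(f"{name}={value} breaches {kind} limit {limit}")
    return alerts

snapshot = {"job_failure_count": 0, "p95_latency_ms": 310,
            "auc": 0.84, "daily_cost_usd": 520}
for alert in check(snapshot):
    print(alert)
```

Running this on the sample snapshot flags the latency and cost breaches while leaving the healthy metrics silent.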
Skills:
Python, SQL, PySpark, AWS/Azure/GCP
MLOps, Airflow, Vertex AI
Salary: $99,000 - $166,000