What are the responsibilities and job description for the Databricks Engineer position at Argyll Infotech Enterprise Pvt Ltd?
Job Role : Databricks Engineer
Location : Maryland
Client : University of Maryland Global Campus
We are seeking a Databricks Engineer to design, build, and operate a Data & AI platform with a strong
foundation in the Medallion Architecture (raw/bronze, curated/silver, and mart/gold layers). This
platform will orchestrate complex data workflows and scalable ELT pipelines to integrate data from
enterprise systems such as PeopleSoft, D2L, and Salesforce, delivering high-quality, governed data
for machine learning, AI/BI, and analytics at scale.
You will play a critical role in engineering the infrastructure and workflows that enable seamless data
flow across the enterprise, ensure operational excellence, and provide the backbone for strategic
decision-making, predictive modeling, and innovation.
Responsibilities
- Data & AI Platform Engineering (Databricks-Centric):
- Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture (bronze, silver, and gold layers).
- Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw data into curated, analytics-ready datasets.
- Operationalize Databricks Workflows for orchestration, dependency management, and scheduling of pipeline runs.
- Apply schema evolution and data versioning to support agile data development (see the sketch below).
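As a hedged illustration of the bronze-to-silver step described above, a minimal Medallion transform on Databricks might look like the PySpark sketch below; the table and column names (bronze.enrollments_raw, silver.enrollments, enrolled_at) are assumptions for illustration, not details from the posting.

```python
# Minimal bronze -> silver Medallion step on Databricks (PySpark + Delta Lake).
# Table and column names are illustrative, not taken from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Read the raw bronze table as-is.
bronze = spark.read.table("bronze.enrollments_raw")

# Curate: normalize types, drop obviously invalid rows, stamp processing time.
silver = (
    bronze
    .withColumn("enrolled_at", F.to_timestamp("enrolled_at"))
    .filter(F.col("student_id").isNotNull())
    .withColumn("_processed_at", F.current_timestamp())
)

# Write to the silver layer; mergeSchema lets new upstream columns
# evolve the table schema instead of failing the job.
(
    silver.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("silver.enrollments")
)
```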
- Platform Integration & Data Ingestion:
- Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs and native connectors.
- Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data.
- Design standardized data ingestion processes with automated error handling, retries, and alerting (see the ingestion sketch below).
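Databricks Auto Loader is one common way to implement this kind of standardized, restartable ingestion; the sketch below assumes JSON files landing in cloud storage, and every path and table name in it is an assumption.

```python
# Incremental file ingestion with Databricks Auto Loader ("cloudFiles").
# All paths and table names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.readStream
    .format("cloudFiles")                                            # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/salesforce")  # persists the inferred schema
    .option("cloudFiles.schemaEvolutionMode", "rescue")              # unexpected fields go to _rescued_data
    .load("/mnt/landing/salesforce/")
)

(
    raw.writeStream
    # The checkpoint gives exactly-once processing and safe retries after failure.
    .option("checkpointLocation", "/mnt/checkpoints/salesforce_bronze")
    .trigger(availableNow=True)  # drain currently available files, then stop
    .toTable("bronze.salesforce_raw")
)
```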
- Data Quality, Monitoring, and Governance:
- Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data is accurate, complete, and consistent (see the quality-gate sketch below).
- Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL pipeline health and performance.
- Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and access control.
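A minimal version of such a quality gate might validate rows declaratively and quarantine failures; the rules, tables, and columns below are illustrative assumptions.

```python
# Row-level data quality gate (PySpark): valid rows flow on to silver,
# failing rows are quarantined for review. Rules and names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("bronze.enrollments_raw")

# Declarative validation rules; extend per dataset.
rules = (
    F.col("student_id").isNotNull()
    & F.col("course_id").isNotNull()
    & F.col("credits").between(0, 12)
)

# coalesce() treats a NULL rule result (e.g., NULL credits) as a failure,
# so every row lands in exactly one of the two outputs.
is_valid = F.coalesce(rules, F.lit(False))
valid = df.filter(is_valid)
invalid = df.filter(~is_valid)

valid.write.format("delta").mode("append").saveAsTable("silver.enrollments")
invalid.write.format("delta").mode("append").saveAsTable("quarantine.enrollment_rejects")

# A non-zero reject count can feed alerting and observability dashboards.
print(f"quarantined rows: {invalid.count()}")
```

Managed features such as Delta Live Tables expectations offer a hosted version of the same validate-and-quarantine pattern.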
- Security, Privacy, and Compliance:
- Enforce data security best practices, including row-level security, encryption at rest/in transit, and least-privilege access controls.
- Design and implement data masking, tokenization, and anonymization for compliance with applicable privacy regulations (a masking sketch follows this list).
- Work with security teams to audit and certify compliance controls.
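One simple masking technique consistent with these responsibilities is salted one-way hashing of direct identifiers before publishing curated data; the sketch below is only a sketch, and the columns, tables, and salt handling are all hypothetical.

```python
# Column-level pseudonymization sketch (PySpark): salt and hash direct
# identifiers before publishing curated data. Every name here is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("silver.enrollments")

# In Databricks the salt would come from a secret scope, e.g. dbutils.secrets.get(...).
SALT = "replace-with-a-managed-secret"

masked = (
    df
    # A salted one-way hash preserves joinability without exposing the raw ID.
    .withColumn(
        "student_id",
        F.sha2(F.concat(F.col("student_id").cast("string"), F.lit(SALT)), 256),
    )
    # Drop fields that downstream analytics never needs.
    .drop("ssn", "date_of_birth")
)

masked.write.format("delta").mode("overwrite").saveAsTable("gold.enrollments_masked")
```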
Salary : $60