What are the responsibilities and job description for the Data Support Engineer (Databricks experience) position (local to Chicago, IL only) at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Vrddhi Solutions LLC, is seeking the following. Apply via Dice today!
Position: Data Support Engineer (Databricks experience)
Location: onsite 3-4 days a week, Chicago, IL
Only locals required - No Relocation
Any Visa
Job Description:
A Data Application Support role specializing in Databricks focuses on maintaining, troubleshooting, and optimizing Spark-based data pipelines and Lakehouse architectures. Key responsibilities include resolving L2/L3 production incidents, performance tuning SQL/Python (PySpark) jobs, managing Delta Lake assets, and collaborating with data engineers to ensure data reliability.
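To make "managing Delta Lake assets" concrete, here is a minimal sketch, assuming a scheduled maintenance job; the table name and column are illustrative, and the statements are standard Delta Lake SQL that would be executed via `spark.sql()`:

```python
def delta_maintenance_sql(table, zorder_cols=None, vacuum_hours=168):
    """Build the routine Delta Lake maintenance statements (OPTIMIZE with an
    optional ZORDER clause, then VACUUM) that a support engineer might run on
    a schedule. Returns a list of SQL strings to pass to spark.sql()."""
    optimize = f"OPTIMIZE {table}"
    if zorder_cols:
        # ZORDER co-locates related data to speed up filtered reads
        optimize += f" ZORDER BY ({', '.join(zorder_cols)})"
    # VACUUM deletes files no longer referenced; 168 hours matches the
    # default 7-day retention threshold
    vacuum = f"VACUUM {table} RETAIN {vacuum_hours} HOURS"
    return [optimize, vacuum]
```

For example, `delta_maintenance_sql("sales.orders", ["order_date"])` produces an `OPTIMIZE ... ZORDER BY (order_date)` statement followed by a `VACUUM ... RETAIN 168 HOURS` statement.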
Key Responsibilities
- Incident & Problem Management: Provide L2/L3 support for data applications, resolving production issues and troubleshooting Databricks jobs, notebooks, and workflows.
- Performance Tuning: Optimize Spark applications, SQL queries, and Delta Lake tables to improve efficiency and reduce costs.
- Pipeline Maintenance: Monitor and troubleshoot ETL/ELT pipelines in Databricks (including Data Factory/Delta Live Tables), ensuring data quality and lineage (Unity Catalog).
- Collaboration: Act as a liaison between users, data engineering teams, and platform engineering, providing technical expertise and contributing to documentation.
- Automation: Create tools to automate routine support tasks and enhance support team productivity.

Required Skills
- Technical Expertise: Strong hands-on experience with Apache Spark, Python/PySpark, and SQL.
- Databricks Ecosystem: Proficiency with Databricks Unified Data Analytics Platform, Delta Lake, and ideally Azure Databricks.
- Cloud Data Storage: Experience with Azure Data Lake Storage (ADLS Gen2) or similar data lake technologies.
- Version Control & CI/CD: Experience with Git/Azure DevOps for code management.
- Problem-Solving: Strong analytical skills, with the ability to diagnose complex data processing bottlenecks.
Nice to Have

- Data orchestration tools (e.g., Apache Airflow, Azure Data Factory).
- Data governance tools (e.g., Unity Catalog, Collibra).
- Streaming data knowledge (e.g., Spark Structured Streaming).
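As an illustration of the Automation and Incident & Problem Management responsibilities above, here is a minimal sketch of a run-triage helper. The state names come from the Databricks Jobs API (`result_state` / `life_cycle_state`); the routing policy itself is a hypothetical example, not a prescribed process:

```python
# Hypothetical triage policy for finished Databricks job runs.
# State names follow the Databricks Jobs API; the routing is an assumption.
RETRYABLE = {"TIMEDOUT", "CANCELED"}      # transient: safe to re-run
ESCALATE = {"FAILED", "INTERNAL_ERROR"}   # needs an L3 ticket with logs

def triage_run(result_state):
    """Classify a finished job run into an L2/L3 support action."""
    state = result_state.upper()
    if state == "SUCCESS":
        return "close"        # nothing to do
    if state in RETRYABLE:
        return "retry"        # re-submit the job run
    if state in ESCALATE:
        return "escalate"     # hand off to L3 with diagnostics attached
    return "investigate"      # unrecognized state: manual review
```

A helper like this could be wired into a monitoring script that polls recent runs and files or closes tickets automatically, which is the kind of tooling the Automation bullet describes.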