What are the responsibilities and job description for the Databricks Engineer with Python position at CyberX Info System?
Job Title – Senior Databricks Engineer with Python Experience (5)
Location: Wilmington, DE – 5 days onsite role, no hybrid
Long-term contract role
Job Description:
We are looking for a Senior Data Engineer with strong experience in Databricks, PySpark, and modern data warehouse systems. The ideal candidate will design, build, and optimize scalable data pipelines and work closely with analytics, product, and engineering teams.
Key Responsibilities:
• Design and build ETL/ELT pipelines using Databricks and PySpark
• Develop and maintain data models and data warehouse structures (dimensional modeling, star/snowflake schemas)
• Optimize data workflows for performance, scalability, and cost
• Work with cloud platforms (Azure/AWS/GCP) for storage, compute, and orchestration
• Ensure data quality, reliability, and security across pipelines
• Collaborate with cross-functional teams (Data Science, BI, Product)
• Write clean, reusable code and follow engineering best practices
• Troubleshoot issues in production data pipelines
Required Skills:
• Strong hands-on skills in Databricks, PySpark, and SQL
• Experience with data warehouse concepts, ETL frameworks, and batch/streaming pipelines
• Solid understanding of Delta Lake and Lakehouse architecture
• Experience with at least one cloud platform (Azure preferred)
• Experience with workflow orchestration tools (Airflow, ADF, Prefect, etc.)
Nice to Have:
• Experience with CI/CD for data pipelines
• Knowledge of data governance tools (Unity Catalog or similar)
• Exposure to ML data preparation pipelines
Soft Skills:
• Strong communication and documentation skills
• Ability to work independently and mentor others
• Problem-solver with a focus on delivering business value