What are the responsibilities and job description for the Databricks Developer with Azure position at Swanktek, Inc?
We are seeking a skilled Databricks Developer to design, develop, and optimize large-scale data processing solutions using Databricks, Apache Spark, and cloud platforms. The ideal candidate will have strong experience in building data pipelines, transforming large datasets, and supporting analytics and machine learning workloads in a modern data platform.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using Databricks (Apache Spark)
- Implement data transformations using PySpark / Spark SQL / Scala
- Integrate data from multiple sources (RDBMS, APIs, streaming, files) into Databricks
- Optimize Spark jobs for performance, scalability, and cost efficiency
- Work with Delta Lake for ACID transactions, data versioning, and time travel (see the sketch after this list)
- Collaborate with data engineers, data scientists, and business teams
- Implement data quality checks, logging, and error handling
- Support real-time and batch processing workloads
- Participate in code reviews and follow best engineering practices
- Ensure data security, governance, and compliance standards
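
To illustrate the kind of pipeline work described above, here is a minimal PySpark sketch of an ETL step that writes to Delta Lake and then reads an earlier version via time travel. The landing path, column names, and Delta location are hypothetical assumptions for illustration, not details of the actual role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: load a hypothetical raw orders feed from a landing zone.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/landing/orders/"))

# Transform: basic cleansing plus a simple data-quality filter.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0)
         .withColumn("ingested_at", F.current_timestamp()))

# Load: write to a Delta table, which provides ACID guarantees.
clean.write.format("delta").mode("overwrite").save("/curated/orders")

# Delta time travel: read the table as of a previous version.
previous = (spark.read.format("delta")
            .option("versionAsOf", 0)
            .load("/curated/orders"))
```
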
Required Skills & Qualifications
- Strong experience with Databricks and Apache Spark
- Proficiency in PySpark, Spark SQL, or Scala
- Experience with Delta Lake
- Hands-on experience with Azure cloud services (ADLS Gen2, Synapse, ADF); a sample ADLS path appears after this list
- Experience building data pipelines and data lakes
- Solid understanding of data modeling and data warehousing concepts
- Experience with Git, CI/CD pipelines
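
As a small illustration of the Azure integration listed above: Databricks reads ADLS Gen2 through `abfss://` URIs. The storage account and container names below are hypothetical, and `spark` is the SparkSession that Databricks notebooks provide automatically.

```python
# Hypothetical ADLS Gen2 location: replace the container ("curated") and
# storage account ("examplestorage") with real values for your workspace.
orders = (spark.read.format("delta")
          .load("abfss://curated@examplestorage.dfs.core.windows.net/orders"))
```
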
Job Types: Full-time, Contract
Pay: $75.00 per hour
Expected hours: 40 per week
Experience:
- Databricks: 5 years (Required)
- Azure: 5 years (Required)
- Banking Domain: 2 years (Required)
Work Location: Hybrid remote in New York, NY