What are the responsibilities and job description for the Sr DataBricks Engineer position at SNI Technology?
Senior Data Engineer (Databricks Focus)
Location: Omaha, NE (100% Onsite)
Employment Type: Full-Time / Direct Hire
Compensation: $130,000 - $160,000 (flexible based on experience)
Overview
SNI Technology is partnering with a well-established, growth-oriented organization to hire a Senior Data Engineer. This role is ideal for someone who thrives in a hands-on environment, enjoys building scalable data solutions, and wants to play a key role in shaping modern data architecture.
Key Responsibilities
- Design, build, and deploy scalable data pipelines and workflows using Databricks
- Develop and optimize Apache Spark applications for large-scale data processing
- Integrate data platforms with various internal and external data sources
- Ensure data quality, integrity, and governance across systems
- Optimize database performance and manage high-volume datasets
- Administer and maintain the Databricks environment
- Monitor, troubleshoot, and enhance data pipelines and workflows
- Partner with analysts and stakeholders to support advanced analytics and machine learning initiatives
- Implement best practices across version control, testing, and documentation
- Mentor junior team members and contribute to knowledge sharing
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5 years of experience in data engineering or a similar role
- Strong programming skills in Python, Java, or Scala
- Hands-on experience with Apache Spark and big data frameworks
- Experience working with cloud platforms (AWS, Azure, or Google Cloud Platform)
- Solid understanding of data warehousing and ETL/ELT processes
- Strong analytical and problem-solving skills
Preferred / Nice-to-Have Skills
- Experience with Databricks Delta Lake and/or MLflow
- Exposure to Infrastructure as Code (Terraform or similar)
- Familiarity with CI/CD pipelines for data or ML workflows
- Experience with containerization tools such as Docker
- Working knowledge of machine learning concepts