Dice is the leading career destination for tech experts at every stage of their careers. Our client, Strategic Staffing Solutions, is seeking the following. Apply via Dice today!
Big Data Developer (Spark / Scala / Python)
Location: Charlotte, NC (Hybrid)
Duration: Contract April 2026 to April 2028
Overview
We are seeking an experienced Big Data Developer to support enterprise risk technology initiatives. This role will focus on developing and maintaining large-scale data processing solutions using Spark, Scala, and Python within a distributed big data environment.
The ideal candidate will have strong experience working with big data frameworks, scheduling tools, and source control platforms, and will collaborate closely with engineering teams to support complex data processing and analytics initiatives.
Key Responsibilities
Big Data Development
- Design and develop scalable big data solutions using Spark, Scala, and Python.
- Build and optimize data pipelines and distributed data processing workflows.
- Work with large datasets to support enterprise analytics and risk technology platforms.
- Develop and maintain ETL and data transformation processes.
- Optimize performance for large-scale Spark-based workloads.
- Ensure reliability and scalability across distributed computing environments.
- Implement job scheduling and automation using Autosys or similar scheduling tools.
- Monitor and maintain data processing workflows to ensure operational stability.
- Work closely with engineering teams, data engineers, and technical stakeholders to deliver scalable solutions.
- Participate in design discussions and contribute to architecture decisions for data platforms.
- Maintain source control and code versioning using Git.
- Follow established development standards and collaborate through Agile processes.
Required Qualifications
- 5 years of software engineering or big data development experience
- Strong experience with:
  - Apache Spark
  - Scala
  - Python
- Experience working with large-scale data processing systems
- Experience with Autosys or similar job scheduling tools
- Experience using Git or other version control platforms
Preferred Qualifications
- Experience working in financial services or enterprise data environments
- Experience building big data pipelines or ETL frameworks
- Familiarity with distributed computing and data platform architecture
Key Skills
- Apache Spark
- Scala
- Python
- Big Data Development
- ETL / Data Pipelines
- Autosys Scheduling
- Git Version Control
- Distributed Data Processing