What are the responsibilities and job description for the Lead Cloud Engineer - Apache Spark / Google Cloud Platform / Python / Microservices position at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Strategic Staffing Solutions, is seeking the following. Apply via Dice today!
STRATEGIC STAFFING SOLUTIONS HAS AN OPENING!
This is a Contract Opportunity with our company that MUST be worked on a W2 Only. No C2C eligibility for this position. Visa Sponsorship is Available! The details are below.
Beware of scams. S3 never asks for money during its onboarding process.
Job title: Lead Cloud Engineer - Apache Spark / Google Cloud Platform / Python / Microservices
Location: Charlotte, NC
Hybrid (some on-site work)
Contract Length: 12 months
Job ref# 245704
Senior hands-on engineer supporting the Model Risk & Finance platform in a hybrid cloud environment (Google Cloud Platform, OpenShift/Kubernetes).
This is a backend-focused role centered on distributed data processing and platform engineering, not UI development.
Key Responsibilities
- Build, support, and enhance distributed data platforms and backend systems
- Develop APIs, workflows, and platform components using Python
- Work on large-scale Spark/PySpark data processing systems
- Support and optimize Kubernetes/OpenShift-based environments
- Contribute to CI/CD pipelines and platform automation
- Debug, troubleshoot, and optimize distributed systems at scale
- Support ongoing platform enhancements post cloud migration
Required Skills
- Apache Spark / PySpark (required)
- Google Cloud Platform (GCP) (strongly preferred)
- Kubernetes / OpenShift
- Python (Django, APIs)
- CI/CD: GitHub Actions, Helm, Harness
Project Scope
- Migration from Hadoop to Google Cloud Platform
- Build and support a hybrid cloud platform (PyFarm)
- Ongoing platform engineering and optimization after migration
Must-Have Experience
- Spark at scale (production experience)
- Hands-on Google Cloud Platform experience (not just exposure)
- Kubernetes / OpenShift
- Python and microservices development
- Debugging and performance tuning of distributed systems
Nice to Have
- AI/LLM integration experience (building capabilities)
- GPU or platform-level AI exposure
- Hadoop migration experience
Team Structure
- Platform and Application Development team
- Works closely with Data and Support teams
- Team presence in both the US and India
Ideal Candidate Profiles
- Platform Engineer (Data / ML platform)
- Cloud Data Engineer (Spark-heavy)
- Big Data Engineer with Kubernetes and Google Cloud Platform experience