What are the responsibilities and job description for the Data Platform Engineer, Consultant position at Releady?
Backed by a major non-profit healthcare consortium, our client is a newly formed, well-funded healthcare technology spin-off on a mission to fix a fundamentally broken industry. We are building the universal operating system for healthcare: an AI-native, event-driven platform connecting payers, providers, and patients. This isn't about maintaining legacy systems; it's about building a disruptive, intelligent next-generation data platform from the ground up within a modern, enterprise-scale Microsoft/Azure ecosystem, shifting toward a fully automated, code-driven environment. This role is focused entirely on tools, infrastructure, and platform engineering, not data product development.
You will own the core orchestration and platform layer, designing and scaling the foundation that enables reliable, high-performance data pipelines across the organization. This is a hands-on role for a builder who can turn infrastructure into a product and drive automation at scale within a rapidly growing commercialization data ecosystem.
*Must be eligible to work on W2 without sponsorship. Not eligible for C2C.
- Pay Rate: $80 – $90/hr
- Duration: 6-month contract-to-hire
- Location: Hybrid; 2x/week in office — San Diego, Long Beach, Sacramento, Rancho Cordova, or Oakland (SF Bay Area)
Responsibilities:
- Own and build the Airflow platform, including deployment, scaling, and long-term support
- Design and manage Kubernetes (AKS) clusters to run Airflow and related workloads
- Define and implement CI/CD and GitOps strategies using Azure DevOps
- Build infrastructure using Terraform (or similar IaC tools)
- Establish and maintain the core platform layer that supports data pipelines and data products
- Work closely with engineering teams to ensure platform reliability, scalability, and usability
- Help transition the environment from legacy tools (e.g., Tidal) to modern orchestration (Airflow)
Current Environment:
- Current / legacy stack: Tidal, Snowflake, dbt Cloud
- Target state:
  - Apache Airflow for orchestration
  - Azure DevOps for CI/CD
  - AKS (Kubernetes) for compute and workload management
  - Git-driven deployments (GitOps)
  - Terraform for infrastructure as code
Requirements:
- Strong background in platform engineering, DevOps, or infrastructure engineering
- Proven experience building and owning Airflow environments
- Kubernetes (AKS preferred), including cluster design, deployment, and management
- Docker and containerization
- CI/CD pipelines and GitOps practices
- Infrastructure as Code (Terraform preferred; similar tools acceptable)
- Solid experience working within the Azure cloud ecosystem
- Ability to design, build, and own systems end-to-end in a highly autonomous environment
Nice to Have:
- Experience migrating from legacy schedulers (e.g., Tidal) to Airflow
- Exposure to data platforms (Snowflake, dbt)
- Experience in highly regulated environments (healthcare, etc.)