What are the responsibilities and job description for the DevOps Engineer - Data Ops position at AI Cybersecurity Company?
We're Hiring:
DevOps Engineer – DataOps (SF Bay Area)
Do you get excited about turning complex data infrastructure into platforms that just work? We're building the next generation of AI-driven security products and looking for a DataOps Engineer to enhance developer productivity by building scalable data platforms. You'll craft scalable, high-performance enterprise data platforms that feel effortless to use, working side by side with data engineers and platform teams to translate vision into elegant, maintainable solutions.
About Us
We’re a well-funded AI startup ($25M seed round) in the San Francisco Bay Area, led by serial entrepreneurs with decades of success in cybersecurity (achieving > $3B valuations). We have paying customers and are partnering with Fortune 500 companies on a mission to transform the cybersecurity landscape with cutting-edge AI, including AI agents and Generative AI.
Why This Role Matters
This role focuses on automating workflows, improving platform reliability, and supporting data engineering teams with efficient development and deployment practices. Strong DataOps multiplies data engineering productivity, which directly drives product success and customer confidence.
What You’ll Do
- Design, deploy, and operate scalable data platforms and pipelines, primarily on Azure (Databricks, ADF, ADLS)
- Build, manage, and optimize Apache Spark clusters and workloads for batch and streaming data processing across Azure and AWS environments
- Implement CI/CD pipelines for data engineering code, Spark jobs, and pipeline configurations using Azure DevOps/GitHub Actions
- Automate infrastructure using Infrastructure as Code (Terraform) and manage containerized workloads with Docker and Kubernetes
- Monitor data pipelines and platforms to ensure data reliability, quality, observability, and cost optimization across Azure and AWS data platforms
- Enforce security, governance, and best practices, collaborating closely with data engineers and platform teams in Azure-first, multi-cloud environments
What We’re Looking For
- 6+ years of professional experience in data engineering, DataOps, or data platform engineering roles
- Proven experience supporting production-grade data platforms in enterprise environments
- Proven ability to design, build, deploy, and maintain scalable data pipelines (ETL/ELT)
- Deep understanding of Apache Spark for batch and streaming workloads
- Experience creating, configuring, and managing Spark clusters, including performance tuning and cost optimization
- Practical experience with at least one major cloud provider: AWS, Azure, or GCP
- Strong experience using Terraform for infrastructure automation
- Proven ability to diagnose and resolve system and infrastructure issues
- Bonus Skill: Experience deploying and managing Spark workloads on Azure Databricks or Azure Synapse
Why Join Us?
- Early-stage impact with the stability of $25M in seed funding
- World-class leadership across AI, Engineering, and Product
- Traction with Fortune 500s already in motion
- Highly competitive comp & equity
- Health, wellness, and professional development benefits
- Access to the latest tools in AI/ML development
Work Location – San Jose, CA (5 days in the office, founding team)
Let’s Build the Future of Cybersecurity Together
If you're excited about AI, enterprise cybersecurity, and shaping a category from the ground up, we'd love to hear from you.