Staff Software Engineer, GPU Infrastructure (HPC)

Cohere
San Francisco, CA · Full Time
Posted on 11/18/2025 · Available before 12/17/2025
Who are we?

Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each of us is responsible for helping to increase the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

Why this team?

The internal infrastructure team is responsible for building the world-class infrastructure and tools used to train, evaluate, and serve Cohere's foundation models. By joining our team, you will work in close collaboration with AI researchers to support their cutting-edge AI workloads, with a strong focus on stability, scalability, and observability. You will be responsible for building and operating GPU superclusters across multiple clouds. Your work will directly accelerate the development of the industry-leading AI models that power North, Cohere's platform.

We’re hiring software engineers at multiple levels. Whether you’re early in your career or a seasoned staff engineer, you’ll find opportunities to grow and make an impact here.

Please note: all of our infrastructure roles require participation in a 24x7 on-call rotation, and you are compensated for your time on call.

As a Staff Software Engineer, You Will

  • Build and scale ML-optimized HPC infrastructure: Deploy and manage Kubernetes-based GPU/TPU superclusters across multiple clouds, ensuring high-throughput, low-latency performance for AI workloads (a brief sketch of this kind of work follows this list).
  • Optimize for AI/ML training: Collaborate with cloud providers to fine-tune infrastructure for cost efficiency, reliability, and performance, leveraging technologies like RDMA, NCCL, and high-speed interconnects.
  • Troubleshoot and resolve complex issues: Proactively identify and resolve infrastructure bottlenecks, performance degradation, and system failures to ensure minimal disruption to AI/ML workflows.
  • Enable researchers with self-service tools: Design intuitive interfaces and workflows that allow researchers to monitor, debug, and optimize their training jobs independently.
  • Drive innovation in ML infrastructure: Work closely with AI researchers to understand emerging needs (e.g., JAX, PyTorch, distributed training) and translate them into robust, scalable infrastructure solutions.
  • Champion best practices: Advocate for observability, automation, and infrastructure-as-code (IaC) across the organization, ensuring systems are maintainable and resilient.
  • Mentor and collaborate: Share expertise through code reviews, documentation, and cross-team collaboration, fostering a culture of knowledge transfer and engineering excellence.
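
To make the first responsibility above concrete, here is a minimal, illustrative sketch (in Python, using the official Kubernetes client) of submitting a GPU training pod with NCCL-friendly settings. It is not Cohere's actual tooling; the image name, namespace, GPU count, and environment values are hypothetical placeholders.

    # Minimal sketch: launch a single-node GPU training pod with NCCL settings.
    # Image, namespace, and values below are illustrative placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

    container = client.V1Container(
        name="trainer",
        image="registry.example.com/train:latest",              # hypothetical image
        command=["python", "train.py"],
        env=[
            client.V1EnvVar(name="NCCL_DEBUG", value="INFO"),    # surface NCCL diagnostics
            client.V1EnvVar(name="NCCL_IB_DISABLE", value="0"),  # keep RDMA/InfiniBand enabled
        ],
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "8"}),  # a full 8-GPU node
    )

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-train-demo", namespace="research"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="research", body=pod)

In practice this layer also involves topology-aware scheduling, node health checks, and multi-node rendezvous, but the sketch shows the part of the stack the role operates at.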

You May Be a Good Fit If You Have

  • Deep expertise in ML/HPC infrastructure: Experience with GPU/TPU clusters, distributed training frameworks (JAX, PyTorch, TensorFlow), and high-performance computing (HPC) environments (see the training sketch after this list).
  • Kubernetes at scale: Proven ability to deploy, manage, and troubleshoot cloud-native Kubernetes clusters for AI workloads.
  • Strong programming skills: Proficiency in Python (for ML tooling) and Go (for systems engineering), with a preference for open-source contributions over reinventing solutions.
  • Low-level systems knowledge: Familiarity with Linux internals, RDMA networking, and performance optimization for ML workloads.
  • Research collaboration experience: A track record of working closely with AI researchers or ML engineers to solve infrastructure challenges.
  • Self-directed problem-solving: The ability to identify bottlenecks, propose solutions, and drive impact in a fast-paced environment.
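
For context on the workloads these skills support, below is a minimal, illustrative sketch of multi-GPU data-parallel training in PyTorch over the NCCL backend. The model and training loop are toy placeholders, and it assumes launch via torchrun, which sets LOCAL_RANK and related environment variables.

    # Minimal sketch: data-parallel training over NCCL; model and data are toys.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")    # GPU collectives over NCCL (RDMA-capable fabrics)
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
        opt = torch.optim.SGD(model.parameters(), lr=1e-3)

        for _ in range(10):                        # toy training loop
            x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
            loss = model(x).pow(2).mean()
            opt.zero_grad()
            loss.backward()                        # gradients all-reduced across ranks via NCCL
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

A typical launch would look like torchrun --nproc_per_node=8 train.py; the infrastructure work described in this posting is what keeps jobs like this fast and reliable at supercluster scale.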

If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply!

We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-Time Employees At Cohere Enjoy These Perks

🤝 An open and inclusive culture and work environment

🧑‍💻 Work closely with a team on the cutting edge of AI research

🍽 Weekly lunch stipend, in-office lunches & snacks

🦷 Full health and dental benefits, including a separate budget to take care of your mental health

🐣 100% Parental Leave top-up for up to 6 months

🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement

🏙 Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend

✈️ 6 weeks of vacation (30 working days!)
