
Software Engineer - Distributed Training

Clockwork Systems
Palo Alto, CA · Full Time
POSTED ON 11/25/2025
AVAILABLE BEFORE 1/25/2026

About Us

Clockwork.io – A Software-Driven Revolution in AI Networking

Clockwork Systems was founded by Stanford researchers and veteran systems engineers who share a vision for redefining the foundations of distributed computing. As AI workloads grow increasingly complex, traditional infrastructure struggles to meet the demands of performance, reliability, and precise coordination. Clockwork is pioneering a software-driven approach to AI networking, delivering deterministic time, ultra-low latency, and seamless scalability for modern distributed systems.

To learn more, visit www.clockwork.io.

About the Role

We are looking for an experienced software engineer to help build, optimize, and maintain large-scale distributed training infrastructure based on the PyTorch ecosystem. This role focuses on production-grade training workflows involving multi-GPU and multi-node orchestration, high-performance communication layers, and advanced parallelism strategies.

You’ll work alongside infrastructure and machine learning teams to ensure training jobs are efficient, scalable, and resilient.

What You'll Do

  • Develop and support distributed PyTorch training jobs using torch.distributed / c10d
  • Integrate and maintain frameworks like Megatron-LM, DeepSpeed, and related LLM training stacks
  • Diagnose and resolve distributed training issues (e.g., NCCL hangs, OOM, checkpoint corruption)
  • Optimize performance across communication, I/O, and memory bottlenecks
  • Implement fault tolerance, checkpointing, and recovery mechanisms for long-running jobs
  • Write tooling and scripts to streamline training workflows and experiment management
  • Collaborate with ML engineers to ensure compatibility with orchestration and container environments (e.g., Slurm, Kubernetes)
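The fault-tolerance and resume/restart work described above follows a common pattern: checkpoints are written atomically so a crash mid-write never leaves a corrupt file, and jobs resume from the last good checkpoint on restart. A minimal, framework-agnostic sketch of that pattern, using only the Python standard library (a real training job would checkpoint model and optimizer state via torch.save rather than JSON):

```python
import json
import os
import tempfile

def save_checkpoint(path, state):
    # Atomic write: dump to a temp file in the same directory, then
    # rename over the target. os.replace is atomic on POSIX, so a crash
    # mid-write can never leave a half-written checkpoint at `path`.
    target_dir = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=target_dir)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def load_or_init(path, init_state):
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return dict(init_state)

# Usage: a toy "training loop" that survives restarts.
ckpt = "ckpt.json"
state = load_or_init(ckpt, {"step": 0})
for _ in range(3):
    state["step"] += 1
    save_checkpoint(ckpt, state)
```

Rerunning the script picks up from the saved step rather than zero; in a multi-node job, only rank 0 would typically write the checkpoint, with a barrier before the other ranks load it.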

What We’re Looking For

  • Deep experience with PyTorch and torch.distributed (c10d)
  • Hands-on experience with at least one of: Megatron-LM, DeepSpeed, or FairScale
  • Proficiency in Python and Linux shell scripting
  • Experience with multi-node GPU clusters using Slurm, Kubernetes, or similar
  • Strong understanding of NCCL, collective communication, and GPU topology
  • Familiarity with debugging tools and techniques for distributed systems

Preferred Skills

  • Experience scaling LLM training across 8+ GPUs and multiple nodes
  • Knowledge of tensor, pipeline, and data parallelism
  • Familiarity with containerized training environments (Docker, Singularity)
  • Exposure to HPC environments or cloud GPU infrastructure
  • Experience with training workload orchestration tools or custom job launchers
  • Comfort with large-scale checkpointing, resume/restart logic, and model I/O

Bonus Skills

  • Profiling tools: PyTorch Profiler, Nsight, nvprof, or equivalent
  • Experience with performance tuning in distributed training environments
  • Contributions to ML infrastructure open-source projects
  • Familiarity with storage, networking, or RDMA/GPU Direct technologies
  • Understanding of observability in ML pipelines (metrics, logs, dashboards)

Enjoy

  • Challenging projects.
  • A friendly and inclusive workplace culture.
  • Competitive compensation.
  • A great benefits package.
  • Catered lunch.


Clockwork Systems is an equal opportunity employer. We are committed to building world-class teams by welcoming bright, passionate individuals from all backgrounds. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, religion, age, sex, sexual orientation, gender identity or expression, national origin, disability, or protected veteran status. We believe diversity drives innovation, and we grow stronger together.
