
LLM Training Frameworks and Optimization Engineer

togetherai
San Francisco, CA · Full Time
POSTED ON 12/15/2025
AVAILABLE BEFORE 2/15/2026

About the Role

At Together.ai, we are building cutting-edge infrastructure to enable efficient and scalable training of large language models (LLMs). We focus on optimizing training frameworks, algorithms, and infrastructure to push the boundaries of AI performance, scalability, and cost-efficiency.

We are seeking an LLM Training Frameworks and Optimization Engineer to drive innovation in the development and optimization of distributed training frameworks. In this role, you will ensure that our LLM training pipelines are robust, efficient, and capable of handling the complexities of large-scale distributed systems.

Responsibilities

  • Framework Development and Optimization:
    • Design, implement, and optimize distributed training frameworks tailored for large language models.
    • Develop custom modules, plugins, and features to enhance framework scalability and performance.
  • Algorithmic and Systems Optimization:
    • Optimize communication patterns (e.g., gradient synchronization, all-reduce) in distributed training.
    • Implement techniques such as mixed precision, tensor parallelism, pipeline parallelism, and sharded training (a minimal data-parallel sketch follows this list).
  • Performance Tuning:
    • Conduct in-depth profiling and debugging of training jobs to identify and resolve bottlenecks.
    • Collaborate with hardware teams to optimize performance for GPUs, TPUs, and other accelerators.
  • Scalability and Resilience:
    • Ensure training systems scale efficiently to thousands of nodes and petabytes of data.
    • Develop resilience mechanisms for fault-tolerant and checkpointed training pipelines.
  • Collaboration and Support:
    • Work closely with researchers, data engineers, and platform teams to ensure training frameworks meet model and workload requirements.
    • Provide guidance and tools to improve the overall efficiency of the LLM development lifecycle.
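
As a point of reference for the items above (an illustrative sketch only, not Together AI's internal stack), the snippet below shows a minimal data-parallel training step in PyTorch: DistributedDataParallel averages gradients across ranks with a bucketed all-reduce during backward(), and autocast/GradScaler provide mixed precision. The model, data, and hyperparameters are placeholders.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # One process per GPU; torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model: a real LLM would use ZeRO sharding or
    # tensor/pipeline parallelism instead of plain DDP.
    model = DDP(torch.nn.Linear(4096, 4096).cuda(), device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()  # loss scaling for fp16

    for step in range(10):
        x = torch.randn(8, 4096, device="cuda")  # placeholder batch
        with torch.cuda.amp.autocast(dtype=torch.float16):
            loss = model(x).pow(2).mean()  # placeholder loss
        optimizer.zero_grad(set_to_none=True)
        # backward() triggers DDP's bucketed all-reduce, which averages
        # gradients across ranks; overlapping that communication with
        # compute is one of the patterns to profile and tune.
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

A script like this would typically be launched with torchrun --nproc_per_node=<gpus>; at LLM scale, plain DDP is usually replaced by ZeRO-style sharding or tensor/pipeline parallelism (e.g., DeepSpeed or Megatron-LM) so that parameters and optimizer state no longer need to fit on a single device.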

Requirements

Must-Have:

  • Experience:
    • 5 years of experience in deep learning frameworks, distributed systems, or machine learning infrastructure.
  • Technical Skills:
    • Expertise in distributed training frameworks (e.g., PyTorch DDP, DeepSpeed, Megatron-LM, TensorFlow XLA).
    • Strong understanding of parallelism techniques (e.g., data, tensor, pipeline, and ZeRO-based parallelism).
    • Familiarity with GPU/TPU hardware and deep learning performance optimizations.
  • Programming:
    • Proficiency in Python, and in C or CUDA for high-performance computing.
  • Optimization Techniques:
    • Experience with memory optimization techniques (e.g., activation checkpointing, gradient sharding); an activation-checkpointing sketch follows this list.
    • Knowledge of training dynamics for large-scale LLMs, including hyperparameter tuning and optimization.
  • Soft Skills:
    • Analytical problem-solving skills and a focus on performance improvement.
    • Strong collaboration and communication skills across teams.
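
As a concrete illustration of the memory-optimization item above (a toy transformer-style block, not any particular production model), activation checkpointing with torch.utils.checkpoint discards intermediate activations during the forward pass and recomputes them in backward, trading extra compute for a much smaller activation footprint:

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint


class Block(nn.Module):
    """Toy transformer-style block, used only to illustrate checkpointing."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


class Model(nn.Module):
    def __init__(self, depth: int = 12, dim: int = 1024):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))

    def forward(self, x):
        for block in self.blocks:
            # Each block's activations are dropped in forward and recomputed
            # in backward, so peak activation memory stops growing with depth.
            x = checkpoint(block, x, use_reentrant=False)
        return x


model = Model().cuda()  # assumes a CUDA device is available
x = torch.randn(4, 512, 1024, device="cuda", requires_grad=True)
model(x).mean().backward()
```

The cost is roughly one extra forward pass per checkpointed block; gradient sharding (e.g., ZeRO stages 2/3) attacks the other half of the memory budget by partitioning gradients and optimizer state across ranks.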

Nice-to-Have:

  • Familiarity with graph optimization and compiler-level performance tuning.
  • Contributions to open-source deep learning or distributed training projects.
  • Experience with low-level hardware optimizations (e.g., kernel fusion, custom CUDA kernels).
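
As an illustration of the last item (written in Triton purely as an example of kernel fusion; a hand-written CUDA kernel would follow the same idea), a fused bias-add + ReLU runs as a single kernel launch instead of two separate elementwise passes over memory:

```python
import torch
import triton
import triton.language as tl


@triton.jit
def fused_bias_relu_kernel(x_ptr, bias_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    b = tl.load(bias_ptr + offsets, mask=mask)
    # Fused: add and ReLU in one pass, one read and one write per element.
    tl.store(out_ptr + offsets, tl.maximum(x + b, 0.0), mask=mask)


def fused_bias_relu(x: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # Assumes x and bias are contiguous CUDA tensors of the same shape.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    fused_bias_relu_kernel[grid](x, bias, out, n, BLOCK_SIZE=1024)
    return out


x = torch.randn(1 << 20, device="cuda")
bias = torch.randn_like(x)
torch.testing.assert_close(fused_bias_relu(x, bias), torch.relu(x + bias))
```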

 

About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $160,000 - $230,000, plus equity and benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our privacy policy at https://www.together.ai/privacy  

 
