Sr. Product Manager - Runtime Infra, AI/ML, Annapurna Labs

Amazon Web Services (AWS)
Seattle, WA | Full Time
POSTED ON 1/4/2026
AVAILABLE BEFORE 2/27/2026
Description

AWS Trainium is deployed at scale, with millions of chips in production, and has been used for training and inference of frontier models. AWS Neuron is the software stack for Trainium, enabling customers to run deep learning and generative AI workloads with optimal performance and cost efficiency.

AWS Neuron is hiring a Technical Product Manager to work backward from Trainium customers and drive the developer experience for running high-performance ML workloads at scale on AWS Trainium, from getting started with Neuron Deep Learning Containers, AMIs, and AWS services to operating at scale through orchestration, resiliency, and observability.

You will drive the product strategy for how developers interact with Trainium through container ecosystems, resource management platforms, and AWS services. This includes Neuron integration with orchestration tools (SLURM, Kubernetes), AWS services (EKS, SageMaker), Neuron Deep Learning Containers and AMIs, and Linux distribution support. You will also drive the strategy for the resiliency and observability tools that provide system diagnostics, performance monitoring, health monitoring, automated recovery, and telemetry, allowing customers to operate AI training and inference workloads with maximum uptime and efficiency, and you will shape how the Neuron Runtime System interacts with ML frameworks to ensure scalable, high-performance execution of models.
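
For illustration only, here is a minimal, hypothetical sketch of the kind of host-level health check that the resiliency and observability work described above would productize. It shells out to the neuron-ls CLI from the Neuron tools package; the output check and the alert hook are assumptions for the example, and a real fleet would rely on Neuron's telemetry and monitoring tooling rather than a script like this.

    # Hypothetical host health-check sketch: verify that Neuron devices are visible.
    # The output check below is a rough assumption for illustration only.
    import subprocess

    def neuron_devices_visible() -> bool:
        """Return True if `neuron-ls` runs cleanly and appears to list devices."""
        try:
            result = subprocess.run(
                ["neuron-ls"], capture_output=True, text=True, timeout=30
            )
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return False
        # A non-zero exit code or empty output suggests the driver or runtime
        # is unhealthy (or not installed) on this host.
        return result.returncode == 0 and bool(result.stdout.strip())

    if __name__ == "__main__":
        if neuron_devices_visible():
            print("Neuron devices visible; basic host health check passed")
        else:
            print("ALERT: no healthy Neuron devices detected on this host")  # hypothetical hook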

To be successful in this role, you will partner with the engineering teams and PMs responsible for training, inference, and performance tools, as well as with Marketing, Business Development, and the Solutions Architects supporting customers. You will also develop a deep understanding of the Trainium architecture and the Neuron Runtime System (including the Neuron Runtime Library, Neuron Kernel Driver, and Collective Communication Stack) to define product strategy and make informed technical decisions.

Key Job Responsibilities

Product Strategy & Vision: Own product strategy and roadmap. Guide trade-offs between performance, scalability, and developer experience. Write PRFAQs and PRDs.

Customer Discovery: Understand deployment challenges, orchestration needs, and infrastructure pain points. Represent customer needs in executive prioritization.

Technical Leadership: Drive alignment across Neuron components (Runtime, Kernel Driver, Collective Communication, container infrastructure) and AWS services. Partner with training, inference, and performance PMs. Write user stories and define success metrics.

Impact: Enable customers (Anthropic, Databricks, AWS teams) to deploy, monitor, and operate ML workloads at scale through container orchestration, resource management, health monitoring, and observability.

About the Team: AWS Neuron

AWS Neuron is the software stack for running deep learning and generative AI workloads on AWS Trainium and AWS Inferentia. It includes a compiler, runtime, training and inference libraries, and developer tools for monitoring, profiling, and debugging. Built on an open source foundation, Neuron supports native PyTorch and JAX frameworks and popular ML libraries without code modification. Neuron enables rapid experimentation, distributed training across multiple chips and nodes, and cost-optimized inference powered by optimized kernels. For performance optimization, Neuron provides the Neuron Kernel Interface (NKI) for direct hardware access and a suite of profiling and debugging tools.
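
As a concrete illustration of the PyTorch integration described above, here is a minimal inference sketch, assuming a Trainium or Inferentia instance with the torch-neuronx package installed (for example, from a Neuron Deep Learning Container or AMI). It compiles a small model for NeuronCores with torch_neuronx.trace and runs it; exact APIs and behavior may vary by Neuron SDK release.

    # Minimal sketch: compile a small PyTorch model for NeuronCores and run it.
    # Assumes torch and torch_neuronx are available on a Neuron-capable instance.
    import torch
    import torch.nn as nn
    import torch_neuronx  # Neuron's PyTorch integration (assumed installed)

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
    example_input = torch.rand(1, 128)

    # Ahead-of-time compilation for NeuronCores; the returned module targets Neuron devices.
    traced = torch_neuronx.trace(model, example_input)

    with torch.no_grad():
        logits = traced(example_input)
    print(logits.shape)  # expected: torch.Size([1, 10])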

Basic Qualifications

  • Bachelor's degree in computer science, engineering, analytics, mathematics, statistics, IT, or an equivalent field
  • 10+ years of industry experience, including at least 5 years in technical product management and 3 years in software development
  • Solid knowledge of container orchestration and Kubernetes
  • Solid knowledge of computer architecture fundamentals and operating system concepts
  • Excellent written and verbal communication skills

Preferred Qualifications

  • Experience with Linux systems and kernel development
  • Track record of driving developer libraries
  • Experience with machine learning accelerators
  • Experience with performance optimization, profiling, and tooling
  • Experience with deep learning model training or inference
  • Experience with distributed computing and parallel processing
  • Hands-on experience with a major ML framework such as JAX or PyTorch
  • Familiarity with AWS services and cloud infrastructure engineering
  • Track record of driving open standards and ecosystem integration

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $136,100/year in our lowest geographic market up to $235,200/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.


Company - Annapurna Labs (U.S.) Inc.

Job ID: A2916516

Salary: $136,100 - $235,200
