Sr. Staff GPU Software Performance Engineer

Advanced Micro Devices, Inc.
San Jose, CA Full Time
POSTED ON 4/13/2026
AVAILABLE BEFORE 4/7/2027


WHAT YOU DO AT AMD CHANGES EVERYTHING 

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.  Together, we advance your career.  




THE ROLE: 

We train large models across multi‑GPU clusters. Your charter is to make training materially faster and cheaper by leading kernel‑level performance engineering—from math kernels and fused epilogues to cluster‑level throughput—partnering with researchers, framework teams, and infrastructure.  

 

KEY RESPONSIBILITIES: 

  • Own kernel performance: Design, implement, and land high‑impact HIP/C++ kernels (e.g., attention, layernorm, softmax, GEMM/epilogues, fused pointwise) that are wave‑size portable and optimized for LDS, caches, and MFMA units.
  • Lead profiling & tuning: Build repeatable workflows with timelines, hardware counters, and roofline analysis; remove memory bottlenecks; tune launch geometry/occupancy; validate speedups with A/B harnesses.
  • Drive fusion & algorithmic improvements: Identify profitable fusions, tiling strategies, vectorized I/O, shared‑memory/scratchpad layouts, asynchronous pipelines, and warp/wave‑level collectives—while maintaining numerical stability.
  • Influence frameworks & libraries: Upstream or extend performance‑critical ops in PyTorch/JAX/XLA/Triton; evaluate and integrate vendor math libraries; guide compiler/codegen choices for target architectures.
  • Scale beyond one GPU: Optimize P2P and collective comms, overlap compute/comm, and improve data/pipeline/tensor parallelism throughput across nodes.
  • Benchmarking & SLOs: Define and own KPIs (throughput, time‑to‑train, $/step, energy/step); maintain dashboards, perf CI gates, and regression triage.
  • Technical leadership: Mentor senior engineers, set coding/perf standards, lead performance “war rooms,” and partner with silicon/vendor teams on microarchitecture‑aware optimizations.
  • Quality & reliability: Build reproducible perf harnesses, deterministic test modes, and documentation/playbooks so improvements persist release‑over‑release.
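For illustration, the kind of first-pass roofline sanity check the responsibilities above describe can be sketched as follows. This is a minimal, self-contained example; the peak-FLOPS and bandwidth figures are placeholders, not any particular accelerator's specifications.

```python
# Minimal roofline sketch: classify a GEMM-like kernel as memory- or
# compute-bound. Peak figures below are illustrative placeholders.

def arithmetic_intensity_gemm(m, n, k, bytes_per_elem=2):
    """FLOPs per byte moved for C[m,n] = A[m,k] @ B[k,n], one pass, no reuse modeling."""
    flops = 2 * m * n * k                               # one multiply + one add per MAC
    traffic = bytes_per_elem * (m * k + k * n + m * n)  # read A and B, write C
    return flops / traffic

def attainable_tflops(intensity, peak_tflops, peak_bw_tb_s):
    """Roofline: the lower of the compute roof and bandwidth * intensity."""
    return min(peak_tflops, peak_bw_tb_s * intensity)

ai = arithmetic_intensity_gemm(4096, 4096, 4096)        # BF16-sized elements
roof = attainable_tflops(ai, peak_tflops=300.0, peak_bw_tb_s=3.0)
bound = "compute-bound" if roof >= 300.0 else "memory-bound"
print(f"intensity={ai:.1f} FLOP/B, attainable={roof:.1f} TFLOP/s ({bound})")
```

A fused pointwise epilogue, by contrast, typically lands far below the ridge point of such a model, which is why the bullets above emphasize removing memory bottlenecks before chasing compute throughput.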

 

PREFERRED EXPERIENCE: 

  • Experience in systems/HPC/ML performance engineering, with hands‑on GPU kernel work and shipped optimizations in production training or HPC
  • Expert in modern C++ (C++17 or later) and at least one GPU programming model (CUDA, HIP, or SYCL/oneAPI) or a GPU kernel DSL (e.g., Triton); comfortable with templates, memory qualifiers, atomics, and warp/wave‑level collectives
  • Deep understanding of GPU microarchitecture: SIMT execution, occupancy vs. register/scratchpad pressure, memory hierarchy (global/L2/shared or LDS), coalescing, bank conflicts, vectorization, and instruction‑level parallelism
  • Proficiency with profiling & analysis: timelines and counters (e.g., Nsight Systems/Compute, rocprof/Omniperf, VTune/GPA or equivalents), ISA/disassembly inspection, and correlating metrics to code changes
  • Proven track record reducing time‑to‑train or $‑per‑step via kernel and collective‑comms optimizations on multi‑GPU clusters
  • Strong Linux fundamentals (perf/eBPF, NUMA, PCIe/links), build systems (CMake/Bazel), Python, and containerized dev (Docker/Podman)
  • Experience with distributed training (PyTorch DDP/FSDP/ZeRO/DeepSpeed or JAX) and GPU collectives
  • Expertise in mixed precision (BF16/FP16/FP8), numerics, and stability/accuracy validation at kernel boundaries
  • Background in compiler/IR (LLVM/MLIR) or codegen for GPU backends; ability to guide optimization passes with performance goals
  • Hands‑on with cluster orchestration (Slurm/Kubernetes), IB/RDMA tuning, and compute/communication overlap strategies
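The "stability/accuracy validation at kernel boundaries" item above can be sketched with a small, stdlib-only example: a reduced-precision softmax checked against a double-precision reference. The `fp32` helper and the 1e-5 tolerance are illustrative choices, not any team's actual harness.

```python
# Hedged sketch (illustrative, not a production harness): validate a
# reduced-precision kernel result against a float64 reference at the
# kernel boundary. fp32() simulates single-precision rounding via struct.
import math
import struct

def fp32(x):
    """Round a Python float (binary64) to the nearest IEEE-754 binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

def softmax_fp32(xs):
    """Max-subtracted (numerically stable) softmax with fp32 intermediates."""
    m = max(xs)
    exps = [fp32(math.exp(fp32(x - m))) for x in xs]
    total = 0.0
    for e in exps:                 # sequential fp32 accumulation, like a scalar kernel
        total = fp32(total + e)
    return [fp32(e / total) for e in exps]

def max_rel_err(test, ref):
    """Worst-case elementwise relative error between two result vectors."""
    return max(abs(a - b) / max(abs(b), 1e-30) for a, b in zip(test, ref))

xs = [0.1 * i for i in range(-8, 9)]
denom = sum(math.exp(x - max(xs)) for x in xs)
ref = [math.exp(x - max(xs)) / denom for x in xs]   # float64 reference
err = max_rel_err(softmax_fp32(xs), ref)
assert err < 1e-5, f"fp32 softmax drifted beyond tolerance: {err:.2e}"
```

The same pattern scales down to BF16/FP8 boundaries, where looser tolerances and per-tensor error budgets replace the single threshold used here.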

 

ACADEMIC CREDENTIALS: 

  • Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent 

 

LOCATION:

San Jose, CA

 

#LI-MV1

#LI-HYBRID




Benefits offered are described in AMD benefits at a glance.

 

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law.   We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.

 

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position.  AMD’s “Responsible AI Policy” is available here.

 

This posting is for an existing vacancy.
