What are the responsibilities and job description for the Software Engineer – GPU Kernel position at FriendliAI?
About The Job
FriendliAI is looking for a GPU Kernel Engineer to design, build, and optimize the low-level compute kernels that power our large-scale, GPU-accelerated AI inference platform. You will deliver world-class inference speed across both NVIDIA and AMD GPUs. With our recent $20M funding, we are scaling our team to meet market demand.
This is a deeply technical, high-impact role where you will write GPU code and implement advanced optimizations. As part of our engine team, you will contribute directly to the company’s proprietary inference engine, which supports over 450,000 models on Hugging Face. You will work with the inventors of continuous batching and collaborate with the platform team to deploy your work into production.
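The continuous batching mentioned above can be sketched in a few lines of Python. This is only a toy model of the idea (the request lengths, the `max_batch` limit, and the one-token-per-step loop are illustrative assumptions, not FriendliAI's implementation): finished sequences free their batch slot immediately, and queued requests join at the next decode step instead of waiting for the whole batch to drain.

```python
# Toy sketch of continuous (in-flight) batching. Each decode step generates
# one token for every active request; completed requests leave mid-flight and
# queued requests are admitted into the freed slots. All numbers are made up.
from collections import deque

def run_continuous_batching(request_lengths, max_batch=3):
    queue = deque(enumerate(request_lengths))  # (request id, tokens to generate)
    active = {}                                # request id -> tokens remaining
    steps = 0
    completions = []                           # request ids in completion order
    while queue or active:
        # Admit queued requests into any free batch slots.
        while queue and len(active) < max_batch:
            rid, length = queue.popleft()
            active[rid] = length
        # One decode step: every active request emits one token.
        steps += 1
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                del active[rid]                # slot freed immediately
                completions.append(rid)
    return steps, completions

steps, order = run_continuous_batching([2, 5, 3, 1])
```

With static batching the same workload would take 6 steps (the batch of three waits for its longest request before request 3 can start); the continuous scheduler finishes in 5.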
Key Responsibilities
- Design, implement, and optimize high-performance GPU kernels for AI inference (e.g., GEMM, attention, routing)
- Develop and maintain GPU code in CUDA and C++, including low-level assembly when needed
- Implement reduced-precision and quantized kernels (FP8/FP4) for low-latency or high-throughput inference
- Benchmark and ensure cross-vendor performance parity between NVIDIA and AMD hardware
- Contribute to internal GPU libraries and tune performance of performance-critical components
- Accelerate multi-modal model pipelines
- Investigate and integrate next-generation GPU features
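As a rough illustration of the kernel-level work these responsibilities describe, the core idea behind a high-performance GEMM kernel is tiling: compute C = A @ B one block at a time so that, on a GPU, each tile of A and B can be staged in fast shared memory and reused many times. The pure-Python stand-in below mimics only the loop structure (sizes and tile shape are arbitrary assumptions, not production code):

```python
# Toy illustration of the tiling (blocking) strategy used in GEMM kernels.
# Real GPU kernels map the i0/j0 tile loops to thread blocks and stage the
# A/B tiles in shared memory; here we just reproduce the loop nest in Python.

def gemm_tiled(A, B, n, tile=4):
    """C = A @ B for n x n matrices stored as lists of lists."""
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):              # tile of C rows
        for j0 in range(0, n, tile):          # tile of C columns
            for k0 in range(0, n, tile):      # tile of the reduction dim
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, n)):
                        acc = C[i][j]
                        for k in range(k0, min(k0 + tile, n)):
                            acc += A[i][k] * B[k][j]
                        C[i][j] = acc
    return C
```

The payoff of this structure is memory reuse: each tile of A and B is read once per tile of C rather than once per output element, which is what makes shared-memory staging worthwhile on real hardware.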
Qualifications
- 3 years of experience in GPU programming, HPC, or performance-critical systems
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field
- Strong proficiency in CUDA for NVIDIA GPUs or ROCm/HIP for AMD GPUs
- Deep understanding of GPU architecture: warps, threads, memory hierarchy, synchronization, and latency-throughput trade-offs
- Proficiency in C++
- Experience with GPU profiling and performance tuning
- Strong numerical background with understanding of precision trade-offs and quantization techniques
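To give a feel for the precision trade-offs the last bullet refers to, here is a minimal symmetric per-tensor quantization sketch. It uses int8 for simplicity (an illustrative assumption; real FP8 formats such as E4M3/E5M2 behave differently, and this is not the engine's actual scheme):

```python
# Minimal symmetric per-tensor quantization: scale so the largest magnitude
# maps to 127, round to integers, and dequantize by multiplying back.
# Per-element error is bounded by about half the scale.

def quantize_int8(xs):
    scale = max(abs(x) for x in xs) / 127.0
    q = [max(-128, min(127, round(x / scale))) for x in xs]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

xs = [0.02, -1.5, 0.7, 3.0, -0.001]
q, s = quantize_int8(xs)
recovered = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(xs, recovered))
assert max_err <= 0.5 * s + 1e-12
```

The trade-off is visible directly: a larger dynamic range inflates the scale, which coarsens the grid every value is snapped to; per-channel or per-block scales are the usual mitigation.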
Preferred Qualifications
- Experience optimizing transformer, multi-modal, or Mixture-of-Experts (MoE) architectures at the kernel level
- Familiarity with the latest GPU libraries and frameworks (CUTLASS, Triton, …)
- Inter-GPU communication programming experience
- Open-source contributions related to GPU performance or ML acceleration
- Research or conference presentations on GPU optimization, HPC, or numerical computing
Benefits
- Flexible working hours
- Daily lunch and dinner provided; unlimited snacks and beverages
- Supportive and highly collaborative work environment
- Health check-up support and top-tier equipment/hardware support
- A front-row seat to the generative AI infrastructure revolution
- Competitive compensation, startup equity, health insurance, and other benefits
About FriendliAI
FriendliAI is building the world’s best AI inference platform that makes large language and multi-modal models fast, efficient, and deployable at scale. We power high-throughput, low-latency AI workloads for organizations worldwide and integrate directly with Hugging Face, giving developers instant access to over 500,000 open-source models.
We are a small, fast-moving team doing work that matters at one of the most exciting moments in the history of technology. With our world-class inference engine, we are building a platform that the AI industry can actually rely on.