Lead Software Engineer, Model Serving Platform

Sciforium
San Francisco, CA (Full Time)
POSTED ON 12/8/2025 CLOSED ON 1/5/2026

Sciforium is an AI infrastructure company developing next-generation multimodal AI models and a proprietary, high-efficiency serving platform. Backed by multi-million-dollar funding and direct sponsorship from AMD, with hands-on support from AMD engineers, the team is scaling rapidly to build the full stack powering frontier AI models and real-time applications.

We offer a fast-moving, collaborative environment where engineers have meaningful impact, learn quickly, and tackle deep technical challenges across the AI systems stack.

Role Overview

This is a rare chance to help architect and lead the development of Sciforium’s next-generation model serving platform, the high-performance engine that will bring a multimodal, highly efficient foundation model to market. As a senior technical leader, you’ll not only build core components yourself but also guide and mentor other engineers, influencing engineering direction, standards, and execution quality.

You will learn and shape the full AI stack: from GPU kernels and quantized execution paths to distributed serving, scheduling, and the APIs that power real-time AI applications. If you enjoy deep systems work, thrive on ownership, and want to lead engineers in building foundational AI infrastructure, this role puts you at the center of Sciforium’s mission and growth.

Key Responsibilities

  • Lead the technical direction of the model serving platform, owning architecture decisions and guiding engineering execution.
  • Build core serving components including execution runtimes, batching, scheduling, and distributed inference systems.
  • Develop high-performance C++ and CUDA/HIP modules, including custom GPU kernels and memory-optimized runtimes.
  • Collaborate with ML researchers to productionize new multimodal models and ensure low-latency, scalable inference.
  • Build Python APIs and services that expose model capabilities to downstream applications.
  • Mentor and support other engineers through code reviews, design discussions, and hands-on technical guidance.
  • Drive performance profiling, benchmarking, and observability across the inference stack.
  • Ensure high reliability and maintainability through testing, monitoring, and engineering best practices.
  • Troubleshoot and resolve complex issues across GPU, runtime, and service layers.

Must-Haves

  • Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience.
  • 5+ years of experience designing and building scalable, reliable backend systems or distributed infrastructure.
  • Strong understanding of LLM inference mechanics (prefill vs. decode, batching, KV cache).
  • Experience with Kubernetes, Ray, and containerization.
  • Strong proficiency in C++ and Python.
  • Strong debugging, profiling, and performance optimization skills at the system level.
  • Ability to collaborate closely with ML researchers and translate model or runtime requirements into production-grade systems.
  • Effective communication skills and the ability to lead technical discussions, mentor engineers, and drive engineering quality.
  • Comfortable working from the office and contributing to a fast-moving, high-ownership team culture.

Nice to Have

  • Experience with ML systems engineering, distributed GPU scheduling, and open-source inference engines such as vLLM, SGLang, or TensorRT-LLM.
  • Experience building large-scale ML/MLOps infrastructure.
  • Proficiency in CUDA or ROCm and experience with GPU profiling tools
  • Experience at an AI/ML startup, research lab, or Big Tech infrastructure/ML team.
  • Familiarity with multimodal model architectures, raw-byte models, or efficient inference techniques.
  • Contributions to open-source ML or HPC infrastructure

Why Join Us

  • Opportunity to build frontier-scale AI infrastructure powering next-generation LLMs and multimodal models.
  • Work with top-tier engineers and researchers across systems, GPUs, and ML frameworks.
  • Tackle high-impact performance and scalability challenges in training and inference.
  • Access state-of-the-art GPU clusters, datasets, and tooling.
  • Opportunity to publish, patent, and push the boundaries of modern AI.
  • Join a culture of innovation, ownership, and fast execution in a rapidly scaling AI organization.

Benefits Include

  • Medical, dental, and vision insurance
  • 401(k) plan
  • Daily lunch, snacks, and beverages
  • Flexible time off
  • Competitive salary and equity

Equal opportunity

Sciforium is an equal opportunity employer. All applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability status.
