What are the responsibilities and job description for the Software Engineer - Model Performance position at Baseten?
About Baseten
Baseten provides the infrastructure, tooling, and expertise needed to bring great AI products to market - fast. The company is backed by top investors.
THE ROLE
Are you passionate about advancing the application of artificial intelligence? We are looking for a Software Engineer focused on ML performance to join our team. This role is ideal for someone who thrives in a fast-paced startup environment and is eager to make significant contributions to the field of LLM inference. If you are a backend engineer who loves making things faster and is excited about open-source ML models, we look forward to your application.
EXAMPLE INITIATIVES
You'll get to work on these types of projects as part of our Model Performance team:
Responsibilities
- Implement, refine, and productionize cutting-edge techniques (quantization, speculative decoding, KV cache reuse, chunked prefill, and LoRA) for ML model inference and infrastructure; a toy quantization example appears in the sketch after this list.
- Dive deep into the underlying codebases of TensorRT, TensorRT-LLM, PyTorch, vLLM, SGLang, CUDA, and other libraries to debug ML performance issues.
- Apply and scale optimization techniques across a wide range of ML models, particularly large language models.
- Collaborate with a diverse team to design and implement innovative solutions.
- Own projects from idea to production.
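For context on the optimization work described above, here is a minimal, self-contained sketch of one of the listed techniques, post-training dynamic quantization, using stock PyTorch. The `TinyMLP` module and all sizes are illustrative assumptions standing in for an LLM feed-forward block, not Baseten code.

```python
# Minimal sketch, assuming stock PyTorch on CPU: dynamic INT8 quantization
# of the Linear layers in a toy feed-forward block. Everything here is
# illustrative; it is not Baseten's implementation.
import torch
import torch.nn as nn


class TinyMLP(nn.Module):
    """Stand-in for an LLM feed-forward block (hypothetical sizes)."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.act = nn.GELU()
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(self.act(self.up(x)))


model = TinyMLP().eval()

# quantize_dynamic stores Linear weights as int8 and dequantizes on the fly,
# trading a little accuracy for lower memory traffic at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 16, 512)
with torch.no_grad():
    baseline = model(x)
    approx = quantized(x)

# A quick check of the accuracy cost introduced by quantization.
print("max abs error:", (baseline - approx).abs().max().item())
```

Production LLM serving stacks typically use weight-only INT8/INT4 or FP8 kernels (e.g., via TensorRT-LLM) rather than PyTorch's dynamic quantization, but the accuracy-versus-efficiency trade-off being measured is the same.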
Requirements
- Bachelor's, Master's, or Ph.D. degree in Computer Science, Engineering, Mathematics, or a related field.
- Experience with one or more general-purpose programming languages, such as Python or C++.
- Familiarity with LLM optimization techniques (e.g., quantization, speculative decoding, continuous batching); a toy speculative decoding loop appears in the sketch after this list.
- Strong familiarity with ML libraries, especially PyTorch, TensorRT, or TensorRT-LLM.
- Demonstrated interest and experience in LLMs.
- Deep understanding of GPU architecture.
- Bonus:
- Proficiency in enhancing the performance of software systems, particularly in the context of large language models (LLMs).
- Experience with CUDA or similar technologies.
- Deep understanding of software engineering principles and a proven track record of developing and deploying AI/ML inference solutions.
- Experience with Docker and Kubernetes.
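To make the speculative decoding requirement above concrete, below is a toy, runnable sketch of the greedy draft-then-verify loop. The `draft_logits` and `target_logits` callables are made-up stand-ins that return random logits; they are not a real model or library API, and a production implementation would verify all drafted positions in a single batched forward pass of the target model.

```python
# Toy sketch of greedy speculative decoding over a 4-token vocabulary.
# Both "models" below are hypothetical stand-ins that return random logits.
import torch

VOCAB_SIZE = 4


def draft_logits(tokens: list[int]) -> torch.Tensor:
    """Cheap draft model (stand-in)."""
    return torch.randn(VOCAB_SIZE)


def target_logits(tokens: list[int]) -> torch.Tensor:
    """Expensive target model (stand-in). A real system would score all
    drafted positions in one batched forward pass; here it is called
    per position purely for clarity."""
    return torch.randn(VOCAB_SIZE)


def speculative_step(tokens: list[int], k: int = 4) -> list[int]:
    """Propose k tokens with the draft model, then verify them greedily
    against the target model: accept matches, replace the first mismatch
    with the target's choice, and stop."""
    proposal = list(tokens)
    for _ in range(k):
        proposal.append(int(draft_logits(proposal).argmax()))

    accepted = list(tokens)
    for i in range(len(tokens), len(proposal)):
        target_choice = int(target_logits(proposal[:i]).argmax())
        if proposal[i] == target_choice:
            accepted.append(proposal[i])    # draft token verified
        else:
            accepted.append(target_choice)  # correction from the target model
            break
    return accepted


print(speculative_step([0, 1, 2]))
```

The speedup in real systems comes from the draft model being much cheaper per token than the target model and from batched verification, so several tokens can be committed per target-model forward pass.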
Benefits
- Competitive compensation package.
- This is a unique opportunity to be part of a rapidly growing startup in one of the most exciting engineering fields of our era.
- An inclusive and supportive work culture that fosters learning and growth.
- Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.