Software Engineer, LLM Infrastructure
Etched, LLC Cupertino, CA
$141k-169k (estimate)
Full Time 3 Weeks Ago

Etched, LLC is Hiring a Software Engineer, LLM Infrastructure Near Cupertino, CA

About Etched

Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) supports only transformers, but delivers an order of magnitude higher throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep chain-of-thought reasoning.

Software Engineer, LLM Infrastructure 

Transformer ASICs, like those built by Etched, dramatically improve time-to-first-token latency. For a large model like Llama-3-70B with 2048 input tokens, the TTFT will be single-digit milliseconds (we will announce performance figures publicly at our launch). 
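Time-to-first-token (TTFT) can be measured as the time from request submission until the first streamed token arrives. A minimal sketch, where `fake_stream` is a hypothetical stand-in for any streaming inference API:

```python
import time

def measure_ttft(generate_stream, prompt):
    """Return seconds from request start until the first token arrives.

    `generate_stream` is any callable that yields tokens one at a time;
    here it is a hypothetical stand-in for a real streaming inference API.
    """
    start = time.perf_counter()
    for _token in generate_stream(prompt):
        return time.perf_counter() - start  # stop at the first token
    return None  # the stream produced no tokens

# Toy stand-in stream: a fixed per-token delay.
def fake_stream(prompt, delay=0.005):
    for tok in prompt.split():
        time.sleep(delay)
        yield tok

ttft = measure_ttft(fake_stream, "hello world")
```

Note that an end-to-end measurement like this captures network and serving-stack overhead as well as the chip's own prefill time, which is exactly why the rest of the stack matters.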

However, single-digit-millisecond latency means nothing if the rest of the serving stack adds 100 ms, or if customers never actually use it (or adopt the optimizations into their own stack). You will help make both of these happen: a fast end-to-end stack, and customers who use it.

You will work with our software team to build software for continuous batching, and write world-class interactive documentation (like PyTorch's "Run in Colab" feature) to show customers how it works. You will get this software working on our pre-silicon platform, and port it over to the physical chips once they come back from the fab. You will find creative new ways to improve this latency: can we speculatively decode the user's inputs? Can we preempt sequences if we run out of KV cache space and recompute them later? Can we cache common prefills?
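As a rough illustration of the continuous-batching idea (a simplified sketch, not Etched's implementation): finished sequences leave the batch immediately and waiting requests are admitted the same step, instead of waiting for the whole batch to drain as in static batching.

```python
from collections import deque

def continuous_batching(requests, max_batch, step_fn):
    """Simplified continuous-batching loop.

    `requests` is an iterable of (request_id, tokens_to_generate);
    `step_fn(batch)` performs one decode step for every running sequence.
    """
    waiting = deque(requests)
    running = {}          # request_id -> tokens remaining
    completed = []

    while waiting or running:
        # Admit new requests whenever slots free up (the key difference
        # from static batching, which waits for the whole batch to drain).
        while waiting and len(running) < max_batch:
            rid, n = waiting.popleft()
            running[rid] = n
        step_fn(running)  # one decode step across the whole batch
        for rid in [r for r, n in running.items() if n <= 0]:
            completed.append(rid)
            del running[rid]
    return completed

def decode_step(batch):
    # Toy step: each sequence emits one token.
    for rid in batch:
        batch[rid] -= 1

order = continuous_batching([("a", 1), ("b", 3), ("c", 2)], 2, decode_step)
```

Here "c" is admitted as soon as "a" finishes, so the batch stays full without waiting for "b" to complete.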

Representative projects:

  • Working with emulators like Palladium to develop software for chips while they are being fabricated
  • Developing algorithms for balancing prefill and completion tokens when serving LLMs
  • Profiling network latency when responding to prompts, to help eliminate it in our test environment
  • Developing ways for customers to work with our pre-silicon infrastructure and understand how their workloads will run on it
  • Building tools for Jupyter notebooks to connect to emulated and physical Etched systems
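The prefill/completion balancing problem in the projects above can be framed as a per-step token budget: decode tokens for running sequences are scheduled first (they directly gate latency), and leftover budget goes to chunks of prefill for queued requests. A hypothetical illustration, not Etched's scheduler:

```python
def plan_step(running, waiting_prefill, token_budget):
    """Allocate one step's token budget between decode and prefill.

    running          -- number of sequences in the decode phase (each
                        consumes exactly one token per step)
    waiting_prefill  -- remaining prompt lengths for queued requests
    token_budget     -- max tokens this step can process

    Returns (decode_tokens, prefill_chunks), where prefill_chunks maps
    each waiting request's index to the prompt tokens prefilled this step.
    """
    decode_tokens = min(running, token_budget)  # decode gets priority:
    budget = token_budget - decode_tokens       # it directly gates latency
    prefill_chunks = {}
    for i, remaining in enumerate(waiting_prefill):
        if budget == 0:
            break
        chunk = min(remaining, budget)
        prefill_chunks[i] = chunk
        budget -= chunk
    return decode_tokens, prefill_chunks

# Example: 6 running sequences, two queued 8-token prompts, 16-token budget.
decode, chunks = plan_step(6, [8, 8], 16)
```

With this policy the first queued prompt prefills fully (8 tokens) and the second gets only the 2 leftover tokens, spreading its prefill across later steps.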

You may be a good fit if you:

  • Have 3 years of software engineering experience
  • Are good at math, and good at communicating mathematical ideas
  • Pick up slack, even if it goes outside your job description
  • Are results-oriented, and bias towards shipping products
  • Want to learn more about machine learning research

We encourage you to apply even if you do not believe you meet every single qualification.

Strong candidates may also have experience with:

  • Palladium emulation
  • Real-time audio and video communication
  • GPU kernel profiling and low-level programming
  • Transformer optimizations, such as FlashAttention
  • Ongoing research in machine learning


How we’re different:

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in Cupertino, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

Benefits:

  • Full medical, dental, and vision packages, with 100% of the premium covered (90% for dependents)
  • Housing subsidy of $2,000/month for those living within walking distance of the office
  • Daily lunch and dinner in our office
  • Relocation support for those moving to Cupertino

Job Summary

JOB TYPE

Full Time

SALARY

$141k-169k (estimate)

POST DATE

05/22/2024

EXPIRATION DATE

07/21/2024

