What are the responsibilities and job description for the Machine Learning Applied Researcher position at Archetype AI?
About Archetype AI
Archetype AI is developing the world's first AI platform for the physical world. Formed by an exceptionally high-caliber team from Google, the company is building a foundation model for the physical world: a real-time multimodal LLM for real life that transforms real-world data into insights and knowledge people can interact with naturally. It helps people in their real lives, not just online, because it understands the real-time physical environment and everything that happens in it.
Backed by deep-tech venture funds in Silicon Valley, Archetype AI is at the Series A stage and progressing rapidly toward its next stage of technology. This is a rare opportunity to join an exciting AI team at the beginning of its journey, in the heart of Silicon Valley.
Our team is headquartered in San Mateo, California, with team members throughout the US and Europe.
We are actively growing, so if you are an exceptional candidate excited to work on the cutting edge of physical AI and don't see a role below that exactly fits you, you can contact us directly with your resume at jobs@archetypeai.io.
About the Job
We are building a new class of multimodal foundation models for the physical world. Our focus is on combining time series / sensor data, language, vision, audio, and other real-world signals into unified models that can understand complex systems, reason over long horizons, and support real-world tasks in industrial and physical environments.
We are looking for an experienced, research-oriented ML candidate to help build these systems end to end: from problem formulation and experimental design to model development, evaluation, and deployment.
This role is intended for someone who is highly self-directed, can independently perform strong scientific work, and is excited to work on multimodal intelligence grounded in physical signals.
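To make the "unified model over mixed modalities" idea concrete, here is a minimal, purely illustrative sketch (not Archetype AI's actual architecture) of how a sensor time series and a text prompt might be mapped into one shared token sequence that a single backbone could reason over. All function names, dimensions, and the toy encoders are assumptions for illustration only.

```python
import numpy as np

def encode_sensor(series, d=8, window=4):
    """Toy temporal encoder: mean-pool non-overlapping windows, then
    project each pooled value into a d-dimensional token (illustrative only)."""
    rng = np.random.default_rng(0)
    n = len(series) // window
    pooled = series[:n * window].reshape(n, window).mean(axis=1)  # (n,)
    proj = rng.standard_normal((1, d))                            # fixed random projection
    return pooled[:, None] @ proj                                 # (n, d) sensor tokens

def encode_text(token_ids, d=8, vocab=100):
    """Toy text encoder: fixed random embedding lookup (illustrative only)."""
    rng = np.random.default_rng(1)
    table = rng.standard_normal((vocab, d))
    return table[np.asarray(token_ids)]                           # (len(token_ids), d)

def fuse(sensor_tokens, text_tokens):
    """Concatenate both modalities into one sequence a shared
    transformer-style backbone could attend over jointly."""
    return np.concatenate([sensor_tokens, text_tokens], axis=0)

series = np.sin(np.linspace(0.0, 6.28, 32))  # a fake 32-step sensor reading
token_ids = [5, 17, 42]                      # fake token ids for a text prompt
seq = fuse(encode_sensor(series), encode_text(token_ids))
print(seq.shape)  # 8 sensor tokens + 3 text tokens, each 8-dim -> (11, 8)
```

The design choice this sketch illustrates is early fusion: both modalities are projected into a common embedding space so downstream attention can relate physical signals to language directly, rather than keeping per-modality models and merging only their outputs.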
What You’ll Work On
- Build and improve multimodal foundation models that incorporate time-series / sensor data alongside language, vision, audio, and related modalities
- Drive research and modeling efforts from problem definition through experimentation and evaluation
- Own the full modeling pipeline, from data through model development to evaluation
- Advance model architectures and training strategies for physical-world understanding and long-context reasoning
- Drive and scale research experiments and modeling advances to production models that power diverse use cases in complex industrial scenarios
- Contribute to research directions with potential for publication
What We're Looking For
- Self-directed and comfortable operating in ambiguous problem spaces
- Able to independently perform strong scientific work, including forming hypotheses, designing experiments, and drawing sound conclusions
- Experience with end-to-end modeling, including data, modeling, and evaluation
- Experience with productionization or deployment of ML models
- Multimodal experience preferred
- Strong technical judgment and experimental rigor
Why This Role
Many important real-world systems cannot be understood from text or vision alone. Their behavior depends on signals that evolve over time: sensors, operating conditions, environment, and interactions across subsystems. We believe the next generation of useful foundation models will need to integrate these sources of information and reason over them in a unified way.
You will have a unique opportunity to help shape a new generation of multimodal foundation models grounded in physical signals and real-world dynamics. The role offers a rare combination of deep research challenges, practical deployment impact, and the chance to contribute to a fast-emerging area of AI.