What are the responsibilities and job description for the Research Scientist — Foundation World Models for Robotics position at Gigascale Capital?
Location
Palo Alto
Employment Type
Full time
Location Type
On-site
Department
Research
Overview
At Rhoda AI, we're building the full-stack foundation for the next generation of humanoid robots — from high-performance, software-defined hardware to the foundational models and video world models that control it. Our robots are designed to be generalists capable of operating in complex, real-world environments and handling scenarios unseen in training. We work at the intersection of large-scale learning, robotics, and systems, with a research team that includes researchers from Stanford, Berkeley, Harvard, and beyond. We're not building a feature; we're building a new computing platform for physical work — and with over $400M raised, we're investing aggressively in the R&D, hardware development, and manufacturing scale-up to make that a reality.
What You'll Do
- Drive research on foundational models and world models for robotics (representation learning, dynamics/prediction, planning, control)
- Formulate research problems and hypotheses grounded in real robotic autonomy needs
- Design and run rigorous experiments at scale, including ablations, benchmarking, and evaluation methodology
- Develop and evaluate model architectures for long-horizon prediction, rollout quality, and downstream robotic task performance
- Explore and advance pre-training and post-training (fine-tuning, alignment, evaluation) of large multimodal models
- Collaborate closely with Research Engineers to translate new ideas into scalable training pipelines and reliable systems
- Communicate results clearly through internal writeups, talks, and research reviews
- Publish and present work at top-tier venues
Qualifications
- PhD in a relevant field (e.g., ML, Robotics, Computer Science, Electrical Engineering, Applied Math, or Computer Vision)
- Strong publication record of high-quality research at top venues (e.g., NeurIPS, ICML, ICLR, CoRL, RSS, ICRA, CVPR)
- Deep understanding of modern machine learning, with depth in several of the following:
  - Deep learning and representation learning
  - Sequence modeling / transformers
  - Generative modeling (e.g., diffusion, autoregressive, latent-variable models)
  - Model-based learning, planning, and/or control
  - RL / imitation learning for robotics
- Strong research taste and independence: ability to define problems, execute, interpret results, and iterate quickly
- Proficiency with at least one modern ML stack (e.g., PyTorch or JAX) and the ability to implement research ideas in code
- Clear written and verbal communication skills
- Comfort operating in ambiguity in a fast-moving startup environment
Nice to Have
- Prior work specifically on world models (latent dynamics, predictive models, model-based RL/planning, long-horizon rollouts)
- Experience with large-scale multimodal training (VLMs, video models, action-conditioned models, large policy models)
- Experience working with robotic learning data (real-world logs, teleop, simulation-to-real, multimodal sensor streams)
- Hands-on experience deploying learning-based components on real robots
- Familiarity with distributed training and performance debugging (multi-GPU / multi-node)
Why Join
- Work with an elite research team from Stanford, Berkeley, Harvard, and beyond
- Research that directly connects to real-world robotic autonomy — not toy benchmarks
- Tight collaboration between research and engineering (no silos)
- High ownership and ability to shape the research agenda
- Opportunity to publish meaningful work while seeing it come alive on real robotic systems
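To give a flavor of the "long-horizon rollout" work this role centers on, here is a minimal, purely illustrative sketch in Python/NumPy. A toy linear dynamics function stands in for a learned PyTorch/JAX world model; all names and dimensions here are hypothetical, not part of the actual role or codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dynamics(latent_dim=8, action_dim=3):
    """Toy action-conditioned latent dynamics z' = A z + B a.

    Hypothetical stand-in: a real world model would learn a deep,
    possibly stochastic f(z, a) from robot data, not a fixed linear map.
    """
    A = 0.9 * np.eye(latent_dim)                       # stable transition
    B = 0.1 * rng.normal(size=(latent_dim, action_dim))  # action coupling
    return lambda z, a: A @ z + B @ a

def rollout(step, z0, actions):
    """Open-loop long-horizon rollout: feed each prediction back in.

    Compounding error over such rollouts is exactly what the
    'rollout quality' evaluation work in this role targets.
    """
    z, traj = z0, []
    for a in actions:
        z = step(z, a)
        traj.append(z)
    return np.stack(traj)  # (horizon, latent_dim)

step = make_dynamics()
traj = rollout(step, np.zeros(8), rng.normal(size=(10, 3)))
print(traj.shape)  # (10, 8)
```

The open-loop structure (predictions fed back as inputs) is the key point: small one-step errors compound over the horizon, which is why rollout-quality benchmarks and ablations feature in the responsibilities above.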