What are the responsibilities and job description for the Senior ML Engineer, Perception position at Motion Recruitment?
Job Description
Join a fast-moving robotics company building the next generation of warehouse automation. This role focuses on advancing a cutting-edge perception stack, combining modern machine learning with real-world hardware deployment. You’ll work across 2D/3D vision, data systems, and edge inference to help robots operate reliably in unpredictable physical environments.
The team is looking for someone who can balance deep technical expertise with practical problem-solving, and who’s motivated by seeing their work directly influence real-world systems.
This is a hybrid full-time position.
Required Skills & Experience
- 5 years of experience in computer vision and machine learning with production deployment in robotics, autonomous systems, or IoT
- Strong Python and PyTorch expertise
- Experience deploying models using C++ or integrating with production systems
- Hands-on experience with inference optimization (TensorRT, ONNX Runtime, CUDA)
- Background in both 2D vision (e.g., object detection/segmentation, transformers) and 3D vision (point clouds, geometry, calibration)
- Experience working with large-scale datasets, including labeling, QA, and bias detection
- Ability to translate ambiguous product requirements into clear technical execution
- Experience with multi-modal perception systems (2D/3D fusion)
- Familiarity with edge deployment on embedded hardware
- Exposure to active learning or iterative data improvement pipelines
- Experience with cloud environments (AWS or GCP)
- Familiarity with Docker, experiment tracking tools, and labeling platforms
- Prior experience mentoring engineers or contributing to technical roadmap decisions
Tech Breakdown
40% Model Development (2D/3D Vision, Multi-Modal Systems)
30% Deployment & Optimization (Edge Inference, Performance Tuning)
20% Data Strategy & Pipeline Improvements
10% System Integration (Robotics Stack, Production Code)
Daily Responsibilities
70% Hands On
10% Mentorship / Technical Leadership
20% Team Collaboration
- Design and train models that combine visual and spatial data for perception and grasping tasks
- Own deployment workflows, including optimization and runtime performance on edge devices
- Improve and scale data pipelines, including dataset quality and edge case handling
- Write production-grade code to integrate perception into robotic systems
- Contribute to code reviews, technical direction, and team development
Benefits
- Competitive base salary with bonus potential
- Medical Insurance
- Dental Benefits
- Vision Benefits
- Paid Time Off (PTO)
- 401(k) (with match, if applicable)
Posted By: Sarah Carroll