What are the responsibilities and job description for the Perception Engineer position at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, VDart, Inc., is seeking the following. Apply via Dice today!
Job Title: Perception Engineer
Location: Detroit, MI
Duration / Term: 6 months – Contract
Job Description:
Experience Desired: 3 Years.
We are looking for a skilled Perception Engineer to join a cutting-edge autonomous systems team working on advanced vehicle perception technologies. If you’re passionate about building systems that enable machines to see and understand the world, this role is for you.
Responsibilities:
- Design and implement advanced perception algorithms for autonomous vehicles using LiDAR, cameras, radar, and GNSS.
- Develop and optimize sensor fusion techniques to combine data from multiple sensors, improving the accuracy and reliability of perception systems.
- Create algorithms for object detection, tracking, semantic segmentation, and classification from 3D point clouds (LiDAR) and camera data.
- Develop sensor calibration techniques (intrinsic and extrinsic) and coordinate transformations between sensors.
- Develop robust perception algorithms that maintain performance in adverse weather conditions such as rain, snow, fog, and low-light scenarios.
- Participate in real-time systems design and optimization to meet the high-performance requirements of autonomous driving.
- Work with ROS2 for integration and deployment of perception algorithms.
- Develop, test, and deploy machine learning models for perception tasks such as object detection and tracking.
- Collaborate with cross-functional teams to integrate perception algorithms into larger autonomous systems.
- Stay up-to-date with industry trends and emerging technologies to innovate and improve perception systems.
Qualifications:
- Minimum 3 years of experience in sensor calibration, multi-sensor fusion, or a related domain.
- Strong foundation in linear algebra, 3D geometry, coordinate frames, quaternions, probability, Bayesian filtering, and data association.
- Hands-on experience with intrinsic and extrinsic calibration of LiDAR, cameras, and radar, including geometric calibration, coordinate transforms, and sensor synchronization.
- Proven experience with perception algorithms for autonomous systems, particularly in the areas of LiDAR, camera, radar, GNSS, or other sensor modalities.
- Deep understanding of LiDAR technology, point cloud data structures, and processing techniques; experience with PCL or Open3D.
- Proficiency in sensor fusion for combining data from LiDAR, camera, radar, and GNSS, including handling time synchronization and motion distortion.
- Solid background in computer vision techniques; experience with OpenCV and object detection models such as YOLO, Faster R-CNN, or SSD.
- Experience with deep learning frameworks (TensorFlow or PyTorch) for object detection and tracking tasks.
- Hands-on experience with multi-object tracking algorithms such as SORT, DeepSORT, Kalman Filters, UKF, IMM, or JPDA.
- Strong programming skills in C++ and Python; familiarity with geometric optimization libraries.
- Familiarity with ROS2 for perception-based autonomous systems development.
- Experience with parallel computing for real-time performance optimization (e.g., CUDA, OpenCL).
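As context for candidates, the multi-object tracking methods the posting lists (SORT, DeepSORT) are built around a constant-velocity Kalman filter that predicts each track forward and corrects it with new detections. A minimal sketch of that core, assuming a 2D position measurement; the class name, time step, and noise values are illustrative assumptions, not part of the job description:

```python
# Minimal constant-velocity Kalman filter, the core of SORT-style
# multi-object tracking. Illustrative sketch only: KalmanCV, dt,
# and the Q/R noise values are assumptions for the example.
import numpy as np

class KalmanCV:
    """Tracks state [x, y, vx, vy] from noisy [x, y] detections."""
    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0])   # initial state
        self.P = np.eye(4) * 10.0             # state covariance
        self.F = np.eye(4)                    # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                 # we measure position only
        self.Q = np.eye(4) * 0.01             # process noise
        self.R = np.eye(2) * 1.0              # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Feed detections of an object moving +1 unit per step along x:
kf = KalmanCV(0.0, 0.0)
for t in range(1, 6):
    kf.predict()
    kf.update([float(t), 0.0])
print(kf.x)  # estimated position near (5, 0), vx approaching 1
```

In a full tracker, one such filter runs per object, and a data-association step (e.g., Hungarian matching on IoU, as in SORT) assigns each new detection to the track it updates.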
Key Skills: Sensor Fusion, LiDAR, Computer Vision, Object Detection, Tracking, PyTorch, TensorFlow, ROS2, C++, Python