



Senior Perception Engineer

Posted 30+ days ago · 2026/09/03
Other Business Support Services

Job description

About Origin

Origin (previously 10xConstruction) is building general-purpose autonomous robots for US construction to tackle rising costs, safety risks, and labour shortages. Our modular, multi-trade platform combines purpose-built hardware with real-time site intelligence to navigate complex environments and execute tasks with precision. Trained in high-fidelity simulation and already deployed on live sites, our robots deliver 5x faster execution, 250%+ margin expansion, and significant cost savings. Join India's most talent-dense robotics team, with members from IITs, Stanford, UCLA, and more.
About the Role

You will build and optimize 3D perception systems that enable robots to understand and interact with complex real-world environments. The goal is to develop robust perception pipelines that work reliably across both simulation and real-world construction sites, ensuring accurate scene understanding, localization, and decision-making.
Key Responsibilities

3D Perception Development & Scene Understanding
- Design, implement, and deploy real-time 3D perception pipelines leveraging LiDAR, IMU, stereo, and RGB cameras.
- Develop algorithms for spatial and temporal data interpretation to enable high-fidelity semantic scene understanding.
- Optimize ego-motion estimation and localization modules to ensure seamless integration with downstream planning and control tasks.

Deep Learning
- Train and integrate deep learning models for semantic world understanding and surface-finish classification for quality control.
- Collect and curate high-quality datasets (real and synthetic), and automate training pipelines and experiment tracking.
- Benchmark and optimize perception models for edge devices to ensure real-time performance in resource-constrained environments.

Sensor Fusion & Localization
- Design and implement sensor fusion strategies that combine visual, inertial, and spatial data, integrating classical methods with deep learning approaches to convert noisy, asynchronous data from heterogeneous sensors into a unified environment representation.
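As a flavour of the classical side of this kind of fusion (an illustrative sketch, not Origin's actual stack), a complementary filter is one of the simplest ways to blend a fast-but-drifting gyro rate with a noisy-but-drift-free accelerometer angle; the rates, angles, and gain below are hypothetical:

```python
def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend an integrated gyro rate (responsive, but drifts) with an
    accelerometer-derived angle (noisy, but drift-free).
    alpha close to 1 trusts the gyro on short timescales."""
    return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Hypothetical example: a biased gyro reports a constant 10 deg/s while the
# accelerometer keeps reporting the true angle of 0 deg. The filter's
# estimate stays bounded instead of drifting without limit.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=10.0, accel_angle=0.0, dt=0.01)
```

A Kalman filter plays the same role more rigorously by weighting each sensor by its estimated uncertainty; the complementary filter is just the fixed-gain special case.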
Calibration
- Develop and automate robust extrinsic and intrinsic calibration procedures (camera, LiDAR, IMU) using target-based and/or targetless methods.
- Design and implement algorithms for online calibration-drift detection and self-healing to maintain system integrity during long-term deployments.
- Establish rigorous quantitative metrics to objectively evaluate and certify calibration quality.
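One standard quantitative metric of the kind described above is RMS reprojection error. A minimal sketch, assuming an ideal distortion-free pinhole model, points already in the camera frame, and a hypothetical intrinsics matrix K:

```python
import numpy as np

def reprojection_rmse(K, points_3d, points_2d):
    """RMS distance (pixels) between observed 2D points and the
    projections of their 3D counterparts through intrinsics K."""
    proj = (K @ points_3d.T).T            # project into homogeneous pixels
    proj = proj[:, :2] / proj[:, 2:3]     # perspective divide
    err = np.linalg.norm(proj - points_2d, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Hypothetical intrinsics and a perfectly consistent point set:
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pts3d = np.array([[0.1, -0.2, 2.0], [0.0, 0.0, 1.5], [-0.3, 0.1, 3.0]])
pts2d = (K @ pts3d.T).T
pts2d = pts2d[:, :2] / pts2d[:, 2:3]
```

For a well-calibrated camera this error is typically a fraction of a pixel; tracking it over time is one way to detect the calibration drift mentioned above.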
Collaboration
- Partner with the Navigation, Manipulation, and Cloud teams to ensure perception outputs are optimized for downstream path planning, grasping, and remote fleet monitoring.
- Define, track, and own system-level KPIs and perception metrics to identify regressions across software iterations.
Requirements
- 3+ years of experience
- Strong fundamentals in computer vision and 3D perception
- Proficiency in C++ (Python is a plus)
- Familiarity with PyTorch and TensorRT
- Basic understanding of localisation and SLAM
- Ability to work with real-world data and debug complex systems

Nice to Have
- Experience with NVIDIA DeepStream, GStreamer, Holoscan
- Experience with ROS/ROS2
- Hands-on experience with LiDAR, RGB-D cameras, or IMUs
- Familiarity with OpenCV, PCL, or similar libraries
- Experience working on robotics or vision-based projects
This job post has been translated by AI and may contain minor differences or errors.
