


Computer Vision Engineer

30+ days ago 2026/06/14
Other Business Support Services

Job description

Key Responsibilities
  • Design, implement, and maintain core autonomy modules that integrate sensing, perception, state estimation, mapping, and planner interfaces into a cohesive real-time system.
  • Develop high-performance computer vision pipelines (classical + AI-based) for detection, segmentation, tracking, and scene understanding, ensuring reliable operation on embedded hardware.
  • Build multimodal perception systems that fuse camera, LiDAR, radar, and IMU data into accurate, navigation-ready environment representations.
  • Deploy, optimize, and maintain autonomy software on embedded platforms (Jetson AGX/Orin), including TensorRT optimisation, cross-compilation, CUDA acceleration, and performance tuning for real-time execution.
  • Own sensor bring-up, configuration, calibration, and synchronization (camera, LiDAR, radar, IMU, GPS), ensuring accurate and stable data for downstream modules.
  • Ensure system-level robustness and safety by maintaining strict latency budgets, deterministic behaviour, numerical stability, and fall-back mechanisms for degraded sensing conditions.
  • Conduct field trials, capture datasets, analyse system performance, and drive iterative improvements across sensing, perception, fusion, and planning layers.
  • Debug deep autonomy stack issues including timing mismatches, calibration drift, concurrency conflicts, synchronization faults, and hardware–software integration challenges.
  • Build deployment-ready autonomy systems using ROS/ROS2, Docker, systemd services, and reproducible build pipelines tailored for embedded platforms.
  • Collaborate with mechanical, electronics, and systems teams to align autonomy software capabilities with real-world hardware constraints and vehicle dynamics.
  • Contribute to autonomy architecture evolution, influencing design decisions, modularisation strategy, safety mechanisms, and long-term capability roadmap.
Required Technical Skills
  • Bachelor’s degree in Robotics, Computer Science, Mechatronics, or a related field (required).
  • Master’s or PhD in Robotics, Autonomous Systems, AI/ML, Computer Vision, or Control Systems (preferred).
  • Strong proficiency in modern C++ (14/17/20) and Python for building production-grade robotics, CV, and autonomy software.
  • Deep understanding of computer vision fundamentals (feature-based vision, geometric methods, multi-view geometry) and AI-based perception using PyTorch.
  • Practical experience deploying and optimising perception models on embedded GPU platforms (Jetson Xavier/Orin or similar).
  • Hands-on expertise with Triton, TensorRT, mixed-precision inference, Numba-JIT, CUDA kernels, and real-time optimisation techniques.
  • Strong command of ROS/ROS2, TF transforms, message passing, node graph architecture, and middleware integration patterns.
  • Extensive experience with robotics sensor integration including RGB/stereo/depth cameras, LiDAR, radar, IMUs, and GPS, covering calibration (intrinsic/extrinsic), synchronization, timestamps, and data integrity.
  • Knowledge of core autonomy concepts: mapping, costmap generation, scene representation, obstacle detection, and planner interfacing.
  • Solid grounding in Linux systems, multithreading, memory optimisation, real-time constraints, and system-level debugging workflows.
  • Experience with Docker, cross-compilation toolchains, embedded deployment pipelines, and CI/CD systems for robotics software.
  • Familiarity with simulation tools (Gazebo, CARLA, Isaac Sim) for developing reproducible test setups and automated validation.
  • Ability to troubleshoot complex issues across perception, fusion, hardware interfaces, timing, concurrency, and algorithmic edge cases.
  • Strong understanding of coordinate frames, transforms, camera models, rigid-body geometry, and numerical optimisation methods.
  • Experience using logging frameworks, telemetry tools, performance profilers, and methods for long-duration stability testing.
This job post has been translated by AI and may contain minor differences or errors.
