


AI Research Apprentice

Posted: 30+ days ago · 2026/09/03
Industry: Other Business Support Services

Job description

As an AI Research Apprentice, you'll push the frontiers of the generative and multimodal learning that powers our autonomous robots.
You will prototype diffusion-based vision models, vision–language architectures (VLAs/VLMs) and automated data-annotation pipelines that turn raw site footage into training gold.
Key Responsibilities

* Design and train diffusion-based generative models for realistic, high-resolution synthetic data.
* Build compact Vision–Language Models (VLMs) to caption, query and retrieve job-site scenes for downstream perception tasks.
* Develop Vision–Language Action Model (VLA) objectives that link textual work orders with pixel-level segmentation masks.
* Architect large-scale auto-annotation pipelines that transform unlabeled images and point clouds into high-quality labels with minimal human input.
* Benchmark model performance on accuracy, latency and memory for deployment on Jetson-class hardware; compress with distillation or LoRA.
* Collaborate with perception and robotics teams to integrate research prototypes into live ROS 2 stacks.
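The diffusion-model work described above centers on the standard noise-prediction objective (DDPM-style). As a rough orientation for applicants, a minimal PyTorch sketch is below; the model interface and noise schedule are illustrative assumptions, not this team's actual code:

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod, t):
    """Minimal DDPM training objective: diffuse clean samples x0 to
    timestep t, then train the model to predict the injected noise.
    `alphas_cumprod` is the cumulative product of the noise schedule;
    `t` holds one integer timestep per batch element."""
    noise = torch.randn_like(x0)
    # Broadcast the per-sample schedule value over all non-batch dims.
    a_bar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)
```

In practice `model` would be a U-Net or transformer denoiser conditioned on `t`; the sketch only shows how the loss wires together.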
Qualifications & Skills

* Strong foundation in deep learning, probabilistic modeling and computer vision (coursework or research projects).
* Hands-on experience with diffusion models (e.g., DDPM, Latent Diffusion) in PyTorch or JAX.
* Familiarity with multimodal transformers / VLMs (CLIP, BLIP, Flamingo, LLaVA, etc.) and contrastive pre-training objectives.
* Working knowledge of data-centric AI: active learning, self-training, pseudo-labeling and large-scale annotation pipelines.
* Solid coding skills in Python, PyTorch / Lightning, plus git-driven workflows; bonus for C++ and CUDA kernels.
* Bonus: experience with on-device inference (TensorRT, ONNX Runtime) and synthetic-data tools (Isaac Sim).
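The contrastive pre-training objectives mentioned in the qualifications are the CLIP-style symmetric InfoNCE loss. A minimal sketch, assuming pre-computed image and text embeddings (the function name and temperature value are illustrative):

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE loss: within a batch, matching
    image/text pairs on the diagonal are positives and every other
    pairing is a negative, in both directions."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(len(img), device=img.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```

Driving this loss to zero pushes each image embedding toward its paired caption and away from the rest of the batch, which is what makes the resulting encoders useful for captioning and retrieval of job-site scenes.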
Why Join Us

* Research bleeding-edge generative and multimodal tech and watch it land on real construction robots.
* Publish, patent and open-source: we encourage conference submissions and community engagement.
* Help build a company from the ground up: your experiments can become flagship product features.

This job post has been translated by AI and may contain minor differences or errors.
