


ML Ops Engineer (EMEA Remote)

Posted 3 days ago · 2026/08/26
Remote
Other Business Support Services

Job description

About the Job: ML Ops Engineer (EMEA Remote)

Location: Fully remote (EMEA timezone)
Start date: ASAP
Languages: Fluent English required
Industry: Cloud Computing / AI / European Deep-Tech SaaS


About the Role


Pragmatike is recruiting on behalf of a fast-scaling, well-funded distributed cloud infrastructure startup building next-generation AI-native cloud services. The company is redefining how compute is delivered by providing GPU-powered infrastructure for AI/ML workloads, secure storage, and high-speed data transfer through a decentralized architecture that significantly reduces environmental impact compared to traditional cloud providers.


We are seeking an ML Ops Engineer with strong experience in production-grade model serving and infrastructure for AI systems. This is a highly technical, hands-on role focused on building scalable, reliable, and efficient ML inference platforms that power real-time AI applications.


You will be responsible for designing and operating the core infrastructure that serves machine learning models at scale. You will work closely with infrastructure, platform, and applied AI teams to ensure high availability, low latency, and cost-efficient inference systems. Strong ownership, production mindset, and experience with distributed GPU systems are essential.


Your Responsibilities


  • Build and operate production-grade model serving infrastructure using frameworks such as vLLM, TGI, Triton, or equivalent
  • Design and implement robust deployment pipelines with blue/green and canary rollout strategies for ML models
  • Develop and maintain auto-scaling systems, multi-model serving architectures, and intelligent request routing layers
  • Optimize GPU utilization, memory efficiency, network throughput, and model artifact storage performance
  • Design observability systems for tracking inference latency, throughput, GPU usage, cost metrics, and system health
  • Manage model registries and CI/CD pipelines enabling automated and reproducible model deployments
  • Own the full lifecycle of ML systems from development through production, including operational support and on-call responsibilities
  • Define engineering best practices and contribute to platform scalability in a fast-moving startup environment
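As a rough illustration of the blue/green and canary rollout responsibilities above, here is a minimal traffic-routing sketch in plain Python. All names (`CanaryRouter`, the model version strings) are hypothetical and not tied to any specific serving framework; a real deployment would implement this at the load-balancer or service-mesh layer.

```python
import random


class CanaryRouter:
    """Toy router that splits inference traffic between a stable and a
    canary model version.

    `canary_weight` is the fraction of traffic (0.0-1.0) sent to the
    canary; ramping it up gradually implements a canary rollout, and
    switching it to 1.0 before promoting approximates a blue/green cutover.
    """

    def __init__(self, stable: str, canary: str, canary_weight: float = 0.0):
        if not 0.0 <= canary_weight <= 1.0:
            raise ValueError("canary_weight must be in [0, 1]")
        self.stable = stable
        self.canary = canary
        self.canary_weight = canary_weight

    def route(self) -> str:
        """Return the model version that should serve the next request."""
        return self.canary if random.random() < self.canary_weight else self.stable

    def promote(self) -> None:
        """Promote the canary to stable once it has proven healthy."""
        self.stable = self.canary
        self.canary_weight = 0.0


# Hypothetical usage: send ~10% of traffic to a new model version.
router = CanaryRouter(stable="model-v1", canary="model-v2", canary_weight=0.1)
chosen = router.route()  # "model-v1" most of the time, "model-v2" occasionally
```

In practice the weight would be driven by rollout health checks (error rate, latency) rather than set by hand.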

Required Qualifications


  • 4+ years of experience in ML Ops, Platform Engineering, SRE, or similar infrastructure roles focused on ML systems
  • Hands-on experience with model serving frameworks such as vLLM, TGI, Triton, or equivalent
  • Strong background in container orchestration and operating GPU-based workloads in production
  • Experience with MLOps tooling including model registries, experiment tracking, and automated deployment pipelines
  • Proficiency in Python and infrastructure-as-code tools (e.g., Terraform, Helm, or similar)
  • Strong understanding of distributed systems, performance tuning, and production reliability engineering
  • Ability to use AI coding assistants effectively to accelerate development and debugging workflows
  • Ownership mindset with the ability to operate independently in a remote-first environment
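To make the auto-scaling and performance-tuning expectations above concrete, here is a toy scaling policy in Python. The thresholds and class name are illustrative assumptions only; a production system would feed real GPU-utilization and latency metrics into something like a Kubernetes autoscaler rather than this sketch.

```python
from dataclasses import dataclass


@dataclass
class ScalingPolicy:
    """Toy autoscaler for GPU inference replicas.

    Scales out when either GPU utilization or p95 latency exceeds its
    threshold, scales in when both sit well below target, and clamps the
    result to [min_replicas, max_replicas].
    """

    min_replicas: int = 1
    max_replicas: int = 8
    gpu_util_high: float = 0.85    # fraction of GPU busy time
    latency_high_ms: float = 500.0  # p95 latency budget in milliseconds

    def desired_replicas(self, current: int, gpu_util: float, p95_ms: float) -> int:
        if gpu_util > self.gpu_util_high or p95_ms > self.latency_high_ms:
            target = current + 1  # scale out under pressure
        elif gpu_util < self.gpu_util_high / 2 and p95_ms < self.latency_high_ms / 2:
            target = current - 1  # scale in when comfortably idle
        else:
            target = current      # hold steady
        return max(self.min_replicas, min(self.max_replicas, target))
```

The hold-steady band between the two thresholds is a deliberate hysteresis choice: without it, replica counts flap as metrics hover near a single cutoff.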




Preferred Qualifications


  • Experience with ML platforms such as Kubeflow, MLflow, or KubeAI
  • Knowledge of GPU scheduling, CUDA/ROCm optimization, or multi-tenant inference systems
  • Experience with cost optimization across different GPU types and inference workloads
  • Background in early-stage startups or greenfield infrastructure projects
  • Proven experience building production systems from scratch rather than maintaining legacy platforms




Why Join Us


  • Take ownership of critical infrastructure powering a rapidly scaling AI-native cloud platform
  • Build foundational ML inference systems from the ground up in a high-growth, well-funded startup
  • Work at the intersection of distributed systems, GPU computing, and sustainable cloud architecture
  • Gain deep expertise in next-generation AI infrastructure and large-scale model serving systems
  • Influence core engineering decisions and define best practices that will scale with the company

Pragmatike is committed to a fair, transparent, and inclusive recruitment process. We do not discriminate based on age, disability, gender, gender identity or expression, marital or civil partner status, pregnancy or maternity, race, religion or belief, sex, or sexual orientation.


In accordance with GDPR, your personal data will be processed lawfully, fairly, and securely, and used solely for recruitment purposes, including sharing it with our client(s) for employment consideration. You may request access, correction, or deletion of your data at any time. We are committed to maintaining the confidentiality and security of your information throughout the recruitment process.




