
Job description

It's fun to work in a company where people truly BELIEVE in what they are doing!


We're committed to bringing passion and customer focus to the business.


Role overview:


We’re building a next-gen LLMOps team at Fractal to industrialize GenAI implementation and shape the future of GenAI engineering. This is a hands-on technical leadership role for AI engineers with strong ML and DevOps skills — ideal for those who love building scalable systems from the ground up. You will be designing, deploying, and scaling GenAI and Agentic AI applications with robust lifecycle automation and observability.


Required Qualifications:


  • 10-14 years of experience on ML projects, with a product-building mindset, strong hands-on skills, technical leadership, and experience leading development teams
  • Model development, training, deployment at scale, and performance monitoring for production use cases
  • Strong knowledge of Python, data engineering, FastAPI, and NLP
  • Knowledge of LangChain, LlamaIndex, Langtrace, Langfuse, LLM evaluation, MLflow, and BentoML
  • Experience with both proprietary and open-source LLMs
  • Experience with LLM fine-tuning, including PEFT and CPT
  • Experience creating agentic AI workflows using frameworks such as CrewAI, LangGraph, AutoGen, and Semantic Kernel
  • Experience in performance optimization, RAG, guardrails, AI governance, prompt engineering, evaluation, and observability
  • Experience deploying GenAI applications on cloud and on-premises at scale for production using DevOps practices
  • Experience in DevOps and MLOps
  • Good working knowledge of Kubernetes and Terraform
  • Experience with at least one cloud (AWS, GCP, or Azure) for deploying AI services
  • Team player with excellent communication and presentation skills

Must-have skills:


  • Product thinking that includes ideation, prototyping, and scaling internal accelerators for LLMOps
  • Architect and build scalable LLMOps platforms for enterprise-grade GenAI systems
  • Design and manage end-to-end LLM pipelines from data ingestion and embedding to evaluation and inference
  • Drive LLM-specific infrastructure: memory management, token control, prompt chaining, and context optimization
  • Lead scalable deployment frameworks for LLMs using Kubernetes and GPU-aware scaling
  • Build agentic AI operations capabilities including agent evaluation, observability, orchestration and reflection loops
  • Guardrails & Observability: Implement output filtering, context-aware routing, evaluation harnesses, metrics logging, and incident response
  • Platform Automation for LLMOps: Drive end-to-end automation with Docker, Kubernetes, GitOps, and Terraform
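To make the guardrails item above concrete, here is a minimal, framework-free sketch of output filtering with incident metrics. The patterns and function names are illustrative assumptions, not part of this role's stack; production guardrails would typically combine tooling such as Rebuff or Outlines with model-based checks:

```python
import re
import time

# Hypothetical blocklist for a simple output filter; real deployments
# would use richer PII detection and policy-driven rules.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like strings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def filter_output(text: str) -> tuple[str, list[str]]:
    """Redact blocked patterns and return the cleaned text plus incident tags."""
    incidents = []
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            incidents.append(pattern.pattern)
            text = pattern.sub("[REDACTED]", text)
    return text, incidents

def log_metrics(incidents: list[str]) -> dict:
    """Emit a minimal metrics record; a real system would ship this to telemetry."""
    return {"timestamp": time.time(), "incident_count": len(incidents)}

cleaned, incidents = filter_output("Contact jane@example.com, SSN 123-45-6789.")
record = log_metrics(incidents)
```

The same filter-then-log shape extends naturally to context-aware routing and incident response: route flagged outputs to a fallback path instead of redacting in place.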

Product Thinking: Ideate, prototype, and scale internal accelerators and reusable components for LLMOps


GenAI Engineering: Productionize LLM-powered applications with modular, reusable, and secure patterns


Pipeline Architecture: Create evaluation pipelines — including prompt orchestration, feedback loops, and fine-tuning workflows
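An evaluation pipeline of this kind can be sketched in a few lines. This is an illustrative stub, assuming a stand-in model and a keyword-overlap score; a real harness would call an actual LLM and use metrics such as Ragas faithfulness or OpenAI evals:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected_keywords: list[str]

def keyword_score(output: str, case: EvalCase) -> float:
    """Fraction of expected keywords present in the model output."""
    hits = sum(1 for kw in case.expected_keywords if kw.lower() in output.lower())
    return hits / len(case.expected_keywords)

def run_eval(model: Callable[[str], str], cases: list[EvalCase], threshold: float = 0.5):
    """Score each case; failing cases feed the prompt-revision / fine-tuning loop."""
    results = [(case, keyword_score(model(case.prompt), case)) for case in cases]
    failures = [case for case, score in results if score < threshold]
    mean = sum(score for _, score in results) / len(results)
    return mean, failures

def fake_model(prompt: str) -> str:
    # Placeholder standing in for a real LLM call.
    return "Paris is the capital of France."

cases = [EvalCase("Capital of France?", ["Paris", "France"]),
         EvalCase("Capital of Japan?", ["Tokyo"])]
mean, failures = run_eval(fake_model, cases)
```

The `failures` list is the feedback loop: cases below the threshold become candidates for prompt revision or fine-tuning data.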


Prompt & Model Management: Design systems for versioning, AI governance, automated testing, and prompt quality scoring
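A toy sketch of prompt versioning with quality scoring, to illustrate the idea; the class and its methods are hypothetical, and real systems (for example MLflow or Langfuse) additionally track lineage, approvals, and evaluation history per version:

```python
import hashlib

class PromptRegistry:
    """Minimal prompt registry: content-hashed versions with quality scores."""

    def __init__(self):
        self._versions: dict[str, list[dict]] = {}

    def register(self, name: str, template: str, quality_score: float) -> str:
        """Store a prompt version keyed by a content hash so edits are auditable."""
        version_id = hashlib.sha256(template.encode()).hexdigest()[:12]
        self._versions.setdefault(name, []).append(
            {"id": version_id, "template": template, "score": quality_score}
        )
        return version_id

    def best(self, name: str) -> dict:
        """Return the highest-scoring version, e.g. for automated promotion."""
        return max(self._versions[name], key=lambda v: v["score"])

reg = PromptRegistry()
reg.register("summarize", "Summarize: {text}", quality_score=0.72)
reg.register("summarize", "Summarize in 3 bullets: {text}", quality_score=0.88)
```

Hashing the template makes versions content-addressed: any edit produces a new id, which is what makes automated testing and promotion reproducible.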


Scalable Deployment: Architect cloud-native and hybrid deployment strategies for large-scale inference
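One possible shape for such a deployment, sketched as Kubernetes manifests. This is a hedged illustration: the service name, image, and thresholds are placeholders, not from the posting, and GPU-aware scaling in practice often keys off custom metrics rather than CPU:

```yaml
# Illustrative only: GPU-backed inference Deployment plus an autoscaler.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference            # placeholder service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      containers:
        - name: server
          image: registry.example.com/llm-server:latest  # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1  # GPU-aware scheduling
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: llm-inference
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llm-inference
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```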


Must-Have Technical Skills


  • LLMOps frameworks: LangChain, MLflow, BentoML, Ray, Truss, FastAPI
  • Prompt evaluation and scoring systems: OpenAI evals, Ragas, Rebuff, Outlines
  • Cloud-native deployment: Kubernetes, Helm, Terraform, Docker, GitOps
  • ML pipelines: Airflow, Prefect; feature store: Feast
  • Data stack: Spark/Flink, Parquet/Delta, Lakehouse patterns
  • Cloud: Azure ML, GCP Vertex AI, AWS Bedrock/SageMaker
  • Languages: Python (must), Bash, YAML, Terraform HCL (preferred)

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!


Hiring Related Queries

India: HiringsupportIndia@fractal.ai


Outside India: HiringsupportROW@fractal.ai


This inbox does not process resume submissions; all applications must be made through posted job openings.


Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page, or create an account to set up email alerts for new job postings that match your interests!

