
Lead Quality Assurance Engineer- AI

Posted: 2026/08/27
Other Business Support Services

Job description

OPENTEXT - THE INFORMATION COMPANY



OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.




AI-First. Future-Driven. Human-Centered.



At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.




YOUR IMPACT



We are seeking a passionate and detail-oriented Lead Quality Assurance (QA) Engineer to join our AI Engineering and Enablement team.



In this role, you will be responsible for validating Generative AI systems, multi-agent workflows, and Retrieval-Augmented Generation (RAG) pipelines developed using frameworks like LangGraph, LangChain, and Crew AI.



You will work closely with AI engineers, data scientists, and product owners to ensure the accuracy, reliability, and performance of LLM-powered enterprise applications.




What The Role Offers



Be part of a next-generation AI engineering team delivering enterprise-grade GenAI solutions.
Gain hands-on experience testing LangGraph-based agentic workflows and RAG pipelines.
Learn from senior AI engineers working on production-grade LLM systems.
Opportunity to grow into AI Quality Specialist or AI Evaluation Engineer roles as the team expands.
Develop and execute test cases for validating RAG pipelines, LLM integrations, and agentic workflows.
Validate context retrieval accuracy, prompt behavior, and response relevance across different LLM configurations.
Conduct functional, integration, and regression testing for GenAI applications exposed via APIs and microservices.
Test Agent-to-Agent (A2A) & Model Context Protocol (MCP) communication flows for correctness, consistency, and task coordination.
Verify data flow and embedding accuracy between vector databases (Milvus, Weaviate, pgvector, Pinecone).
Build and maintain automated test scripts for evaluating AI pipelines using Python and PyTest.
Leverage LangSmith, Ragas, or TruLens for automated evaluation of LLM responses (factuality, coherence, grounding).
Integrate AI evaluation tests into CI/CD pipelines (GitLab/Jenkins) to ensure continuous validation of models and workflows.
Support performance testing of AI APIs and RAG retrieval endpoints for latency, accuracy, and throughput.
Assist in creating automated reports summarizing evaluation metrics such as Precision@K, Recall@K, grounding scores, and hallucination rates.
Validate guardrail mechanisms, response filters, and safety constraints to ensure secure and ethical model output.
Use OpenTelemetry (OTEL) and Grafana dashboards to monitor workflow health and identify anomalies.
Participate in bias detection and red teaming exercises to test AI behavior under adversarial conditions.
Work closely with AI engineers to understand system logic, prompts, and workflow configurations.
Document test plans, results, and evaluation methodologies for repeatability and governance audits.
Collaborate with Product and MLOps teams to streamline release readiness and model validation processes.
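To illustrate the kind of automated check the responsibilities above describe, here is a minimal PyTest-style sketch of a grounding test for a RAG pipeline. The `retrieve` and `generate_answer` functions are hypothetical stubs standing in for the real pipeline components; in practice they would call the deployed retriever and LLM endpoint, and a production check would use an evaluation framework such as Ragas rather than this naive word-overlap heuristic.

```python
# Minimal PyTest-style sketch of a grounding check for a RAG pipeline.
# `retrieve` and `generate_answer` are hypothetical stand-ins for real
# pipeline components; a real test would call the deployed system.

def retrieve(query: str) -> list[str]:
    # Stubbed retriever returning context chunks for the query.
    return [
        "OpenText is a global leader in information management.",
        "The company develops information management software.",
    ]

def generate_answer(query: str, context: list[str]) -> str:
    # Stubbed generator; a real test would call the LLM endpoint.
    return "OpenText develops information management software."

def test_answer_is_grounded_in_context():
    query = "What does OpenText do?"
    context = retrieve(query)
    answer = generate_answer(query, context)
    # Naive grounding heuristic: every content word of the answer
    # should appear somewhere in the retrieved context.
    context_text = " ".join(context).lower()
    missing = [w for w in answer.lower().rstrip(".").split()
               if w not in context_text]
    assert not missing, f"Ungrounded terms in answer: {missing}"
```

A test like this would typically be parameterized over a golden dataset of query/context/answer triples and run in CI on every pipeline change.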



What You Need To Succeed



Education: Bachelor’s degree in Computer Science, AI/ML, Software Engineering, or related field.
Experience: 7–10 years in Software QA or Test Automation, with at least 2 years of exposure to AI/ML or GenAI systems.
Solid hands-on experience with Python and PyTest for automated testing.
Basic understanding of LLMs, RAG architecture, and vector database operations.
Exposure to LangChain, LangGraph, or other agentic AI frameworks.
Familiarity with FastAPI, Flask, or REST API testing tools (Postman, PyTest APIs).
Experience with CI/CD pipelines (GitLab, Jenkins) for test automation.
Working knowledge of containerized environments (Docker, Kubernetes).
Understanding of AI evaluation metrics (Precision@K, Recall@K, grounding, factual accuracy).
Exposure to AI evaluation frameworks like Ragas, TruLens, or OpenAI Evals.
Familiarity with AI observability and telemetry tools (OpenTelemetry, Grafana, Prometheus).
Experience testing LLM-powered chatbots, retrieval systems, or multi-agent applications.
Knowledge of guardrail frameworks (Guardrails.ai, NeMo Guardrails).
Awareness of AI governance principles, data privacy, and ethical AI testing.
Experience with cloud-based AI services (AWS SageMaker, Azure OpenAI, GCP Vertex AI).
Curious and eager to learn emerging AI technologies.
Detail-oriented with strong problem-solving and analytical skills.
Excellent communicator who can work closely with engineers and product managers.
Passion for quality, reliability, and measurable AI performance.
Proactive mindset with ownership of test planning and execution.
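The retrieval metrics named above (Precision@K and Recall@K) are simple to compute once a ground-truth relevance set exists. The following sketch uses made-up document IDs purely for illustration: Precision@K is the fraction of the top-K retrieved chunks that are relevant, and Recall@K is the fraction of all relevant chunks that appear in the top-K.

```python
# Worked example of the retrieval metrics named in the posting.
# Document IDs and the relevance judgments are illustrative only.

def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    # Fraction of the top-K retrieved documents that are relevant.
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    # Fraction of all relevant documents found in the top-K.
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / len(relevant)

retrieved = ["d1", "d7", "d3", "d9", "d2"]   # ranked retriever output
relevant = {"d1", "d2", "d3"}                # ground-truth relevant set

print(precision_at_k(retrieved, relevant, 5))  # 3 hits in top 5 -> 0.6
print(recall_at_k(retrieved, relevant, 5))     # all 3 relevant found -> 1.0
```

Evaluation frameworks such as Ragas compute analogous context-precision and context-recall scores automatically over a test dataset.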





OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws.



If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.




