Submitting more applications increases your chances of landing a job.
Here’s how busy the average job seeker was last month:
Opportunities viewed
Applications submitted
Keep exploring and applying to maximize your chances!
Looking for employers with a proven track record of hiring women?
Click here to explore opportunities now!You are invited to participate in a survey designed to help researchers understand how best to match workers to the types of jobs they are searching for
Would You Be Likely to Participate?
If selected, we will contact you via email with further instructions and details about your participation.
You will receive a $7 payout for answering the survey.
Key Responsibilities
AI Assurance Architecture
* Architect platforms and frameworks for AI assurance, evaluation, and benchmarking
* Design systems for LLM, agent, and RAG evaluation across functional, non functional, and risk dimensions
* Define architectural patterns for Responsible AI, bias detection, explainability, and safety validation
* Build reusable assurance components supporting Business Assurance, Risk Assurance, and Reliability
Security, Reliability & Governance
* Architect AI testing and validation for security, privacy, prompt injection, and adversarial robustness
* Integrate red teaming, threat simulation, and chaos style validation for AI systems
* Define governance mechanisms for model usage, auditability, traceability, and compliance
* Ensure AI systems meet enterprise standards for resilience, fault tolerance, and observability
Platform & Engineering Enablement
* Design AI assurance platforms supporting automated test execution, reporting, and insights
* Enable integration with CI/CD pipelines to enforce AI quality gates
* Collaborate with QE engineering teams to embed AI assurance into the SDLC
* Mentor teams on AI risk identification and mitigation from an engineering perspective
Core Platforms, Frameworks & Tooling
* LLM and AI evaluation frameworks (PromptFoo, DeepEval, custom LLM evaluation harnesses)
* Prompt, RAG, and agent validation tooling (prompt testing frameworks, retrieval accuracy validators, agent workflow evaluators)
* Responsible AI and model risk tooling (Fairlearn, SHAP, Explainable AI libraries, toxicity and bias scanners)
* Security and adversarial testing tools for AI systems (PyRIT, Garak)
* AI red teaming and threat simulation frameworks (automated red team scripts, adversarial test suites for LLMs and agents)
* AI assurance automation and QE frameworks (Galileo)
* Observability for AI behavior and drift (Langfuse, Arize, Evidently, custom telemetry dashboards)
Client Orientation & Leadership
* Partner with product and engineering teams to identify AI Assurance opportunities and shape roadmaps
* Support client workshops, RFPs, and solution presentations
* Mentor engineers on AI/ML/Gen AI best practices and emerging technologies
* Translate complex AI concepts into business-friendly narratives
Must Have Qualifications
* 13+ years of experience in software engineering with 3+ years in AI with strong architecture ownership
* Hands on expertise in AI/ML systems, LLM evaluation, and assurance frameworks
* Experience with AI red teaming, model risk management, or AI audit tooling
* Strong understanding of Responsible AI, AI risks, and governance principles
* Experience with security testing, adversarial testing, and reliability engineering
* Proficiency in Python, automation frameworks, and cloud platforms
Good to Have Skills
* Knowledge of regulatory or compliance considerations for AI systems
* Exposure to performance engineering, chaos engineering, or resilience testing for AI
* Contributions to internal platforms, frameworks, or standards
You'll no longer be considered for this role and your application will be removed from the employer's inbox.