


Sr Staff ML Engineer - Production & MLOps Focus - GenAI Security Platform (Prisma AIRS, NetSec)

Posted 2026/09/01
Sector: Other Business Support Services

Job Description

Bengaluru, Karnataka, India · Product Engineering · Ref ID: JR-015011
Apply

Current Employees, apply here



Our Mission


At Palo Alto Networks®, we’re united by a shared mission—to protect our digital way of life. We thrive at the intersection of innovation and impact, solving real-world problems with cutting-edge technology and bold thinking. Here, everyone has a voice, and every idea counts. If you’re ready to do the most meaningful work of your career alongside people who are just as passionate as you are, you’re in the right place.


Who We Are


In order to be the cybersecurity partner of choice, we must trailblaze the path and shape the future of our industry. This is something our employees work at each day and is defined by our values: Disruption, Collaboration, Execution, Integrity, and Inclusion. We weave AI into the fabric of everything we do and use it to augment the impact every individual can have. If you are passionate about solving real-world problems and ideating beside the best and the brightest, we invite you to join us!


We believe collaboration thrives in person. That’s why most of our teams work from the office full time, with flexibility when it’s needed. This model supports real-time problem-solving, stronger relationships, and the kind of precision that drives great outcomes.

The Team


Engineering - The Engineering team is at the core of our products and services. We are a team of innovators, problem-solvers, and builders who are passionate about creating cutting-edge cybersecurity solutions. We work collaboratively to tackle complex challenges, from cloud-native security to threat intelligence and endpoint protection. Our work is critical to protecting our customers' digital way of life.

Job Summary

Join our team building a cutting-edge multi-tenant GenAI Security Platform that helps organizations validate and secure their AI systems against adversarial attacks. We're looking for a production-focused ML engineer who can both build ML systems and own their deployment at scale.
Key Responsibilities

  • Build and deploy LLM-based agents and multi-step evaluation workflows
  • Fine-tune models, optimize embeddings, and manage model weights and artifacts
  • Deploy and scale ML services on Kubernetes with proper monitoring and resource management
  • Implement experiment tracking, model versioning, and deployment automation
  • Develop observability dashboards for ML metrics, costs, latency, and quality
  • Optimize LLM API usage through caching, batching, and intelligent routing strategies
  • Manage vector database infrastructure and semantic search systems
  • Create CI/CD pipelines for ML artifacts and automated testing frameworks
  • Collaborate with ML researchers to productionize prototypes and scale experiments
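The "caching, batching, and intelligent routing" responsibility above can be illustrated with a minimal sketch of response caching. This is an assumption-laden example, not anything from the posting: `fake_llm` is a hypothetical stand-in for a real LLM API client, and the cache keys on a hash of the model name, prompt, and sampling parameters so that only truly identical requests are deduplicated.

```python
import hashlib
import json
from typing import Callable


class ResponseCache:
    """Minimal in-memory cache keyed on a hash of (model, prompt, params)."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model: str, prompt: str, **params) -> str:
        # sort_keys makes the key stable regardless of kwarg order.
        payload = json.dumps(
            {"model": model, "prompt": prompt, **params}, sort_keys=True
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_call(
        self, call: Callable[[str], str], model: str, prompt: str, **params
    ) -> str:
        key = self._key(model, prompt, **params)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = call(prompt)  # a real LLM API call would go here
        self._store[key] = result
        return result


def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for an LLM completion call.
    return f"echo: {prompt}"


cache = ResponseCache()
a = cache.get_or_call(fake_llm, "some-model", "hello", temperature=0.0)
b = cache.get_or_call(fake_llm, "some-model", "hello", temperature=0.0)
```

In production this cache would typically live in Redis (mentioned in the qualifications below) with a TTL, but the keying discipline is the same.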

Qualifications


Required Qualifications

  • 4+ years of ML engineering experience with hands-on LLM/NLP work
  • Practical experience building LLM-based applications (agents, multi-turn systems, evaluators)
  • Understanding of model fine-tuning, embedding optimization, and prompt engineering
  • Experience with LLM APIs (OpenAI, Anthropic, AWS Bedrock, Azure OpenAI)
  • Knowledge of LLM orchestration frameworks (LangChain, LlamaIndex, Pydantic AI, or custom solutions)
  • Familiarity with model architectures and when to fine-tune vs prompt engineer
  • Strong experience deploying ML models to production at scale
  • Experience with model serving frameworks (vLLM preferred; TensorRT-LLM, Ray Serve, or similar a plus)
  • Kubernetes and Docker proficiency for ML workload orchestration
  • Hands-on experience with ML experiment tracking and model versioning tools
  • Understanding of CI/CD for ML systems with automated testing and validation
  • Knowledge of distributed computing, async processing, and job queues
  • Experience with monitoring, observability, and cost optimization for ML systems
  • Proficiency with cloud platforms (GCP preferred, AWS/Azure acceptable)
  • Experience managing vector databases and similarity search at scale
  • Understanding of caching strategies (Redis) and data pipeline architectures
  • Knowledge of infrastructure-as-code and GitOps workflows
  • Expert Python skills (async/await, type hints, Pydantic, testing)
  • Experience with ML frameworks (PyTorch/TensorFlow helpful but not required)
  • SQL proficiency for analytics and data pipeline development
  • Strong software engineering practices (testing, code review, documentation)
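Two of the requirements above, "async processing" and "expert Python skills (async/await, type hints)", can be sketched together as bounded-concurrency fan-out over an LLM API. Everything here is illustrative: `fake_completion` is a hypothetical stand-in for an async client call, and the semaphore bound is an assumed rate-limit guard.

```python
import asyncio
from typing import Sequence


async def fake_completion(prompt: str) -> str:
    # Hypothetical stand-in for an async LLM API call.
    await asyncio.sleep(0)
    return prompt.upper()


async def batched(prompts: Sequence[str], max_concurrency: int = 4) -> list[str]:
    # Bound in-flight requests so a burst of prompts doesn't blow
    # through API rate limits.
    sem = asyncio.Semaphore(max_concurrency)

    async def one(p: str) -> str:
        async with sem:
            return await fake_completion(p)

    # gather preserves input order in its results.
    return await asyncio.gather(*(one(p) for p in prompts))


results = asyncio.run(batched(["a", "b", "c"]))
```

The same shape extends naturally to the retry, timeout, and cost-accounting concerns the posting's observability bullets describe.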


Preferred Qualifications

  • Experience with model training, LoRA, PEFT, or custom fine-tuning pipelines
  • Background in building multi-agent systems or complex LLM workflows
  • Knowledge of AI safety, adversarial ML, or security testing
  • Previous work optimizing LLM costs and latency at scale
  • Familiarity with graph databases or relationship modeling
  • Experience in high-scale production ML environments

Our Commitment


We’re trailblazers that dream big, take risks, and challenge cybersecurity’s status quo. It’s simple: we can’t accomplish our mission without diverse teams innovating, together.


We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com.


Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.


All your information will be kept confidential according to EEO guidelines.


Is role eligible for Immigration Sponsorship? No. Please note that we will not sponsor applicants for work visas for this position.



