Job Description

NVIDIA is seeking a dynamic and experienced Generative AI Solution Architect with specialized expertise in training Large Language Models (LLMs) and Agentic AI. As a key member of our AI Solutions team, you will play a pivotal role in architecting and delivering cutting-edge solutions that leverage the power of NVIDIA's generative AI technologies. This position requires a deep understanding of language models, particularly LLMs, and strong proficiency in designing and implementing agentic and RAG-based workflows.
What you will be doing:

  • Architect end-to-end generative AI solutions with a focus on LLM, agentic, and RAG workflows.
  • Collaborate closely with customers to understand their language-related business challenges and design tailored solutions.
  • Collaborate with sales and business development teams to support pre-sales activities, including technical presentations and demonstrations of LLM and RAG capabilities.
  • Work closely with NVIDIA engineering teams to provide feedback and contribute to the evolution of generative AI technologies.
  • Engage directly with customers to understand their language-related requirements and challenges.
  • Lead workshops and design sessions to define and refine generative AI solutions focused on LLMs and RAG workflows.
  • Lead the training and optimization of Large Language Models using NVIDIA's hardware and software platforms.
  • Implement strategies for efficient and effective training of LLMs to achieve optimal performance.
  • Design and implement RAG-based workflows to enhance content generation and information retrieval.
  • Work closely with customers to integrate RAG workflows into their applications and systems.
  • Stay abreast of the latest developments in language models and generative AI technologies.
  • Provide technical leadership and guidance on best practices for training LLMs and implementing RAG-based solutions.

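The RAG workflows referenced above pair a retrieval step with LLM generation: relevant documents are fetched first and then fed to the model as grounding context. As a rough, stdlib-only illustration of the retrieval half (the corpus, function names, and bag-of-words scoring are toy stand-ins; a production system would use dense embeddings and a vector index):

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a bag-of-words token count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k corpus documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "GPU clusters accelerate large language model training",
    "Retrieval augmented generation grounds answers in documents",
    "Kubernetes orchestrates containerized model deployment",
]
# The top hit would then be prepended to the LLM prompt as context.
print(retrieve("how does retrieval augmented generation work", corpus))
```

The generation half simply formats the retrieved text into the prompt; swapping the toy scorer for real embeddings leaves that overall shape unchanged.
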
What we need to see:

  • B.Tech, Master's, or Ph.D. in Computer Science, Artificial Intelligence, or equivalent experience.
  • 8+ years of hands-on experience in a technical role focused on generative AI, with a strong emphasis on training Large Language Models (LLMs).
  • Proven track record of successfully deploying and optimizing LLM models for inference in production environments.
  • In-depth understanding of state-of-the-art language models, including but not limited to GPT-3, BERT, or similar architectures.
  • Expertise in training and fine-tuning LLMs using popular frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.
  • Proficiency in model deployment and optimization techniques for efficient inference on various hardware platforms, with a focus on GPUs.
  • Strong knowledge of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference.
  • Excellent communication and collaboration skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
  • Experience leading workshops and training sessions, and presenting technical solutions to diverse audiences.

Ways to stand out from the crowd:

  • Proven ability to optimize LLM models for inference speed, memory efficiency, and resource utilization.
  • Familiarity with containerization technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes) for scalable and efficient model deployment.
  • Deep understanding of GPU cluster architecture, parallel computing, and distributed computing concepts.
  • Hands-on experience with NVIDIA GPU technologies and GPU cluster management, and the ability to design and implement scalable and efficient workflows for LLM training and inference on GPU clusters.
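
The distributed-computing item above rests on one core idea in data-parallel LLM training: each worker computes gradients on its own data shard, and an all-reduce averages them before the shared weights are updated. A GPU-free toy sketch of that step (the one-parameter model, data shards, and learning rate are made up purely for illustration):

```python
def local_gradient(w, batch):
    # Hypothetical per-worker gradient of squared error for a model y = w * x.
    return sum(2 * x * (w * x - y) for x, y in batch) / len(batch)

def all_reduce_mean(grads):
    # Average gradients across workers, as an NCCL all-reduce would on a GPU cluster.
    return sum(grads) / len(grads)

def train_step(w, shards, lr=0.01):
    # In practice each gradient is computed in parallel on its own GPU.
    grads = [local_gradient(w, shard) for shard in shards]
    return w - lr * all_reduce_mean(grads)

# Two "workers", each holding a shard of data generated from y = 3 * x.
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 2))  # → 3.0
```

Frameworks such as PyTorch DDP follow this same average-then-update pattern, just with tensors and collective communication in place of Python lists.
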


With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to unprecedented growth, our exclusive engineering teams are rapidly growing. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you!

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

