


Job Description

Project Description

We are seeking a Senior Data Engineer with strong hands-on expertise in Databricks, PySpark, and cloud-based data platforms to support the development, migration, and optimization of our enterprise data platform within the investment domain. This role will focus on building and maintaining scalable data pipelines and lakehouse data models that support investment analytics, portfolio management, risk analysis, and trading data workflows. The successful candidate will work closely with data engineers, quantitative analysts, and investment stakeholders to deliver high-quality, reliable, and performant data solutions. Experience with financial datasets such as market data, portfolio holdings, transactions, pricing data, and risk metrics is highly valuable.

Responsibilities

Data Engineering & Pipeline Development
- Build, optimize, and maintain end-to-end data pipelines using Databricks, PySpark, and SQL across the ingestion, curation, and consumption layers.
- Develop and manage Declarative Pipelines (e.g., Lakeflow / DLT-style pipelines) to support scalable incremental processing and operational reliability (see the sketch after the Responsibilities sections).
- Implement robust transformations and modelling patterns to deliver trusted datasets for downstream consumption (analytics, operations, reporting, applications).

Data Quality, Controls & Operational Excellence
- Implement data quality validation, monitoring, reconciliation, and alerting to ensure datasets meet required standards for completeness, accuracy, timeliness, and consistency.
- Debug pipeline failures, resolve production incidents, and continuously improve pipeline stability, performance, and cost efficiency.
- Apply best practices around auditability, lineage, and data correctness, particularly in time-series and historically tracked datasets.

Data Modelling & Domain Delivery
- Contribute to the design and evolution of data models supporting the organization's investment footprint (Public Markets, Private Markets, reference/master data, corporate actions, portfolio, pricing, risk, etc.).
- Partner with business stakeholders to translate requirements into implementable data solutions while preserving maintainability and governance standards.
- Support integration of multi-vendor and internal data sources into curated datasets that align with ADIA's operational and analytical needs.

Platform & Engineering Standards
- Follow and enhance engineering standards for version control, CI/CD, testing, documentation, and secure development practices.
- Optimize compute and storage usage through partitioning/clustering strategies, incremental patterns, and performance tuning.
- Contribute reusable libraries, patterns, templates, and approaches that improve delivery speed and consistency across the team.
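For illustration, here is a minimal sketch of what a DLT-style declarative pipeline with built-in data quality expectations can look like in PySpark on Databricks. It is not part of the posting: the table names, landing path, and quality rules are hypothetical, and the code only runs inside a Databricks DLT pipeline, where the `spark` session is provided.

```python
# Minimal DLT-style declarative pipeline sketch (hypothetical names and paths).
# Illustrates ingestion -> curation with built-in data quality expectations.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw market prices ingested incrementally from cloud storage.")
def raw_prices():
    # Auto Loader picks up newly arrived files incrementally (assumed landing path).
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/market_prices/")  # hypothetical path
    )

@dlt.table(comment="Curated prices with basic completeness/accuracy checks.")
@dlt.expect_or_drop("valid_price", "price > 0")
@dlt.expect_or_drop("valid_instrument", "instrument_id IS NOT NULL")
def curated_prices():
    # Rows failing an expectation are dropped and counted in pipeline metrics,
    # which supports the monitoring and reconciliation duties described above.
    return (
        dlt.read_stream("raw_prices")
        .withColumn("ingested_at", F.current_timestamp())
        .select("instrument_id", "price", "price_date", "ingested_at")
    )
```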
Skills

Must have
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related discipline.
- 5+ years of experience in data engineering roles (flexible based on depth of capability).
- Strong hands-on experience with Databricks in production environments (prerequisite).
- Strong programming experience with PySpark (must) and strong SQL (must).
- Proven experience with Declarative Pipelines / pipeline orchestration on Databricks (prerequisite).
- Strong understanding of data engineering fundamentals: ingestion patterns, transformation design, incremental processing, testing, performance tuning.
- Experience delivering production-ready datasets with appropriate operational controls (monitoring, troubleshooting, reliability patterns).
- Experience with modern Lakehouse concepts (Delta tables, optimization strategies, file skipping, metadata/statistics awareness); see the maintenance sketch below.
- Exposure to data governance practices: cataloguing, documentation, business glossary/terms, lineage.
- Experience working in enterprise environments with CI/CD pipelines and structured release processes.
- Familiarity with vendor market data feeds (e.g., Bloomberg, Refinitiv, MSCI, FactSet) or similar multi-source mastering patterns.

Nice to have
- Strong hands-on expertise in Palantir Foundry, with proven experience in Foundry pipelines, ontologies, data lineage, transformations, and platform governance.
- Proven migration experience from Palantir to Databricks: leading or executing platform migrations, including pipeline conversion, data model redesign, and production cutover.
- Familiarity with Dynatrace or Datadog for system observability and monitoring.
- Databricks certification, cloud certifications (Azure/AWS), or enterprise data architecture certifications.

Languages
English: C1 Advanced

Seniority
Senior
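To ground the Lakehouse items in the must-have list (Delta tables, optimization strategies, file skipping), below is a short sketch of routine Delta maintenance issued from PySpark. The table and column names are hypothetical, and this assumes a Databricks environment with Delta Lake.

```python
# Hypothetical Delta Lake maintenance sketch for a Databricks environment.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a common filter column so Delta's
# per-file min/max statistics can skip irrelevant files at query time.
spark.sql("OPTIMIZE curated_prices ZORDER BY (instrument_id)")

# Remove data files no longer referenced by the transaction log
# (subject to the default retention window).
spark.sql("VACUUM curated_prices")
```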


