The Data Engineer is responsible for designing, building, and maintaining reliable, scalable data pipelines that support analytics and reporting use cases. The role requires consistent application of enterprise frameworks, data quality controls, and security standards while owning components end to end. Data Engineers collaborate closely with platform, analytics, and business teams and are expected to continuously optimize the performance, reliability, and cost efficiency of data solutions. A solid understanding of the Lakehouse architecture and the medallion (bronze/silver/gold) layering pattern is expected.
A. Data Pipeline Development (Primary Responsibility)
• Design, build, and maintain scalable, reliable data ingestion and transformation pipelines across curated, analytics, and reporting layers.
• Develop ELT workflows using standardized enterprise frameworks, ensuring consistency, reusability, and alignment with platform standards.
• Assemble, process, and optimize large and complex datasets from diverse internal and external data sources.
• Implement CDC (change data capture) and delta-based updates for incremental processing.
• Demonstrate understanding of different data formats such as Parquet and Delta.
• Implement automated validations, reconciliations, and error handling mechanisms to ensure data accuracy and reliability.
• Support pipeline deployment, scheduling, and operational stability across non-production and production environments.
• Working knowledge of streaming pipelines using Event Hub / Kafka / Structured Streaming is strongly preferred.
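The CDC and delta-based update bullets above can be illustrated with a minimal sketch. This is plain Python that mimics the match/update/insert/delete semantics behind a Delta Lake `MERGE INTO`, not the Delta implementation itself; the `op`, `id`, and `name` fields are hypothetical.

```python
# Minimal sketch of CDC upsert semantics (plain Python, no Spark required).
# On Databricks this logic would typically be a Delta Lake MERGE INTO; here
# we mimic the same key-matched update/insert/delete over in-memory rows.

def apply_cdc(target: dict, changes: list) -> dict:
    """Apply a CDC change feed to a target table keyed by 'id'.

    Each change row carries an 'op' field: 'upsert' or 'delete'.
    """
    result = dict(target)  # work on a copy; the input table is unchanged
    for change in changes:
        key = change["id"]
        if change["op"] == "delete":
            result.pop(key, None)               # tombstone: remove the row
        else:
            row = {k: v for k, v in change.items() if k != "op"}
            result[key] = row                   # insert or overwrite (upsert)
    return result

if __name__ == "__main__":
    target = {1: {"id": 1, "name": "a"}, 2: {"id": 2, "name": "b"}}
    changes = [
        {"op": "upsert", "id": 2, "name": "b2"},   # update existing row
        {"op": "upsert", "id": 3, "name": "c"},    # insert new row
        {"op": "delete", "id": 1},                 # delete row 1
    ]
    print(apply_cdc(target, changes))
    # {2: {'id': 2, 'name': 'b2'}, 3: {'id': 3, 'name': 'c'}}
```

The same shape scales to incremental processing: only the change rows are read and applied, rather than reloading the full table.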
B. Platform, Security & Governance Alignment
• Embed data quality rules, security controls, and documentation into pipelines by design.
• Ensure datasets adhere to access controls, RBAC policies, and PII handling standards.
• Strong understanding of Unity Catalog (Databricks).
• Apply approved ingestion and transformation patterns that meet enterprise governance and audit requirements.
• Identify and remediate data quality or performance issues, escalating risks where appropriate.
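The validation and reconciliation bullets can be sketched as minimal checks in plain Python. The rules shown (non-null key, uniqueness, row-count reconciliation) are illustrative examples only; a production pipeline would embed such checks in the framework and route failures to monitoring.

```python
# Minimal sketch of automated data-quality validation and reconciliation.
# The specific rules and field names here are illustrative assumptions.

def validate_rows(rows: list, key: str) -> list:
    """Return a list of human-readable data-quality errors."""
    errors = []
    seen = set()
    for i, row in enumerate(rows):
        value = row.get(key)
        if value is None:
            errors.append(f"row {i}: null {key}")               # completeness
        elif value in seen:
            errors.append(f"row {i}: duplicate {key}={value}")  # uniqueness
        else:
            seen.add(value)
    return errors

def reconcile_counts(source_count: int, target_count: int) -> bool:
    """Row-count reconciliation between source extract and loaded target."""
    return source_count == target_count

if __name__ == "__main__":
    rows = [{"id": 1}, {"id": None}, {"id": 1}]
    print(validate_rows(rows, "id"))  # ['row 1: null id', 'row 2: duplicate id=1']
    print(reconcile_counts(3, 3))     # True
```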
C. Collaboration & Continuous Improvement
• Collaborate closely with platform, analytics, and business teams, and continuously improve the performance, reliability, and cost efficiency of data solutions.
A. Required / Must Have Skills
Programming
• Strong working knowledge of Python for building and maintaining data pipelines.
Databases
• Proficient SQL skills, including complex joins, aggregations, and performance-aware queries.
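As a small self-contained illustration of the SQL skills named above (a join plus an aggregation), the query below runs against an in-memory SQLite database; the table and column names are made up for the example, and the same query shape applies on any warehouse engine.

```python
# Illustrative SQL: join orders to customers and aggregate spend per customer.
# SQLite in-memory keeps the example self-contained and runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# LEFT JOIN keeps customers with no orders; SUM + GROUP BY aggregates spend,
# and COALESCE turns the NULL sum for order-less customers into 0.
query = """
    SELECT c.name, COALESCE(SUM(o.amount), 0) AS total
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('Alice', 15.0), ('Bob', 7.5)]
```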
Cloud & Data Platforms
• Hands on experience with Azure Data Factory (ADF) for orchestration and scheduling
• Practical experience using Azure Databricks in production environments
• Strong understanding of Delta Lake (Delta tables, ACID transactions, Time Travel, etc.).
• Proficient in using ADLS, Azure Event Hub, etc.
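Delta Lake's Time Travel makes every committed table version addressable (e.g. `SELECT ... VERSION AS OF n` on Databricks). As a plain-Python sketch of that idea only, not of Delta's actual transaction log, a versioned table can be modeled as an append-only list of immutable snapshots:

```python
# Plain-Python sketch of the versioning idea behind Delta Lake Time Travel:
# every committed write produces a new immutable snapshot, and any past
# version remains readable by its version number.

class VersionedTable:
    def __init__(self):
        self._versions = [[]]          # version 0 is the empty table

    def commit(self, rows: list) -> int:
        """Commit a new snapshot; returns the new version number."""
        self._versions.append(list(rows))
        return len(self._versions) - 1

    def read(self, version=None) -> list:
        """Read the latest snapshot, or 'time travel' to an older version."""
        if version is None:
            version = len(self._versions) - 1
        return self._versions[version]

if __name__ == "__main__":
    t = VersionedTable()
    v1 = t.commit([{"id": 1}])
    v2 = t.commit([{"id": 1}, {"id": 2}])
    print(t.read())    # latest version: two rows
    print(t.read(v1))  # time travel back to version 1: one row
```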
ETL / ELT Engineering
• Ability to design and implement ELT workflows across multiple data sources
Data Modelling
• Working knowledge of conceptual and logical data modelling for analytics use cases
Frameworks
• Proficient use of Spark / PySpark / SparkSQL for distributed processing
DevOps
• Good working knowledge of Azure DevOps / GitHub Actions and pipeline deployment automation.
B. Desirable / Nice to Have Skills
• FinOps and cloud cost optimization concepts.
• Azure Analysis Services