
Job Description

Full‑Stack Data Engineer

We are seeking a highly self‑sufficient, motivated engineer with strong full‑stack data engineering skills to join our innovative, dynamic team. This is a remote/offshore role that requires autonomy, excellent communication, and the ability to deliver high‑quality work with limited supervision while collaborating with a predominantly US‑based team. You will build reliable, scalable data products and user experiences that power AI/ML modeling, agentic workflows, and reporting—working end‑to‑end from data ingestion and transformation through to UI.


Our Python‑based data platform is undergoing a major evolution toward a modern, cloud‑native ELT architecture. We are standardizing on Snowflake as our central data platform and dbt as our core transformation framework, implementing scalable, maintainable ELT practices that simplify ingestion, modeling, and deployment. This role will be pivotal in independently designing and building robust data pipelines and semantic layers that directly power our AI and machine learning initiatives—delivering clean, reliable, and well‑modeled data assets to our data science team for feature engineering, model training, and production inference. You will collaborate closely (primarily via remote channels) with data scientists and ML engineers to ensure our data ecosystem is optimized for experimentation speed, model performance, and seamless integration into downstream products and services.
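As a rough illustration of the ELT pattern described above (all table and field names here are hypothetical, not the company's actual models), a silver‑layer transformation in this style takes raw ingested (bronze) records and emits cleaned, typed rows—the kind of normalization a dbt model would express in SQL against Snowflake:

```python
from datetime import datetime


def to_silver(raw_rows):
    """Clean raw (bronze) records into typed silver-layer rows.

    Illustrative sketch only: drops rows missing a primary key, trims
    and normalizes strings, and parses ISO timestamps. Bronze keeps
    everything as ingested; silver enforces keys and types.
    """
    silver = []
    for row in raw_rows:
        if not row.get("id"):
            continue  # silver enforces a non-null primary key
        silver.append({
            "id": str(row["id"]).strip(),
            "symbol": (row.get("symbol") or "").strip().upper(),
            "ingested_at": datetime.fromisoformat(row["ingested_at"]),
        })
    return silver
```

Downstream (gold) models and the semantic layer would then aggregate these cleaned rows into the well‑modeled assets consumed by data science for feature engineering and inference.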


Key Responsibilities


  • Remote collaboration & communication:
    Operate effectively as an offshore member of a distributed team, proactively communicating status, risks, and blockers across time zones and coordinating overlap with US working hours as needed.


  • Full‑stack data engineering:
    Build across the entire stack, including data ingestion/acquisition and transformation, APIs, front‑end components, and automated test suites, delivering production‑grade solutions with minimal hand‑holding.


  • Autonomous delivery & ownership:
    Take end‑to‑end ownership of features and projects—clarifying requirements, breaking work into milestones, estimating timelines, and delivering high‑quality, well‑documented solutions.


  • Specification and design:
    Translate short‑ and long‑term business requirements, architectural considerations, and competing timelines into clear, actionable technical specifications and design documents.


  • Code quality:
    Write clean, maintainable, efficient code that adheres to evolving standards and quality processes, including unit tests and isolated integration tests in containerized environments.


  • Continuous improvement:
    Contribute to agile practices and provide input on technical strategy, architectural decisions, and process improvements, continuously suggesting better tools, patterns, and automation.


Required Skills & Experience


  • Professional experience:
    5+ years in software engineering, with a full‑stack background building complex, scalable data‑engineering pipelines using data warehouse technology, SQL with dbt, Python, AWS with Terraform, and modern UI technologies.


  • Modern data engineering:
    Strong experience with medallion data architecture patterns using data warehouse technologies (e.g., Snowflake), data transformation tooling (e.g., dbt), BI tooling, and NoSQL data marts (e.g., Elasticsearch/OpenSearch).


  • Testing and QA:
    Solid understanding of unit testing, CI/CD automation, and quality assurance processes, covering both data pipeline code and operational data quality checks.


  • Remote work & autonomy:
    Proven track record working in a remote or distributed environment, demonstrating self‑motivation, reliable execution, and the ability to make sound technical decisions independently.


  • Agile methodology:
    Working knowledge of Agile development practices and workflows (e.g., sprint planning, stand‑ups, retrospectives) in a distributed team setting.


  • Education:
    Bachelor’s or Master’s degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.
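The operational data‑quality checks called out in the testing requirement above can be sketched as simple constraint audits—analogous to dbt's built‑in `not_null` and `unique` schema tests (the function and key names here are hypothetical):

```python
def dq_violations(rows, key):
    """Return data-quality violations for a not-null + unique
    constraint on `key`, the two most common schema tests.

    Illustrative sketch: in a dbt project these would be declared
    in YAML and compiled to SQL, not hand-written in Python.
    """
    nulls = [r for r in rows if r.get(key) is None]
    seen, dupes = set(), []
    for r in rows:
        k = r.get(key)
        if k is None:
            continue  # already counted as a not-null violation
        if k in seen:
            dupes.append(r)
        seen.add(k)
    return {"not_null": nulls, "unique": dupes}
```

A pipeline gate would fail the run (or quarantine the offending rows) whenever either list is non‑empty.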


Preferred Skills & Experience


  • Machine learning and AI:
    Hands‑on experience with large language models (LLMs) and agentic frameworks/workflows.


  • Search and analytics:
    Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for search and analytics solutions.


  • Cloud expertise:
    Experience with AWS cloud services; familiarity with SageMaker; and CI/CD tooling such as GitHub Actions or Jenkins.


  • Front‑end expertise:
    Experience building user interfaces with Angular or a modern UI stack.


  • Financial domain knowledge:
    Broad understanding of equities, fixed income, derivatives, futures, FX, and other financial instruments.


This job posting was translated by AI and may contain minor differences or errors.
