Data Engineer | GCP, Big Data, Spark, ETL/ELT, Data Security, Cloud Infrastructure

Posted 30+ days ago · 2026/08/26
Other Business Support Services

Job Description

Job Summary
Synechron is seeking a highly skilled Data Engineer specializing in Google Cloud Platform (GCP) and Big Data technologies to architect, develop, and optimize scalable data pipelines and data management solutions. This role is pivotal in supporting enterprise data initiatives, ensuring data quality, security, and accessibility for critical business insights. The successful candidate will collaborate across teams to drive data platform innovations and operational excellence aligned with organizational goals.


Software Requirements


  • Required:


    • Extensive experience with GCP services including BigQuery, Dataflow, Cloud Storage, and Cloud Pub/Sub


    • Strong proficiency in Apache Spark for distributed data processing and analytics


    • Hands-on expertise in building and maintaining data pipelines using ETL/ELT processes


    • Proficiency in Python for data scripting, automation, and orchestration tasks


    • Experience with distributed data storage and management, including relational and NoSQL databases (PostgreSQL, MySQL, MongoDB)


    • Familiarity with version control tools such as Git


    • Knowledge of Linux/Unix environments for data processing and scripting


  • Preferred:


    • Experience with data governance, metadata management, and data security best practices


    • Knowledge of data orchestration tools like Apache Airflow or Prefect


    • Understanding of containerization (Docker) and orchestration (Kubernetes) for data deployment
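The ETL/ELT pipeline work described above can be illustrated with a minimal, library-free sketch of the extract → transform → load pattern. Plain Python and an in-memory SQLite table stand in for the real sources and warehouse (e.g. Dataflow and BigQuery); all names and data here are illustrative, not part of any Synechron system:

```python
import sqlite3

# Illustrative raw records, standing in for rows extracted from a source system.
RAW_ROWS = [
    {"id": "1", "amount": "19.99", "country": "US"},
    {"id": "2", "amount": "5.50", "country": "de"},
    {"id": "3", "amount": "bad", "country": "US"},  # malformed row to be rejected
]

def extract():
    """Extract: yield raw records from the (simulated) source."""
    yield from RAW_ROWS

def transform(rows):
    """Transform: validate, cast types, and normalize values; drop bad rows."""
    for row in rows:
        try:
            yield (int(row["id"]), float(row["amount"]), row["country"].upper())
        except ValueError:
            continue  # a production pipeline would route rejects to a dead-letter sink

def load(records, conn):
    """Load: write cleaned records into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
loaded = conn.execute("SELECT id, amount, country FROM sales ORDER BY id").fetchall()
print(loaded)  # the malformed row is dropped during transform
```

The same three-stage shape carries over to distributed tooling: in Spark or Dataflow each stage becomes a parallel transformation over partitioned data rather than a generator over a list.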


Overall Responsibilities


  • Design, develop, and maintain scalable, resilient data pipelines on GCP to support enterprise analytics and reporting solutions


  • Collaborate with business stakeholders, data scientists, and analytics teams to understand data requirements and implement optimized data workflows


  • Implement and enforce data quality, security, and governance standards across platforms


  • Optimize data ingestion, transformation, and processing workflows to ensure high performance and cost efficiency


  • Perform data profiling, troubleshooting, and resolution of pipeline issues to ensure operational reliability


  • Stay current with emerging data engineering best practices, tools, and industry trends, leading continuous improvement initiatives


Technical Skills (By Category)


  • Programming Languages:
    Required: Python for scripting and orchestration
    Preferred: SQL, Java (for integration or data processing tasks)


  • Databases & Data Management:
    BigQuery, PostgreSQL, MySQL, MongoDB, data modeling, data security, and query optimization


  • Cloud Technologies:
    GCP services including BigQuery, Dataflow, Cloud Storage, Pub/Sub, IAM, and Cloud Functions deployment


  • Frameworks & Libraries:
    Apache Spark, Dataflow, Airflow (preferred), TensorFlow or PyTorch (if ML integrations are involved)


  • Development & Orchestration Tools:
    Git, Jenkins, Docker, Kubernetes, Terraform, Apache Airflow, and other CI/CD tools


  • Security & Compliance:
    Implementing security policies, data encryption, access controls, and compliance standards such as GDPR or HIPAA
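As a rough illustration of the data-security practices named above, the sketch below pseudonymizes a PII column with a keyed hash (HMAC-SHA-256 from the Python standard library). The key, field names, and sample data are hypothetical stand-ins, not a prescribed approach; a real deployment would fetch the key from a secret manager and pair masking with IAM access controls:

```python
import hashlib
import hmac

# Hypothetical secret; in practice sourced from a secret manager, never hard-coded.
SECRET_KEY = b"example-rotation-key"

def pseudonymize(value: str) -> str:
    """Replace a PII value with a stable keyed digest: joins and group-bys
    still work, but the raw value cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

rows = [
    {"user_email": "a@example.com", "amount": 10.0},
    {"user_email": "b@example.com", "amount": 7.5},
]

# Mask the PII column while leaving the rest of each record untouched.
masked = [{**r, "user_email": pseudonymize(r["user_email"])} for r in rows]
print(masked)
```

Because the digest is deterministic for a given key, aggregations per user remain possible on the masked data, which is what distinguishes pseudonymization from one-way redaction under regimes such as GDPR.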


Experience Requirements


  • Minimum of 6 years of practical experience in data engineering, with a significant focus on GCP and Big Data ecosystems


  • Hands-on experience designing and implementing end-to-end data pipelines and workflows at scale


  • Proven expertise in distributed data processing, data security, and infrastructure automation


  • Experience working in agile teams supporting enterprise or large-scale data platforms


  • Industry experience in finance, healthcare, retail, or technology sectors is advantageous


Day-to-Day Activities


  • Develop and optimize data pipelines ensuring high performance, data quality, and security


  • Collaborate with analytics, data science, and cross-functional teams to translate business requirements into scalable data solutions


  • Automate data workflows, manage infrastructure as code, and support deployment pipelines


  • Perform data profiling and troubleshooting to resolve pipeline issues promptly


  • Implement security best practices for data access, encryption, and compliance


  • Document architecture, workflows, procedures, and operational guidelines


  • Participate in sprint planning, reviews, and continuous improvement initiatives


Qualifications


  • Bachelor’s or Master’s degree in Computer Science, Data Science, Information Technology, or a related field


  • Extensive experience with GCP and Big Data processing frameworks such as Hadoop, Spark, and Dataflow


  • Certifications in GCP (e.g., Professional Data Engineer), AWS, or Azure are a plus


  • Proven ability to develop robust, scalable, and secure data pipelines supporting enterprise analytics


Professional Competencies


  • Strong analytical and problem-solving skills, especially related to distributed data systems


  • Excellent collaboration and communication skills to work effectively across teams and stakeholders


  • Leadership qualities to guide data engineering best practices and mentor junior team members


  • Strategic mindset to align data platform development with organizational goals


  • Continuous learner to stay updated with evolving data technologies and industry standards


  • Effective time management skills to handle multiple data projects simultaneously


SYNECHRON'S DIVERSITY & INCLUSION STATEMENT


Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, 'Same Difference', is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.



All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.


Candidate Application Notice


This job posting was translated by artificial intelligence and may contain minor discrepancies or errors.
