
Senior Data Engineer (PySpark)

Posted 30+ days ago (2026/09/03)
Industry: Other Business Support Services

Job description

Job Title: Senior Data Engineer (PySpark)

We are seeking a highly skilled Senior Data Engineer with strong expertise in PySpark and data modelling to join our Digital Platforms delivery team. The ideal candidate will design, build, and optimize scalable data pipelines and contribute to enterprise-grade data architecture supporting analytics, reporting, and digital banking initiatives.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines using PySpark and distributed data processing frameworks
- Build and optimize ETL/ELT workflows for large-scale structured and unstructured datasets
- Implement robust data models (dimensional, relational, and data vault) aligned with business requirements
- Work closely with stakeholders, data analysts, and business teams to translate requirements into technical solutions
- Ensure data quality, governance, and consistency across multiple data sources
- Optimize performance of Spark jobs and handle large datasets efficiently
- Collaborate with DevOps teams for deployment, CI/CD, and monitoring of data pipelines
- Troubleshoot data issues and ensure high availability and reliability of data systems
- Contribute to data platform architecture and best practices within the Data Engineering chapter

Required Skills & Experience
- 5–8+ years of experience in data engineering or related roles
- Strong hands-on experience with PySpark / Apache Spark
- Expertise in data modelling (Star Schema, Snowflake, Data Vault)
- Experience with big data ecosystems (Hadoop, Hive, Spark, Kafka preferred)
- Proficiency in Python and SQL
- Experience with cloud platforms (AWS / Azure / GCP, any one preferred)
- Strong understanding of ETL frameworks and data warehousing concepts
- Experience working in agile environments

Good to Have
- Experience in the banking or financial services domain
- Knowledge of real-time data processing / streaming (Kafka, Spark Streaming)
- Exposure to data governance and security practices
- Familiarity with tools like Airflow, Databricks, or similar platforms
This job post has been translated by AI and may contain minor differences or errors.
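The responsibilities above center on PySpark ETL and dimensional (star-schema) modelling. As a rough illustration of the star-schema pattern the posting names — a fact table enriched by joining it to dimension tables — here is a minimal plain-Python sketch. All table names and fields are invented for illustration; a production pipeline would express the same joins as PySpark DataFrame operations rather than dictionary lookups.

```python
# Illustrative star-schema join: one fact table, two dimension tables.
# Keys and fields below are hypothetical, chosen only to show the shape
# of the pattern (fact rows reference dimensions by surrogate key).

dim_customer = {
    1: {"customer": "Alice", "segment": "retail"},
    2: {"customer": "Bob", "segment": "corporate"},
}

dim_product = {
    10: {"product": "savings", "category": "deposit"},
    11: {"product": "loan", "category": "credit"},
}

fact_txn = [
    {"customer_id": 1, "product_id": 10, "amount": 250.0},
    {"customer_id": 2, "product_id": 11, "amount": 900.0},
    {"customer_id": 1, "product_id": 11, "amount": 120.0},
]

def enrich(facts, customers, products):
    """Join each fact row to its dimensions -- the core star-schema lookup."""
    for row in facts:
        yield {
            **row,
            **customers[row["customer_id"]],
            **products[row["product_id"]],
        }

report = list(enrich(fact_txn, dim_customer, dim_product))
# Each report row now carries both measures (amount) and descriptive
# attributes (segment, category) for downstream aggregation.
```

In PySpark the same idea would typically be two `join` calls between DataFrames keyed on `customer_id` and `product_id`; the dictionary form above just keeps the example self-contained.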
