

Data Analysis and Simulation Professional

Posted: 2026/09/03
Industry: Other Business Support Services

Job description

We are looking for a candidate who is a hands-on Data Engineer with 3–5 years of experience building scalable batch and real-time data pipelines and delivering production-grade data products on modern cloud platforms. The candidate will bring strong programming and data-querying skills, along with a proven ability to improve performance through efficient storage and processing design, and is comfortable defining data quality checks, implementing automated tests, and setting up reliable release processes to support continuous delivery. We expect a strong problem-solver with clear communication, collaborative teamwork, analytical thinking, and a structured, quality-focused approach.

Your tasks and responsibilities:

- You build and optimize scalable batch and streaming data pipelines using Spark 3.x with RDDs, DataFrames, Spark SQL, and Structured Streaming.
- You develop and maintain robust Lakehouse solutions based on the Medallion Architecture using Parquet and Delta formats (a minimal sketch follows the qualifications list below).
- You work with Databricks capabilities such as Workflows, SQL Warehouses/Endpoints, Delta Live Tables, Pipelines, Unity Catalog, and Auto Loader.
- You contribute to data storage and partitioning strategies considering distribution, data skew, compaction, and overall big data storage efficiency.
- You contribute to development using Python and functional programming principles and collaborate effectively with engineering teams using IntelliJ, PyCharm, Git, Azure DevOps, and GitHub Copilot.
- You implement data quality and test strategies using pytest and Great Expectations.
- You build and maintain CI/CD pipelines using Azure Pipelines YAML, ensuring continuous delivery and acceptance testing.
- You collaborate with architects, analysts, and product teams to define scalable data models and reliable data processing solutions.
- You support modern data platform evolution by contributing to best practices in data governance, storage design, and platform standardization.

To find out more about the specific business, have a look at Magnetic Resonance Imaging.

Your qualifications and experience:

- You have successfully completed a university degree (Bachelor's/Master's/PhD) in Computer Science, Data Engineering, Data Science, Information Technology, or a similar field.
- You have 3 to 5 years of professional experience in big data engineering, distributed data processing, and cloud-based data platform development.
- You have strong hands-on experience with Spark 3.x, including batch and streaming workloads using RDDs, DataFrames, and SQL.
- You have solid experience with Databricks, including Workflows, SQL Warehouses, DLT, Pipelines, Unity Catalog, and Auto Loader.
- You have strong knowledge of Lakehouse and Medallion Architecture, Delta/Parquet formats, partitioning, distribution, compaction, and performance tuning.
- You are proficient in Python and have a good understanding of functional programming concepts.
- You have strong SQL skills, including Spark SQL, HiveQL, and T-SQL.
- You have practical experience in CI/CD, Azure Pipelines YAML, continuous delivery, and testing practices for data platforms.
- You have experience with data validation and testing frameworks such as pytest and Great Expectations.
- It is beneficial if you also have exposure to ADF, Synapse Pipelines, Airflow, Oozie, Scala, Java, NoSQL databases, Hadoop ecosystem components, data catalog tools, or cube technologies such as SSAS/AAS/Tabular.
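To make the stack above concrete, here is a minimal sketch, assuming a Databricks workspace with Auto Loader available, of a bronze-to-silver step in a Medallion layout: raw files are ingested incrementally into a bronze Delta table, then cleaned, deduplicated, and partitioned into silver. All paths, column names, and schema details (landing_path, event_id, event_ts, and so on) are hypothetical placeholders for illustration, not details taken from this posting.

```python
# Illustrative sketch only: bronze -> silver Medallion step on Databricks.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

landing_path = "/mnt/landing/events"   # raw files dropped by upstream systems
bronze_path = "/mnt/bronze/events"     # raw ingested Delta table
silver_path = "/mnt/silver/events"     # cleaned, deduplicated Delta table

# Bronze: incrementally ingest raw JSON files with Auto Loader (cloudFiles).
bronze_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", f"{bronze_path}/_schema")
    .load(landing_path)
)

(
    bronze_stream.writeStream.format("delta")
    .option("checkpointLocation", f"{bronze_path}/_checkpoint")
    .trigger(availableNow=True)        # process available files, then stop
    .start(bronze_path)
    .awaitTermination()
)

# Silver: basic cleansing -- drop malformed rows, deduplicate, partition by date.
silver_df = (
    spark.read.format("delta").load(bronze_path)
    .where(F.col("event_id").isNotNull())             # simple quality gate
    .dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))  # partition column
)

(
    silver_df.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")   # date partitioning aids pruning on bounded queries
    .save(silver_path)
)
```

In the same spirit, the pytest and Great Expectations checks mentioned above would typically sit around the silver step, asserting null rates, uniqueness, and schema expectations before data is promoted further.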
Your attributes and skills:

- Since the development teams are spread internationally across multiple locations, communicating in English is not a problem for you.
- You are proactive in solving complex engineering problems and delivering reliable, high-quality data solutions.
- You present your ideas and technical results confidently and convincingly in cross-functional development teams.
- Personally, you are characterized by strong teamwork and cooperation skills, analytical thinking, and a structured way of working.
- You are passionate about scalable data systems, engineering excellence, and continuous improvement.
- The highest quality standards in product development are a matter of course for you.
This job post has been translated by AI and may contain minor differences or errors.
