Remote
Other Business Support Services

Job description

Company Summary
Irth Solutions is a software product company building cutting-edge technology platforms that continuously set industry benchmarks.
With a strong product culture, collaborative environment, and high growth potential, Irth offers an exciting opportunity to work on modern, enterprise-scale data platforms.
Irth is currently building a modern, multi-cloud, enterprise-grade data estate using Databricks to centralize data across AWS, Azure, and GCP.
We are hiring a Data Engineer to support the implementation and operationalization of this unified data platform.
Job Summary
Irth is developing a unified Databricks-based Lakehouse platform to consolidate and govern data across multiple cloud environments and products.
As a Data Engineer, you will play a hands-on role in implementing scalable data pipelines, applying governance and quality standards, and supporting the enterprise data architecture.
You will work closely with the Senior Data Architect to build reliable, secure, and high-performance data pipelines using Databricks, Spark, and cloud-native technologies.
This role is ideal for mid-level engineers looking to expand their expertise in Databricks, Spark, Delta Lake, and modern lakehouse architecture.
WHAT IS IN IT FOR YOU
· Being an integral part of a dynamic, growing company that is well respected in its industry.
· Competitive pay based on experience.
Data Pipeline Development
· Build and maintain ingestion pipelines across AWS, Azure, and GCP.
· Implement batch and streaming pipelines using Databricks, Spark/PySpark, SQL, Delta Live Tables, and Lakeflow.
· Apply medallion architecture patterns for structured data transformation.
· Implement CDC, SCD Type 1/2, schema evolution, and data validation.
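Of the pipeline tasks above, the CDC/SCD bullet is the most implementation-heavy. On Databricks this is typically expressed as a Delta Lake `MERGE INTO`, but the underlying SCD Type 2 logic — expire the current row version, append a new one — can be sketched in plain Python. This is a minimal illustration; the function and field names are ours, not from the posting:

```python
def scd2_merge(dimension, updates, key, tracked, as_of):
    """Apply SCD Type 2 logic: expire changed current rows, append new versions.

    dimension: existing rows (dicts) carrying 'is_current', 'valid_from', 'valid_to'.
    updates:   incoming rows (dicts) with the latest values for the tracked columns.
    as_of:     ISO date string marking when this load takes effect.
    """
    result = list(dimension)
    current = {row[key]: row for row in result if row["is_current"]}
    for upd in updates:
        old = current.get(upd[key])
        if old is not None and all(old[col] == upd[col] for col in tracked):
            continue  # tracked attributes unchanged: keep the current version
        if old is not None:
            old["is_current"] = False  # Type 2: expire the old version, don't overwrite it
            old["valid_to"] = as_of
        new_row = dict(upd, is_current=True, valid_from=as_of, valid_to=None)
        result.append(new_row)
    return result
```

A production pipeline would express the same compare-expire-append steps as a single `MERGE INTO` against a Delta table, with schema evolution handled by the table itself.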
Platform & Storage Implementation
· Manage Delta Lake storage, tables, partitions, and performance optimization (OPTIMIZE, Z-ORDER, VACUUM).
· Support metadata management, lineage tracking, and cataloging using Unity Catalog.
· Assist with multi-cloud integrations between cloud storage platforms and Databricks.
Governance, Quality & Compliance
· Implement data quality validation, profiling, and monitoring aligned with governance standards.
· Apply security controls including RBAC, encryption, and data classification.
· Support metadata and lineage implementation using Unity Catalog, Purview, or similar tools.
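The quality-validation work above often amounts to running named rules over rows and counting failures; Delta Live Tables expresses this as expectations on a table. A minimal framework-free sketch of the same idea — the rule names and columns here are illustrative assumptions:

```python
def validate_rows(rows, rules):
    """Run named row-level quality rules; return a failure count per rule."""
    failures = {name: 0 for name in rules}
    for row in rows:
        for name, passes in rules.items():
            if not passes(row):
                failures[name] += 1
    return failures

# Illustrative rules: each maps a rule name to a predicate over one row.
RULES = {
    "id_not_null": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}
```

In DLT the equivalent would be `@dlt.expect`-style declarations; the monitoring bullet then becomes a matter of alerting when a failure count crosses a threshold.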
Orchestration, Automation & Operations
· Build and manage workflows using Databricks Workflows, Delta Live Tables, Azure Data Factory, or similar tools.
· Support CI/CD pipelines, code versioning, and deployment automation.
· Assist in troubleshooting, pipeline recovery, and performance tuning.
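Pipeline recovery in the sense above frequently reduces to retrying transient failures with backoff before surfacing the error. Databricks Workflows and ADF both offer built-in task retries; the pattern itself, sketched with illustrative names:

```python
import time

def run_with_retry(task, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run `task`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure for alerting
            sleep(base_delay * 2 ** (attempt - 1))  # wait 1s, 2s, 4s, ...
```

The injectable `sleep` is just for testability; in an orchestrator you would configure the same policy declaratively rather than code it by hand.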
Collaboration & Documentation
· Work closely with the Senior Data Architect to implement architecture designs.
· Participate in design reviews and technical discussions.
· Document pipelines, data models, transformation logic, and operational procedures.
Skills & Experience
Required:
· 3–5 years of experience in data engineering, ETL, or cloud data platforms.
· Experience with Databricks, Spark, PySpark, or distributed data processing.
· Strong SQL and structured data transformation skills.
· Experience with at least one cloud platform (Azure preferred; AWS/GCP acceptable).
· Knowledge of data modeling, schema evolution, pipeline troubleshooting, and data quality.
· Understanding of security practices including RBAC, encryption, and credential management.
Preferred:
· Experience with Delta Lake, medallion architecture, and lakehouse design.
· Familiarity with Unity Catalog, Purview, Glue Catalog, or similar tools.
· Experience with orchestration tools (ADF, Airflow, Databricks Workflows).
· Experience with Git, CI/CD, and DevOps practices.
· Knowledge of Power BI, geospatial data, or AI/ML data preparation.
· Relevant cloud or Databricks certifications (DP-203, Data Engineer Associate, etc.).
EDUCATION
· Bachelor’s or master’s degree in Computer Science, Software Engineering, or a related field, or equivalent professional experience.
This job post has been translated by AI and may contain minor differences or errors.
