Job description

Job Title: Data Engineer — Analytical Warehouse (FNZ)

About FNZ:

FNZ is a global fintech firm transforming the way financial institutions serve their clients. By combining cutting-edge technology, infrastructure, and investment operations, FNZ enables wealth management firms to deliver personalized investment solutions at scale. Operating across multiple regions and supporting over $1.5 trillion in assets under administration, FNZ partners with leading banks, insurers, and asset managers to create seamless and innovative wealth platforms that empower millions of investors worldwide.



Job Summary:

We are seeking a hands-on Data Engineer to build and maintain the Analytical Warehouse on Microsoft Fabric. This role focuses on developing data pipelines that ingest enriched Gold-layer data from the NRT-ODS streaming platform into OneLake, building transformation layers using SQL-based transformation frameworks or Fabric notebooks, and delivering analytical datasets to wealth management clients. You will work at the intersection of the real-time ODS and the analytical lakehouse, enabling historical analytics, business intelligence, and client-facing reporting.



Key Responsibilities:

• Kafka-to-Fabric Ingestion: Build and maintain the Kafka Connect sink connectors that write Gold topics from the NRT-ODS into Fabric OneLake in Delta/Parquet format. Ensure near real-time ingestion with automatic schema evolution via Avro-to-Delta mapping.

• Data Pipeline Development: Develop data transformation pipelines within Microsoft Fabric using Fabric notebooks (PySpark/Spark SQL), Dataflows, and Data Factory pipelines. Implement Bronze/Silver/Gold layering within the Analytical Warehouse.

• Data Transformations: Build and maintain SQL-based transformation models that convert raw ingested data into analytical datasets. Implement incremental models, snapshot tables, and materializations optimized for analytical query patterns.

• OneLake Storage Management: Design and manage OneLake storage structures — partition strategies (by date, entity type, client), file compaction, retention policies, and storage optimization for cost and query performance.

• Batch Extract Modernization: Migrate existing batch extract processes from SQL-driven CSV to Kafka-sourced Parquet via Fabric pipelines. Retain metadata-driven configuration from CentralHub while outputting to OneLake in Parquet/Delta format.

• Semantic Layer Development: Build semantic layer definitions for business-friendly metrics — AUM, NAV, trade volumes, fee breakdowns, client counts — ensuring consistent metric definitions across all consumption channels.

• Data Sharing: Implement Fabric Data Sharing using OneLake shortcuts or Delta Sharing for clients who consume analytics in their own Fabric tenant. Ensure governed access where clients see only their own data.

• Data Quality: Implement data quality checks within the Analytical Warehouse using Great Expectations or Soda. Validate row counts, null rates, referential integrity, and freshness against defined data contracts.

• Performance Optimization: Tune query performance across Fabric SQL endpoints, optimize Delta table layouts (Z-ordering, partitioning, file sizing), and manage compute resource allocation.

• CI/CD & DevOps: Implement CI/CD pipelines for Analytical Warehouse artifacts (transformation models, Fabric notebooks, pipeline definitions) using GitHub Actions. Follow GitOps practices for deployment.
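The data-quality duties above come down to machine-checkable assertions against a data contract. As a minimal plain-Python sketch of that idea (this is not Great Expectations or Soda syntax, and the `account_id`/`loaded_at` column names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def check_batch(rows, max_null_rate=0.01, max_age_hours=24):
    """Validate row count, null rate, and freshness for one ingested batch.

    `rows` is a list of dicts with hypothetical columns `account_id` and
    `loaded_at` (a timezone-aware datetime). Returns a list of failure
    messages; an empty list means the batch passes.
    """
    failures = []

    # Row count: an empty batch usually signals an upstream ingestion problem.
    if not rows:
        return ["row_count: batch is empty"]

    # Null rate on a key column must stay below the contracted threshold.
    nulls = sum(1 for r in rows if r.get("account_id") is None)
    if nulls / len(rows) > max_null_rate:
        failures.append(f"null_rate: {nulls}/{len(rows)} null account_id values")

    # Freshness: the newest record must be recent enough per the data contract.
    newest = max(r["loaded_at"] for r in rows)
    if datetime.now(timezone.utc) - newest > timedelta(hours=max_age_hours):
        failures.append(f"freshness: newest record loaded at {newest}")

    return failures
```

In practice the same checks would be expressed as declarative expectations or Soda checks and run as a pipeline step, failing the run (or raising an alert) when the returned list is non-empty.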



Qualifications:

• Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related technical field.

• Experience: 4+ years of hands-on experience in data engineering with a focus on analytical/warehouse workloads.

• Microsoft Fabric / Azure: Demonstrated experience with Microsoft Fabric, Azure Synapse Analytics, or Azure Data Factory. Familiarity with OneLake, Fabric notebooks, and Fabric SQL endpoints.

• SQL Expertise: Strong SQL skills including complex analytical queries, window functions, CTEs, and query performance tuning.

• Spark / PySpark: Proficiency in PySpark or Spark SQL for large-scale data transformations.

• Data Transformation Frameworks: Experience with SQL-based transformation frameworks for managing transformation layers — models, tests, documentation, and incremental materializations.

• Delta Lake / Parquet: Understanding of the Delta Lake table format — ACID transactions, time travel, schema evolution, partition management, and file compaction.

• Kafka Fundamentals: Working knowledge of Apache Kafka — consumer concepts, Kafka Connect, Avro serialization — sufficient to build and troubleshoot the ingestion layer from ODS Gold topics.

• CI/CD: Experience with CI/CD pipelines (GitHub Actions preferred) for data pipeline deployments.
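The SQL skills listed above (CTEs, window functions) can be illustrated in any engine. A small sketch using Python's built-in sqlite3 module, with a hypothetical positions table, computing each account's latest value and day-over-day change:

```python
import sqlite3

# In-memory database with a tiny, hypothetical positions table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE positions (account TEXT, asof TEXT, value REAL);
    INSERT INTO positions VALUES
        ('A1', '2024-01-01', 100.0),
        ('A1', '2024-01-02', 110.0),
        ('A2', '2024-01-01', 50.0),
        ('A2', '2024-01-02', 45.0);
""")

# CTE + window functions: rank each account's rows by date, carry the
# previous day's value forward with LAG, then keep only the latest row.
query = """
WITH ranked AS (
    SELECT account, asof, value,
           LAG(value) OVER (PARTITION BY account ORDER BY asof) AS prev_value,
           ROW_NUMBER() OVER (PARTITION BY account ORDER BY asof DESC) AS rn
    FROM positions
)
SELECT account, value, value - prev_value AS day_change
FROM ranked
WHERE rn = 1
ORDER BY account
"""
rows = conn.execute(query).fetchall()
# rows == [('A1', 110.0, 10.0), ('A2', 45.0, -5.0)]
```

The same pattern (partitioned ranking plus LAG) scales directly to Spark SQL or a Fabric SQL endpoint; only the table and connection details change.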



Preferred Qualifications:

• Experience working in the Wealth Management or Financial Services industry, with an understanding of investment data domains (accounts, portfolios, transactions, positions).

• Experience with the Apache Iceberg table format for time-travel queries and multi-engine access.

• Familiarity with data quality frameworks such as Great Expectations or Soda integrated into data pipelines.

• Experience with semantic layer tools for defining governed business metrics.

• Exposure to data catalog and lineage tools (Purview, Atlan, or similar).

• Microsoft Fabric certifications or the Azure Data Engineer certification (DP-203) are a plus.



About FNZ

FNZ is committed to opening up wealth so that everyone, everywhere can invest in their future on their terms. We know the foundation to do that already exists in the wealth management industry, but complexity holds firms back.

We created wealth’s growth platform to help. We provide a global, end-to-end wealth management platform that integrates modern technology with business and investment operations. All in a regulated financial institution.

We partner with the world’s leading financial institutions, with over US$2.4 trillion in assets on platform (AoP). Together with our clients, we empower nearly 30 million people across all wealth segments to invest in their future.






This job post has been translated by AI and may contain minor differences or errors.
