Project description

We are seeking a Senior Data Engineer with strong hands-on expertise in Databricks, PySpark, and cloud-based data platforms to support the development, migration, and optimization of our enterprise data platform within the investment domain. The role focuses on building and maintaining scalable data pipelines and lakehouse data models that support investment analytics, portfolio management, risk analysis, and trading data workflows. The successful candidate will work closely with data engineers, quantitative analysts, and investment stakeholders to deliver high-quality, reliable, and performant data solutions. Experience with financial datasets such as market data, portfolio holdings, transactions, pricing data, and risk metrics is highly valuable.

Responsibilities

Data Engineering & Pipeline Development:
- Build, optimize, and maintain end-to-end data pipelines using Databricks, PySpark, and SQL across ingestion, curation, and consumption layers.
- Develop and manage Declarative Pipelines (e.g., Lakeflow / DLT-style pipelines) to support scalable incremental processing and operational reliability (see the first sketch following the posting).
- Implement robust transformations and modelling patterns to deliver trusted datasets for downstream consumption (analytics, operations, reporting, applications).

Data Quality, Controls & Operational Excellence:
- Implement data quality validation, monitoring, reconciliation, and alerting to ensure datasets meet required standards for completeness, accuracy, timeliness, and consistency.
- Debug pipeline failures, resolve production incidents, and continuously improve pipeline stability, performance, and cost efficiency.
- Apply best practices around auditability, lineage, and data correctness, particularly in time-series and historically tracked datasets.

Data Modelling & Domain Delivery:
- Contribute to the design and evolution of data models supporting the organization’s investment footprint (Public Markets, Private Markets, reference/master data, corporate actions, portfolio, pricing, risk, etc.).
- Partner with business stakeholders to translate requirements into implementable data solutions while preserving maintainability and governance standards.
- Support the integration of multi-vendor and internal data sources into curated datasets that align with ADIA’s operational and analytical needs.

Platform & Engineering Standards:
- Follow and enhance engineering standards for version control, CI/CD, testing, documentation, and secure development practices.
- Optimize compute and storage usage through partitioning/clustering strategies, incremental patterns, and performance tuning (see the second sketch following the posting).
- Contribute reusable libraries, patterns, templates, and approaches that improve delivery speed and consistency across the team.

Skills

Must have:
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related discipline.
- 5+ years of experience in data engineering roles (flexible based on depth of capability).
- Strong hands-on experience with Databricks in production environments (prerequisite).
- Strong programming experience with PySpark and strong SQL (both required).
- Proven experience with Declarative Pipelines / pipeline orchestration on Databricks (prerequisite).
- Strong understanding of data engineering fundamentals: ingestion patterns, transformation design, incremental processing, testing, performance tuning.
- Experience delivering production-ready datasets with appropriate operational controls (monitoring, troubleshooting, reliability patterns).
- Experience with modern Lakehouse concepts (Delta tables, optimization strategies, file skipping, metadata/statistics awareness).
- Exposure to data governance practices: cataloguing, documentation, business glossary/terms, lineage.
- Experience working in enterprise environments with CI/CD pipelines and structured release processes.
- Familiarity with vendor market data feeds (e.g., Bloomberg, Refinitiv, MSCI, FactSet) or similar multi-source mastering patterns.

Nice to have:
- Strong hands-on expertise in Palantir Foundry, including proven experience with Foundry pipelines, ontologies, data lineage, transformations, and platform governance.
- Proven experience migrating from Palantir to Databricks: leading or executing platform migrations, including pipeline conversion, data model redesign, and production cutover.
- Familiarity with Dynatrace or Datadog for system observability and monitoring.
- Databricks certification, cloud certifications (Azure/AWS), or enterprise data architecture certifications.

Other languages

English: C1 Advanced

Seniority

Senior
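The responsibilities above name Declarative Pipelines and data quality expectations on Databricks. As a rough, non-authoritative illustration of that pattern (not part of the posting itself), here is a minimal sketch using the Delta Live Tables Python API; the landing path, table names, and columns are hypothetical, and the spark session is provided by the pipeline runtime.

# Minimal Delta Live Tables (DLT) sketch: ingest raw trade data, then publish
# a validated, incrementally processed curated table.
# All paths, table names, and columns are hypothetical illustrations.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw trades landed from cloud storage (bronze layer).")
def trades_raw():
    # Auto Loader incrementally picks up new files as they arrive.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/trades/")  # hypothetical landing path
    )

@dlt.table(comment="Validated trades for downstream analytics (silver layer).")
@dlt.expect_or_drop("valid_trade_id", "trade_id IS NOT NULL")
@dlt.expect_or_drop("positive_quantity", "quantity > 0")
@dlt.expect("known_currency", "currency IN ('USD', 'EUR', 'GBP')")
def trades_clean():
    return (
        dlt.read_stream("trades_raw")
        .withColumn("trade_date", F.to_date("trade_ts"))
        .select("trade_id", "portfolio_id", "instrument_id",
                "quantity", "price", "currency", "trade_date")
    )

Expectations such as expect_or_drop discard failing rows while recording pass/fail metrics, which is one common way the completeness and accuracy controls listed above are enforced operationally.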
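The must-have list also calls out Lakehouse optimization, incremental processing, and file-skipping awareness. A second minimal sketch, under the same caveats (hypothetical table, column, and path names), shows an incremental upsert into a Delta table followed by a compaction and Z-ordering pass.

# Minimal sketch: incremental MERGE (upsert) into a Delta table, then
# OPTIMIZE with Z-ordering to improve file skipping.
# Table, column, and path names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New batch of position records, e.g. from an upstream vendor feed.
updates = spark.read.parquet("/mnt/staging/positions_delta/")  # hypothetical path

target = DeltaTable.forName(spark, "curated.positions")

(
    target.alias("t")
    .merge(
        updates.alias("u"),
        "t.position_id = u.position_id AND t.as_of_date = u.as_of_date",
    )
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Compact small files and cluster rows by common filter columns so Delta's
# per-file min/max statistics can skip irrelevant files at query time.
spark.sql("OPTIMIZE curated.positions ZORDER BY (portfolio_id, instrument_id)")

The MERGE keeps the curated table idempotent against replayed batches, and the OPTIMIZE/ZORDER pass co-locates frequently filtered values, which is the file-skipping and statistics awareness the requirements refer to.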