Job description

Career Category: Engineering

Role Description: 


The role is responsible for designing, building, and maintaining data assets, and for analyzing and interpreting data to provide actionable insights that drive business decisions. It involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data so that it is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.


Roles & Responsibilities:

  • Design, develop, and maintain data solutions for data generation, collection, and processing
  • Serve as a key team member who assists in the design and development of the data pipeline
  • Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
  • Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
  • Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
  • Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
  • Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
  • Implement data security and privacy measures to protect sensitive data
  • Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
  • Collaborate and communicate effectively with product teams
  • Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
  • Identify and resolve complex data-related challenges
  • Adhere to best practices for coding, testing, and designing reusable code/components
  • Explore new tools and technologies that can improve ETL platform performance
  • Participate in sprint planning meetings and provide estimates for technical implementation
  • Design and develop data pipelines leveraging Databricks, PySpark, and SQL to ingest, transform, and process large-scale datasets
  • Engineer solutions for both structured and unstructured data to enable advanced analytics and insights
  • Implement automated workflows for data ingestion, transformation, and deployment using Databricks Jobs and notebooks, with ongoing monitoring and scheduling
  • Apply performance optimization techniques, including Spark job tuning, caching, partitioning, and indexing, to improve scalability and efficiency
  • Build integrations with multiple data sources, such as SQL databases, APIs, and cloud storage platforms, ensuring seamless connectivity and reliability
  • Collaborate effectively with global teams across time zones to maintain alignment, resolve issues, and deliver on shared objectives
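The extract-transform-load pattern at the heart of the responsibilities above can be sketched in miniature. This is a hedged illustration only: it uses Python's stdlib sqlite3 in place of Databricks/Spark, and the table and column names (raw_events, clean_events) are hypothetical, not part of any real system described in this posting.

```python
import sqlite3

def run_etl(conn):
    """Minimal ETL sketch: extract raw rows, transform them, load the result."""
    cur = conn.cursor()
    # Extract: read all rows from the (hypothetical) source table.
    rows = cur.execute("SELECT id, value FROM raw_events").fetchall()
    # Transform: drop empty/NULL values and normalize the rest.
    cleaned = [(rid, value.strip().lower())
               for rid, value in rows
               if value and value.strip()]
    # Load: write the cleaned rows into the target table in one transaction.
    cur.executemany("INSERT INTO clean_events (id, value) VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)  # number of rows that survived the transform
```

In a real Databricks pipeline the same three stages would typically be Spark reads, DataFrame transformations, and Delta table writes, but the shape of the job is the same.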


Basic Qualifications and Experience:

  • Bachelor's or Master's degree in Computer Science, IT, or a related field, with 4 to 8 years of relevant experience


Functional Skills:

Must-Have Skills:

  • Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning for big data processing
  • Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
  • Excellent problem-solving skills and the ability to work with large, complex datasets
  • Strong understanding of data governance frameworks, tools, and best practices


Good-to-Have Skills:

  • Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
  • Experience with ETL tools such as Apache Spark, and with Python packages for data processing and machine learning model development
  • Strong understanding of data modeling, data warehousing, and data integration concepts
  • Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
  • Experience implementing automated orchestration and monitoring of data pipelines using Databricks Jobs, Apache Airflow, or similar workflow tools
  • Familiarity with performance optimization techniques for big data processing, such as Spark job tuning, caching, partitioning, and indexing
  • Exposure to multi-source integration involving APIs, SQL databases, and cloud storage platforms
  • Demonstrated ability to collaborate across global teams and time zones, ensuring alignment and delivery in distributed environments
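Partitioning, one of the tuning levers mentioned above, can be illustrated without a cluster. The sketch below shows the core idea behind hash partitioning as Spark applies it during a shuffle: records sharing a key always land in the same partition, so per-key aggregation needs no cross-partition traffic. The partition count and record shape here are arbitrary for the example.

```python
def partition(records, num_partitions):
    """Toy hash partitioner: route each (key, value) pair to a bucket by key hash."""
    buckets = [[] for _ in range(num_partitions)]
    for key, value in records:
        # Same key -> same hash -> same bucket, which is what lets a
        # distributed engine aggregate per key without extra shuffling.
        buckets[hash(key) % num_partitions].append((key, value))
    return buckets
```

In Spark the equivalent knobs are `repartition`/`partitionBy` and the shuffle partition count; the principle of co-locating equal keys is the same.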


Professional Certifications (Preferred):

  • Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments)


Soft Skills:

  • Excellent critical-thinking and problem-solving skills
  • Strong communication and collaboration skills
  • Demonstrated ability to work effectively in a team setting
  • Demonstrated presentation skills


This job post has been translated by AI and may contain minor differences or errors.
