
Data Engineer | GCP, Hadoop, Spark, ETL/ELT, Cloud Migration, Data Platforms

Posted 20 days ago (2026/08/16)
Other Business Support Services

Job description

Job Summary
Synechron is seeking an experienced Data Engineer specializing in Hadoop and cloud-based data processing to support enterprise ETL and data platform modernization. The successful candidate will design, develop, and optimize scalable data pipelines, automation frameworks, and data integration solutions. They will collaborate closely with cross-functional teams to enhance system performance, implement new technologies, and support cloud migration strategies, contributing to operational efficiency and data-driven decision-making.


Software Requirements

  • Required:
    • Strong proficiency in UNIX shell scripting and Python (latest stable version) for automating data processes and developing utilities
    • Hands-on experience with Apache Spark for large-scale data processing
    • Deep knowledge of SQL and experience with relational databases and data warehouses such as PostgreSQL and Snowflake
    • Working knowledge of Hadoop ecosystem components (HDFS, Hive, Impala, Spark) and ETL/ELT frameworks
    • Familiarity with version control (Git), enterprise CI/CD tooling (e.g., Jenkins, GitHub Actions), and DevOps practices
  • Preferred:
    • Experience with cloud platforms such as AWS, Azure, or Google Cloud to support data integration and cloud migration projects
    • Knowledge of data integration tools such as Informatica Cloud or other IDMC solutions
    • Understanding of security standards, data masking, and access controls


Overall Responsibilities

  • Design, develop, and maintain scalable and resilient data pipelines on GCP and Hadoop ecosystems, ensuring data quality and reliability
  • Collaborate with stakeholders to analyze requirements and translate them into efficient automation processes and data workflows
  • Support existing ETL frameworks and lead enhancements based on business needs and emerging technologies
  • Conduct troubleshooting, performance tuning, and system optimization for Spark, Snowflake, and related data jobs
  • Develop automated tools and utilities to improve processing efficiency and error handling
  • Collaborate with analytics, data science, and platform teams to support data ingestion, transformation, and reporting needs
  • Support cloud migration initiatives, evaluating new technologies and developing proofs of concept
  • Document architecture, data models, and operational procedures to ensure compliance and knowledge sharing


Technical Skills (By Category)

  • Programming Languages:
    Required: Python, Bash/shell scripting
    Preferred: Java, Scala, or other scripting languages for automation and data processing
  • Databases & Data Management:
    SQL (PostgreSQL, MySQL), Snowflake; NoSQL (MongoDB, DynamoDB; good to have)
  • Cloud Technologies:
    GCP (BigQuery, Dataflow), AWS (S3, Lambda, ECS), Azure (Blob Storage, Data Factory), supporting data migration and cloud-native workloads
  • Frameworks & Libraries:
    Spark (PySpark, Spark SQL), Hive, Impala, Dataflow, Airflow (preferred)
  • Development & Deployment Tools:
    Git, Jenkins, Terraform, Docker, Kubernetes, CI/CD pipelines
  • Security & Compliance:
    Data encryption, access management, and security best practices aligned with enterprise standards


Experience Requirements

  • Minimum of 4 years of hands-on experience in data engineering or related roles
  • Proven expertise developing and optimizing large-scale data pipelines in Hadoop and cloud environments
  • Strong background in data ingestion, transformation, and automation framework development
  • Hands-on experience with Spark, SQL, and relational/NoSQL databases
  • Experience supporting cloud migration initiatives or cloud-native data solutions is preferred
  • Industry experience in the financial services, healthcare, or technology sectors is advantageous


Day-to-Day Activities

  • Develop, enhance, and monitor data pipelines supporting enterprise analytics and reporting
  • Automate ETL and data processing workflows, including pipeline tuning and error handling
  • Collaborate with cross-functional teams to understand data needs and implement scalable solutions
  • Troubleshoot pipeline issues, optimize query performance, and ensure data security and compliance
  • Support cloud migration efforts, including validating data workflows and developing proof-of-concept solutions
  • Document system architecture, data models, operational procedures, and technical workflows


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Information Technology, or a related field
  • Proven experience building, deploying, and maintaining large-scale data pipelines using the Hadoop ecosystem and cloud platforms
  • Certifications such as GCP Professional Data Engineer, AWS Big Data Specialty, or equivalent are strongly preferred
  • Ability to adapt to evolving data technologies and enterprise architecture standards


Professional Competencies

  • Strong analytical and troubleshooting skills tailored to large-scale data pipelines and automation
  • Excellent communication skills for collaborating effectively with technical and non-technical stakeholders
  • Leadership qualities to mentor junior staff and promote best practices in data engineering
  • Strategic thinking to design scalable, secure, and compliant data architectures
  • Adaptability and continuous learning to keep pace with emerging data technologies and industry standards
  • Effective time management to prioritize workload and deliver within deadlines


SYNECHRON’S DIVERSITY & INCLUSION STATEMENT

Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, ‘Same Difference’, is committed to fostering an inclusive culture: promoting equality, diversity, and an environment that is respectful to all. We strongly believe that, as a global company, a diverse workforce helps us build stronger, more successful businesses. We encourage applicants across diverse backgrounds, including race, ethnicity, religion, age, marital status, gender, sexual orientation, and disability, to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.



All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.


Candidate Application Notice

This job post has been translated by AI and may contain minor differences or errors.
