Software Engineering Advisor - HIH - Evernorth

Posted 2026/09/04
Other Business Support Services

Job description

About Evernorth


Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable, and simple health care, we solve the problems others don’t, won’t, or can’t.


Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference.


We are always looking upward. And that starts with finding the right talent to help us get there.


Position Overview


Evernorth is seeking a build-and-operate Data Engineer/Developer to code, deploy, and support data pipelines within our Data & Analytics organization. You will build ETL/ELT in Databricks (Python/Spark), write scalable SQL transformations, and integrate data from multiple sources into curated, production-ready datasets. You will own the full pipeline lifecycle: development, release, monitoring, and ongoing optimization, keeping data flowing reliably for downstream systems.


You will implement end-to-end Databricks jobs from ingestion through transformation and delivery, including reusable frameworks, data-quality checks, and unit/integration tests. You will operate what you build: schedule and orchestrate runs, monitor clusters and job health, troubleshoot failures, and remediate data issues to meet delivery SLAs. You will continuously tune Spark code and pipeline design for performance and cost, and automate deployments using CI/CD and operational runbooks.
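To give a flavor of the validate-then-transform step described above, here is a minimal, hypothetical sketch in plain Python. It uses only the standard library as a stand-in for the Databricks/Spark stack, and the field names (`member_id`, `claim_amount`) are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    total: int    # records received
    dropped: int  # records rejected by data-quality checks

def validate_and_transform(rows):
    """Minimal ETL step: drop incomplete records (data-quality check),
    then normalize the surviving fields (transformation)."""
    clean = []
    dropped = 0
    for row in rows:
        # Completeness check: required fields must be present and non-empty.
        if not row.get("member_id") or row.get("claim_amount") is None:
            dropped += 1
            continue
        clean.append({
            "member_id": row["member_id"].strip().upper(),   # normalize ID casing
            "claim_amount": round(float(row["claim_amount"]), 2),
        })
    return clean, QualityReport(total=len(rows), dropped=dropped)
```

In a real Spark job the same checks would typically run as DataFrame filters with the reject counts emitted to monitoring, but the shape of the logic is the same.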


Responsibilities


  • Design, build, and deploy ETL pipelines to ingest, transform, and load data from multiple sources.
  • Develop and maintain data catalogues and metadata management to improve data discovery and governance.
  • Implement automated data-quality validations and monitoring to ensure accuracy, completeness, and consistency.
  • Monitor and troubleshoot Databricks jobs and clusters, including performance, failures, and resource utilization.
  • Tune and refactor existing ETL workflows to improve scalability, reliability, and runtime performance.
  • Define and promote engineering standards and best practices for the data transformation layer to support smooth delivery and ongoing support.
  • Apply security and compliance controls, including role-based access, encryption, and auditability for sensitive data.
  • Manage and optimize cloud services (AWS preferred) for storage, compute, and orchestration to support scalable data processing.
  • Automate pipeline scheduling and operational workflows using orchestration tools and CI/CD practices.
  • Evaluate emerging tools and patterns and deliver proof-of-concepts (POCs) to validate solutions.
  • Stay current with new technologies and apply relevant innovations to improve the platform.
  • Develop and execute test strategies (unit/integration) for pipelines to validate logic and prevent regressions.
  • Partner with cross-functional teams to deliver production-ready data solutions on time.
  • Create and maintain technical documentation, including design notes, operational runbooks, and support guides.
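Several of the operational responsibilities above (monitoring jobs, troubleshooting failures, remediating issues to meet SLAs) often reduce to automated retry logic around pipeline steps. The following is a hypothetical minimal sketch, not any particular orchestrator's API:

```python
import time

def run_with_retries(job, max_attempts=3, base_delay=0.0):
    """Rerun a failing pipeline step with exponential backoff;
    re-raise the final error if every attempt fails."""
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the failure to alerting
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off, then retry
```

Real orchestrators (Databricks Workflows, Airflow, and the like) provide this as configuration, but the pattern is the same: bounded retries with backoff, and a loud failure once the budget is spent.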

Qualifications


Required Skills:


  • Write and tune complex SQL on OLAP platforms (Teradata) and OLTP platforms (Oracle, DB2, PostgreSQL, SingleStore) to support high-volume transformations and reporting.
  • Programming: Build and support production data-engineering code using Scala or Python, including debugging, refactoring, and writing modular libraries.
  • Big Data & Analytics: Run Spark workloads in Databricks, build notebooks/jobs, manage clusters, and troubleshoot performance and failures in distributed processing.
  • ETL Development: Build, schedule, and operate ETL/ELT pipelines end-to-end, including ingestion, transformations, error handling, and monitoring for large-scale systems.
  • Databases: Build schemas, write optimized queries, and troubleshoot performance across relational and analytical data stores.
  • Relational: Implement and optimize SQL workloads in Oracle and PostgreSQL (indexing, query plans, partitioning, and data modelling).
  • CI/CD: Implement build/release automation for data pipelines (branching, packaging, environment promotion, and rollback) using modern DevOps tools.
  • Cloud: Build and operate data workloads on AWS (preferred), configure storage/compute, manage access, and troubleshoot cloud runtime issues.
  • Profile and optimize SQL using execution plans, indexing/partitioning strategies, statistics, and query refactoring.
  • Write clear technical specs and operational updates that enable smooth delivery and support across teams. 
  • Deliver work in Agile/Scrum sprints, break down stories, estimate effort, execute, and keep commitments visible through regular updates. 
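The SQL profiling skill listed above (execution plans, indexing, query refactoring) can be demonstrated concretely. This sketch uses SQLite purely because it ships with Python; the posting's actual platforms are Oracle, Teradata, and PostgreSQL, and the table and column names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (member_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?)",
    [(f"M{i % 100}", i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM claims WHERE member_id = 'M7'"

# Before indexing: the filter forces a full-table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Add an index on the filtered column, then re-check the plan:
# the optimizer should now do an index search instead of a scan.
conn.execute("CREATE INDEX idx_claims_member ON claims (member_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
```

The same before/after workflow applies on the bigger engines (`EXPLAIN PLAN` in Oracle, `EXPLAIN` in PostgreSQL/Teradata); only the plan syntax differs.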

Required Experience & Education: 


  • Experience: 12+ years in software engineering with a strong focus on data engineering.
  • Bachelor’s degree or higher from an accredited university, or a minimum of three (3) years of software development experience in lieu of the degree requirement.

Desired Experience:  


  • Advanced proficiency with Databricks.
  • Strong knowledge of cloud architecture.

Location & Hours of Work


Full-time position, working 45 hours per week. Expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, with flexibility to work remotely as required.  


Equal Opportunity Statement


Evernorth is an Equal Opportunity Employer, actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform, and advance both internal practices and external work with diverse client populations.


About Evernorth Health Services


Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
