
Senior AI Engineer

Posted 28 days ago (2026/08/01)
Other Business Support Services

Job description

Our Company

At Teradata, we believe that people thrive when empowered with better information. That's why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers, and our customers' customers, to make better, more confident decisions. The world's top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.


Who You'll Work With


This position sits within the Data Intelligence Platform team, a group focused on building next-generation AI-assisted data services as part of Teradata's core platform. Our team operates at the intersection of cloud infrastructure, data engineering, and applied AI - shipping highly available, multi-tenant services that power intelligent query routing and data discovery at scale.


Our platform responsibilities include:


  • Designing and operating highly available microservices for data catalog ingestion and serving
  • Building AI-assisted query generation and routing services across heterogeneous data sources
  • Deployment and lifecycle management of services on Kubernetes (K8s) across AWS, Azure, GCP, and on-prem
  • Data pipeline development for catalog extraction, normalization, and semantic enrichment
  • Centralized observability: monitoring, alerting, and distributed tracing for all platform services
  • Providing DevOps tooling and CI/CD pipelines to support continuous delivery

What You'll Do


We are building a new service to collect and normalize data catalogs from diverse data sources - including relational databases, data lakes, data warehouses, and streaming systems - and expose them to an AI agent that dynamically constructs and routes queries to the appropriate source. This is a greenfield initiative that requires strong engineering judgment, a systems-thinking mindset, and experience shipping production-grade services.
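To make the idea of "collecting and normalizing catalogs from diverse sources" concrete, here is a minimal illustrative sketch (not Teradata's actual design; all names are hypothetical) of a source-agnostic catalog entry that a query-routing agent could search over:

```python
from dataclasses import dataclass, field

@dataclass
class ColumnMetadata:
    name: str
    data_type: str          # normalized type, e.g. "string", "int64"
    nullable: bool = True

@dataclass
class CatalogEntry:
    """A source-agnostic record describing one dataset (table, topic, file set)."""
    source_id: str          # e.g. "postgres-prod", "s3-lake" (hypothetical IDs)
    source_kind: str        # "rdbms" | "data_lake" | "warehouse" | "stream"
    dataset_name: str
    columns: list[ColumnMetadata] = field(default_factory=list)
    tags: list[str] = field(default_factory=list)   # semantic enrichment

def find_datasets(catalog: list[CatalogEntry], column: str) -> list[str]:
    """Return names of datasets exposing the given column, regardless of source."""
    return [e.dataset_name for e in catalog
            if any(c.name == column for c in e.columns)]
```

Because every source (RDBMS, lake, warehouse, stream) is flattened into the same shape, routing logic can reason over one schema instead of N connector-specific ones.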


You will be a core contributor on this project, working from architecture to implementation - designing ingestion pipelines, building the catalog API layer, and collaborating with the AI/ML team to surface the right metadata signals for intelligent query generation.


Responsibilities
  • Design, build, and operate a highly available data catalog collection service that ingests schema and metadata from heterogeneous data sources (RDBMS, data lakes, streaming platforms, APIs)
  • Develop robust data pipelines for catalog extraction, normalization, lineage tracking, and semantic tagging to power AI-driven query routing
  • Build and maintain RESTful and/or gRPC APIs that expose catalog data to an AI query agent
  • Deploy and manage services on Kubernetes (K8s), including Helm chart authoring, autoscaling configuration, and multi-cluster operations
  • Ensure service reliability through SLO definition, circuit breakers, retry logic, and distributed tracing
  • Integrate with open-source and cloud-native technologies including Apache Kafka, Spark, dbt, Apache Atlas, or OpenMetadata
  • Collaborate with AI/ML engineers to design and iterate on the metadata schema and query routing interface
  • Participate in on-call rotations and contribute to incident response, postmortems, and reliability improvements
  • Contribute to CI/CD pipelines, infrastructure-as-code (Terraform / Helm), and automated testing frameworks
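The reliability patterns named above (retry logic, circuit breakers, SLOs) can be sketched in a few lines. This is a simplified, hypothetical example of retry with exponential backoff and jitter, not the team's actual implementation; production services would pair it with circuit breakers and timeout budgets:

```python
import random
import time

def retry_with_backoff(call, max_attempts=4, base_delay=0.1, sleep=time.sleep):
    """Invoke `call`, retrying transient failures with exponential backoff.

    Delays grow as base_delay * 2**attempt, plus random jitter to avoid
    thundering-herd retries; the final failure is re-raised to the caller.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries: surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            sleep(delay)
```

The injectable `sleep` parameter is a deliberate design choice: it keeps the backoff schedule unit-testable without real waiting.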

What Makes You a Qualified Candidate


  • 3+ years of software engineering experience building and operating production services
  • Proficiency in one or more of: Rust, Go, Python, Java, with a preference for Go or Python for backend services
  • Hands-on experience with data pipeline development: ingestion, transformation, and metadata management at scale
  • Solid understanding of RESTful API design principles and service-to-service communication patterns
  • Experience deploying and operating services on Kubernetes (K8s) in production cloud environments
  • Familiarity with at least one major public cloud platform: AWS, Azure, or GCP
  • Strong knowledge of relational and non-relational database systems and their schema/catalog semantics
  • Experience with distributed messaging systems such as Apache Kafka or AWS Kinesis
  • Proficiency with Git, code review workflows, and agile development practices
  • Excellent troubleshooting skills and comfort operating in Linux environments

What You Will Bring


  • Experience with data catalog or metadata management tools such as Apache Atlas, OpenMetadata, DataHub, or Collibra
  • Familiarity with semantic search, vector databases, or LLM-based query generation systems
  • Experience designing or integrating AI/ML model APIs into production backend services
  • Knowledge of data governance, lineage tracking, and schema registry patterns
  • Experience with infrastructure-as-code tools
  • Background in multi-tenant SaaS platform engineering
  • Contributions to open-source data or infrastructure projects
This job post has been translated by AI and may contain minor differences or errors.
