The more applications you submit, the better your chances of landing a job!
Here is a snapshot of job seekers' activity over the past month:
Number of opportunities browsed
Number of applications submitted
Keep browsing and applying to increase your chances of getting hired!
Are you looking for employers with a proven track record of supporting and empowering women?
Click here to discover the opportunities available now!
We invite you to take part in a survey designed to help researchers better understand how to connect job seekers with the roles they are looking for.
Would you like to participate?
If you are selected, we will contact you by email with the details and instructions for taking part.
You will receive $7 for completing the survey.
If you want to be part of something special, part of a winning team, part of a fun team (winning is fun), read on. We are looking forward to hiring a Sr. Data Engineer in Pune, India. At Eaton, making our work exciting, engaging, and meaningful; ensuring safety, health, and wellness; and being a model of inclusion and diversity are already embedded in who we are: they are in our values, part of our vision, and among our clearly defined aspirational goals. This exciting role offers the opportunity to:
Requirements:
1. Framework Authorship & Adoption Leadership
Design, document, and version-control all six engineering frameworks in a central standards repository (GitHub), ensuring they are discoverable, living documents with clear change governance.
Conduct framework enablement sessions, workshops, and pair-programming to drive active adoption — not just publication — across the engineering team.
Define conformance criteria and lightweight review checkpoints so that new pipeline work is assessed against framework standards before promotion to production.
Act as the technical authority and tiebreaker on engineering design decisions — establishing consistent patterns while preserving pragmatic flexibility where needed.
2. DataOps & CI/CD Pipeline Engineering
Design and implement CI/CD pipelines for data engineering workloads using GitHub Actions or equivalent — covering lint, unit test, schema validation, and environment promotion stages.
Establish automated unit testing patterns — including test coverage standards and coverage reporting.
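As a hedged illustration of the schema-validation and unit-testing stages described above, a check like the following could run as a CI job before environment promotion. The column names and types are invented for illustration, not taken from any actual Eaton pipeline:

```python
# Minimal sketch of a schema-validation CI stage. The expected schema is
# declared as code so a GitHub Actions job can fail fast on drift.
# EXPECTED_SCHEMA is a hypothetical example.
EXPECTED_SCHEMA = {"invoice_id": str, "amount_usd": float, "posted_at": str}

def validate_schema(rows: list[dict]) -> list[str]:
    """Return human-readable schema violations; an empty list means the check passes."""
    errors = []
    for i, row in enumerate(rows):
        missing = EXPECTED_SCHEMA.keys() - row.keys()
        if missing:
            errors.append(f"row {i}: missing columns {sorted(missing)}")
        for col, expected in EXPECTED_SCHEMA.items():
            if col in row and not isinstance(row[col], expected):
                errors.append(
                    f"row {i}: {col} expected {expected.__name__}, "
                    f"got {type(row[col]).__name__}"
                )
    return errors
```

A pytest job in the pipeline would then simply assert that `validate_schema(sample_rows) == []`, and the coverage-reporting stage would track such tests like any other.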
3. Data Quality & Observability Engineering
Implement data contract frameworks at ingestion, transformation, and consumption boundaries — defining schemas, SLOs, and acceptable value ranges as code.
Build reusable data quality monitoring templates — parameterizable and composable across data products.
Instrument pipelines with observability metadata: lineage, runtime metrics, freshness timestamps, and row count deltas — surfaced into operational dashboards.
Design and test the incident response workflow for data quality breaches: automated alerting, quarantine patterns, stakeholder notification, and self-healing logic where feasible.
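One way to express "acceptable value ranges as code", per the data contract bullet above, is a small declarative contract object checked at pipeline boundaries. The column name and bounds below are placeholders for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ColumnRange:
    """One contracted value range, declared as code alongside the pipeline."""
    column: str
    min_value: float
    max_value: float

# Hypothetical contract for a Finance data product.
CONTRACT = [ColumnRange("amount_usd", 0.0, 1_000_000.0)]

def range_violations(rows: list[dict], contract: list[ColumnRange]) -> list[str]:
    """Check rows against contracted ranges; violations are quarantine candidates."""
    out = []
    for i, row in enumerate(rows):
        for rule in contract:
            value = row.get(rule.column)
            if value is not None and not (rule.min_value <= value <= rule.max_value):
                out.append(f"row {i}: {rule.column}={value} outside "
                           f"[{rule.min_value}, {rule.max_value}]")
    return out
```

In the incident-response workflow described above, a non-empty result would trigger automated alerting and route the offending rows to a quarantine table rather than promoting them downstream.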
4. Snowflake Platform & Access Control Engineering
Design and implement scalable RBAC models in Snowflake — covering functional roles, object ownership hierarchies, and data product consumer roles.
Build row-level security (RLS) frameworks using Snowflake row access policies — creating reusable, metadata-driven policy templates that can be applied consistently across Finance data products.
Define and implement dynamic data masking policies aligned to the data classification taxonomy — ensuring sensitive financial data is protected at the platform layer, not just the application layer.
Govern Snowflake resource utilization: warehouse sizing standards, query optimization guidelines, and cost attribution tagging by domain or product.
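A metadata-driven RLS template of the kind described can be as simple as rendering Snowflake DDL from parameters. The policy, table, and column names below are placeholders, and in practice the generated statement would pass code review before being executed through the Snowflake connector:

```python
def render_row_access_policy(policy: str, entitlement_table: str,
                             column: str) -> str:
    """Render a reusable Snowflake row access policy from metadata.

    The policy returns TRUE for a row only when an entitlement mapping
    exists for the session's current role and the row's column value.
    """
    return (
        f"CREATE OR REPLACE ROW ACCESS POLICY {policy}\n"
        f"AS ({column} VARCHAR) RETURNS BOOLEAN ->\n"
        f"  EXISTS (\n"
        f"    SELECT 1 FROM {entitlement_table} e\n"
        f"    WHERE e.role_name = CURRENT_ROLE()\n"
        f"      AND e.{column} = {column}\n"
        f"  );"
    )

# Illustrative names only; applied consistently across data products by
# varying the parameters rather than hand-writing each policy.
ddl = render_row_access_policy("finance_region_rls",
                               "security.entitlements", "region")
```

The same template approach extends naturally to dynamic data masking policies keyed off the data classification taxonomy.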
5. GenAI-Augmented Data Engineering
Champion the exploration and adoption of GenAI tooling to amplify data engineering productivity — including AI-assisted SQL/Python code generation, automated documentation, and intelligent pipeline debugging.
Prototype and evaluate LLM-powered data engineering assistants: natural language to SQL interfaces, automated data contract generation, and AI-driven anomaly root cause analysis.
Define guardrails and governance standards for GenAI use in data engineering workflows — covering code review requirements, hallucination risk in data contexts, and audit traceability.
Share findings and tooling recommendations with the wider data engineering community through internal demos, documentation, and engineering blog posts.
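As one concrete shape the GenAI guardrails above might take, a pre-execution check can reject generated SQL that mutates data or references tables outside an approved set. The table names are invented, and the regex-based parsing is a deliberate simplification; a production guardrail would use a real SQL parser and catalog metadata:

```python
import re

# Illustrative allowlist; a real deployment would source this from the catalog.
ALLOWED_TABLES = {"finance.gl_entries", "finance.invoices"}
MUTATING = re.compile(r"\b(DELETE|DROP|UPDATE|INSERT|TRUNCATE|MERGE|ALTER)\b",
                      re.IGNORECASE)

def passes_guardrails(generated_sql: str) -> bool:
    """Allow only read-only SQL that touches approved tables."""
    if MUTATING.search(generated_sql):
        return False
    referenced = {t.lower() for t in
                  re.findall(r"(?:FROM|JOIN)\s+([\w.]+)", generated_sql,
                             re.IGNORECASE)}
    # Reject queries that reference nothing recognizable as well as those
    # touching tables outside the allowlist.
    return bool(referenced) and referenced <= ALLOWED_TABLES
```

Checks like this sit alongside, not instead of, human code review and audit logging of every generated statement.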
6. Modernization & Delivery Velocity
Identify and eliminate sources of engineering friction — legacy patterns, manual deployment steps, inconsistent environments — and replace with automated, standards-driven equivalents.
Measure and report on delivery cycle time improvements attributable to framework adoption: pipeline build time, time to production, defect escape rate, and time to recovery.
Lead or contribute to data engineering modernization initiatives: migrating legacy ETL workloads, re-platforming to Snowflake, and adopting modern orchestration patterns.
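Cycle-time reporting of the kind mentioned above reduces to straightforward arithmetic once work-start and production-deploy timestamps are captured; the sample data here is made up for illustration:

```python
import datetime as dt
import statistics

def median_cycle_time_days(starts: list[dt.datetime],
                           deploys: list[dt.datetime]) -> float:
    """Median elapsed days from work start to production deployment."""
    deltas = [(d - s).total_seconds() / 86400
              for s, d in zip(starts, deploys)]
    return statistics.median(deltas)

# Two hypothetical work items: 4 days and 7 days to production.
starts = [dt.datetime(2024, 1, 1), dt.datetime(2024, 1, 3)]
deploys = [dt.datetime(2024, 1, 5), dt.datetime(2024, 1, 10)]
```

Tracking this median per quarter, alongside defect escape rate and time to recovery, gives a defensible before/after measure of framework adoption.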
Your application for this job will not be considered and will be removed from the employer's inbox.