Job Description
Key Responsibilities
- Design, build, and maintain scalable data pipelines (ETL/ELT)
- Develop and optimize data architecture, data lakes, and warehouses
- Ensure data quality, reliability, and integrity across systems
- Collaborate with Product, Engineering, and Analytics teams to define data needs
- Build real-time and batch data processing systems
- Optimize database performance and query efficiency
- Implement data governance, security, and best practices
- Mentor junior data engineers and promote engineering excellence

Requirements
- 5+ years of experience in data engineering or related roles
- Strong proficiency in Python and SQL: not just writing queries, but designing reusable, tested pipeline code
- Hands-on AWS experience (required): Redshift, S3, Athena, ECS, EventBridge
- Experience building and maintaining data warehouses; Redshift experience is a strong plus
- Familiarity with workflow orchestration tools such as Airflow or equivalent, including ECS-based scheduling patterns
- Experience with multi-database environments: PostgreSQL/Aurora and MySQL/MariaDB
- Strong understanding of data modeling, dimensional design, and schema evolution
- Experience with streaming technologies such as Kafka is a plus
- Comfort working in a fast-moving product company where priorities shift and pipelines must be resilient

Nice to Have
- Experience supporting machine learning pipelines
- Knowledge of data governance and privacy best practices
- Experience in fast-paced startups or product companies
- Exposure to BI tools (e.g., Metabase, Tableau, Power BI)