Job Description
Responsibilities:
- Design and implement scalable data architectures using the Databricks Lakehouse, ensuring alignment with enterprise data strategy and business objectives.
- Act as the resident Databricks subject matter expert, driving best practices across Delta Lake, Spark optimization, and workload performance tuning.
- Build and optimize end-to-end data pipelines using PySpark and Spark, including modernization and migration of legacy ETL and PL/SQL workloads to cloud-based platforms.
- Implement data governance, security, and access control frameworks such as Unity Catalog to ensure compliance and data integrity across platforms.
- Collaborate with business stakeholders, data engineers, and analytics teams to translate requirements into scalable solutions, while mentoring junior team members and enforcing engineering standards.

Requirements:
- 10+ years of experience in data engineering, data architecture, or big data platforms, with at least 4–5 years of hands-on Databricks and Spark experience.
- Strong proficiency in Python, PySpark, Spark SQL, and SQL, with experience handling large-scale distributed data processing.
- Hands-on experience with cloud platforms, preferably Microsoft Azure, including Azure Databricks, Data Factory, and ADLS.
- Proven expertise in Lakehouse architecture, data modeling, performance tuning, and ETL framework design.
- Databricks certifications preferred, along with strong leadership, stakeholder management, and mentoring experience.
This job advertisement was translated by AI and may contain minor differences or errors.