YOUR ROLE
Generate value from data by growing, integrating, enriching, and distributing Henkel's data assets
Design, develop, and operate scalable and robust data pipelines using state-of-the-art data engineering practices. Advise and consult the business on using data efficiently
Develop and operate scalable, agile, quality-first data platforms (DataOps) and enable sustainable data science deployment solutions (MLOps)
Be part of a team of data professionals in the Data & Analytics department. Advance the team with your expertise, and extend and deepen your knowledge through close collaboration within the team
Actively participate in cross-functional teams using agile methods to deliver high-quality data products at a fast pace
Establish yourself as a reliable, creative, and positive partner across the business to enable and identify data potential
YOUR SKILLS
Master’s degree in computer science or another STEM field; PhD is a plus
Minimum 5 years of relevant experience in a data engineering environment
Extensive hands-on experience in cloud data ecosystems, covering data ingestion, data modeling, and data provisioning to consumers and downstream systems
Excellent coding skills in relevant languages (e.g. SQL, Python, C#, Scala). Experience in ETL/ELT design, data and interface specifications, quality assurance, and testing methods
Deep knowledge of data pipeline orchestration (e.g. ADF) and distributed computation frameworks (e.g. Azure Synapse, Spark)
Experience in implementing DataOps and MLOps concepts
Proven track record of delivering value from data. Experience building robust, scalable, high-quality data products in iterations and integrating them into existing and new data pipelines
Experience with agile methodologies in a professional development environment (CI/CD)
Fluent in English; German language skill is a plus
Henkel is an equal opportunity employer. We evaluate qualified applicants without regard to gender, origin, culture, mindset, generation, disability, religion, or sexual orientation.