English-speaking Principal Data Engineer (d/f/m)

Big fish in a large Data Lake.

Our client is one of the 20 largest textile trading companies in Germany, looking back on almost 50 years of history with 18,000 employees in more than 40 countries. With a constant striving for individuality and a desire to experiment, the company has achieved steady growth. This course is set to continue: from Berlin or Brunswick out into the whole world!

As Principal Data Engineer (d/f/m), you will co-lead the data team together with the Head of BI. Your focus will be on leading data engineers and data scientists, and you will take methodological and operational responsibility for ETL processes and the data lake. You will also architect CI/CD processes and a Hadoop cluster and oversee their implementation.

Job Portrait


  • international textile trader
  • 18,000 employees in over 40 countries
  • modern technologies/methods
  • agile project teams
  • Location: Brunswick or Berlin


  • Python, Scala
  • SQL
  • APIs

Methods & Tools

  • data science
  • data engineering
  • E2E data solutions
  • machine learning
  • DevOps, CI/CD
  • agile programming
  • TDD, XP
  • Hadoop, Spark
  • Kubernetes or Mesos


Your Benefits

  • as part of an international team, you will help drive the company forward worldwide
  • you can develop your expertise at meetups, courses and conferences
  • German language courses are offered for non-native speakers
  • there are 30 days of vacation for your personal relaxation
  • you will have free choice of hardware: MacBook or Windows laptop?
  • a company pension plan is also included
  • you can benefit from various advantages, such as employee discounts and sports offers

Your Key Responsibilities

  • in close cooperation with the Head of BI, you will co-lead all business intelligence and data science operations – with a focus on machine learning
  • you will lead data engineers and data scientists and teach as well as mentor them on technologies, methods and processes of E2E data solutions
  • you will be responsible for the analysis of data sources and the creation and maintenance of ETL processes and data models
  • the data scientists will build machine learning pipelines under your supervision
  • you will also architect CI/CD processes and oversee the implementation and operationalisation of APIs and microservices
  • together with the DevOps professionals, you will ensure the development and ongoing improvement of a Hadoop cluster (Cloudera)
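
To give a flavour of the ETL work described above, here is a deliberately tiny sketch. It uses only the Python standard library rather than the client's actual Spark/Hadoop stack, and all table and field names are invented for illustration:

```python
import sqlite3

# Toy ETL sketch: extract raw order records, transform them
# (filter invalid rows, convert cents to euros), load into SQLite.
# Schema and data are illustrative, not the client's.

RAW_ORDERS = [
    {"id": 1, "amount_cents": 1999, "country": "DE"},
    {"id": 2, "amount_cents": -50, "country": "FR"},   # invalid: negative
    {"id": 3, "amount_cents": 4500, "country": "PL"},
]

def extract():
    """Stand-in for a real source system (API, file drop, database)."""
    return RAW_ORDERS

def transform(rows):
    """Drop invalid rows and convert cents to euros."""
    return [
        (r["id"], r["amount_cents"] / 100, r["country"])
        for r in rows
        if r["amount_cents"] > 0
    ]

def load(rows, conn):
    """Write the cleaned rows into the target store."""
    conn.execute("CREATE TABLE orders (id INTEGER, amount_eur REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount_eur) FROM orders").fetchone()[0]
print(total)
```

In production such a pipeline would of course run on the distributed stack named in the ad (Spark on a Cloudera Hadoop cluster), but the extract-transform-load shape stays the same.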

Your Profile

  • you hold an academic degree in a STEM subject
  • over several years of work experience you have mastered the art of processing large amounts of data
  • as a result, you are well versed in ETL processes, relational databases, data models and machine learning
  • you like your processing systems scalable and distributed – that is why you know relevant tools like Hadoop, Spark or Flink like the back of your hand
  • you code Python and Scala fluently and you are proficient in agile software development and its methods (such as TDD or XP)
  • with your knowledge of CI and CD, you help put the “Ops” in DataOps
  • on the operations-side, you know how to handle Linux and containers (Kubernetes or Mesos)
  • your English is excellent
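
As a small, hypothetical illustration of the TDD style mentioned above – write the failing test first, then the minimal code that makes it pass – here is a sketch with invented names, not code from the client:

```python
# Hypothetical TDD illustration: the tests are written first and drive
# a tiny currency-formatting helper into existence.

def format_eur(cents: int) -> str:
    """Minimal implementation, written only to make the tests pass."""
    return f"{cents / 100:.2f} EUR"

# Tests written first (red), implementation refined until they pass (green):
def test_formats_cents_as_euros():
    assert format_eur(1999) == "19.99 EUR"

def test_pads_to_two_decimals():
    assert format_eur(500) == "5.00 EUR"

test_formats_cents_as_euros()
test_pads_to_two_decimals()
print("all tests pass")
```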

Your Nice-To-Haves

  • You already speak German? Welcome!

If you like what you see, just send me a quick message by phone or email. In an initial informal call, we can then discuss the client and further details of the job.

Best regards

Lisa Paustian