Senior Data Engineer (f/m/x)
We are searching for a resourceful and innovative Senior Data Engineer to join our fast-growing team in Berlin. You will work with terabyte-scale quantities of data in near real-time, building efficient, fault-tolerant, production-grade pipelines. You will be part of a cross-functional team, collaborating closely with data scientists and analysts.
As an ad tech company, travel audience puts data at the core of everything it does. By joining us, you will find ways to solve complex technical challenges that directly impact our business and play an integral part in helping us become the world leader in the travel advertising domain.
What you will do:
- You'll be part of a team responsible for the design, development and maintenance of our streaming data processing pipelines, built with Apache Beam and running on Google Cloud Dataflow;
- You'll work closely with the Data Scientists and Data Analysts, providing them with the data access and support they need to analyse data and train and deploy machine-learning models;
- You'll own the full development and operations cycle of a product, whilst helping to ensure our systems are running correctly and efficiently;
- You'll use industry-proven tools such as git, and practices such as CI/CD in an agile setup;
- You'll be joining a cross-functional team. Aside from data engineers, our teams include Go Engineers, Data Scientists, Data Analysts, DevOps Engineers, and Frontend Engineers.
Why join us?
As part of our team, you will work in a highly motivated environment with flat hierarchies and short decision-making processes. You'll have a lot of freedom to contribute your own ideas, implement them and work with a modern tech stack. We offer you:
- A fast-paced industry where you handle new problems every day
- An environment where you are encouraged to research, explore and try new ways of doing things
- The opportunity to work with large amounts of data
- An open and dynamic start-up culture that supports a great work-life balance
You are who we are looking for
- You love Scala – either as a Scala developer or as a Python/Java developer willing to teach yourself a new language;
- You have practical experience with one or more distributed data processing frameworks such as Spark, Apache Beam, Google Cloud Dataflow, Flink, Storm or similar;
- You care deeply about software best practices and are dedicated to ensuring quality via testing, benchmarking and peer review;
- You have experience using a pub/sub or message-queue technology such as Kafka;
- You understand the theory behind different database technologies and data storage practices enough to make informed choices and match appropriate technologies to a given use-case;
- You are dedicated to continuous learning and improvement, and you enjoy helping those around you grow;
- You’re eager to understand the business logic of the company and the meaning of the data you’ll work with;
- You can speak, write and express yourself in English – our company’s working language – in a professional context;
- You feel at home collaborating in a workplace which is international, diverse, evolving and continually innovating.
We look forward to receiving your application and starting our journey together!