Mid-Level Data Engineer
Your Role: We are looking for a mid-level (2-3 years of experience) Data Platform Engineer to join our growing data team. The ideal candidate has a deep understanding of both technical data processes and data analysis, with an emphasis on cloud-based solutions. You will be responsible for developing an efficient data platform that can easily be extended with new data sources, and for building both simple and complex data pipelines.
 Responsibilities:
  • Design, construct, install, test and maintain highly scalable data management systems.
  • Ensure systems meet business requirements and industry best practices.
  • Collaborate with data scientists and other tech stakeholders for their data pipeline/flow requirements.
  • Work with Product Development, Category, and Marketing teams to assist with data-related technical issues and support their data infrastructure needs.
  • Provide maintenance for analytical data stores and query engines.
  • Develop and maintain modular and reusable ETL processes, including monitoring performance and data quality.
  • Provide a functional BI platform and oversee data access permissions within it.
  • Integrate new data management technologies and software engineering tools into existing structures.
  • Improve data foundational procedures, guidelines and standards.
 Qualifications:
  • Bachelor’s Degree in Computer Science, Electrical & Electronics Engineering, Mathematics, or a related field from a top school.
  • Highly proficient in spoken and written English.
  • Excellent verbal and written communication skills.
  • Experience with public cloud providers, with an emphasis on data storage and manipulation. Experience within the AWS ecosystem (EC2, S3, EMR, Glue, and Lambda) is highly preferable.
  • Programming experience in Python, with an emphasis on writing modular and reusable code.
  • Experience with code-based scheduling and orchestration tools (Prefect, Airflow, Dagster).
  • Experience writing SQL statements that process analytical datasets, with an emphasis on performance and readability (Snowflake, Redshift, Trino).
  • Experience developing, deploying, and scaling containerized applications, with a solid understanding of Docker fundamentals and concepts (Docker, Kubernetes, AWS ECS, …).
  • Understanding of DevOps best practices, version control, and Infrastructure as Code (Git, Terraform, AWS CDK, …).
  • Experience with open-source technologies is also highly preferable. We value the innovative and collaborative nature of the open-source community, and our data tech stack is primarily built using open-source tools.
 Tech Stack:  
  • Cloud: AWS
  • Languages: Python, Scala, SQL
  • Data Warehouse: Snowflake
  • Data transformation: Snowflake, Spark, dbt
  • Orchestrator: Prefect
  • Streaming: Kafka, Flink, Debezium
  • Infra as Code: AWS CDK, Terraform, GitHub Actions
  • Version control: Git
 Culture and environment - What to expect: 
  • An opportunity to bring impact and change the service market dynamics, in a stimulating and international environment
  • Great, knowledgeable colleagues - we spend a lot of time together and constantly learn from each other
  • Never-ending growth opportunities supported by learning & development fund
  • Beautiful central office in Milan, with in-office foosball and an Xbox to let off some steam after work
  • Flexible working hours and the possibility to work from home 
  • Lunch vouchers, free fruit & coffee
  • Everyday casual dress code, with Friday beers on us!
