Job description:
Your Mission:
- Implementing production-ready data pipelines with PySpark on Databricks, based on specifications provided by business analysts
- Collaborating with other data engineers and solution architects to meet functional and non-functional requirements
- Developing applications in an IDE such as VS Code or PyCharm
- Supporting the team in handling billions of records and hundreds of attributes
- Applying clean-code principles to large codebases
- Working in multi-disciplinary agile teams (direct involvement from business subject matter experts)
About the customer:
For our renowned client in the Insurance industry, we are looking for an experienced Senior Data Engineer (PySpark) for a major transformation project.
- Start: ASAP or no later than July 2025
- Duration: 1 year contract, possible extension
- Location: Zurich City
- Workload: 100%, 41.25h/week
- Contract via Swisslinx
Requirements:
Your Profile:
- 5+ years of proven experience developing complex software systems as a software engineer
- At least 3 years of hands-on experience with Apache Spark (PySpark)
- Strong software engineering skills, including familiarity with and application of design patterns and engineering best practices
- Strong understanding of data engineering techniques with high-level programming languages (e.g., Java, C#, or Python) and analytical frameworks
- Experience working with Delta Lake / Databricks and optimizing Spark workloads running on it
- Significant experience with projects using relational data models and SQL
- Good analytical skills and the ability to understand complexity and break it down into smaller, achievable steps
- Highly proficient in English (written and verbal)
Online Application (2 minutes)
If you are ready to make an impact today, we look forward to your online application with your latest CV and your availability. If your profile is shortlisted, our consultant will contact you for a first call to discuss the details.