In this role, you will take the lead in the conception, planning, and execution of Brompton Bicycle's data infrastructure.
You will work directly with diverse data sets from multiple systems, orchestrating their seamless integration and optimisation to enable our business to derive valuable insights. This will encompass everything from building data pipelines from scratch to managing and optimising them using the full range of tools available in the Azure Cloud.
A significant aspect of your role will be the migration of our existing on-premises databases to the Azure Cloud, a complex project requiring a deep understanding of cloud architecture and database management, as well as change management, ensuring continuity of data as a product for your stakeholders through a smooth data infrastructure transition.
As a vital part of our team, you will collaborate with diverse departments across our organisation (Finance, Planning, Commercial etc.), ensuring that the data solutions you architect are finely attuned to our unique business needs. Your work will directly support the creation of data-driven strategies that yield pivotal insights, bolstering decision making and strategic planning.
You will have the chance to contribute directly to our mission of revolutionising urban living, by ensuring that our data management and analysis processes are as efficient, reliable, and insightful as possible.
We’re looking for a superstar who can partner with our Head of Data and Analytics to develop, iterate and articulate an excellent data engineering roadmap across Brompton Bicycle, aligned with the priorities of the business.
This individual will primarily be hands-on, with experience of working with a range of structured and unstructured datasets, and different types of data pertaining to a number of departments across Brompton (Finance, Planning, Commercial). Experience of working with Azure Databricks, B2B/B2C/D2C business models and Fivetran will stand you in good stead. Our team works in 2-week sprints, so experience of delivering projects iteratively in partnership with technical and non-technical stakeholders is paramount. Prior line management experience may come in handy in the future, as the business has a great interest in continuing to invest in the Data and Analytics team, evidenced by this brand new role. We look forward to seeing your application!
Key Responsibilities:
- Develop, construct, test and maintain data architectures within large-scale data processing systems.
- Migrate existing on-premises databases to the Azure Cloud.
- Develop and manage data pipelines using Azure Data Factory, Delta Lake, and Spark, ensuring all data sets are secure, reliable, and accessible.
- Utilise Azure Cloud architecture knowledge to design and implement scalable data solutions.
- Utilise Spark, SQL, Python, R, and other data frameworks to manipulate data and gain a thorough understanding of the dataset's characteristics. This role requires the ability to comprehend the business logic behind the data's creation, with the aim of enhancing the data modelling process.
- Interact with API systems to query and retrieve data for analysis.
- Work closely with Business Analysts, IT Ops, and other stakeholders to understand data needs and deliver on those needs.
- Ensure understanding and compliance with data governance and data quality principles.
- Design robust data models that enhance data accessibility and facilitate deeper analysis.
- Implement and manage Unity Catalog for centralised data governance and unified access controls across Databricks.
- Maintain technical documentation for the entire code base.
- Take end-to-end ownership of the data engineering lifecycle.
- Implement and manage Fivetran for efficient and reliable ETL processes.
Who you are:
- Excellent problem-solving skills and attention to detail.
- Excellent communication and change management skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Familiarity with agile methodologies.
Desired Experience:
- Bachelor's degree in computer science, engineering, or equivalent experience.
- Extensive experience as a Senior Data Engineer/Cloud Data Architect, or similar role.
- Deep knowledge of Azure Cloud architecture, Azure Databricks, DevOps, and CI/CD.
- Extensive experience migrating on-premises data warehouses to the cloud.
- Proficiency with Spark, SQL, Python, R, and other data engineering development tools.
- Experience with metadata driven pipelines and SQL serverless data warehouses.
- Extensive knowledge of querying API systems.
- Extensive experience building and optimising ETL pipelines using Databricks.
- Understanding of data governance and data quality principles.
- Experience with implementing and managing Unity Catalog for data governance.
- Familiarity with Fivetran ETL tool for seamless data integration.
You might not tick all the boxes, and that’s okay; we still encourage you to apply. Here at Brompton we are always looking for people who share the same values and attitudes as we do, as we continue to build diverse teams and a sense of community which is made stronger by each new individual who joins.