RBC Brewin Dolphin is one of the UK's leading independent providers of discretionary wealth management. We offer award-winning personalised wealth management services that meet the varied needs of over 100,000 account holders, including individuals, charities, and pension funds.
We specialise in helping clients protect and grow their wealth by creating financial plans and investment portfolios that meet personal and professional ambitions and aspirations. Our services range from bespoke, discretionary investment management to retirement planning and tax-efficient investing.
About This Opportunity
This is a pragmatic, hands-on data modeller/data architect role within the Analytics & Data Team. You will model data and domains collaboratively with client-facing and internal analyst teams, and influence the implementation of analytical and data engineering data products through hands-on delivery of supporting artefacts, tests and code.
Responsibilities And Activities
- Data Evangelist. Be a data & analytics evangelist, passionate about data and about using your expertise to provide insight and capability.
- Requirements. Define data-centric, solution-focused, outcome-driven requirements using appropriate notations and tools, informed by the data itself.
- Data analysis & design. Identify the business processes and systems that curate and maintain data, to ensure the data being sourced is trustworthy and reliable; recommend data management and data quality improvements, or produce targeted designs and supporting materials that deliver data products (reports, models, etc.).
- Data Exploration. Explore and analyse large volumes of complex data.
- Hypothesis Testing. Create and validate hypotheses against the data to answer business questions.
- Insights. Profile and analyse large volumes of data on modern data platforms using SQL, Power BI and Python (a minimal profiling sketch follows this list).
- Data pipeline design. Review source data and inform data engineering of requirements for the acquisition of new data sources.
- Data modelling. Provide semantic and analytics models to conform source data to usable data models for reporting.
- Data management. Design appropriate structures and tools to improve the semantic understanding of data (catalogues and data contracts), the protection of data (data classification) and the retention of data.
- Collaborate internally. Work closely with data engineers to specify solutions and deliver rapid insights and analytics to the business.
- Commercial. Work closely with client-facing and other business stakeholders (e.g. Operations, Finance, AML) to grow understanding of our business and identify opportunities with data (functional artefacts such as business process and object models).
- Visualization. Provide basic visualizations that aggregate or explain the data, whether historical or predictive using ML/AI (working closely with Data Scientists or BI Specialists).
- Analytics-applied Data Governance. Devise, or align to, business-value-driven data governance and information privacy initiatives, e.g. data privacy, security and access control, data retention, and metadata management (data catalogues). Apply awareness of the technical governance and 'guardrail' solutions required around a lakehouse platform.
- Agile Delivery. Comfortable working in an agile data product environment where data architecture and design are part of delivering an analytics capability to the business.
- Experience and Enthusiasm. Experience of modelling within a financial environment, coupled with enthusiasm for the business value that effective data distribution and data analytics can deliver for a market-leading wealth manager.
- Flexible & Independent. The majority of the work will be within the Data & Analytics team, which requires accurate, certified datasets based on core data domain models and dimensional warehouse designs to drive insights.
- Integration and Migration. Support the definition of data migration, integration, and master data management strategies to displace legacy systems incrementally and provide OLTP integration or OLAP reporting solutions. Awareness of sourcing data in real time or batch through various approaches, and of the impact each approach has on a data platform.
- Educate and train. The technical data analyst should be curious and knowledgeable about new data initiatives and how to address them, applying their data and domain understanding to new data requirements. They will also be responsible for proposing appropriate (and innovative) data ingestion, preparation, integration and operationalization techniques to address these requirements optimally.
- Communication. Communicate efficiently, prepare well and use a variety of representations to convey ideas effectively.
- Teamwork. Help your team develop, both individually and collectively, through development conversations, coaching and feedback.
- You agree to comply with any reasonable instructions or regulations issued by the Company from time to time, including those set out in the dealing and other manuals, staff handbooks and all other group policies.
- Deliver incrementally, meet and exceed expectations, and realize SMART objectives set by your manager.
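To illustrate the kind of SQL/Python profiling referenced in the Insights responsibility above, here is a minimal sketch using pandas. The file name and columns are hypothetical placeholders, not an actual RBC Brewin Dolphin dataset.

```python
import pandas as pd

# Hypothetical extract of a sourced dataset; file and column names are
# illustrative placeholders only.
df = pd.read_csv("client_transactions.csv")

# Basic profile: row count, per-column types, null rates and distinct
# counts -- the first checks to run on any newly acquired data source.
profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_rate": df.isna().mean().round(3),
    "distinct_values": df.nunique(),
})
print(f"rows: {len(df)}")
print(profile)

# Summary statistics across numeric and categorical columns.
print(df.describe(include="all").transpose())
```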
Skills/Qualifications
- A bachelor's or master's degree in computer science, software engineering, statistics, applied mathematics, information systems, information science or a related quantitative field (or equivalent work experience) is required
- The ideal candidate will have a combination of software engineering skills, data modelling skills, data architecture skills, and BI (or ML/AI) advanced analytics skills with a technical or computer science degree, or equivalent work experience
- Industry professional certifications and/or technical certifications in Azure data technologies and, specifically, Databricks, Azure Data Factory, and/or Power BI
Personal Skills And Attributes
- Demonstrated success in working with both IT and business while integrating analytics and data output into business processes and workflows.
- Ability to collaborate with technical and business personas.
- Experience working with popular data discovery, analytics and BI software tools such as Power BI, Tableau, Qlik and others for semantic-layer-based data discovery.
- Solid understanding of popular open-source and commercial data science tools and platforms such as Python, R, KNIME, Alteryx and others is a plus but not required.
- Experience of growing a data management capability from a data architecture and data governance perspective (data catalogues, master data management)
- Experience of working with data science teams to refine and optimize data science and machine learning models and algorithms is a plus but not required.
- Experience in producing high-level and low-level requirements and designs for data in motion or at rest that meet business and data engineering needs, expressed as Entity Relationship diagrams, UML class diagrams and similar models and artefacts.
- Experience in defining appropriate target data models (from conceptual to logical to physical) against an established data pattern (relational and normalized, Data Vault 2.0, Kimball dimensional) congruent with the consumers' needs (e.g. MI analytics, master DB, data hub, operational data stores); a minimal star-schema sketch follows this list.
- Experience with various Data Management architectures like Data Warehouse, Data Lake, Data Hub and the supporting processes like Data Integration, Governance, Metadata Management
- Background or ability to design, build or spike data pipelines and data structures encompassing data transformation, data models, schemas, metadata and workload management, using various database tooling (advanced SQL, Python and low-code tools).
- Background in working with large, heterogeneous datasets to build and optimize data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies (advanced SQL, Python and low-code tools).
- Strong experience in working with data governance, data quality and data security teams, and specifically with information stewards and privacy and security officers, to move data pipelines into production with appropriate data quality, governance and security standards and certification. Ability to build quick prototypes and to translate them into data products and services in a diverse ecosystem.
- Strong experience with popular programming languages for relational databases (SQL, T-SQL), plus certifications or demonstrable knowledge of NoSQL and Spark-oriented platforms such as Cosmos DB, DynamoDB or Databricks.
- Strong experience of working with SQL-on-Hadoop-style tools and technologies, including Hive, Presto and others on the open-source side, and Azure Synapse Analytics (SQL Data Warehouse), Azure Data Factory (ADF), Databricks, Snowflake and others on the commercial vendor side.
- Strong experience with advanced analytics tools and object-oriented or functional scripting languages such as Python, Java, C++, Scala, R and others.
- Experience in working with both open-source and commercial message queuing technologies such as Kafka, JMS, Azure Event Hubs or Service Bus, Amazon Simple Queue Service and others; stream data integration technologies such as Apache NiFi, Apache Beam, Apache Kafka Streams and Amazon Kinesis; and stream analytics technologies such as Apache Kafka KSQL, Apache Spark Streaming, Apache Samza and others (a minimal streaming sketch follows this list).
- Experience in working with DevOps capabilities such as version control, automated builds, testing and release management, using tools such as Git, Azure DevOps, Jenkins, Puppet, Ansible or similar.
- Demonstrated ability to work across multiple deployment environments (cloud, on-premises and hybrid) and multiple operating systems, and with containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service and others.
- Adept in agile methodologies and capable of applying DevOps, and increasingly DataOps, principles to data pipelines to improve the communication, integration, reuse and automation of data flows between data managers and consumers across an organization.
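As an illustration of the Kimball dimensional pattern mentioned above, here is a minimal star-schema sketch built with Python's standard sqlite3 module. The tables and columns are hypothetical, chosen only to show the surrogate-key and fact-table shape.

```python
import sqlite3

# Minimal Kimball-style star schema: two dimensions and one fact table.
# All table and column names are hypothetical, for illustration only.
ddl = """
CREATE TABLE dim_client (
    client_key  INTEGER PRIMARY KEY,  -- surrogate key
    client_id   TEXT NOT NULL,        -- natural (business) key
    client_name TEXT,
    client_type TEXT                  -- e.g. individual, charity, pension
);

CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,    -- e.g. 20240624
    full_date TEXT NOT NULL,
    year      INTEGER,
    month     INTEGER
);

CREATE TABLE fact_transaction (
    transaction_id TEXT PRIMARY KEY,  -- degenerate dimension
    client_key     INTEGER NOT NULL REFERENCES dim_client(client_key),
    date_key       INTEGER NOT NULL REFERENCES dim_date(date_key),
    amount         REAL NOT NULL      -- additive measure
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
```

The same shape carries over to warehouse platforms such as Synapse or Databricks; only the DDL dialect changes.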
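Similarly, a minimal sketch of consuming one of the message streams listed above, using the confluent-kafka Python client; the broker address, consumer group and topic name are hypothetical assumptions.

```python
from confluent_kafka import Consumer

# Hypothetical broker, consumer group and topic -- illustrative only.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "analytics-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["client-transactions"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # wait up to 1s for a record
        if msg is None:
            continue                      # no message this interval
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Message values arrive as raw bytes; decode before downstream use.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```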
Interpersonal Skills and Characteristics
- Strong experience supporting and working with cross-functional teams in a dynamic business environment.
- Required to be highly creative and collaborative. The ideal candidate will collaborate with both business and IT teams to define the business problem, refine the requirements, and design and develop data deliverables accordingly, and will hold regular discussions with data consumers on enhancing the design and with engineers or BI Analysts on improving the code.
- Required to be approachable and able to interface with, and gain the respect of, stakeholders at all levels and roles within the company.
- Is a confident, energetic self-starter, with strong interpersonal skills.
- Has good judgment, a sense of urgency and has demonstrated commitment to high standards of ethics, regulatory compliance, customer service and business integrity.
To apply for this role, please click the "Apply" button below. The closing date for applications is 24/06/2024. Some of our vacancies receive high numbers of applications, and we may decide to close applications sooner, so we would encourage you to apply as early as possible.