Assignment Details
The candidate will assess requirements in support of business use cases and work with the platform and solution teams to architect, build, and support pipelines on the Enterprise Data & Analytics Platform.
- Design and implement data pipelines using Databricks, Spark, and other Big Data technologies.
- Collaborate with data scientists, analysts, and business stakeholders to understand their data needs and build solutions that meet those needs.
- Develop and maintain ETL workflows, including data transformation, validation, and loading.
- Implement and maintain data security policies and procedures to ensure the confidentiality, integrity, and availability of data.
- Build and maintain monitoring, alerting, and logging solutions to ensure the health and performance of data pipelines.
- Optimize data pipelines for performance, scalability, and cost.
Skills & Requirements
- Experience with data engineering in both cloud and on-premises environments
- Experience implementing solutions using Azure Data Lake, Azure Data Factory, and Azure Databricks
- Experience creating data ingestion and data pipelines using Python / Databricks
- Experience with pipeline automation & orchestration
- Experience with Python or Spark SQL development
- Experience supporting big data and analytics projects and/or products
The following are not required for this position but would be considered valuable assets in a candidate.
- Excellent communication skills, verbal and written
- Experience facilitating meetings and working sessions
- Experience in Azure DevOps and working in an Agile methodology
- Ability to lead design decisions
Required Skills: Data Warehouse
Additional Skills: Data Warehouse Engineer