Synergy Interactive is partnering with a digital design and tech consultancy to fill a Data Engineer position. This short-term contract position is remote, with a strong preference for candidates in the Kansas City Metro area. The role requires experience with Azure and cloud-native technologies, along with prior experience replacing legacy systems and performing data migrations.
NOTE: NO 3RD PARTY CANDIDATES OR RESUME SOLICITATIONS WILL BE ACCEPTED
Responsibilities:
- Rebuild Legacy Systems: Analyze existing legacy data infrastructure and develop a comprehensive strategy to rebuild and modernize it using Azure-based technologies.
- System Design and Planning: Lead the design of a scalable, efficient new data architecture that aligns with business needs and objectives.
- Azure Integration: Utilize Microsoft Azure services (Azure Data Lake, Azure Synapse, Azure SQL Database, Azure Data Factory, Azure Databricks) to build and maintain the new data infrastructure.
- Data Migration: Plan and execute data migration strategies, ensuring a seamless transition from legacy systems to the new Azure-based environment with minimal downtime while preserving data integrity.
- Data Pipeline Development: Design and implement efficient ETL/ELT pipelines to process large data volumes, ensuring data is cleansed, transformed, and loaded into the appropriate systems for analysis.
- Collaboration and Communication: Collaborate closely with stakeholders, including data scientists, business analysts, and software developers, to ensure the new data infrastructure meets their needs.
- System Performance and Optimization: Continuously monitor and optimize the performance, reliability, and scalability of the data infrastructure.
- Documentation and Best Practices: Create and maintain detailed documentation for system architecture, workflows, and data processes. Establish and enforce best practices for data engineering and cloud-based systems.
Qualifications:
- 3+ years of experience as a Data Engineer, focusing on rebuilding legacy systems and designing modern data architectures.
- Expertise in Microsoft Azure cloud services, including Azure Data Lake, Azure Synapse Analytics, Azure SQL Database, Azure Data Factory, and Azure Databricks.
- Strong understanding of data architecture principles, including data modeling, ETL processes, and database design.
- Experience building scalable, efficient ETL/ELT pipelines and data workflows.
- Solid programming skills in Python, SQL, or other relevant languages for data manipulation and automation.
- Familiarity with Big Data technologies such as Spark, Hadoop, or Kafka is a plus.
- Experience with DevOps practices, including CI/CD pipelines, version control, and automated testing, particularly within the Azure ecosystem.
- Knowledge of data governance, security, and compliance best practices, especially in cloud environments.
- Ability to analyze complex business requirements and translate them into technical solutions.
Nice-to-Have Skills:
- Experience with Power BI or other data visualization tools.
- Familiarity with Azure Machine Learning and data science pipelines.
- Experience with NoSQL databases like Cosmos DB.
- Understanding of containerization and microservices architecture using tools like Docker and Kubernetes.