Job Title: Azure Data Engineer
Remote
Working Hours: Mandatory PST hours
Contract
This is a 100% hands-on Data Engineer role, not a Lead/Architect position.
Must Have:
- Data Engineer - 8-10 years experience
- Azure: Expert - 8 years experience
- ADF: Expert - 6 years experience
- Databricks: Expert - 6 years experience
- Spark (PySpark, SparkSQL): Expert - 6 years experience
- Azure DevOps CI/CD - 4 years experience
Job description for the Sr. Data Engineer supporting DSnA:
- We operate in an Azure + Databricks Lakehouse. We’ll need a person with:
- Azure experience – ADF for orchestration, ADLS for storage, Azure DevOps for CI/CD
- Databricks experience – all compute/ETL runs on Databricks and is written in Spark (PySpark, SparkSQL)
- PowerShell experience – this is our scripting language of choice
- SQL proficiency – it is used everywhere (T-SQL, PostgreSQL)
- Proficiency with Parquet and Delta formats
- Additionally – they will need experience in:
- SDLC + CI/CD – we follow a standard deployment process (dev, test, prod) that includes peer-reviewed code. They need to be comfortable with standard DevOps practices.
- Should have a deep understanding of indexes and partitioning.
- Should be proficient at optimizing code for performance (able to read a Spark DAG and determine where the cost-based optimizer (CBO) plan is spending the most resources)
- Should be proficient at writing idempotent code – code that can run repeatedly and produce the same state (we have a custom SQL deployment framework).
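To make the idempotency expectation above concrete, here is a minimal sketch in plain Python (no Spark dependency); the `merge_load` helper, the `id` key, and the sample rows are hypothetical illustrations, not part of the team's actual framework. The merge-style load converges to the same state on every run, analogous to a Delta Lake `MERGE INTO`, while a naive append does not.

```python
def append_load(table, rows):
    # Naive append: re-running the same batch duplicates data -- NOT idempotent.
    table.extend(rows)
    return table

def merge_load(table, rows, key="id"):
    # Merge/upsert keyed on a primary key: re-running the same batch
    # converges to the same state -- idempotent.
    by_key = {r[key]: r for r in table}
    for r in rows:
        by_key[r[key]] = r
    return list(by_key.values())

batch = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]

# Append twice: the table grows to 4 rows (state depends on run count).
appended = append_load([], batch)
appended = append_load(appended, batch)

# Merge twice: the table stays at 2 rows (same state every run).
merged = merge_load([], batch)
merged = merge_load(merged, batch)
```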