- Data Engineer: 8-10 years
- Azure: 6 years
- ADF: 6 years
- Databricks: 6 years
- Spark (PySpark, SparkSQL): 6 years
Job description for the Sr. Data Engineer role supporting DSnA:
We operate in an Azure + Databricks Lakehouse. We’ll need a person with:
- Azure experience – ADF for orchestration, ADLS for storage, Azure DevOps for CI/CD
- Databricks experience – all compute/ETL runs on Databricks and is written in Spark (PySpark, SparkSQL)
- PowerShell experience – this is our scripting language of choice
- SQL proficiency – it’s used everywhere (T-SQL, PostgreSQL)
- Proficiency with Parquet and Delta formats
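The Parquet/Delta and partitioning requirements come down to understanding how a lakehouse table is laid out on storage. As a minimal, hypothetical sketch (table and column names are illustrative, not from this posting), hive-style partitioning and partition pruning can be simulated with plain directories standing in for Parquet files:

```python
# Hypothetical sketch: hive-style partition layout and partition pruning,
# with empty files standing in for real Parquet data files on ADLS.
import tempfile
from pathlib import Path

def build_partitioned_table(root: Path, dates: list[str]) -> None:
    """Lay out one directory per partition value, as Spark/Delta would."""
    for d in dates:
        part_dir = root / f"event_date={d}"
        part_dir.mkdir(parents=True, exist_ok=True)
        # Placeholder for a real parquet data file inside that partition.
        (part_dir / "part-0000.parquet").touch()

def prune_partitions(root: Path, min_date: str) -> list[str]:
    """Keep only partitions matching the predicate; the rest are never read."""
    kept = []
    for part_dir in sorted(root.iterdir()):
        value = part_dir.name.split("=", 1)[1]
        if value >= min_date:  # predicate evaluated on the path, not the data
            kept.append(value)
    return kept

root = Path(tempfile.mkdtemp())
build_partitioned_table(root, ["2024-01-01", "2024-01-02", "2024-01-03"])
print(prune_partitions(root, "2024-01-02"))  # → ['2024-01-02', '2024-01-03']
```

The point of the sketch: a well-chosen partition column lets the engine skip whole directories instead of scanning every file, which is exactly the intuition this role needs when designing Delta tables.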
Additionally, they will need experience in:
- SDLC + CI/CD – we follow a standard deployment process (dev, test, prod) that includes peer-reviewed code. They need to be comfortable with standard DevOps practices.
- Should have a deep understanding of indexes and partitioning.
- Should be proficient at optimizing code for performance (able to read a Spark DAG and determine where the cost-based optimizer (CBO) is spending the most resources)
- Should be proficient in writing idempotent code – code that can run repeatedly and produce the same state (we have a custom SQL deployment framework)
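The idempotency requirement above is the same idea as a MERGE-style upsert keyed on a business key: re-running the same deployment or batch must leave the table in the same final state. A minimal, hypothetical sketch (the `id` key and row shapes are illustrative, not part of the actual framework):

```python
# Hypothetical sketch of idempotent loading: a MERGE-style upsert keyed on a
# primary key, so applying the same batch twice is a no-op rather than an
# append that duplicates rows.
def merge_upsert(target: dict[int, dict], updates: list[dict]) -> dict[int, dict]:
    """Insert or overwrite rows by key; never blindly append."""
    for row in updates:
        target[row["id"]] = row  # existing key → update, new key → insert
    return target

table: dict[int, dict] = {}
batch = [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]

once = merge_upsert(dict(table), batch)
twice = merge_upsert(merge_upsert(dict(table), batch), batch)
assert once == twice  # same final state no matter how many times it runs
```

In the actual stack this would typically be a Delta `MERGE INTO` or a keyed SQL upsert rather than a Python dict, but the property being tested is identical: rerun-safe deployments.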