Role: Azure Data Architect
Duration: Contract
Location: SFO – Locals Preferred
Experience: Minimum 13 years – Max 23 years
Rate: Open but should be genuine
Experience working with customer Business/IT teams
1. Mandatory: end-to-end experience working in a Databricks data environment (Dev, QA, and Prod) with large data sets (petabytes).
2. Experience with Python and deployments for data sourcing and loads into ADLS (Azure Data Lake Storage), plus third-party API integration.
3. Very strong knowledge of SQL and various Spark optimization techniques.
4. Comfortable with the broader Databricks ecosystem.
5. Experience creating data processing jobs in Apache Spark and running them on Databricks.
6. Strong knowledge of Spark and Airflow integration and orchestration.
7. Basic knowledge of Unix commands.
8. Must have worked on several projects in the big data space.
9. Hands-on experience with the following:
1. Must have: Databricks, SQL, and PySpark
2. Strong in Python/Scala
Job Description of Role
The candidate is expected to play a developer role: interacting with the customer, working with teams (local or remote), and leading technical or business discussions in key meetings.
The ideal candidate will have held this role before, having led the design and implementation of several complex data-oriented solutions on a Hadoop big data platform.
The role combines hands-on design/implementation with consulting. It requires envisioning, designing, and implementing the technical solution in collaboration with the customer and the internal technical team.
Experience Range for Primary Skills
4 to 8 years
Secondary Skills (Good to have)
Experience working with Azure Data Services
Soft skills/other skills (If any)
a) Good communication
b) Neutral accent
c) Positive attitude, positive body language