Data Engineer with Databricks and Azure
Location: Remote
Full-Time
Job Description:
PySpark
Databricks
SQL
Candidates must have the following mandatory skills:
Strong hands-on experience with PySpark and Apache Spark.
Experience migrating native Spark workloads to Databricks.
Experience implementing massively parallel processing layers in Spark SQL and PySpark.
Experience implementing cost-effective infrastructure in Databricks.
Experience extracting logic from on-prem layers such as SSIS, stored procedures, Informatica, Vertica, Apache Hudi, file systems, etc. into PySpark.
Hands-on experience guiding virtual data model definition and defining data virtualization architecture and deployment, with a focus on Azure, Databricks, and PySpark technologies.