· Hands-on development experience and understanding in designing and developing data-processing patterns that simplify the complexity of real-world data engineering architecture. These patterns are cost-efficient and scalable, delivering the performance and reliability of a modern Lakehouse with the low latency of streaming.
· Understand, design, and develop standard framework modules, high-performance services, and client libraries for big data using one or more tools from AWS, Azure, Kubernetes, Spark, DBT, Databricks, Snowflake, AWS S3, and Azure Blob Storage.
· Hands-on experience operationalizing tens to hundreds of data pipelines: deploying, testing, and upgrading pipelines, and applying best practices that eliminate the operational burden of building and managing high-quality data pipelines.
· Experience with query optimizers and execution engines that are fast, tuning-free, and scalable, and with ACID transactions and time-travel patterns using Spark jobs and DBT on Databricks and Snowflake.
· Quickly evaluate various technologies and complete POCs that drive architecture design for the applications.
· Work in a complex environment with multi-location teams.
· Work in a team of agile developers.