Job Title: Site Reliability Engineer - Databricks & Snowflake
Job Summary: We are looking for a skilled Site Reliability Engineer with expertise in optimizing Databricks jobs and Snowflake queries. As an SRE, you will play a critical role in ensuring the performance, reliability, and scalability of our data processing systems. You should have a deep understanding of the Databricks and Snowflake platforms, with a focus on performance tuning and production support. This role requires a strong grasp of SQL, experience in query optimization, and the ability to conduct regression testing on ETL pipelines after tuning.
Key Responsibilities
- Analyze Databricks jobs and Snowflake queries to identify performance bottlenecks and inefficiencies.
- Develop and implement strategies for optimizing query performance without compromising data integrity.
- Collaborate closely with data engineering teams to understand requirements and fine-tune complex SQL queries.
- Conduct regression testing on ETL pipelines after query optimizations to ensure consistency and reliability.
- Apply your Spark expertise to optimize Spark SQL queries running on the Databricks platform.
- Monitor system performance metrics and proactively address issues to maintain high availability and reliability.
- Document performance improvements, best practices, and guidelines for data processing workflows.
- Provide technical support to L2 teams and play a key role in resolving production issues related to performance.
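The regression-testing responsibility above can be sketched in miniature: after rewriting a query for performance, run both the original and the optimized form against the same data and confirm the result sets are identical. This is an illustrative sketch only; the table, data, and queries are hypothetical, and SQLite stands in for the actual Databricks or Snowflake engine.

```python
import sqlite3

# Hypothetical data set standing in for a production table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'EMEA', 100.0), (2, 'EMEA', 50.0), (3, 'APAC', 75.0);
""")

# Original query with a redundant subquery that a tuning pass would remove.
original_sql = """
    SELECT region, SUM(amount) AS total
    FROM orders
    WHERE region IN (SELECT DISTINCT region FROM orders)
    GROUP BY region ORDER BY region
"""
# Optimized rewrite: same semantics, no subquery.
optimized_sql = """
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region ORDER BY region
"""

baseline = conn.execute(original_sql).fetchall()
tuned = conn.execute(optimized_sql).fetchall()

# The optimization must not change the results.
assert tuned == baseline
print("regression check passed:", tuned)
```

In practice the same pattern scales up: capture a baseline of the pipeline's output (row counts, checksums, or full result sets) before tuning, rerun after tuning, and diff the two.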
Qualifications
- Proven experience as a Site Reliability Engineer or similar role with a strong focus on Databricks and Snowflake.
- In-depth understanding of Databricks architecture, Snowflake data warehousing, and their respective query languages.
- Hands-on experience in optimizing SQL queries for performance in large-scale data environments.
- Experience with regression testing methodologies for ETL pipelines and data workflows.
- Strong analytical and problem-solving skills, with a keen attention to detail.
- Excellent communication skills and ability to collaborate effectively with cross-functional teams.
- Bachelor’s degree in Computer Science, Engineering, or a related technical field.
Preferred Skills
- Certification in Databricks or Snowflake platforms is a plus.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Knowledge of DevOps practices and experience with CI/CD pipelines.
This position offers an exciting opportunity for a proactive and technically proficient individual to contribute to the optimization and reliability of our data processing systems using cutting-edge technologies like Databricks and Snowflake.