At USEReady, we empower people to succeed with data. USEReady is a data and analytics firm that provides the strategies, tools, capability, and capacity that businesses need to turn their data into a competitive advantage. USEReady partners with cloud and data ecosystem leaders like Tableau, Salesforce, Snowflake, and Amazon Web Services, and has been named Tableau Partner of the Year multiple times.
Headquartered in NYC, the company has 600+ employees across offices in the U.S., Canada, Singapore, and India, and specializes in financial services. USEReady’s deep analytics expertise, unique player/coach approach, and focus on fast results make the company a perfect partner for a cloud-first, digital world.
Job Title: Senior Databricks Data Engineer
Location: NYC
Job Type: Long-term contract
Role Overview
The engineer will implement monitoring solutions, optimize ETL jobs for performance, and provide comprehensive support from data ingestion to final output. With in-depth knowledge of the Databricks platform and strong analytical skills, this position will significantly enhance the department's ability to deliver high-quality, data-driven insights and solutions.
Job Responsibilities
- Strategize, design, and develop alongside a team of dynamic, passionate data engineers to deliver automated cloud infrastructure and DevOps solutions
- Design and develop ETL code to integrate various data sources
- Create and maintain efficient data pipelines on the Databricks platform
- Develop, maintain, and optimize ETL pipelines for both streaming and batch data processing
- Ensure data integrity and consistency throughout the ETL lifecycle
- Provide comprehensive support for ETL processes, from data ingestion to final output
- Write and execute unit tests and integration test cases for ETL code to ensure high-quality outcomes
- Implement monitoring solutions for ETL jobs to ensure timely and successful data processing
- Proactively identify, troubleshoot, and resolve issues in ETL workflows
- Optimize ETL jobs for maximum performance and efficiency through performance tuning and troubleshooting
- Mentor other data engineers on the team, cross-train, and provide guidance
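The ETL responsibilities above follow a common pattern: ingest, transform, load, with row-count monitoring and unit-testable stages. As a rough illustration only, here is a minimal plain-Python sketch of that pattern; the function names (`extract`, `transform`, `load`, `run_pipeline`) are hypothetical, and on Databricks these stages would typically be PySpark DataFrame operations orchestrated via Databricks Workflows rather than in-memory lists.

```python
# Illustrative sketch of an ingest -> transform -> load pipeline with
# basic monitoring. Not Databricks-specific; all names are hypothetical.
import logging
from typing import Iterable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")


def extract(source: Iterable[dict]) -> list[dict]:
    """Ingest raw records from a source (stubbed as an in-memory iterable)."""
    return list(source)


def transform(records: list[dict]) -> list[dict]:
    """Apply cleansing rules: drop records missing required fields,
    and cast `amount` to float for consistency."""
    return [
        {**r, "amount": float(r["amount"])}
        for r in records
        if r.get("id") is not None and r.get("amount") is not None
    ]


def load(records: list[dict], sink: list[dict]) -> int:
    """Write transformed records to a sink; return the row count for monitoring."""
    sink.extend(records)
    return len(records)


def run_pipeline(source: Iterable[dict], sink: list[dict]) -> int:
    """End-to-end run with simple monitoring: log counts at each stage so
    dropped or failed records are visible."""
    raw = extract(source)
    clean = transform(raw)
    loaded = load(clean, sink)
    log.info("ingested=%d transformed=%d loaded=%d", len(raw), len(clean), loaded)
    return loaded
```

Because each stage is a pure function, each can be unit-tested in isolation, which is the property the testing and monitoring responsibilities above are asking for.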
Minimum Qualifications
- Minimum of a Bachelor's degree in Computer Science, Information Systems, Engineering, or Data Science
- Minimum 10 years of experience in designing, developing, and optimizing ETL processes
- 5+ years of experience in developing/supporting a data platform in Azure Databricks
- Proficiency in creating and maintaining efficient data pipelines on the Databricks platform
- Well-rounded experience working in a DevOps environment, supporting data platform processes for business units and their use cases
- Strong verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders
- Ability to troubleshoot and resolve issues in a timely manner
- Experience working in an Agile/Scrum environment
- Team player with the ability to work collaboratively in a cross-functional team
- Strong analytical and problem-solving skills
- Ability to work independently, handle multiple tasks simultaneously and adapt quickly to change with a variety of people and work styles
- Willingness to learn new technologies and continuously improve skills
Preferred Qualifications
- In-depth knowledge of Databricks platform and technologies, including Delta Lake, Databricks SQL, and Databricks Workflows
- Experience with the Azure cloud platform and Azure Data Lake Storage
- Knowledge of data warehousing, data modeling, and best practices
- Proficiency in programming languages such as Python, SQL, Scala, or R
- Experience with big data technologies such as Apache Spark, Hadoop, or Kafka
- Experience with Hadoop-to-Databricks migration projects
- Familiarity with DevOps practices and tools such as CI/CD, Git, etc.
- Knowledge of Infrastructure as Code (IaC) tools like Terraform
- Experience with implementing data governance and security measures in a cloud environment