At Cognizant, our global community sets us apart—an energetic, collaborative and inclusive workplace where everyone can thrive. And with projects at the forefront of innovation, you can build a varied, rewarding career and draw inspiration from dedicated colleagues and leaders. We are seeking someone who thrives in this setting and is inspired to craft meaningful solutions through true collaboration. Cognizant is right where you belong!
We Are Cognizant Artificial Intelligence
Digital technologies, including analytics and AI, give companies a once-in-a-generation opportunity to perform dramatically better than ever before. To seize it, however, clients need new business models built by analyzing customers and business operations from every angle. By applying artificial intelligence and data science to business decisions through enterprise data management solutions, we help leading companies prototype, refine, validate, and scale their most promising products and delivery models within weeks.
Job Summary: We are seeking a highly skilled Senior Data Engineer with 6 to 8 years of experience in Python, PySpark, Databricks, SQL, and Databricks Workflows. This role involves developing and optimizing data workflows, ensuring data integrity, and contributing to the overall success of our data-driven projects.
Experience: 7 to 12 years
Technical Skills Required: Python, PySpark, Databricks, SQL
Domain Skills: Life and Annuities Insurance
What You Bring To The Role
- Possess a minimum of 6 years of experience in Python, PySpark, Databricks, Databricks Workflows, and SQL.
- Exhibit proficiency in developing and optimizing data workflows.
- Show capability in writing complex SQL queries for data extraction and manipulation.
- Display experience in troubleshooting and resolving data processing issues.
- Have a track record of ensuring data integrity and accuracy.
- Exhibit strong documentation skills for technical specifications.
What You’ll Do
- Develop and optimize data workflows using Databricks Workflows to ensure efficient data processing.
- Build and maintain ETL pipelines using Python and PySpark to handle large-scale data sets.
- Utilize Python to develop robust and scalable data solutions.
- Write complex SQL queries in Databricks SQL to extract and manipulate data as needed.
- Ensure data integrity and accuracy by implementing best practices in data validation and quality checks.
- Conduct performance tuning and optimization of data workflows to enhance system efficiency.
- Troubleshoot and resolve issues related to data processing and workflows promptly.
Certifications Required
- Databricks Certified Data Engineer
- Databricks Certified Associate Developer for Apache Spark
Hybrid Working Arrangements
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring three days a week in a client or Cognizant office in Toronto or Kitchener. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various well-being programs.
Note: The working arrangements for this role are accurate as of the date of posting. They may change based on the project you’re engaged in, as well as business and client requirements. Rest assured, we will always be clear about role expectations.