Job Title
Data Engineer
Relevant Experience (in yrs)
- 2 or more years’ experience developing, deploying, and supporting high-quality, fault-tolerant data pipelines (leveraging distributed data-movement technologies and approaches, including but not limited to ETL and streaming ingestion and processing)
- Experience building batch/streaming data pipelines using AWS services such as S3, EMR, DynamoDB, Kinesis, Lambda, Glue, Redshift, CloudWatch, Data Pipeline, and Step Functions
- 1 or more years’ experience utilizing Linux scripting (Python, Shell, etc.)
- 1 or more years’ experience in SQL development and/or NoSQL databases
- 1 or more years’ experience with cloud-based data offerings (AWS, Amazon Redshift, Snowflake, Google Cloud Platform, Microsoft Azure)
- Experience in the P&C (Property & Casualty) insurance domain
Technical/Functional Skills
Snowflake, AWS, dbt, Informatica ETL, SQL, insurance domain knowledge
Experience Required
2 to 4 Years
Roles & Responsibilities
- Analyze, develop, refactor, fix, test, review, and deploy functionality and bug fixes in the ETL processes that move data between Snowflake data layers.
- Perform database and query tuning; diagnose and resolve performance issues, leveraging ELT and push-down optimization where required.
- Use and improve ETL frameworks, continuous data quality frameworks, and other automation in the data pipelines.
- Service data-availability SLOs and attend triage meetings, engaging with the Security, Infrastructure, and Workload Management teams to drive issue resolution.
- Comply with Agile SDLC controls in Jira, ServiceNow change and incident management, and the DataOps GitLab CI/CD pipeline.
- Participate in daily standups, lead design reviews, and coordinate with offshore teams.