Role Description
This is a full-time remote role for a Data Engineer with 4+ years of experience in Python, GCP, HL7, and IRIS. The Data Engineer will work on the design, development, and implementation of data models, ETL processes, data analytics, and data warehousing. They will also develop efficient and scalable data pipelines, integrate with external systems using APIs, and troubleshoot and fix technical issues in collaboration with development teams.
Must have:
- 3+ years of hands-on experience coding on the IRIS for Health data platform and creating IRIS components using ObjectScript
- 2+ years working with HL7, CCD/CDA, XML, and FHIR JSON data
- 1+ years working in a GCP environment (GKE, StatefulSets, PVs, Cloud Storage, BigQuery, Dataproc, Dataflow, Pub/Sub, Cloud Composer, etc.)
- Seasoned IRIS for Health engineers with experience troubleshooting production IRIS servers
- Identify and resolve IRIS issues on production clusters, including purging and bringing IRIS nodes back up when needed
- Support IRIS for Health image upgrades
- Support the L1 Ops team in identifying and flagging issues via observability dashboards (Grafana, logging)
- Good knowledge of GCP and GKE
Qualifications:
- Data Engineering: In-depth understanding of data engineering concepts, methods, architectures, and tools.
- Extract Transform Load (ETL): Experience in designing, developing, and optimizing ETL processes, data pipelines, and workflows.
- Data Warehousing: Knowledge of data warehousing concepts, architectures, and technologies.
- Data engineering skills including data pipeline development; data ingestion, processing, and transformation; data architecture; analytical queries using SQL, Hadoop, and Hive; and the ability to solve challenging analytical problems.
- Excellent problem-solving, analytical, and communication skills.
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science, or a related field.