Data Engineering
Looking to build the data infrastructure side from the ground up
Tableau - reporting tool
Scope: data pipelining, data lake, data warehouse, and ETL, then connecting to reporting tools such as Tableau and Qlik Sense
Contract - 6 months, through end of 2024 (6-month maximum contract term)
Top skills: Data modeling, data warehousing, ETL pipelines, scripting in Python, Scala, or Java, AWS (Lambda)
Years of experience: 3-5
Work set up: Hybrid - 3 days a week (Redmond, WA) (will consider Remote candidates)
Interview: 2 rounds
Culture fit: A self-starter who can adjust to a new environment easily; someone coming from a start-up would be ideal.
Basic Qualifications
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
- Experience with AWS database/ETL tools including Lambda, Glue, Redshift, DynamoDB, EMR
- Experience building data products incrementally and integrating and managing data sets from multiple sources
- Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.
Description
We are seeking a Data Engineer to provide data engineering services and solutions that enable programs to scale with growth and complexity. The role requires deep expertise in the design, creation, management, and business use of datasets across a variety of data platforms. You will be responsible for designing, developing, and operating a data service platform, using Python and SQL to build the ETL, analytics, and data quality components. You will design and implement data models and build the end-to-end infrastructure on which reports and dashboards are created. The ideal candidate will be an expert in sourcing semi-structured and complex data types and normalizing them into consumable data models that are accessible to other users. You will develop data products, infrastructure, and data pipelines leveraging AWS services (Glue, EMR, Lambda, Redshift, QuickSight) and internal tools.
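As a rough illustration of the kind of work described (normalizing semi-structured records into a consumable model inside an AWS Lambda function), here is a minimal Python sketch. The event shape, field names, and handler logic are illustrative assumptions, not part of this role's actual codebase:

```python
import json

def normalize(record: dict) -> dict:
    """Flatten a nested, semi-structured record into a consumable row.

    Hypothetical schema: the input has an optional nested "user" object;
    the output is a flat dict suitable for loading into a warehouse table.
    """
    user = record.get("user", {})
    return {
        "event_id": record.get("id"),
        "event_type": record.get("type", "unknown"),
        "user_id": user.get("id"),
        "user_region": user.get("region"),
    }

def handler(event, context):
    """AWS Lambda entry point: transform a batch of raw JSON records.

    In a real pipeline the rows would be written onward (e.g. to S3 or
    Redshift via boto3); here the transformed batch is simply returned.
    """
    rows = [normalize(json.loads(r)) for r in event.get("records", [])]
    return {"rows": rows, "count": len(rows)}
```

In practice a function like this would sit behind an event source (Kinesis, S3, SQS) and hand off to Glue or Redshift for the warehouse-side load; the sketch only shows the normalization step.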
About the team
Our vision is to build best-in-class analytics solutions that support our internal stakeholders and partners in making data-driven decisions org-wide. On our team, you will have the opportunity to dive deep into complex business and data problems and drive high-impact, large-scale data solutions.
Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience operating large data warehouses
- Proficiency in the DevOps style of software deployment (infrastructure-as-code)
- Proficiency with AWS technologies including SNS, SQS, SES, Route 53, CloudWatch, and VPC
- Background in big data, non-relational databases, and data mining is a plus