What you will do is why you should join us:
• Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets
• Take pride in software craftsmanship, apply a deep knowledge of algorithms and data structures to continuously improve and innovate
• Work with other top-level talent solving a wide range of complex and unique challenges that have real world impact
• Explore relevant technology stacks to find the best fit for each dataset
• Pursue opportunities to present our work at relevant technical conferences
• Apply your talent to the projects that interest you; strength of ideas trumps position on the org chart
If you share our values, you should have:
• At least 7 years' experience in software engineering
• At least 2 years' experience with Go
• Proven experience (at least 2 years) building and maintaining data-intensive RESTful APIs
• Experience with stream processing using Apache Kafka
• Comfort with unit testing and test-driven development methodologies
• Familiarity with creating and maintaining containerized application deployments using a platform such as Docker
• A proven ability to build and maintain cloud-based infrastructure on a major cloud provider such as AWS, Azure, or Google Cloud Platform
• Experience with data modeling for large-scale databases, relational or NoSQL
Bonus points for:
• Experience with protocol buffers and gRPC
• Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine or Kubernetes
• Experience working with scientific datasets, or a background in the application of quantitative science to business problems
• Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation
#6305