Dice is the leading career destination for tech experts at every stage of their careers. Our client, Jobot, is seeking the following. Apply via Dice today!
3 Days Onsite - Hybrid Opportunity
This Jobot Job is hosted by: Heather Burnach
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $170,000 - $210,000 per year
A bit about us:
We are a company that is a different kind of private investment firm. We invest in technology and technology-enabled companies that innovate and disrupt existing industries - from biosciences to transportation to food to health and wellness.
Our mission is to invest in and work side by side with companies that make the world a better place. These companies include SpaceX, Anduril, GoPuff, HackerOne, Cloud9, and others. We've had the honor of serving some of the world's greatest entrepreneurs and companies.
Why join us?
- High Salary + Bonus
- PTO and Sick Pay
- Health insurance, dental, and vision coverage
- 401k
- Hybrid Work after starting
Job Details
About the Role:
- Design, build, and deploy ETL and ELT pipelines that consume third-party data from disparate sources
- Build tooling to monitor data pipelines and perform troubleshooting and debugging to identify and resolve data quality and performance issues
- Build and maintain the data and computing infrastructure that powers the deployment of highly valuable products that support the Firm's investment process
We're excited about candidates who have...
B.S. and/or M.S. in Computer Science, Engineering, Mathematics, or related field
2+ years of proven data and/or backend engineering experience designing and implementing data infrastructure solutions, with significant contributions that you can speak to
Experience owning projects at every step: ideation, infrastructure, implementation, and monitoring
Exceptional coding skills in Python for data processing and automation
Additional experience with RESTful API design, especially using Flask, Django, or FastAPI, is a plus
Highly proficient in SQL, with working knowledge of multiple database types
Experience with the following:
Modern cloud platforms (Google Cloud Platform, AWS or Azure) and related services (e.g. BigQuery, Elastic, Snowflake)
Big Data processing (Spark or PySpark)
Architecting, deploying, and managing infrastructure using Docker
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
Software Engineer - Data Pipelines