Job Description:
Data Engineering Intern
Blockhouse is focused on real-time machine learning and data engineering, building scalable infrastructure for high-frequency ML models that redefine how organizations extract actionable insights from data. Our systems drive the future of real-time analytics, leveraging cutting-edge technology to deploy machine learning pipelines with sub-second response times. If you’re passionate about building the future of MLOps and want to work with a world-class team, this is your opportunity.
Role Description:
We are looking for an exceptional Data Engineering Intern to join our team and help architect the data systems of the future. In this role, you will build and scale real-time data pipelines and analytics infrastructure, powering high-frequency machine learning models. This is not a typical internship – you will work on mission-critical projects that process millions of data points per second, collaborating closely with machine learning scientists and MLOps engineers.
Your work will directly influence the performance of trading models and real-time decision-making engines. You’ll work with cutting-edge technologies for event-driven streaming and OLAP analytics, delivering insights at scale and speed.
Key Responsibilities:
- Real-Time Data Pipelines: Design, develop, and optimize real-time data pipelines that feed high-frequency machine learning models. Ensure seamless data ingestion, transformation, and storage for analytics and machine learning at scale.
- Advanced Data Integration: Collaborate with MLOps engineers and machine learning teams to ensure real-time data flows between systems, enabling models to continuously learn from and react to new data streams.
- Performance Optimization: Work on optimizing the performance and reliability of data architectures using technologies like ClickHouse for high-throughput OLAP querying and Redpanda for low-latency event streaming.
- Real-Time Monitoring & Diagnostics: Implement robust monitoring and diagnostic tools to track the health and performance of data pipelines, ensuring real-time models are supplied with accurate, up-to-date data.
- Cloud Infrastructure: Build and manage scalable cloud infrastructure to support data pipelines in production, leveraging AWS or GCP services to ensure fault-tolerant, cost-efficient deployments.
- Collaborate with Elite Teams: Engage with top-tier engineers, data scientists, and quantitative researchers to build scalable solutions that bridge the gap between data engineering and machine learning.
What You’ll Need:
- 1+ Years of Data Engineering Experience: Hands-on experience building and scaling data pipelines, especially in high-throughput, low-latency environments.
- Mastery of Real-Time Data Systems: Expertise in real-time data streaming and processing, with strong hands-on experience using technologies like Redpanda (or Kafka) and ClickHouse (or similar OLAP databases).
- Proficiency in Data Engineering Tools: Strong command of Python, SQL, and other tools commonly used in data engineering. Experience with frameworks such as Apache Spark, Airflow, or similar is a plus.
- Cloud Expertise: Proven experience with cloud platforms such as AWS or GCP, including services like S3, Lambda, EKS, or other tools for building scalable data infrastructure.
- Data Architecture & Integration: Experience architecting systems that handle both streaming and batch processing, integrating real-time pipelines with machine learning workflows.
- Monitoring at Scale: Familiarity with monitoring and alerting tools such as Prometheus, Grafana, or CloudWatch to ensure seamless operation of real-time data systems.
Ideal Candidate Profile:
- Passion for Real-Time Systems: A deep interest in building data systems that operate in real time, optimizing for performance, latency, and throughput.
- Experience with High-Frequency Systems: Familiarity with the challenges and complexities of handling large-scale, high-frequency data.
- Self-Motivated & Results-Driven: You thrive in a fast-paced environment, are self-driven, and have the ability to work independently on complex tasks.
- Collaborative Mindset: A team player with excellent communication skills, who can work effectively across teams to drive innovation and problem-solving.
This is a part-time role (20 hours/week). Candidates must be able to attend standup meetings at 10am EST and remain available during working hours thereafter. This role is not a winter or summer position – we are looking to fill it as soon as possible.
Why You Should Join Us:
- Innovative Environment: Be part of a team that is pushing the boundaries of real-time data engineering, solving complex challenges in financial technology and beyond.
- Expert Team: Work alongside some of the brightest minds in data engineering, machine learning, and quantitative research.
- Professional Growth: Blockhouse fosters a culture of continuous learning and development, ensuring you gain hands-on experience with cutting-edge technologies and best practices.
- Cutting-Edge Projects: You’ll work on transformative projects that directly impact the future of trade execution, real-time analytics, and financial technology.
- Compensation & Perks: Equity-only compensation. NYC-based employees enjoy daily free lunch and weekly company bonding events.
How to Apply:
If you are passionate about real-time data systems and eager to apply your skills to solve complex engineering challenges, join us at Blockhouse. Together, we will redefine the future of data engineering and real-time analytics.