Apron was started by a group of people who'd spent years building products for global fintech companies. But there was one big problem no one was solving: business payments. The kind that buy tomatoes, tools, and till rolls. The kind that keep suppliers happy and business booming. The kind that should be super simple to make and manage, and yet aren't. Payments eat up valuable hours every week for both businesses and the accountants and bookkeepers who help them.
This is a problem that's affecting entrepreneurs. Florists and financial analysts. Brewers and brand strategists. The kind of people who build things, break things, change things. Imagine what they could do with this time instead. What would they come up with? What would they create?
That's why we built Apron as a payments powerhouse. We flip the payment experience from blocking business to boosting it. Apron pulls all things payments together - weaving into your workflow, collating conversations, turning hours into minutes. So you can put those hours to better use - plan the future, take a walk, call your mum.
We are backed by Index Ventures and Bessemer Venture Partners.
Who We're Looking For
We are building a product that allows our clients to upload all their invoices and receipts and have them automatically processed and ready to be paid. Our goal is to make it the number one product on the market by the end of the year.
To achieve this, we need to develop a document recognition service that operates with exceptional quality, speed, and high availability.
We are looking for an engineer to help us build the infrastructure for training and deploying the models for such a service.
Another important area is making sure the relevant data for model training, inference, and analytics is available. This requires skills in building data pipelines, running data quality checks, and managing databases and infrastructure.
The ideal candidate should have extensive experience and a broad understanding of MLOps and data engineering. They should be capable of identifying which technologies will benefit us and which would be excessive for our needs, keeping the solution efficient and uncomplicated.
What You'll Be Doing
- Organising model serving, ensuring high performance under heavy load, and setting up monitoring dashboards and alerts
- Suggesting appropriate architecture and tooling for serving models, for example serving on GPUs where needed
- Introducing MLOps tools for model development and serving: dataset and model storage and versioning, reproducible model training, model evaluation, and metrics visualisation
- Setting up document labelling tools and model retraining based on online feedback
- Ensuring data security in service and training pipelines
- Developing data pipelines, and managing and optimising data infrastructure, to ensure the relevant data is available for model training, inference, and data analytics
- Contributing to the team's development best practices, such as testing
Requirements
- 5+ years of experience in MLOps, machine learning, or related areas
- Extensive knowledge of Python and SQL (PostgreSQL preferred)
- Experience serving machine learning models and using MLOps tools (MLflow, DVC, or similar)
- Basic knowledge of machine learning algorithms, models, and statistical concepts
- Experience with cloud computing platforms (we use GCP) and containerization technologies (e.g., Docker, Kubernetes)
- Experience with data pipeline development and data management (Airflow or similar)
- Knowledge of Kotlin is a plus - all backend code except ML services is written in this language
- Experience running A/B tests and working with A/B testing platforms is a plus
- Experience building infrastructure for online metrics monitoring is a plus (Kafka, Grafana, etc.)
Benefits
- Competitive salary and stock options
- Fully expensed tech
- Health insurance via AXA
- Flexible holidays and WFH