Learn more about Poorna on her LinkedIn.
I work on the machine learning team at Upstart. Upstart is an online lending platform whose mission is to “enable effortless credit based on true risk”. The phrase “true risk” is a reference to ML, which is used at the core of Upstart’s product to predict loan outcomes, identify prospective customers, catch fraud, and more.
I joined Upstart in 2017 as an IC and worked across several different project areas for a few years before eventually trying out management. Currently, I lead the ML team that works on fraud detection and verification models.
I studied electrical engineering as an undergrad in India. After undergrad, I felt pulled in multiple directions and was confused about what to do next. I took a year off, and after that ended up deciding to go to Stanford for an MS in Statistics. The idea to study statistics took root after a friend with similar interests chose to pursue that path. I had enjoyed my undergrad classes on probability and statistics, and I found it appealing that statistics was a tool that could be used to study problems in different domains.
My time at Stanford really opened the doors of machine learning to me. I learned a lot, partly because statistics was quite new to me at the start of the program and partly because I signed up for several machine learning classes which required more sophisticated coding than I’d encountered before. I also TA’ed a few classes (including Andrew Ng’s ML class), and worked on a side project for which I had an RAship.
I decided to go into industry after my MS. I had enjoyed studying ML in school, and ML was a common career path after a stats degree, so becoming an ML practitioner in industry was not particularly radical.
As for what factors helped me get here, to be clear, there was some luck involved—in being born into a family that had access to and encouraged education, getting into Stanford, etc. But in terms of things I could control, I’ve found the following helpful:
No two days or weeks look quite the same, but looking back at the last 6 months, I’ve spent time supporting my team, providing technical review and guidance, defining our goals and roadmap, identifying infrastructural needs and advocating for tooling and platforms (“MLOps”) to enable the ML team, and working on IC projects. Part of my job is doing whatever needs to be done to provide business value in the product through ML, which includes responding to urgent business needs, picking up IC projects, or anything else.
To some extent, I’ve taken this for granted at my workplace... ML is considered to be at the heart of Upstart, and there’s been strong conceptual alignment from well before I started.
To successfully leverage ML for a business, I think there needs to be alignment between the ML team and product or business decision-makers. Product teams should understand the abilities and limits of ML. ML teams should understand business goals and tradeoffs, and be familiar with the product and user experience. Communication, curiosity, and transparency really help to build this kind of cohesion. Also, hire high caliber team members on both sides (ML and product) who ask the right questions and demonstrate good judgement in their selection of problems to work on.
If you’re trying to get the business to buy into ML in the first place, measure and communicate the business value brought about by your ML models.
It’s not as cookie-cutter or linear as my response might suggest, but here you go:
On the engineering side, my team collaborates with data engineers, ML infra engineers, and product engineers. I can’t overstate how much my team depends on these partners for success! Scaling happens through better tooling (more on this later), and we lean heavily on our partner teams to enable ML by building platforms that support our workflows. Scaling also happens through good technical writing and code quality, which help new team members ramp up faster, reuse prior work, etc.
I hesitate to answer this question, since I’m not a guru by any means. It seems valuable for an organization to support collaboration and align incentives between ML scientists and all their cross-functional partners (infra and data engineering teams, product engineers, PMs, etc.). This could be true of multiple org structures, I guess, and as a company grows, you’ll probably re-evaluate the org structure, so be nimble and open to change. The processes and structure that work for a team of 5 will probably change as the team doubles or quadruples.
Upstart is investing in infrastructure to speed up and systematize different parts of the ML lifecycle. For example, the infra and DE teams are in the process of building a feature store to enable ML scientists to quickly discover data sources and build the right datasets for model training. Our infra team is trying to increase automation in model training and research workflows (by introducing tools like Metaflow and Airflow). We use MLflow in places to track research experiments and make research more reproducible. That said, Upstart is still early in its journey here and still learning best practices, and the MLOps industry itself is evolving very quickly, so the tooling of choice might change. Beyond any specific tool, it’s important to know (measure!) how much time ML scientists are spending on which parts of their workflow, and ease the bottlenecks.
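To make that concrete, here’s a minimal sketch of the kind of experiment tracking MLflow enables; the experiment name, run name, and model choice are illustrative rather than our actual setup:

```python
# Minimal, illustrative MLflow experiment-tracking sketch (not Upstart's actual setup).
# Each run logs its parameters, metrics, and fitted model so experiments stay
# comparable and reproducible.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("fraud-model-research")  # hypothetical experiment name

params = {"n_estimators": 200, "learning_rate": 0.05, "max_depth": 3}
with mlflow.start_run(run_name="gbm-baseline"):
    mlflow.log_params(params)
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")  # saved artifact for later comparison or promotion
```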
Personally, I’ve also found it valuable to invest in efforts to understand the data that’s available to the ML team, know where it’s coming from, and measure its quality. These investigations have been quite insightful in revealing areas where we need to improve. Poor quality data may not visibly slow the research process down, but it can hobble ML systems and experiments. Relatedly, if your product is not instrumented to collect some types of data, or is collecting it and not making it accessible to the ML team, that’s an opportunity cost in terms of ML advancement.
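As an illustration of what those checks might look like (the column names and the key are hypothetical), even a lightweight report like this can surface missingness, duplicates, and out-of-range values before they quietly degrade a model:

```python
# Illustrative data-quality report; column names and thresholds are hypothetical.
import pandas as pd

def data_quality_report(df: pd.DataFrame, key: str = "application_id") -> dict:
    report = {
        "n_rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_rate": df.isna().mean().to_dict(),  # per-column missingness
    }
    # Example range check: annual income should never be negative.
    if "annual_income" in df.columns:
        report["negative_income_rows"] = int((df["annual_income"] < 0).sum())
    return report

# Usage idea: run this on every new training extract and alert when a
# missingness rate or range check crosses an agreed threshold.
```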
It also helps to have well-architected production systems, where you can easily swap out one ML model or decision system for another, run live experiments, and capture data to track key metrics. You want to have interfaces that support reusing code between research and production. Upstart’s product engineering teams are working to refactor parts of our production codebase to support this kind of iteration.
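Here’s a rough sketch of what I mean by a swappable interface; it’s an illustration rather than Upstart’s actual codebase, and the names are made up:

```python
# Sketch of a small interface that production code depends on, so any conforming
# model or decision system can be swapped in behind it (names are hypothetical).
from typing import Protocol

import pandas as pd

class RiskModel(Protocol):
    model_id: str

    def predict_proba(self, features: pd.DataFrame) -> pd.Series:
        ...

def decide(features: pd.DataFrame, model: RiskModel, threshold: float = 0.5) -> pd.Series:
    """Production decision step that depends only on the RiskModel interface."""
    scores = model.predict_proba(features)
    return scores >= threshold  # e.g. flag for manual review

# Swapping in a retrained or challenger model means passing a different RiskModel
# implementation; the surrounding pipeline, logging, and experiment plumbing stay unchanged.
```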
Internally in the ML function, you can unlock faster iteration by thinking critically (especially in the early stages of a project), making pragmatic choices, and trying to get feedback fast. For example, before trying out a complicated solution for a new use case which would take a long time to build and validate, test out a simple baseline in production to get quick feedback.
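For instance, something as simple as the sketch below (purely illustrative) gives you a fast read on feasibility and an honest bar that the complicated solution must clear:

```python
# Illustrative baseline comparison: a majority-class dummy and a logistic
# regression set the bar before investing in anything more complex.
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def evaluate_baselines(X_train, y_train, X_test, y_test) -> dict:
    baselines = {
        "majority_class": DummyClassifier(strategy="most_frequent"),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    results = {}
    for name, model in baselines.items():
        model.fit(X_train, y_train)
        scores = model.predict_proba(X_test)[:, 1]
        results[name] = roc_auc_score(y_test, scores)  # a new approach should beat these
    return results
```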
More than any specific process or tools, I’d want to make sure the team was constantly trying to improve:
This is my opinion, and not all of it reflects what Upstart is doing today, but I think Upstart has bought into these ideas and is moving towards them.
Currently we monitor models through a combination of dashboards and ad-hoc analysis, but we are discussing internally how to make this more seamless. As I’ve alluded to above, I think model monitoring is a very important part of the ML lifecycle.
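As one example of a check that monitoring can automate (a generic sketch, not our actual stack), the population stability index flags when live score or feature distributions drift away from what the model was trained on:

```python
# Generic population stability index (PSI) sketch for drift monitoring.
# Assumes a continuous score or feature; bin edges come from the reference sample.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, n_bins: int = 10) -> float:
    """PSI between a reference sample (e.g. training scores) and a live sample."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch live values outside the reference range
    expected_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_frac = np.clip(expected_frac, 1e-6, None)
    actual_frac = np.clip(actual_frac, 1e-6, None)
    return float(np.sum((actual_frac - expected_frac) * np.log(actual_frac / expected_frac)))

# A common rule of thumb treats PSI above ~0.2 as a signal to investigate drift.
```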
We don’t currently retrain models automatically, but that’s a direction I think we’ll move towards. (I think the ideal is to have nearly automatic retraining, but a human should still review a report on model performance and metrics, and have the opportunity to intervene or dig deeper, before approving a new model to go to production.)
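A sketch of what that human-in-the-loop gate could look like (the report fields, thresholds, and function names are hypothetical):

```python
# Hypothetical promotion gate: retraining can be automated, but a candidate model
# is only promoted after automated checks pass AND a reviewer signs off.
from dataclasses import dataclass

@dataclass
class CandidateReport:
    model_id: str
    test_auc: float
    current_model_auc: float
    data_quality_ok: bool

def should_promote(report: CandidateReport, reviewer_approved: bool, min_gain: float = 0.0) -> bool:
    """Automated checks gate the candidate; human approval is still required."""
    passes_checks = report.data_quality_ok and (report.test_auc - report.current_model_auc) >= min_gain
    return passes_checks and reviewer_approved

# Usage idea: the retraining pipeline emits a CandidateReport; a reviewer inspects it
# and sets reviewer_approved, and only then does deployment proceed.
```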
It’s easiest to deploy a retrained model with no API changes. But when the model API changes (usually because of new types of inputs to the model) we do have to update our production pipelines. With the feature store that our infra and data teams are building, we hope to make this process seamless.
Scrappiness; the ability to ask the right questions; technical depth; code quality; defensive thinking (an interest in checking assumptions, sanity-checking results, testing in general); curiosity about data; good judgement; communication skills.
I learn quite a bit from encountering problems at work, thinking about them, and trying to find out how ML practitioners or researchers elsewhere approach similar problems. I also learn from my colleagues, especially from their technical feedback on projects. Some of my colleagues write exceptionally thoughtful and elegant ML code, and just reading their code and understanding how they approach problems is illuminating.
Outside of work, it’s hard to keep up with ML developments because the field is evolving so quickly, and there is so much content out there! I appreciate having someone send me a trickle feed of good stuff to pay attention to (plugging MLOps RoundUp by Nihit and Rishabh!). Pre-COVID, I would attend talks and conferences (my favorite is WIDS), and I loved to learn about others’ work in machine learning. Since the pandemic, things have gone virtual, and although that’s lowered the barrier to attendance, work has been busier, so ironically I’ve attended fewer events… I hope to attend more going forward. I would also like to make more time for structured learning outside of work, for example, by working my way through a textbook or doing the occasional course (I took a class last year and it was quite rewarding, but challenging to juggle alongside work).