Ray Use Cases

How Snorkel Builds Interactive Enterprise ML Products Using Ray

September 18, 4:00 PM - 4:30 PM

Snorkel helps the world's largest organizations solve their toughest ML challenges. To continue our growth and deliver on our product initiatives around foundation models and large language models, we needed to fully redesign our interactive ML systems so that our products stay performant as data and model scale increase.

However, building low-latency ML products for enterprises is challenging. Some enterprises are on-premises only and have limited compute resources. Others have very large datasets that need to be processed interactively. Still others want to use the latest and greatest large language models. How do you go from sprawling requirements to an architecture that is performant for everyone?

In this talk, we share how we went from customer requirements all the way to using Ray to design and build the interactive ML system that now powers our flagship enterprise product, Snorkel Flow. We'll dive into:

  • Distributed data/task parallelism to run ML workloads of any scale for resource-constrained customers.

  • Scalable in-memory processing for blazing-fast ML workloads for resource-abundant customers (a rough sketch of both approaches follows this list).

  • How we combined the above two approaches into a single architecture deployable in any customer environment.

  • Lessons learned from using Ray to build performant ML systems.
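To make the first two points concrete, below is a minimal, illustrative Ray sketch (in Python) of batch-level task parallelism combined with Ray's shared in-memory object store. This is not Snorkel Flow's actual code; the function label_batch, the batch size, and the toy dataset are hypothetical placeholders.

    # Illustrative sketch only, not Snorkel Flow's implementation.
    import ray

    ray.init()  # connect to an existing cluster if configured, otherwise start locally

    @ray.remote(num_cpus=1)  # cap per-task resources so tasks fit constrained, on-prem clusters
    def label_batch(batch):
        # Stand-in for real per-batch ML work (e.g., applying labeling functions or a model).
        return [len(str(row)) for row in batch]

    # Split a large dataset into batches and fan them out as parallel Ray tasks.
    dataset = list(range(100_000))
    batches = [dataset[i:i + 10_000] for i in range(0, len(dataset), 10_000)]

    # ray.put stores each batch once in Ray's shared in-memory object store,
    # so tasks reference it instead of shipping the data with every call.
    batch_refs = [ray.put(b) for b in batches]
    result_refs = [label_batch.remote(ref) for ref in batch_refs]

    results = ray.get(result_refs)  # block until all tasks finish, then gather outputs
    print(sum(len(r) for r in results))

The same pattern scales in both directions: tighter resource caps and smaller batches for resource-constrained deployments, more workers and a larger object store for resource-abundant ones.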

About Will

Will is a software engineer at Snorkel AI and the tech lead on interactive ML infrastructure, enabling enterprises to run heavy ML workloads over large datasets at low latency. Prior to Snorkel, Will co-founded include.ai, a Sequoia-backed company. Before that, he spent time at DeepMind and Google Brain, where he co-authored work on reinforcement learning for chip design. Will studied computer science at Stanford.

About John

John Allard is a Member of Technical Staff at OpenAI, where he works on the fine-tuning product. Prior to OpenAI, he worked at the intersection of infrastructure and backend systems as a staff engineer at Snorkel AI. A UC Santa Cruz Computer Science graduate, John is passionate about API design, distributed systems, and large-scale inference.

Will Hang

Software Engineer, Snorkel AI

John Allard

Member of Technical Staff, OpenAI

Ready to Register?

Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.


Join the Conversation

Ready to get involved in the Ray community before the conference? Ask a question in the forums. Open a pull request. Or share why you’re excited with the hashtag #RaySummit on Twitter.