Ray Deep Dives

Ray Data Streaming for Large-Scale ML Training and Inference

September 19, 1:00 PM - 1:30 PM

Some of the most demanding ML use cases involve pipelines that span both CPU and GPU devices in distributed environments. Most frequently, this situation occurs in batch inference, which involves a CPU-intensive preprocessing stage (e.g., video decoding or image resizing) before a GPU-intensive model makes predictions. It also occurs in distributed training, where similar CPU-heavy transformations are required to prepare or augment the dataset prior to GPU training. In this talk, we examine how Ray Data streaming works and how to use it in your own machine learning pipelines to address these common workloads, utilizing all your compute resources, both CPUs and GPUs, at scale.

Takeaways

• Ray Data streaming is the new execution strategy for Ray Data in Ray 2.6

• Ray Data streaming scales data preprocessing for training and batch inference to heterogeneous CPU/GPU clusters
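The workload described above can be sketched with Ray Data's batch APIs. This is a minimal, illustrative example, not code from the talk: the dataset path, the `preprocess` function, and the `Model` class are hypothetical stand-ins for a real CPU preprocessing stage and GPU model. With the streaming execution strategy, Ray Data pipelines the two stages so CPU preprocessing and GPU inference run concurrently rather than as separate bulk phases.

```python
# Hypothetical sketch of a heterogeneous batch-inference pipeline using
# Ray Data streaming (Ray >= 2.6). `preprocess`, `Model`, and the S3 path
# are illustrative placeholders, not part of the Ray API.
import ray


def preprocess(batch):
    # CPU-intensive stage, e.g. decoding or image resizing.
    batch["image"] = batch["image"] / 255.0
    return batch


class Model:
    def __init__(self):
        # Load the model weights onto the GPU once per actor.
        ...

    def __call__(self, batch):
        # GPU-intensive prediction stage.
        batch["pred"] = run_model(batch["image"])  # placeholder
        return batch


ds = ray.data.read_images("s3://my-bucket/images")  # illustrative source
ds = ds.map_batches(preprocess)  # scales out across CPU cores
ds = ds.map_batches(
    Model,
    compute=ray.data.ActorPoolStrategy(size=4),  # pool of GPU actors
    num_gpus=1,  # one GPU per actor
)
for batch in ds.iter_batches(batch_size=64):
    ...  # consume predictions as they stream through the pipeline
```

Because execution is streaming, batches flow through `preprocess` and `Model` simultaneously, so neither the CPUs nor the GPUs sit idle waiting for the other stage to finish.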

About Eric

Eric Liang is a software engineer at Anyscale and the tech lead for the Ray open-source project. He is interested in building reliable and performant distributed systems. Before joining Anyscale, Eric was a staff engineer at Databricks, and he received his PhD from UC Berkeley.

Eric Liang

Software Engineer, Anyscale

Ready to Register?

Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.


Join the Conversation

Ready to get involved in the Ray community before the conference? Ask a question in the forums. Open a pull request. Or share why you’re excited with the hashtag #RaySummit on Twitter.