Large Language Models

Python-centric AI Application Building in Minutes with Lepton and Ray

September 19, 2:30 PM - 3:00 PM
View Slides

Despite the rapid success and adoption of AI, bridging research flexibility with infrastructure scalability remains a major challenge. This is even more true with the recent explosion of AI-generated content (AIGC) and large language models (LLMs). Different AI models require different software runtimes, and container technology alone falls far short on user-friendliness. There is a silver lining: the AI software stack is more Pythonic than ever before, opening up significant potential to simplify AI application building. In this talk, we present approaches to turn AI research into AI applications efficiently and within minutes using Lepton's Python SDK, and show how Lepton and Ray can be used together to bring such solutions to scale.
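To give a flavor of the Python-centric serving pattern the talk discusses, the sketch below defines a model endpoint entirely in Python and scales it out with Ray Serve. It is an illustrative assumption, not Lepton's SDK or the speaker's implementation: the TextGenerator class, the gpt2 model, and the request schema are placeholders chosen only to show how a pure-Python application can be replicated across a Ray cluster.

    # Illustrative sketch only: a Python-defined model endpoint scaled with Ray Serve.
    # The TextGenerator class, gpt2 model, and request schema are placeholders,
    # not Lepton's SDK or the approach presented in the talk.
    from ray import serve
    from starlette.requests import Request
    from transformers import pipeline


    @serve.deployment(num_replicas=2)  # Ray Serve replicates the endpoint across the cluster
    class TextGenerator:
        def __init__(self):
            # Any Python model-loading code can go here.
            self.pipe = pipeline("text-generation", model="gpt2")

        async def __call__(self, request: Request) -> dict:
            payload = await request.json()
            out = self.pipe(payload["prompt"], max_new_tokens=32)
            return {"generated_text": out[0]["generated_text"]}


    # Bind the deployment into an application and start it with:
    #   serve run my_module:app
    app = TextGenerator.bind()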

About Yangqing

Yangqing is the co-founder and CEO of Lepton AI, a startup helping enterprises run AI applications efficiently, at scale, and in minutes. Before this, he was a Vice President at Alibaba, leading Big Data and AI. He previously led AI infrastructure at Facebook and, before that, did research at Google Brain. He created and contributed to open source libraries such as Caffe, ONNX, and PyTorch. Yangqing holds a PhD in EECS from UC Berkeley.

Yangqing Jia

Co-Founder and CEO, Lepton AI

Ready to Register?

Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.


Join the Conversation

Ready to get involved in the Ray community before the conference? Ask a question in the forums, open a pull request, or share why you're excited using the hashtag #RaySummit on Twitter.