Large Language Models

Advances in Foundation Models — Technology, Society, and Applications

September 18, 2:30 PM - 3:00 PM

Foundation models, trained on broad data and adaptable to a wide range of tasks, represent a paradigm shift in AI. But open questions remain: How do we make training far more efficient and accurate through technical advances? How do we improve social responsibility through documentation and thoughtful benchmarking? What novel applications can benefit from the power of foundation models? In this talk, I will present recent projects that advance each of these three pillars and discuss the promise of efforts to connect them.

About Dr. Percy Liang

Dr. Percy Liang is an Associate Professor of Computer Science and Statistics at Stanford, with a focus on Human-Centered Artificial Intelligence. Dr. Liang directs the Center for Research on Foundation Models (CRFM) and has more than a decade of experience in machine learning and natural language processing. He is actively leading research on foundation models and their impact on AI system design and user interaction.

Dr. Percy Liang

Associate Professor of Computer Science and Statistics, Stanford

Ready to Register?

Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.


Join the Conversation

Ready to get involved in the Ray community before the conference? Ask a question in the forums, open a pull request, or share why you're excited with the hashtag #RaySummit on Twitter.