Large Language Models (LLMs) are starting to revolutionize how users search for, interact with, and generate new content. Recent stacks and toolkits around Retrieval Augmented Generation (RAG) have emerged that let users build applications such as chatbots using LLMs on their own private data. This opens the door to a vast array of applications. However, while setting up a naive RAG stack is easy, there is a long tail of data challenges that the user must tackle to make their application production-ready. In this talk, we give practical tips on how to manage data for building a scalable, robust, and reliable LLM software system, and how LlamaIndex + Ray provide the tools to do so.
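To make the "naive RAG stack is easy" point concrete, here is a minimal sketch of the basic LlamaIndex pattern: load local documents, build an in-memory vector index, and query it. The "./data" directory and the example question are placeholders, the default LLM/embedding backend is assumed to be OpenAI (requiring an API key in the environment), and import paths can differ across LlamaIndex versions. The production concerns the talk addresses (chunking, metadata, evaluation, scaling ingestion with Ray) are exactly what this naive setup leaves out.

```python
# Minimal "naive RAG" sketch with LlamaIndex (illustrative only).
# Assumes `pip install llama-index` and OPENAI_API_KEY set in the
# environment; import paths may vary by LlamaIndex version.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load every file from a local "./data" folder (placeholder path).
documents = SimpleDirectoryReader("./data").load_data()

# Embed the documents and build an in-memory vector store index.
index = VectorStoreIndex.from_documents(documents)

# Ask a question over the indexed private data.
query_engine = index.as_query_engine()
response = query_engine.query("What does the document say about pricing?")
print(response)
```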
Bio: Jerry is the co-founder/CEO of LlamaIndex, an open-source tool that provides a central data management/query interface for your LLM application. Before this, he spent his career at the intersection of ML, research, and startups. He led the ML monitoring team at Robust Intelligence, did self-driving AI research at Uber ATG, and worked on recommendation systems at Quora. He graduated from Princeton in 2017 with a degree in CS.
Come connect with the global community of thinkers and disruptors who are building and deploying the next generation of AI and ML applications.