Training 8: GenAI - Fine-tuning and Deploying Stable Diffusion Models with Ray and Anyscale (3 hours)

1:00 PM - 4:00 PM
Level: Intermediate
Format: Hands-on Lab
Audience: Software Engineer, ML Practitioner, ML Engineer
Ray Libraries: Ray Data, Train, Serve

Text-to-image models (like Stable Diffusion) have revolutionized the landscape of AI-based applications by introducing the ability to synthesize remarkably realistic and coherent images. However, using these models is difficult due to a number of challenges, including:

- Compute requirements, including a mix of CPU and GPU instances,
- The need to stitch together fine-tuning, inference, and model deployment for a more end-to-end MLOps experience,
- The requirement for capable infrastructure to successfully deploy these models.

This hands-on training aims to address these challenges and demonstrate, in a practical manner, how to fine-tune Stable Diffusion models, execute batch inference to generate additional images, and ultimately deploy the model in a production-ready environment.
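As a preview of the serving piece of that workflow, the sketch below shows how a fine-tuned pipeline might be exposed as an HTTP endpoint with Ray Serve and HuggingFace diffusers. It is a minimal illustration rather than the course's exact code: the model name, endpoint path, and resource settings are assumptions you would adapt to your own checkpoint and cluster.

```python
from io import BytesIO

import torch
from diffusers import StableDiffusionPipeline
from fastapi import FastAPI
from fastapi.responses import Response
from ray import serve

app = FastAPI()


@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 1})
@serve.ingress(app)
class StableDiffusionService:
    def __init__(self):
        # Load the (fine-tuned) pipeline once per replica, onto the replica's GPU.
        # The model name is a placeholder; point it at your own checkpoint.
        self.pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
        ).to("cuda")

    @app.get("/generate")
    def generate(self, prompt: str) -> Response:
        # Run text-to-image inference and return the first image as a PNG.
        image = self.pipe(prompt).images[0]
        buffer = BytesIO()
        image.save(buffer, format="PNG")
        return Response(content=buffer.getvalue(), media_type="image/png")


# Deploy locally with: `serve run stable_diffusion_serve:entrypoint`
# (assuming this file is saved as stable_diffusion_serve.py).
entrypoint = StableDiffusionService.bind()
```

Scaling out the endpoint is then largely a configuration change (for example, increasing `num_replicas`) rather than a code rewrite.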

Learning Outcomes

  • Use Ray, Anyscale, and HuggingFace to operationalize Stable Diffusion models.
  • Identify challenges and trade-offs when working with GenAI at scale.
  • Scale all components of the workload: fine-tuning, batch inference, and model serving (a batch inference sketch follows this list).
  • Extend the experience to your own specific context, applications, stack, and scale.
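For the batch inference component, the Ray Data sketch below illustrates the pattern: each actor loads the pipeline once and generates images for a batch of prompts. The prompts, model name, and resource settings are illustrative assumptions, not the course materials.

```python
import numpy as np
import ray
import torch
from diffusers import StableDiffusionPipeline


class ImageGenerator:
    def __init__(self):
        # Each Ray actor loads the pipeline once onto its assigned GPU.
        # The model name is a placeholder; use your fine-tuned checkpoint.
        self.pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
        ).to("cuda")

    def __call__(self, batch: dict) -> dict:
        # Generate one image per prompt and store them as uint8 arrays.
        images = self.pipe(list(batch["prompt"])).images
        batch["image"] = np.stack([np.asarray(img) for img in images])
        return batch


# A toy prompt dataset; in practice this could be millions of rows.
prompts = ray.data.from_items(
    [{"prompt": f"a watercolor painting of a lighthouse, variation {i}"} for i in range(16)]
)

generated = prompts.map_batches(
    ImageGenerator,
    batch_size=4,     # prompts per model call
    num_gpus=1,       # one GPU per actor
    concurrency=2,    # number of model replicas (Ray 2.9+ argument)
)

# Writing the results triggers execution across the actor pool.
generated.write_parquet("/tmp/generated_images")
```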

Prerequisites

  • Basic familiarity with computer vision tasks, including common challenges with training and inference.
  • Intermediate programming skills with Python.
  • Basic understanding of text-to-image use cases.
  • Participants are encouraged to attend the morning session on “Introduction to Ray AI Libraries for deep learning use cases” if they have no prior experience with Ray.