Applying Open Source methods to build & train Large Language Models (LLMs)
327 | Thu 31 Jul 3 p.m.–3:45 p.m.
Presented by
wesley chun
@wescpy
http://cyberwebconsulting.com
WESLEY CHUN, MSCS, is a Google Developer Expert (GDE) https://developers.google.com/experts in Google Cloud (GCP) & Google Workspace (GWS), author of Prentice Hall's bestselling "Core Python" https://corepython.com series, co-author of "Python Web Development with Django" https://withdjango.com, and has written for Linux Journal & CNET. By day, he's an AI Technical Program Manager at Red Hat https://redhat.com/ai focused on upstream open source projects that make their way into Red Hat AI products; at night, he runs CyberWeb https://cyberwebconsulting.com specializing in GCP & GWS APIs and serverless platforms, Python & App Engine migrations https://appenginemigration.com, and Python training & engineering. Wesley was one of the original Yahoo!Mail engineers and spent 13+ years on various Google product teams, speaking on behalf of their APIs, producing sample apps, codelabs, and videos for serverless migration http://bit.ly/3xk2Swi and GWS developers http://goo.gl/JpBQ40. He holds degrees in Computer Science, Mathematics, and Music from the University of California, is a Fellow of the Python Software Foundation, and loves to travel to meet developers worldwide. Follow him (he/him) at @wescpy & dev.to/wescpy.
Abstract
Large Language Models (LLMs) are a key element in generative AI. Adding evergreen skills & knowledge to open models is often desired without having to fully fork and train them on your own. Until recently, fine-tuning open models has been challenging and time- and resource-consuming. In this session, learn about InstructLab, an open source project from Red Hat and IBM that allows users to fine-tune models by contributing skills & knowledge to LLMs in a more user-friendly and open source way. This technique allows for the establishment of an upstream community built on contribution and acceptance workflows for models, making open source AI more approachable. Attendees will learn about InstructLab as well as how communities and individuals can contribute domain knowledge to models incrementally, in a unified and open way while reducing model variations, resulting in LLMs fine-tuned with your organization's skills & data, customized for your users!
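As a rough illustration of the contribution workflow described above, the Python sketch below assembles a hypothetical InstructLab "knowledge" contribution (a qna.yaml file destined for the community taxonomy). The field names follow the taxonomy schema as commonly documented but may differ between releases, and every value here (domain, repository, commit) is a placeholder, not taken from the session itself.

# Hypothetical sketch: assembling an InstructLab "knowledge" contribution
# (a qna.yaml file placed in the community taxonomy repo). Field names
# follow the taxonomy schema as commonly documented and may vary between
# versions -- treat this as illustrative, not authoritative.
import yaml  # pip install pyyaml

qna = {
    "version": 3,
    "created_by": "your-github-username",        # contributor attribution (placeholder)
    "domain": "astronomy",                        # example domain (assumption)
    "document_outline": "Phases of the Moon",     # short summary of the source document
    "seed_examples": [                            # seed Q&A used for synthetic data generation
        {
            "context": "The Moon takes about 29.5 days to cycle through its phases.",
            "questions_and_answers": [
                {
                    "question": "How long is a lunar phase cycle?",
                    "answer": "Roughly 29.5 days (a synodic month).",
                },
            ],
        },
    ],
    "document": {                                 # grounding source for the knowledge
        "repo": "https://github.com/your-org/your-knowledge-docs",  # placeholder repo
        "commit": "abc123",                        # placeholder commit SHA
        "patterns": ["*.md"],
    },
}

# Write the contribution file; in practice it lives under the taxonomy tree,
# e.g. knowledge/<domain>/.../qna.yaml
with open("qna.yaml", "w") as f:
    yaml.safe_dump(qna, f, sort_keys=False)

Once a file like this is accepted into the taxonomy, the usual flow is to generate synthetic training data from the seed examples and then fine-tune and chat with the resulting model via the ilab CLI; exact subcommand names vary between InstructLab releases.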