Join us for the first session in our Python + AI series!
In this session, we'll talk about Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We'll use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We'll experiment with prompt engineering and few-shot examples to improve our outputs.
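As a taste of what the session covers, here is a minimal sketch of few-shot prompting with the OpenAI Python SDK. The model name, system prompt, and example reviews are illustrative assumptions, and the API call only runs when an OPENAI_API_KEY is configured:

```python
import os

def build_messages(question: str) -> list[dict]:
    """Assemble a chat history with a system prompt and few-shot examples."""
    return [
        {"role": "system", "content": "You classify movie reviews as positive or negative."},
        # Few-shot examples: show the model the desired input/output pattern.
        {"role": "user", "content": "I loved every minute of it!"},
        {"role": "assistant", "content": "positive"},
        {"role": "user", "content": "A dull, forgettable mess."},
        {"role": "assistant", "content": "negative"},
        # The actual question comes last.
        {"role": "user", "content": question},
    ]

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is configured
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for illustration
        messages=build_messages("An instant classic, beautifully shot."),
    )
    print(response.choices[0].message.content)
```

The few-shot examples steer the model toward short, consistent labels instead of free-form prose.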
We'll also show how to build a full stack app powered by LLMs, and explain the importance of concurrency and streaming for user-facing AI apps.
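The streaming idea can be sketched as follows: relay text deltas to the user as they arrive instead of waiting for the full completion. The helper below works on any iterable of deltas; the model name is an assumption, and the actual API call only runs when an OPENAI_API_KEY is configured:

```python
import os

def collect_stream(deltas) -> str:
    """Print streamed text deltas as they arrive and return the full text."""
    parts = []
    for delta in deltas:
        if delta:  # some chunks carry no text (e.g. the initial role chunk)
            print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is configured
    from openai import OpenAI

    client = OpenAI()
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for illustration
        messages=[{"role": "user", "content": "Write a haiku about Python."}],
        stream=True,  # yield partial chunks instead of one final response
    )
    text = collect_stream(chunk.choices[0].delta.content for chunk in stream)
```

Streaming like this keeps the UI responsive, which is why user-facing AI apps pair it with an async backend.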
This session is part of a series. Learn more here: https://aka.ms/PythonAI/2
Explore the slides and episode resources: https://aka.ms/pythonai/resources
Check out the demos: https://aka.ms/python-openai-demos
Chapters:
00:06 – Welcome & Housekeeping
01:03 – Meet the Presenter & Series Overview
02:45 – What Are Large Language Models (LLMs)?
07:09 – How LLMs Work: Tokenization & Prediction
13:26 – Determinism & Temperature in LLMs
16:10 – Exploring Model Options: Azure, GitHub, Local
23:03 – Using LLMs with Python: OpenAI SDK
30:01 – Streaming Responses & Chat History
36:49 – Building Multi-Turn Conversations
44:44 – Improving LLM Output: Prompt Engineering & Few-Shot Examples
50:59 – Chaining LLM Calls for Better Accuracy
53:05 – Building LLM-Powered Apps with Python & Quart
58:28 – Async Frameworks for Scalable Backends
1:00:02 – Wrap-Up, Office Hours & What's Next
#MicrosoftReactor #learnconnectbuild
[eventID:26292]