
Build Code Interpreter for Large Language Models (Local + Docker)

2.2K views
Nov 13, 2024
18:04

Adding a code interpreter to an application built on Large Language Models/Generative AI models unlocks a world of possibilities. In this video, we'll learn how to implement a code interpreter that runs on your local machine as well as in a Docker container for safely executing the code.

---

Code Example: https://github.com/yankeexe/llm-code-interpreter-demo
Generate API Keys for Gemini AI: https://aistudio.google.com/app/apikey
Alternative Chat Models for LangChain: https://python.langchain.com/docs/integrations/chat/#all-chat-models

---

⚡️ Follow me:
- Github: https://github.com/yankeexe
- LinkedIn: https://www.linkedin.com/in/yankeemaharjan
- Twitter (X): https://x.com/yankexe
- Website: https://yankee.dev

--

🎞️ Chapters
0:00 Intro
0:16 Demo: App UI
0:52 Demo: 1
1:00 Demo: 2
1:07 Demo: 3 (Fun)
1:36 Demo: 4 (Image)
2:00 Demo: 5 (Running Server)
2:33 Code: Env Setup
2:56 Code: App Skeleton
4:54 Code: Adding LLM
5:42 Code: Getting API Key
6:36 Code: Local Execution Env
11:11 Code: Docker Execution Env
17:44 Outro
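The video walks through two execution environments for the generated code: running it directly on the local machine, and running it in a throwaway Docker container for isolation. The full implementation is in the linked repo; the following is only a minimal sketch of that idea, with hypothetical function names and a `python:3.12-slim` image chosen as an illustrative default.

```python
# Hypothetical sketch of the two execution paths discussed in the video:
# unsandboxed local execution vs. Docker-sandboxed execution.
# Function names and the Docker image are illustrative, not the repo's API.
import subprocess
import sys


def run_locally(code: str, timeout: int = 10) -> str:
    """Run LLM-generated Python code in a local subprocess (no sandboxing)."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout if result.returncode == 0 else result.stderr


def run_in_docker(code: str, image: str = "python:3.12-slim",
                  timeout: int = 60) -> str:
    """Run the code inside a disposable container: --rm removes it after
    exit, --network=none cuts off network access for extra safety."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--network=none", image,
         "python", "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout if result.returncode == 0 else result.stderr
```

Usage would look like `run_in_docker("print(2 + 2)")`, which returns the container's stdout; the Docker path trades startup latency for the safety of never executing model-generated code directly on the host.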

