Download, choose an LLM, configure, run, and search using a hybrid RAG that runs in a container with its own local vector database.
This GitHub repository, https://github.com/NVIDIA/workbench-example-hybrid-rag, can be run as a containerized server with NVIDIA AI Workbench on NVIDIA GPUs.
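The project's actual retrieval code isn't shown in this description, but as a rough illustration of what "hybrid" retrieval means in a RAG like this, here is a toy sketch that combines a dense vector-similarity score with a keyword-match score. All names, data, and the bag-of-words "embedding" are hypothetical stand-ins, not the project's implementation.

```python
# Toy sketch of hybrid retrieval (dense + keyword scoring), the idea
# behind a "hybrid RAG". Not the project's actual code.
import math
from collections import Counter

# Tiny in-memory "vector database" of documents (illustrative only).
docs = {
    "d1": "run a rag server in a container on nvidia gpus",
    "d2": "choose an llm and configure the vector database",
    "d3": "ai workbench manages containerized projects",
}

def embed(text):
    # Hypothetical stand-in for a real embedding model: word counts.
    return Counter(text.split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    # Fraction of query terms that appear verbatim in the document.
    q = set(query.split())
    return len(q & set(text.split())) / len(q)

def hybrid_search(query, alpha=0.5):
    # Blend the dense and keyword scores; return the best document id.
    qv = embed(query)
    scored = {
        doc_id: alpha * cosine(qv, embed(text))
        + (1 - alpha) * keyword_score(query, text)
        for doc_id, text in docs.items()
    }
    return max(scored, key=scored.get)

print(hybrid_search("vector database for llm"))  # picks the doc matching on both signals
```

The `alpha` parameter weights dense versus keyword evidence; real hybrid retrievers use a trained embedding model and a lexical scorer such as BM25 in place of these toy functions.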
This video records the first time I tried downloading, running, and using this RAG project. Some mistakes were edited out. Total time was about 45 minutes.
Run the NVidia hybrid rag workbench example in a container using AI Workbench | NatokHD