In this video guide we show you how to perform file-level and chunk-level retrieval with LlamaCloud using a custom router query engine and a custom agent built with Workflows (https://docs.llamaindex.ai/en/latest/module_guides/workflow/). File-level retrieval is useful for user questions that require the entire document context to answer properly. Since doing only file-level retrieval can be slow and expensive, we also show you how to build an agent that dynamically decides whether to do file-level or chunk-level retrieval!
Notebook: https://github.com/run-llama/llamacloud-demo/blob/main/examples/advanced_rag/file_retrieve_workflow.ipynb
LlamaCloud: https://cloud.llamaindex.ai/
For enterprise usage, come talk to us: https://www.llamaindex.ai/contact
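The core routing idea can be sketched in plain Python. Everything below is an illustrative stand-in (the retriever functions, the keyword cues, and `route` are hypothetical, not LlamaCloud or LlamaIndex APIs); in the linked notebook the decision is made by an LLM-backed router/agent rather than a keyword heuristic:

```python
# Illustrative sketch of file-level vs chunk-level routing.
# All names are hypothetical; the notebook uses LlamaCloud
# retrievers and an LLM-based selector instead of keywords.

def chunk_level_retrieve(query: str) -> str:
    # Fast and cheap: return only the top-k matching chunks.
    return f"[chunks matching: {query!r}]"

def file_level_retrieve(query: str) -> str:
    # Slow and expensive: return entire documents, for questions
    # that need full-document context.
    return f"[full files relevant to: {query!r}]"

# Stand-in for the LLM selector: send "whole document" style
# questions to file-level retrieval, everything else to chunks.
HOLISTIC_CUES = ("summarize", "overall", "entire", "whole document", "compare")

def route(query: str) -> str:
    q = query.lower()
    if any(cue in q for cue in HOLISTIC_CUES):
        return file_level_retrieve(query)
    return chunk_level_retrieve(query)

print(route("Summarize the entire report"))
print(route("What is the Q3 revenue figure?"))
```

The first query trips a holistic cue and routes to file-level retrieval; the second stays on the cheaper chunk-level path.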