This guide provides instructions for setting up and running the Document Q&A Retrieval-Augmented Generation (RAG) Streamlit app, using Ollama for local LLM support: the typical RAG 101 project.
- Operating System: macOS or Linux (Windows users need WSL).
- Python: Version 3.8 or higher.
- Streamlit: Make sure Streamlit is installed (`pip install streamlit`).
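The Python version prerequisite above can be verified up front. A minimal sketch that fails fast with a clear message (the 3.8 floor comes from the requirement above):

```python
import sys

# The app expects Python 3.8 or newer; exit early with a clear message otherwise
if sys.version_info < (3, 8):
    raise SystemExit(f"Python 3.8+ required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```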
Install Ollama:

- macOS: Install via Homebrew: `brew install ollama`
- Linux: Download the latest archive from the Ollama GitHub Releases page, extract it, and move the binary into your PATH:

  ```shell
  tar -xzf ollama-linux-x86_64.tar.gz
  sudo mv ollama /usr/local/bin/
  ```
- Windows (via WSL): Install WSL and follow the Linux instructions above.
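After installing, the Ollama server must be running (`ollama serve` starts it in the foreground). A small sketch for checking reachability from Python, assuming Ollama's default port 11434:

```python
import urllib.request
import urllib.error

def ollama_is_running(host: str = "http://localhost:11434") -> bool:
    """Return True if a local Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(host, timeout=2) as resp:
            # Ollama's root endpoint replies "Ollama is running" with HTTP 200
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Returning False instead of raising lets the app surface a friendly "start Ollama first" hint rather than a stack trace.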
Download the `mistral` model plus an embedding model. Note that `text-embedding-ada-002` is an OpenAI-hosted model and cannot be pulled through Ollama; a local embedding model such as `nomic-embed-text` is the usual substitute (update the model name in the app's code to match whatever you pull):

```shell
ollama pull mistral
ollama pull nomic-embed-text
```
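Once the models are pulled, the app can reach them over Ollama's local REST API. A minimal sketch of a non-streaming call to the documented `/api/generate` endpoint (the payload builder is kept separate from the network call so it can be exercised without a running server):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> bytes:
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a prompt to a locally running Ollama model and return its response text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full completion in "response"
        return json.loads(resp.read())["response"]
```

With the server up, `generate("mistral", "What is RAG?")` returns the model's answer as a string.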
Navigate to the app's directory and install the required Python packages:

```shell
pip install -r requirements.txt
```

Ensure that `streamlit`, `langchain`, and `chromadb` are installed.
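For reference, a minimal `requirements.txt` for this stack might look like the following (the package list comes from the dependencies named above; it is a sketch, not the app's actual file):

```
streamlit
langchain
chromadb
```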
After setting up Ollama and downloading the required models, run the app using Streamlit:

```shell
streamlit run doc-qa.py
```
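The core retrieve-then-answer loop inside an app like `doc-qa.py` can be illustrated without the heavy dependencies. A toy sketch of the retrieval step; in the real app the vectors come from the embedding model and are stored in `chromadb`, and the function names here are hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, doc_vecs, docs, k=2):
    """Rank document chunks by cosine similarity to the query embedding."""
    scored = sorted(zip(docs, doc_vecs),
                    key=lambda pair: cosine(query_vec, pair[1]),
                    reverse=True)
    return [doc for doc, _ in scored[:k]]

# Toy embeddings; in the real app these come from the Ollama embedding model
docs = ["chunk about cats", "chunk about RAG", "chunk about dogs"]
vecs = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(retrieve([0, 1, 0], vecs, docs, k=1))  # → ['chunk about RAG']
```

The top-k chunks are then stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.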