Show HN: Airgapped Offline RAG – Run LLMs Locally with Llama, Mistral, & Gemini

github.com

8 points by koconder 4 days ago

I've built an airgapped Retrieval-Augmented Generation (RAG) system for question answering over documents, running entirely offline with local inference. It supports Llama 3, Mistral, and Gemini models, so you can do secure, private NLP on your own machine. It's aimed at researchers, data scientists, and developers who need to process sensitive data without cloud dependencies. Built with llama.cpp, LangChain, and Streamlit, it supports quantized models and provides a clean UI for document processing. Check it out, contribute, or suggest new features!
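
For anyone curious what the wiring looks like, here's a minimal sketch of an offline RAG pipeline built from the same pieces (llama.cpp bindings + LangChain + a local embedding model). The file paths, model names, and parameters below are illustrative assumptions, not the repo's actual code:

    # Minimal offline RAG sketch; paths and model names are placeholders.
    from langchain_community.document_loaders import PyPDFLoader
    from langchain_community.embeddings import HuggingFaceEmbeddings
    from langchain_community.llms import LlamaCpp
    from langchain_community.vectorstores import FAISS
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain.chains import RetrievalQA

    # 1. Load and chunk a local document (path is a placeholder).
    docs = PyPDFLoader("docs/report.pdf").load()
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100
    ).split_documents(docs)

    # 2. Embed with a locally cached sentence-transformers model;
    #    no network calls once the model is on disk.
    embeddings = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    )
    store = FAISS.from_documents(chunks, embeddings)

    # 3. Run a quantized GGUF model through the llama.cpp Python bindings.
    llm = LlamaCpp(
        model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # any local GGUF file
        n_ctx=4096,
        temperature=0.1,
    )

    # 4. Retrieval + generation: fetch the top-k chunks, answer from them.
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        retriever=store.as_retriever(search_kwargs={"k": 4}),
    )
    print(qa.invoke("What does the report conclude?")["result"])

Everything runs in-process, so once the model files are on disk the whole pipeline works with the network cable unplugged.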