Show HN: Airgapped Offline RAG – Run LLMs Locally with Llama, Mistral, & Gemini
I've built an airgapped Retrieval-Augmented Generation (RAG) system for question-answering over documents, running entirely offline with local inference. Using Llama 3, Mistral, and Gemini, this setup enables secure, private NLP on your own machine. It's aimed at researchers, data scientists, and developers who need to process sensitive data without any cloud dependencies. Built with llama.cpp, LangChain, and Streamlit, it supports quantized models and provides a clean UI for document processing. Check it out, contribute, or suggest new features!
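For readers new to RAG, the core flow is: retrieve the document chunks most relevant to a question, then feed them to the LLM as context. Here is a toy, stdlib-only sketch of that retrieve-then-prompt step (the actual project uses llama.cpp and LangChain embeddings rather than this bag-of-words scorer; the function names and example chunks are illustrative, not from the repo):

```python
# Toy illustration of the retrieve-then-generate flow in a RAG pipeline.
# Real systems replace bag-of-words cosine with learned embeddings.
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = bow(query)
    return sorted(chunks, key=lambda c: cosine(q, bow(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    """Concatenate retrieved chunks into a grounded prompt for the LLM."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "Llama 3 is an open-weight large language model.",
    "Streamlit builds simple web UIs in Python.",
    "Quantized models trade precision for lower memory use.",
]
prompt = build_prompt("What do quantized models trade off?", chunks)
print(prompt)
```

In the offline setup, `build_prompt`'s output would be passed to a locally loaded quantized model instead of a cloud API, which is what keeps the whole pipeline airgapped.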
hey, we were about to build something similar for a "shadow client". What's the main language used? we're all about cpp https://github.com/docwire/docwire