A developer built a local AI assistant to help new engineers understand a complex codebase. Using a Retrieval-Augmented Generation (RAG) pipeline with FAISS, DeepSeek Coder, and llama.cpp, the system indexes project code, documentation, and design conversations so developers can ask questions about architecture, modules, or setup and receive answers grounded in the project itself. The setup runs entirely on modest hardware, demonstrating that teams can build practical AI tooling for onboarding and knowledge retention without cloud APIs or expensive infrastructure.
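The core idea is easy to sketch: chunk the project's files, embed each chunk, retrieve the chunks most similar to a question, and feed them to the model as grounding context. The article's actual stack uses FAISS for vector search and a DeepSeek Coder model served via llama.cpp; the sketch below is illustrative only, substituting a toy bag-of-words embedding and brute-force cosine similarity so it runs with the standard library alone. All chunk contents and function names are made up for the example.

```python
# Hedged sketch of RAG retrieval. Real pipelines replace `embed` with a
# neural embedding model and the linear scan with a FAISS index; the
# generation step (llama.cpp + DeepSeek Coder in the article) is omitted.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: word-count vector (stand-in for a real embedding model)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Indexed "chunks": in the real pipeline these come from source files,
# docs, and design conversations. Contents here are invented.
chunks = [
    "auth module: handles JWT validation and session refresh",
    "build setup: run make bootstrap before the first build",
    "storage layer: wraps database access behind a repository interface",
]
index = [(embed(c), c) for c in chunks]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda e: cosine(q, e[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(question: str) -> str:
    """Ground the model's answer in retrieved project context."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this project context:\n{context}\n\nQ: {question}\nA:"

print(retrieve("how do I set up the build?")[0])
```

In the real system, `build_prompt`'s output would be sent to the locally served model; retrieval is what keeps answers grounded in the project rather than in the model's general training data.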
Found this here:
https://hackernoon.com/i-built-a-project-specific-llm-from-my-own-codebase?source=rss