🧠Second Brain
Second Brain Assistant with Obsidian
To get the most out of my Second Brain, I've long wanted to connect an LLM, such as an OpenAI model, directly to my local files. The open-source ecosystem offers several promising routes.
# Open-Source Endeavors with LLM Integration:
- LLM:
  - Open-source projects like Dolly, Open-Assistant, and AutoGPT are paving the way. The Memory - Auto-GPT docs offer some insight into memory pre-seeding.
- Private Power with GPT:
  - privateGPT: interact with your documents locally and securely, so your knowledge stays confidential.
    - Built on LangChain, GPT4All, LlamaCpp, Chroma, and SentenceTransformers.
  - gpt4all: an open-source LLM chatbot that you can run anywhere.
- Simon Willison's tooling:
  - Simon has built a solution that could tie in with Dogsheep, Datasette, or even a Personal Data Warehouse.
- Comprehensive Solutions:
  - Khoj: an AI copilot for your second brain. Search and chat with your personal knowledge base, online or offline.
  - Pieces presents a fully offline chatbot trained on your vault on r/ObsidianMD; note the critical discussion thread before committing. The cloud version is documented separately.
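A stack like privateGPT's reduces to a simple pipeline: chunk the vault's markdown, embed each chunk, store the vectors, and retrieve the nearest chunks for a question. Below is a minimal, dependency-free sketch of that loop; the bag-of-words `embed` is a stand-in for SentenceTransformers, and all names (including the `notes` dictionary) are illustrative, not privateGPT's actual API:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a SentenceTransformers embedding:
    # a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ingest(notes: dict[str, str], chunk_size: int = 50) -> list:
    """Split each note into fixed-size word chunks and embed them."""
    index = []
    for path, text in notes.items():
        words = text.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            index.append((path, chunk, embed(chunk)))
    return index

def retrieve(index: list, question: str, k: int = 2) -> list:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda e: cosine(q, e[2]), reverse=True)
    return [(path, chunk) for path, chunk, _ in ranked[:k]]

notes = {
    "zettel/llm.md": "local llm chat over markdown notes keeps data private",
    "zettel/gtd.md": "weekly review capture clarify organize reflect engage",
}
index = ingest(notes)
hits = retrieve(index, "private local chat with my notes", k=1)
print(hits[0][0])  # → zettel/llm.md
```

In the real stack, Chroma persists the vectors and LlamaCpp or GPT4All generates an answer from the retrieved chunks; the retrieval step keeps this same shape.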
# Direct Obsidian Integrations:
- GPT Assistant Within Obsidian, Trained on Your Knowledge.
- Integrate OpenAI into Obsidian
- Obsidian Copilot: ChatGPT in action
- Obsidian Ollama: lets you use Ollama within your notes.
- Obsidian Smart Connections
- GitHub - obsidian-Smart2Brain: integrates an offline model and indexes your notes for RAG.
# Potential Integrations:
- Introducing GPTs: you can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.
- Plugin for AI trained on local notes? (r/ObsidianMD): LLaMA has been run on modern MacBooks; Simon Willison explains in Running LLaMA 7B and 13B on a 64GB M2 MacBook Pro with llama.cpp.
- Another approach (from a tweet): duct-tape SQLite with vector extensions (and SentenceTransformers embeddings) as RAG over my Obsidian vault, with Mistral 7B as the chat interface, all local.
- Or I could build on top of LlamaIndex, which does most of what I want.
- GPT4All can index a local Obsidian vault, but as of 2024-01-09 it's not yet very useful. See the how-to in this video: Obsidian AI and GPT4All - Run AI Locally Against Your Obsidian Vault - YouTube.
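The SQLite-plus-embeddings idea can be prototyped with nothing but the standard library: store each chunk's vector as a BLOB and brute-force cosine similarity in Python. This is only a sketch under stated assumptions: a real setup would use actual SentenceTransformers vectors, a vector extension such as sqlite-vss for the nearest-neighbour search in SQL, and paste the top chunks into Mistral 7B's prompt. The vocabulary-count embedding and all table and function names here are stand-ins:

```python
import math
import sqlite3
import struct

# Toy vault content; in practice these would be chunks of vault markdown.
vault = [
    ("inbox/ai.md", "run mistral locally as a chat interface over the vault"),
    ("inbox/travel.md", "pack passport book flights reserve hotel"),
]

# Stand-in embedding: term counts over a fixed vocabulary.
# A real setup would call a SentenceTransformers model instead.
vocab = sorted({w for _, text in vault for w in text.lower().split()})
index_of = {w: i for i, w in enumerate(vocab)}

def embed(text: str) -> list[float]:
    vec = [0.0] * len(vocab)
    for w in text.lower().split():
        if w in index_of:
            vec[index_of[w]] += 1.0
    return vec

def pack(vec: list[float]) -> bytes:
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob: bytes) -> list[float]:
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE chunks (path TEXT, body TEXT, vec BLOB)")
for path, body in vault:
    db.execute("INSERT INTO chunks VALUES (?, ?, ?)",
               (path, body, pack(embed(body))))

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Brute-force nearest chunks; a vector extension would do this in SQL."""
    q = embed(question)
    rows = db.execute("SELECT path, body, vec FROM chunks").fetchall()
    ranked = sorted(rows, key=lambda r: cosine(q, unpack(r[2])), reverse=True)
    return [(path, body) for path, body, _ in ranked[:k]]

# The retrieved chunks would become Mistral 7B's prompt context.
top = retrieve("chat with mistral about my vault")
print(top[0][0])  # → inbox/ai.md
```

The appeal of this design is that the whole index lives in one SQLite file next to the vault, with no server process, which fits the "all local" constraint in the tweet above.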
Origin:
References:
Created 2023-07-18