If you build infrastructure around LLMs that lets them store and load data, you can build some powerful AI products. I want to build a local LLM that knows how to read my calendar events, email, GitHub issues, nostr mentions, and DMs, so I have a personalized AI assistant.
Discussion
This is what I want as well!
Is that a pure desire of /wanting to/, or do you have a plan for _how_ you want to do it? Asking because I also want to do something like this. For that reason, I've been playing with different models running locally and am starting to explore LangChain to interact with them, so I can get the hang of it. Apparently one can use LangGraph to get to this point, but I haven't gotten there yet...
I have a plan yeah, I've been thinking about how to do it for the past month or so
Could you share your AI assistant?
yeah it wouldn't be hardcoded to me, you would just need a bunch of modules for fetching data from different places
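The "bunch of modules for fetching data from different places" could share one small interface so the assistant core stays source-agnostic. A minimal sketch, assuming a common `fetch` method; all names here (`Source`, `Item`, `CalendarSource`) are hypothetical:

```python
# Hypothetical module interface: every data source (calendar, email,
# GitHub, nostr, ...) implements fetch() and returns uniform Items.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Item:
    source: str      # e.g. "calendar", "github", "nostr"
    timestamp: str   # ISO 8601, used for ordering
    text: str        # content fed into the model's context


class Source(Protocol):
    def fetch(self, since: str) -> list[Item]:
        ...


class CalendarSource:
    """Toy module returning canned events; a real one would hit an API."""
    def fetch(self, since: str) -> list[Item]:
        return [Item("calendar", "2024-01-02T10:00:00Z", "Standup at 10:00")]


def gather(sources: list[Source], since: str) -> list[Item]:
    # Merge everything the modules return, oldest first.
    items = [i for s in sources for i in s.fetch(since)]
    return sorted(items, key=lambda i: i.timestamp)
```

Swapping the assistant to someone else's data then just means handing `gather` a different list of modules.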
Did you see Anthropic's MCP?
whoa I missed this
Things move quickly. Block's `goose` "developer" agent has been progressing surprisingly quickly, and I'm used to startups. Most of the time I drop in to suggest a feature it's already in the pipe.
🫡💜
You can expose your data via tools; Ollama and Open WebUI already simplify the integration
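To make the tool idea concrete: both Ollama and Open WebUI understand the OpenAI-style function-tool format. A sketch of declaring one tool and dispatching the model's tool calls locally; the tool name and canned data are illustrative assumptions:

```python
# Expose local data as a "tool" in the OpenAI-style function format.
import json

TOOLS = [{
    "type": "function",
    "function": {
        "name": "read_calendar",  # hypothetical tool name
        "description": "Return the user's calendar events for a given day",
        "parameters": {
            "type": "object",
            "properties": {"day": {"type": "string", "description": "YYYY-MM-DD"}},
            "required": ["day"],
        },
    },
}]


def read_calendar(day: str) -> str:
    # A real module would query CalDAV / the Google Calendar API here.
    return json.dumps([{"day": day, "title": "Dentist", "time": "14:00"}])


REGISTRY = {"read_calendar": read_calendar}


def dispatch(tool_call: dict) -> str:
    """Run whichever tool call the model emitted and return its result."""
    fn = REGISTRY[tool_call["function"]["name"]]
    args = tool_call["function"]["arguments"]
    if isinstance(args, str):  # some backends send arguments as a JSON string
        args = json.loads(args)
    return fn(**args)
```

With the `ollama` Python client you would pass `tools=TOOLS` into the chat call and feed `dispatch()`'s result back to the model as a tool-role message; exact plumbing varies by client version.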
oh nice, I was looking for something like that
Funny you should mention this … I've been asking the LLMs, when I work with them, to come up with their own memory file for saving state across sessions, or even during sessions when things get busy on their end.
There’s something to this … and definitely worth figuring out.
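The memory-file idea is easy to wrap in a few lines: the model asks (via a tool or a convention in its output) to persist notes, and the harness saves and reloads them between sessions. A minimal sketch; the file name and JSON shape are arbitrary choices:

```python
# Persist assistant "memory" across sessions as a JSON file.
import json
from pathlib import Path

MEMORY_PATH = Path("assistant_memory.json")  # arbitrary location


def load_memory() -> dict:
    if MEMORY_PATH.exists():
        return json.loads(MEMORY_PATH.read_text())
    return {"notes": []}


def remember(memory: dict, note: str) -> dict:
    memory["notes"].append(note)
    MEMORY_PATH.write_text(json.dumps(memory, indent=2))
    return memory
```

On each new session, `load_memory()` gets prepended to the system prompt so the model starts from the state it saved last time.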
If anyone wants to see some of the insights I co-created with #ClaudeAI … I added them to this relay repo …
https://github.com/HumanjavaEnterprises/nostr-relay-nestjs/blob/master/docs/working_with_ai.md
It's further down our timeline, but since https://opfn.co will be facilitating an open-source, multi-device personal cloud computer/storage, private LLMs are core to what we envision people using our system for.
A superintelligent panopticon where _YOU_ are the AI-empowered NSA and _YOU_ are your own surveillance target.
How do you feel about the hacking risks to this data? It isn't as if the companies that leaked data meant to give away their proprietary information.
what companies? this is all local to your computer.
Encrypt at rest. Encrypt in transit.
Let them come.
Look at LocalAI and Open WebUI, especially the latter's integrations. The former is basically YAML-based model specs with presets, controlling what is loaded when. RAG is annoying as fuck though, but it's totally doable. ^^
Working on the exact same thing, but based on the Milk-V Oasis. Currently considering getting a Tenstorrent Wormhole, or attempting to tinker with amdgpu to get ROCm working so I can use an RX 7000 series card as the basis...
Alternatively, Ampere + NVIDIA works, because NVIDIA has ARM drivers - partially, at least. CUDA is included, though. Why Ampere? Look at the TDP; pairing that with high RAM allows you to configure LocalAI to utilize both CPU and GPU, and you can specify exactly what goes where and how many layers run on each.
This way, you can allocate several models with some kind of priority, allowing you to run the embeddings model, Whisper and other tiny things all the time, but swap out bigger models depending on which Pipeline you end up running. :)
I got ROCm working with llama.cpp, so I'm gonna try that
Working on the same thing.