another cool thing: if you're running ollama locally on your macbook on a plane, with no wifi connection, this will still work.

Everything in notedeck is offline-first. The question gets sent to the local model, which then responds with a tool query ({"query": "cats", "limit": 3}).

notedeck parses this query, understands it, queries the local notedeck database, formats a response, and sends it back to the local ai.

the local ai then takes those formatted notes and replies with a summary.

this whole back-and-forth happens locally on your computer.

how neat is that!?
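for the curious, here's a rough sketch of that round trip against ollama's local chat api. the `query_notes` tool, the stand-in database lookup, and the model name are all made up for illustration; notedeck's actual implementation (in rust) differs, but the loop has the same shape:

```python
import json
import requests  # only talks to localhost; no internet needed

OLLAMA_CHAT = "http://localhost:11434/api/chat"  # ollama's standard local endpoint

# hypothetical tool schema: lets the model ask notedeck for notes
TOOLS = [{
    "type": "function",
    "function": {
        "name": "query_notes",
        "description": "Search the local notedeck database for notes",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "limit": {"type": "integer"},
            },
            "required": ["query"],
        },
    },
}]

def query_local_db(query: str, limit: int = 3) -> str:
    """Stand-in for notedeck's local database query; returns formatted notes."""
    return json.dumps([f"note about {query} #{i}" for i in range(limit)])

def chat(messages):
    resp = requests.post(OLLAMA_CHAT, json={
        "model": "llama3.1",   # any local model with tool support
        "messages": messages,
        "tools": TOOLS,
        "stream": False,
    })
    resp.raise_for_status()
    return resp.json()["message"]

messages = [{"role": "user", "content": "summarize my notes about cats"}]
msg = chat(messages)

# the model answers with a tool call like {"query": "cats", "limit": 3}
for call in msg.get("tool_calls", []):
    args = call["function"]["arguments"]
    result = query_local_db(args["query"], args.get("limit", 3))
    # send the formatted notes back; the model now writes the summary
    messages += [msg, {"role": "tool", "content": result}]
    print(chat(messages)["content"])
```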

nostr:note1r3ar3urwnq27ldcq88zs8vj53fn24edr0yzcety9kkzmqymnfr8s4ptchs


Discussion

Very neat.

Nice. Would regular users put the API endpoint address into settings?

yeah, the ux would just be selecting the ollama provider, since it's a standard endpoint
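for example, ollama's default local endpoint is http://localhost:11434, so there's nothing to type in. a quick way to check the server is up (a hypothetical snippet, not notedeck code):

```python
import requests

# ollama's default local endpoint; a client only needs this base url
OLLAMA_BASE = "http://localhost:11434"

# /api/tags lists the locally installed models, a cheap liveness check
print(requests.get(f"{OLLAMA_BASE}/api/tags").json())
```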

What model are you running and what’s the performance like on the Mac?

I’m using deepseek-r1:14B at the moment but need to try more out. Currently running it concurrently against GPT-4o (which queries the OpenAI API) in Open WebUI to compare output (which is a nice feature). I think I need a bigger GPU!

Also been building a custom model (prompt) for Nini.

(・∀・) Nice!!

What AI model is integrated into Notedeck and what made you choose it?

you can use any model that has tools support

So is there a default one that comes with it, or do you have to manually import it in the settings?

still deciding on that. maybe it would be 4o, funded with zaps

Please no AI by default. Make that an option in the settings for easy import.

While I do appreciate the feature, I am one of those types of people that AI still creeps out a bit, and I am not the only one.

When I do use it, it is isolated inside of a virtual machine.

Having it native in Notedeck by default and not able to be removed would be a bit of a non-starter for me, purely from a security and privacy standpoint.