LLMs are effectively just functions. The prompt is an argument. You could build a stack frame from the prompt, the LLM model, an HTTP endpoint as the return address, and a distributed vector DB as the heap.
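A minimal sketch of that analogy, with all names purely illustrative (none of this is an existing API):

```python
from dataclasses import dataclass

# Illustrative only: an LLM call framed as a stack frame.
@dataclass
class LLMStackFrame:
    prompt: str          # the argument
    model: str           # the "function" being called
    return_address: str  # HTTP endpoint the completion is POSTed back to
    heap: str            # reference to a distributed vector DB

frame = LLMStackFrame(
    prompt="Summarize this note.",
    model="llama-3-70b",
    return_address="https://example.com/callback",
    heap="vectordb://cluster-0/collection-42",
)
```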

With this in place you need a runtime. You could probably build the runtime on Nostr.
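One way a "call" might look on the wire is an ordinary Nostr event carrying the frame. The kind number and tag names below are assumptions for illustration, not an adopted NIP:

```python
import json
import time

# Hypothetical Nostr event representing one LLM "call".
# Kind 5050 and the tag names are made up for this sketch.
job_request = {
    "kind": 5050,                     # hypothetical "LLM job request" kind
    "created_at": int(time.time()),
    "tags": [
        ["model", "llama-3-70b"],                     # the "function"
        ["output", "https://example.com/callback"],   # the return address
    ],
    "content": "Summarize this note.",  # the prompt, i.e. the argument
}

event_json = json.dumps(job_request)
```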

A system like this could be run by millions of people in a decentralized way. Although it would have higher latencies than centralized models like ChatGPT, it might be more powerful.

Prompts could be paid for with #bitcoin via #lightning or #ecash. Miners could supplement their revenue by providing access to high-performance compute while still earning bitcoin.
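Pricing could be as simple as a per-token rate in millisats. The rate below is a made-up number, just to show the shape of the math:

```python
# Purely illustrative rate: 3 sats (3,000 msats) per token.
MSATS_PER_TOKEN = 3_000

def invoice_amount_msats(prompt_tokens: int, max_output_tokens: int) -> int:
    """Millisats to request in a Lightning invoice for one prompt."""
    return (prompt_tokens + max_output_tokens) * MSATS_PER_TOKEN

# 200 prompt tokens + up to 800 output tokens -> 3,000,000 msats (3,000 sats)
print(invoice_amount_msats(200, 800))
```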

#ai #mining
