Yeah, I’m down the LLM rabbit hole lately, but check this out: you can now run LLaMA on a low-end DigitalOcean droplet.
https://github.com/ggerganov/llama.cpp/discussions/638
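For anyone who wants to try it, here's roughly what it takes (a minimal sketch: assumes an Ubuntu droplet with git and a C/C++ toolchain, and that you've already obtained and quantized a LLaMA model yourself; the model path below is illustrative):

    # clone and build llama.cpp (CPU-only, no GPU required)
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make

    # run inference with a 4-bit quantized 7B model (path is illustrative)
    ./main -m ./models/7B/ggml-model-q4_0.bin -p "Hello from a droplet" -n 64

The 4-bit quantization is what makes this feasible on cheap hardware: the 7B model shrinks to roughly 4 GB, so it fits in the RAM of a modest droplet.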
Sounds like Silicon Valley
Pied Piper