what's the best LLM to run from your local machine without the need for a heavy GPU?
Llama et al. all seem to require a chunky GPU, but surely we're at the stage (3 years later) that we have some LLMs that run locally on ordinary hardware?
If you're on a Mac, up to roughly 75% of your unified memory can be allocated to the GPU, so Apple Silicon is effectively a decent GPU already. If you really are CPU-only, your options are pretty limited: small quantized models are about it.
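For the CPU-only case, a small quantized GGUF model will still run via llama.cpp's Python bindings. A minimal sketch, assuming you've installed llama-cpp-python and downloaded some small quantized model (the path below is a placeholder, not a specific recommendation):

```python
# CPU-only inference sketch with llama-cpp-python
# (pip install llama-cpp-python)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model-q4_k_m.gguf",  # hypothetical path to any small GGUF file
    n_gpu_layers=0,  # keep every layer on the CPU
    n_ctx=2048,      # modest context window to keep RAM usage down
)

out = llm("Q: Name one planet in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```

Expect a few tokens per second on a laptop CPU for a 1-3B parameter model at 4-bit quantization; anything much larger gets painful without a GPU.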