If you have a recent MacBook with an M1 or M2 chip and enough RAM, make sure to try out GPT4All.

It's easily good enough to be useful, especially if you don't have access to OpenAI's API. It's free and it's private. No data leaves your computer. This is how we should be using AI.

https://gpt4all.io/
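
If you'd rather script it than use the desktop app, there are Python bindings too. A minimal sketch, assuming the gpt4all package is installed; the model filename is just an example from the model catalog and gets downloaded on first use:

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filename is an example; any model from the GPT4All catalog works.
# Everything runs locally, so no data leaves your machine.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # downloaded on first use

with model.chat_session():
    print(model.generate("Explain in one sentence why local LLMs matter.",
                         max_tokens=100))
```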


Discussion

thanks

I can run quite a few models with GPT4All on my dated amd64 system without any problem. Anything above 7B is a problem, though.
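
Rough back-of-the-envelope for why 7B is about the ceiling on a machine with 8 GB of RAM; just a rule of thumb, since real usage depends on the runtime, quantization, and context size:

```python
# Rule of thumb: RAM for the weights ~ parameters * bits_per_weight / 8,
# times some overhead for activations and the KV cache (assumed ~30% here).
def approx_ram_gb(params_billion, bits=4, overhead=1.3):
    return params_billion * bits / 8 * overhead

for size in (3, 7, 13, 30):
    print(f"{size}B @ 4-bit: ~{approx_ram_gb(size):.1f} GB")
# 7B lands around 4.6 GB (fits in 8 GB RAM); 13B is ~8.5 GB and already tight.
```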

Can't wait until all processors include AI accelerators. Holding off until next year to build my new rig.

GPUs are AI accelerators :)

Yes, of course 😬 That doesn't mean they can't be specialized further for AI, though.

Or run Llama 2 locally. There are many implementations that make it easy to use.
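
A minimal sketch with one of those implementations, llama-cpp-python; the model path is a placeholder for whatever quantized GGUF file you've downloaded:

```python
# Sketch using llama-cpp-python (pip install llama-cpp-python), one of many
# local runtimes for Llama 2. The model path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: Name three uses for a local LLM. A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```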

This includes Llama 2.

Keyword was "locally". You can be damn sure that site is logging all your prompts and responses.

Nah, GPT4All also runs locally. It supports multiple LLMs. I'm just saying it's a bit more complicated to set up for some people compared to other solutions.

GPT4All is more complicated than other solutions? I'm curious, what are these other solutions?

The easiest one for end users that I know of is:

https://ollama.ai/
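
After installing it's basically "ollama run llama2" in a terminal, and it also exposes a local HTTP API on port 11434 that you can script against. A small sketch, assuming you've already pulled the llama2 model:

```python
# Sketch: querying a locally running Ollama server (default port 11434).
# Assumes "ollama pull llama2" has been run; everything stays on your machine.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why run an LLM locally?", "stream": False},
)
print(resp.json()["response"])
```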

This is not a site. Have you even clicked the link?

Sorry, I got it wrong. Seems like it's some kind of frontend for various models.

Have you tried the Replit model? Is it a complete ass to you too? For me the model refuses to generate any usable code and always gives snarky comments 🤣

I've only tried the Wizard and Hermes models (based on LLaMA); they seem to be OK-ish. GPT-4 seems to be better than anything else.

Thank you 👍

When's the Android version coming?

I've had a great time using Llama 2.

I've been using it for some months on an M2! It works very well! 🤙

I'm regretting not getting 32 GB of RAM.

☹️

I have it installed.