M4 Macs are becoming an interesting (and surprisingly affordable) option for running local LLMs. They have lots of unified memory, plus integrated GPUs and Neural Engine cores that are pretty good at running local models.
https://youtu.be/GBR6pHZ68Ho
Might as well build a cluster.
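For anyone curious what this looks like in practice, here's a minimal sketch of running a model locally on Apple Silicon. It assumes the mlx-lm Python package and a 4-bit quantized model from the mlx-community Hugging Face org; both are my own choices for illustration, not something from the video.

```python
# Minimal sketch, assuming mlx-lm (pip install mlx-lm) and 4-bit quantized
# weights from the mlx-community org -- assumptions on my part, not from
# the linked video.
from mlx_lm import load, generate

# Weights load straight into unified memory, so the GPU works on them
# without a separate copy to dedicated VRAM.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Explain why unified memory helps when running local LLMs."
response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```

A 4-bit 7B model fits comfortably in the unified memory of even a base M4, which is part of why these machines look appealing for this kind of workload.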