Replying to jb55

M4 Macs are becoming an interesting (and surprisingly affordable) option for running local LLMs. They have lots of unified memory, plus integrated GPUs and neural cores that are pretty good for running local models.

https://youtu.be/GBR6pHZ68Ho
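
For anyone curious what this looks like in practice, here's a rough sketch of running a quantized model locally through Apple's MLX framework. The model name and settings are just examples, not a recommendation:

# Rough sketch: run a quantized LLM on Apple Silicon via MLX.
# Assumes `pip install mlx-lm`; the checkpoint below is just an example
# 4-bit quantized model from the mlx-community Hugging Face org.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Explain why unified memory helps when running local LLMs."
# Generation runs on the integrated GPU through Metal; the weights sit
# in unified memory, so there's no separate VRAM copy to manage.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)

The unified memory is the main draw here: a model that needs 30+ GB of weights can run on a single machine without a discrete GPU.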

Slurix 1y ago

might as well build a cluster
