GM Nostr. I got a new Mac mini m4 pro with 64 gb of ram and everything feels fast now.

Discussion

I need to do this too.

Gm

GM. Have you tried running Llama or another open source LLM locally? What's the experience?

nostr:npub1er0k46yxcugmp6r6mujd5qvp75yp72m98fs6ywcs2k3kqg3f8grqd9py3m my hope is to try it, i've got ollama installed but mostly i'm just setting it up still.

Please let us know how it goes...

I was thinking of getting one of these to run ai models locally

I do. It's pretty good. But my HDMI port and my monitor don't like each other. That sucks.

I was thinking of getting one and running holesail or something like that on it so I could access it from my laptop even when I'm not at home

You can run 70B models but they are a little bit slow. Everything below that works great.

64 gb is around the minimum to have these days huh?

Nice! Run ollama + deepseek r1 32b, should be smooth on that metal
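For anyone wanting to try this, here's roughly what that looks like with the Ollama CLI (a sketch, assuming Ollama is already installed; the model tag is the one DeepSeek-R1's 32B distill is published under in the Ollama library, and the download size is approximate):

```shell
# Pull the 32B DeepSeek-R1 distill (roughly 20 GB at the default 4-bit quant,
# so it fits comfortably in 64 GB of unified memory)
ollama pull deepseek-r1:32b

# One-off prompt from the terminal (omit the prompt to get an interactive chat)
ollama run deepseek-r1:32b "Explain unified memory on Apple Silicon in two sentences."

# Ollama also exposes a local HTTP API (default port 11434) for other apps
ollama serve
```

On Apple Silicon the model weights sit in the same unified memory pool the GPU uses, which is why a 64 GB machine can hold a 32B model entirely in memory.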

haha what a hero shot. I love it.

nostr:npub16zsllwrkrwt5emz2805vhjewj6nsjrw0ge0latyrn2jv5gxf5k0q5l92l7 yeah all these early morning calls means that i wake up early even when it's a Saturday and i get to watch the sunrise.

Beauty!

I remember 486 feeling really fast.

I got so excited about 16k of ram!

384k of RAM was HUGE

Yes! I always love new hardware! Glad everything is faster for you now!

wait a minute that will change ;-)

nostr:npub1qrk4592x99sjdjhjn6ktvyrqcqlzrpqt5ysxqpn0drz2k34yl7yqvp3w6q i mean my M1 macbook air felt amazing when upgrading from a pre-apple silicon macbook.

nostr:nprofile1qqs8d3c64cayj8canmky0jap0c3fekjpzwsthdhx4cthd4my8c5u47spzamhxue69uhhyetvv9ujumn0wvh8xmmrd9skctcpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qarngpt after 3 months of using this, how do you like it? Anyone else have a loaded mac mini m4 pro? I think I'm finally gonna go get one this weekend, so that I can finally start using AI models locally.

nevent1qqs9x5fcmqwgr40rl603ztkmzwvj2mhmzu7gwr5kh86lgg29ulyxs0gpz4mhxue69uhk5atw0p5kuemhv9hxwtn0wfnssnpp4n

LUCKY