Tekkadan, ribbit ribbit! 🐸

Do you stare at screens all day like this?

๐Ÿ‘๏ธ๐Ÿ‘„๐Ÿ‘๏ธ

https://justgetflux.com/

Replying to heatherlarson

Do people realize nostr:npub1z0lcg9p2v5nzg5fycxq0k56ze6snp42clmrafzqpn5w6u74v5x9q708ldk is basically the #nostr answer to Zoom, and we can just do yoga there? #yogastr

I was today years old when I realized!! Thank you for the session, it was wonderful!

I rarely consider that when prompting. Thanks for the tip! I'm always chasing some "digital artwork" or "cartoon style" bs ๐Ÿ™„๐Ÿคฃ

3090 or 4090. Avoid AMD GPUs. You can run them on less, but these are the standard. I bought a 4080 Super for gaming and it gets the job done, but LLMs aren't my primary use case. My 3080 Ti did great work too. I'd say 12 GB cards are acceptable for casual use and 16 GB are good. It depends on how fast you need answers, how much cumulative time you'll spend in the LLMs, and your budget.
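
For ballpark math, here's a quick Python sketch of how much VRAM a model's weights alone take at a given quantization. The bits-per-weight figures are rule-of-thumb assumptions, not vendor specs, and KV cache plus runtime overhead come on top:

    # Rough VRAM needed just to hold the weights; real usage adds
    # KV cache and runtime overhead, so budget ~20% or more on top.
    def weight_vram_gib(params_billions: float, bits_per_weight: int = 4) -> float:
        return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

    for b in (7, 13, 70):
        print(f"{b}B at 4-bit: ~{weight_vram_gib(b):.1f} GiB of weights")
    # 7B ~3.3, 13B ~6.1, 70B ~32.6 -- which is why 12 GB is "casual",
    # 16 GB is comfortable, and the big models want multi-GPU or offload.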

Models are quite large, and if you want multiple installed, start at 2 TB and work your way up. M.2 of course. 1 TB is fine if you don't think you'll be exploring many models, or if you already know how large the models you want are.
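
If you want to see how much you've already burned, a trivial sketch; the ~/models path is a placeholder, so point it at wherever your runtime actually keeps weights (Ollama cache, Hugging Face cache, etc.):

    import pathlib

    # Sum everything under a models directory; ~/models is an assumption.
    models_dir = pathlib.Path.home() / "models"
    total = sum(f.stat().st_size for f in models_dir.rglob("*") if f.is_file())
    print(f"{total / 1024**3:.1f} GiB in {models_dir}")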

I'd crank up the RAM to whatever fits your budget; 32 GB minimum, imo. No reason to avoid the fast stuff these days either.
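
A quick way to sanity-check whether a given model will even fit alongside the OS. psutil is third-party, and the 1.25x headroom factor is a rule of thumb I'm assuming, not a hard spec:

    import psutil  # pip install psutil

    # Will an N-GiB model fit in what's free right now, with headroom?
    model_gib = 8.0  # placeholder: size of the model you plan to load
    avail_gib = psutil.virtual_memory().available / 1024**3
    verdict = "should fit" if avail_gib > model_gib * 1.25 else "too tight"
    print(f"{avail_gib:.1f} GiB available -> {verdict}")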

CPU, I can't say it makes enough of a difference as long as you pick something generally capable, unless you're aiming specifically for CPU-based inference. But that's Apple land for the most part, to my knowledge.
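
If you do go the CPU route on commodity hardware, something like llama-cpp-python makes it a few lines. The model path and thread count below are placeholders for illustration, not a recommendation:

    from llama_cpp import Llama  # pip install llama-cpp-python

    # n_gpu_layers=0 keeps inference entirely on the CPU;
    # the GGUF path and thread count are assumptions about your setup.
    llm = Llama(model_path="models/mistral-7b-q4.gguf",
                n_gpu_layers=0, n_threads=8)
    out = llm("Why buy more RAM for local LLMs?", max_tokens=64)
    print(out["choices"][0]["text"])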

I aim to be the least popular person on Nostr.

Unfortunately, there is significant competition.