What models are you running? I'm building my own machine for running local LLMs too (AMD/NVIDIA based).
Discussion
Mainly Llama at the moment, but I've been playing with others too. I want to try Qwen nostr:nevent1qqs88zs80vrrndpns2l88hxdgaumg4hstnttth9jfzhxcejww7tjyzcpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qzrthwden5te0dehhxtnvdakqz9rhwden5te0wfjkccte9ejxzmt4wvhxjmcpzemhxue69uhhyetvv9ujuurjd9kkzmpwdejhgmvsqle