yeah, i'm about to suffer with windows so i can get full AI performance out of my RX 7800 XT. from AI benchmarks it looks like it's about half as fast as a 4080, but that's probably a bearable token output rate. it should be fast: 16GB of GDDR6 on a 256-bit bus, and a bunch of other specs i'm not that bothered to learn about. point being that it's in my house, and blasting it on LLM processing has gotta still work out competitive with the €21/month i pay for jetbrains AI.
the other thing is that the generic amdgpu driver has all kinds of glitches on most of the linux versions i've run it on, and i'm just like, ugh.
i can point the AI assistant, which i mainly use for documentation, at a local ollama server. junie doesn't do that yet, unfortunately; on that front i'm currently stuck with claude 3.7 and 4.0. but i expect they'll open up local models before too long, in which case i can dial back my subscriptions a lot and probably get better performance.
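for reference, the local-server side of that is roughly this. a sketch assuming ollama is installed; the model name is just an example pick that fits in 16GB of VRAM, and the assistant's settings just take the server URL:

```shell
# start the ollama server (listens on http://localhost:11434 by default)
ollama serve &

# pull an example model small enough for the card's 16GB of VRAM
ollama pull llama3.1:8b

# sanity-check the server before pointing the IDE at it
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "hello", "stream": false}'
```

once the curl check responds, the AI assistant's local-model setting just needs the `http://localhost:11434` URL.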
i saw mention that claude was getting hammered by users today, and it certainly was running slow on my machine. if i can get at least that performance for the cost of a couple hundred watts on the electricity bill, it's probably a wash, and likely substantially faster.
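the "couple hundred watts" claim is easy to sanity-check. a quick back-of-the-envelope sketch, where the wattage, daily hours, and electricity price are all my assumptions, not measurements:

```python
# rough break-even: local GPU electricity vs the €21/month subscription
GPU_WATTS = 250          # assumed average draw of the card under LLM load
HOURS_PER_DAY = 6        # assumed daily inference time
EUR_PER_KWH = 0.35       # assumed electricity price
SUBSCRIPTION_EUR = 21.0  # the jetbrains AI subscription mentioned above

monthly_kwh = GPU_WATTS / 1000 * HOURS_PER_DAY * 30
monthly_cost = monthly_kwh * EUR_PER_KWH
print(f"{monthly_kwh:.1f} kWh/month ≈ €{monthly_cost:.2f} "
      f"vs €{SUBSCRIPTION_EUR:.2f} subscription")
# → 45.0 kWh/month ≈ €15.75 vs €21.00 subscription
```

under those assumed numbers it comes in a bit under the subscription; at higher electricity prices or heavier use it flips the other way.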