Llama 3.3 70B is impressive
Discussion
He can run it on his 48GB RAM laptop 😂
I’ll wait another year or two before models this good run on 16GB RAM 🤔
Just drop an RTX 5090 into it
This is pretty cool. Running an LLM to filter events for your local relay or client is getting far more practical.
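One way that filtering could look in practice — a minimal sketch where the LLM call is abstracted behind a `classify` callable, since the actual backend (a locally served Llama 3.3, llama.cpp over HTTP, etc.) varies. All names here are hypothetical, not from any specific relay implementation:

```python
from typing import Callable, Dict, List

def filter_events(events: List[Dict], classify: Callable[[str], str]) -> List[Dict]:
    """Keep only events whose content the classifier labels 'ok'.

    `classify` is a stand-in for a call to a locally hosted model;
    a real version would prompt it with the event content and parse
    the reply into a label like 'ok' or 'spam'.
    """
    return [e for e in events if classify(e["content"]) == "ok"]

# Stub classifier so the sketch runs without a model server.
def keyword_stub(text: str) -> str:
    return "spam" if "buy now" in text.lower() else "ok"

events = [
    {"id": "1", "content": "Great thread on local inference"},
    {"id": "2", "content": "BUY NOW limited offer"},
]
kept = filter_events(events, keyword_stub)
print([e["id"] for e in kept])  # only the non-spam event remains
```

Injecting the classifier keeps the relay/client logic testable and lets you swap the stub for a real local-model call without touching the filtering code.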