Test the new Llama 3.2 1B/3B models on any device, locally, with the web app.
I have multiple apps that use this WebGPU implementation. Decent small LLM models were the only constraint, but now we finally have them.
https://chat.webllm.ai
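If you want to wire this into your own app instead of using the hosted chat, here's a minimal sketch using the @mlc-ai/web-llm package. The exact model ID string ("Llama-3.2-1B-Instruct-q4f16_1-MLC") is an assumption on my part; check WebLLM's prebuilt model list for the current names.

```ts
// Minimal sketch: run a Llama 3.2 model in the browser via WebLLM (WebGPU).
// The model ID below is an assumption -- see WebLLM's model list for current IDs.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Downloads and compiles the model in the browser; inference runs fully on-device.
  const engine = await CreateMLCEngine("Llama-3.2-1B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (p) => console.log(p.text), // download/compile progress
  });

  // OpenAI-style chat completions API exposed by WebLLM.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

Note this needs a browser with WebGPU enabled; the first load is slow because the weights are fetched and cached locally, after which everything runs offline.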