Replying to jb55

I noticed llama.cpp supports ROCm (amdgpu) now! I can sample the 8B-parameter Llama 3 model with my 8GB VRAM graphics card! It's fast! Local AI ftw.

https://cdn.jb55.com/s/rocm-llama-2.mp4
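For context, a rough back-of-envelope calculation (assuming a ~4.85 bits/weight 4-bit quantization, roughly what llama.cpp's Q4_K_M averages; the exact figure depends on the quant chosen) shows why an 8B model fits on an 8 GB card:

```python
# Back-of-envelope VRAM estimate for an 8B-parameter model.
# Assumption: ~4.85 bits/weight, approximating a Q4_K_M-style 4-bit quant.
params = 8e9
bits_per_weight = 4.85
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weight_gb:.1f} GB for weights")
```

That leaves a couple of gigabytes of headroom for the KV cache and compute buffers, which is why full GPU offload works on an 8 GB card.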

Linus Phoebus 1y ago

Nice bro!
