Yeah, but it’s a model, right? And inference is done through PyTorch, which already supports ROCm via a dedicated wheel index — which is why you need to pass some extra flags when installing.
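As a minimal sketch of what I mean by "extra flags" (this is my assumption about the install path, and the `rocm5.6` tag in the index URL varies with the ROCm release you have):

```python
# ROCm builds of PyTorch ship from their own wheel index rather than the
# default PyPI one, so the install line looks something like:
#
#   pip install torch --index-url https://download.pytorch.org/whl/rocm5.6
#
# Once installed, you can tell a ROCm build apart from a CUDA build:
import torch

print(torch.version.hip)  # HIP version string on a ROCm build; None on a CUDA build
```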

I’m curious why the author mentions it doesn’t work with ROCm 5.4, and also why they set flags like "skip CUDA" when the whole point of ROCm is exposing a CUDA-like API on AMD hardware.
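To illustrate that "CUDA-like API" point, here's a quick sanity check (a sketch, assuming a working ROCm install of PyTorch): the HIP backend masquerades as CUDA, so the ordinary `torch.cuda.*` calls work on an AMD card without code changes.

```python
import torch

# On a ROCm build, the HIP backend reports itself through the CUDA API,
# so no AMD-specific code paths are needed.
if torch.cuda.is_available():                   # True on a working ROCm install
    print(torch.cuda.get_device_name(0))        # e.g. the RDNA2 card's name
    x = torch.randn(1024, 1024, device="cuda")  # "cuda" here maps to the AMD GPU
    print((x @ x).sum().item())                 # runs the matmul on the GPU
```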

I’ll test it later and see what happens.

I have been running both Stable Diffusion and Llama on an RDNA2 card since earlier this year. In my experience, things are becoming easier, not harder.
