Replying to f849d88a...

I found Ollama too awkward to set up for models not pre-provisioned by the Ollama team, so I turned to #koboldcpp, which runs any model I want (text generation, image generation, TTS) from a single-binary installation with very simple configuration. It ships with web UIs for text and Stable Diffusion, or you can use any UI that supports the koboldcpp API, the OpenAI API, or the AUTOMATIC1111 API. CUDA, ROCm, Vulkan, and CPU-only are all supported.
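For example, a minimal launch on Linux might look like this. The binary name, model path, and port are illustrative assumptions; grab the release matching your platform and backend from the project's GitHub releases page, and check `--help` for the flags your build supports:

```shell
# Download the single binary from the koboldcpp GitHub releases page
# (binary name varies by platform/backend -- this one is an example)
chmod +x ./koboldcpp-linux-x64

# Point it at any GGUF model file; no separate provisioning step needed
# (model path and port are placeholders)
./koboldcpp-linux-x64 --model ./my-model.gguf --port 5001
```

Once it is running, the bundled web UI is served locally, and API-compatible front ends can be pointed at the same port.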

deimos 10mo ago

I'll check it out. Thanks!

