Replying to Ape Mithrandir

Depends a lot on the machine specs: VRAM, RAM, CPU, GPU, etc.

You can check model VRAM usage here:

https://apxml.com/tools/vram-calculator
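The rough math behind calculators like that one is simple: weight memory is parameter count times bytes per parameter, plus some headroom for the KV cache and activations. A minimal sketch (the 1.5 GB overhead figure is just an assumed ballpark, not something from the calculator):

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weights plus a flat overhead guess.

    overhead_gb is an assumption covering KV cache and activations;
    real usage varies with context length and runtime.
    """
    weight_gb = params_billions * bits_per_weight / 8  # bits -> bytes, in GB
    return weight_gb + overhead_gb

# e.g. a 7B model quantized to 4 bits per weight:
print(estimate_vram_gb(7, 4))  # 3.5 GB of weights + 1.5 GB overhead = 5.0
```

So a 4-bit 7B model fits comfortably in 8 GB of VRAM, while the same model at 16-bit would already need ~15 GB.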

Also, people mostly run already-trained models locally, since training a model from scratch requires far too many resources. You can, however, use tools like OpenWebUI to create a chat frontend for Ollama and then add a knowledge base to the chat to help direct the conversation with additional data.

Thanks for your reply & the info!
