#asknostr

Does anybody have experience with #Ollama and its myriad models? I'm looking to run an #AI #chatbot locally and train it with additional data. What model(s) would you recommend? Do you have any general suggestions regarding my aim? Thanks in advance for your replies & help 🙏🏼


Discussion

Please share what you find out.

Would love to build something similar

I'll be glad to!


Depends on your PC setup and what it can handle. As far as I know, you can run any model with Ollama. I'd suggest trying a lightweight one like Phi and going from there.

I can't help with the data part, though; that depends on what you want to achieve.

Depends a lot on the machine specs: VRAM, RAM, CPU, GPU, etc.

You can check model VRAM usage here:

https://apxml.com/tools/vram-calculator
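As a rough back-of-the-envelope alternative to the calculator (this is a simplification, not the tool's exact method): the weights alone take roughly parameters × bits-per-weight ÷ 8 in memory, plus some overhead for the KV cache and activations. A sketch, where the 1.5 GB overhead figure is an assumption:

```python
def estimate_vram_gb(n_params_billions: float, bits_per_weight: int,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: model weights plus a fixed overhead allowance.

    n_params_billions * bits_per_weight / 8 gives the weight size in GB,
    since billions of parameters times bytes per parameter equals gigabytes.
    The overhead_gb default is a ballpark guess, not a measured value.
    """
    weight_gb = n_params_billions * bits_per_weight / 8
    return weight_gb + overhead_gb

# e.g. a 7B model quantized to 4 bits per weight:
print(round(estimate_vram_gb(7, 4), 1))  # → 5.0
```

So a 4-bit 7B model plausibly fits on an 8 GB GPU, while the same model at 16-bit precision would not; the linked calculator will give you much more precise numbers.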

Also, people mostly run already-trained models locally, since training a model from scratch requires far too many resources. You can, however, use tools like OpenWebUI to create a chat frontend for Ollama and then add a knowledge base to the chat to steer the conversation with your additional data.
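The "add your data to the prompt" idea can also be done directly against Ollama's local HTTP API (`/api/generate` on port 11434) without a frontend. A minimal sketch; the model name `phi3` and the knowledge snippet are placeholders:

```python
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, question: str, knowledge: list[str]) -> dict:
    """Prepend knowledge-base snippets to the prompt so the model
    answers from your data instead of only its training set."""
    context = "\n".join(f"- {snippet}" for snippet in knowledge)
    prompt = f"Use only this context to answer:\n{context}\n\nQuestion: {question}"
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_request(
    "phi3",
    "What port does Ollama listen on?",
    ["Ollama serves its HTTP API on port 11434 by default."],
)
print(payload["prompt"])
# To actually send it (requires a running Ollama server):
#   import json, urllib.request
#   req = urllib.request.Request(OLLAMA_URL, json.dumps(payload).encode(),
#                                {"Content-Type": "application/json"})
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```

This is essentially what OpenWebUI's knowledge-base feature does for you, with proper document chunking and retrieval on top.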

Thanks for your reply & the info!

I don't know how experienced you are, but training a model on additional data will cost you time and money. Maybe a RAG system is what you're looking for.

If you compare an LLM to a library, a RAG system is a bookshelf with your own documents standing in front of the library. After a prompt, the system first looks for an answer on the bookshelf, then in the library.
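The bookshelf lookup above can be sketched in a few lines. Real RAG systems rank documents with embedding similarity; this toy version uses plain word overlap just to show the retrieve-then-ask flow (the example documents are made up):

```python
def retrieve(query: str, shelf: list[str], k: int = 1) -> list[str]:
    """'Bookshelf' step: rank your own documents by how many words
    they share with the query, and return the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(
        shelf,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

shelf = [
    "Our API keys rotate every 90 days.",
    "The office coffee machine is on floor 2.",
]
best = retrieve("How often do API keys rotate?", shelf)
print(best[0])  # the key-rotation document wins on word overlap
```

The retrieved text is then pasted into the LLM prompt as context; only if the bookshelf has nothing relevant does the model fall back on its general "library" knowledge.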