Replying to S!ayer

I have a PC with a 6GB GPU and 16GB of RAM.

I mean, for the life of me I can't seem to find the right model to use. Llama seems great; in fact, IMO it runs better than ChatGPT, but that's probably because Meta has so much NLP training data from its Facebook/Meta platforms...

Is there a way to run Llama locally without needing AWS, Azure, or an RTX 4090?

ynniv 1y ago

You should be able to run https://ollama.com with Microsoft's `phi3:mini-128k` or `llama3.1:8b-instruct-q3_K_M`. Both quantized models are small enough for a 6GB card, and Ollama can offload layers to system RAM if they don't fully fit. You can see the pre-configured models and their rough sizes at https://ollama.com/library.
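
For anyone following along, here's a minimal sketch of talking to a local Ollama instance from Python over its REST API. It assumes Ollama is running on its default port (11434) and that you've already pulled the model with `ollama pull phi3:mini-128k`; the prompt string is just a placeholder.

```python
import json
import urllib.request

# Ask the locally running Ollama server for a single completion.
# "stream": False returns one JSON object instead of a token stream.
payload = {
    "model": "phi3:mini-128k",
    "prompt": "Explain GPU layer offloading in one sentence.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read().decode("utf-8"))

# The generated text lives in the "response" field.
print(body["response"])
```

No cloud account or high-end GPU needed; everything stays on your machine.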
