I've been playing around with running #deepseek R1 (the 7-billion-parameter model) locally on my computer.

While it's cool that it can be run locally, the answers it provides are not particularly great.

How many billion parameters does it take to start getting consistently good answers?

#asknostr #ai #llm


Discussion

I don't get great answers either. I find that it hallucinates and contradicts itself quite often. I've had much better results with phi4 or llama 3.1.

Thanks. I'll check those out.
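
For anyone who wants to compare these side by side, here's a minimal sketch, assuming the models are served locally through Ollama on its default port (11434) and have already been pulled with `ollama pull`. The model tags and the test prompt are just illustrative.

    import requests

    # Send one prompt to a locally served model via Ollama's REST API.
    # Assumes Ollama is running on localhost:11434 and the model tag is already pulled.
    def ask(model: str, prompt: str) -> str:
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        # Compare the same prompt across the models mentioned in the thread.
        for model in ["deepseek-r1:7b", "phi4", "llama3.1:8b"]:
            print(f"--- {model} ---")
            print(ask(model, "Explain the difference between TCP and UDP in two sentences."))

Running the same handful of prompts through each model is a quick way to judge for yourself whether the bigger or differently trained models give consistently better answers than the 7B R1 distill.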