Dolphin-Mixtral seems much less effective than ChatGPT ;(

It doesn't understand a lot of the things I tell it, whereas ChatGPT seemed to do a better job of reflecting on its mistakes and making adjustments. I could make things work with ChatGPT, but this open-source model struggles.


Discussion

Are you running the big model or the small one?

I’ll have to check in the morning

You understand that Mixtral is built from 7-billion-parameter experts, and since it routes each token through 2 "experts" it behaves like roughly a 13B-parameter model at best (not quite sure exactly how it works), while GPT-3 is around a 175-billion-parameter model!? GPT-4 is even bigger...
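
Roughly, the mixture-of-experts arithmetic works out like this; the per-expert size below is an approximation and ignores the attention weights the experts share, so treat it as back-of-the-envelope only:

```python
# Back-of-the-envelope mixture-of-experts math for a Mixtral-style model.
# These figures are approximations: the experts share attention/embedding
# weights, so the official numbers are closer to ~47B total and ~13B active.
EXPERT_PARAMS = 7e9        # each expert is built from a ~7B-parameter block
NUM_EXPERTS = 8            # Mixtral ships 8 experts per MoE layer
ACTIVE_EXPERTS = 2         # the router sends each token through only 2 of them

total_params = NUM_EXPERTS * EXPERT_PARAMS      # what you must fit in RAM/VRAM
active_params = ACTIVE_EXPERTS * EXPERT_PARAMS  # what actually runs per token

print(f"stored:  ~{total_params / 1e9:.0f}B parameters")
print(f"active:  ~{active_params / 1e9:.0f}B parameters per token")
```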

Ok, and why should I care how many parameters it uses if the end result is not great? Do you think people who use GPT-4 every day are going to care about the parameter excuse if the results suck?

The fact that it uses far fewer parameters makes it less accurate, but that's what lets you run it on your own computer!! Also, the system message, prompt, etc. play a huge role when using these small open-source LLMs. You can try others like Llama 2 or OpenHermes!
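
For example, here is a minimal sketch of setting an explicit system message when calling a locally running Ollama server; it assumes the default endpoint at localhost:11434 and that you've already pulled dolphin-mixtral, so adjust for your own setup:

```python
import requests

# Minimal sketch: send a chat request to a locally running Ollama server.
# Assumes the default endpoint (http://localhost:11434) and that the model
# was pulled beforehand with `ollama pull dolphin-mixtral`.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "dolphin-mixtral",
        "stream": False,
        "messages": [
            # Small local models lean on this framing much more than GPT-4 does.
            {
                "role": "system",
                "content": "You are a careful coding assistant. Think step by step "
                           "and say so explicitly when you are unsure.",
            },
            {"role": "user", "content": "Review this function and list any bugs: ..."},
        ],
    },
    timeout=300,
)
print(response.json()["message"]["content"])
```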

Fiat and Ferrari are both cars, even made in the same place, and they perform the same basic function!!! Yet there's stuff a Ferrari can do that a Fiat can't.