Really interesting test comparing @ChatGPTapp to @MistralAI's new 8x7B model.

I made a fun little scavenger hunt for Christmas and decided to add a cipher to it for the final clue. To do that, I had both ChatGPT and the new Mixtral 8x7B model write me a little cipher script.

Both of them worked out of the gate, except for one oddity in Mixtral's: it placed the word "python" at the top of the script as if it were part of the code, which threw an error. I took that out and it worked great.

The funny thing is that Mixtral's script is actually much better in execution than ChatGPT's. While the ChatGPT version worked immediately, it also required me to open up the script and manually edit the text I wanted to encipher each time. The Mixtral code, however, prompted for the text and stored it in a variable, so I could just type whatever I wanted on each run.

So the Mixtral version is actually the more UX-friendly script.
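For anyone curious, the difference described above is just where the text comes from. The post doesn't include either model's actual code, so here's a minimal sketch with a plain Caesar shift standing in for whatever cipher the models wrote, using the prompt-for-input structure of the Mixtral version:

```python
# Sketch only: the real cipher the models produced isn't shown in the post.
# A simple Caesar shift is used here as a placeholder.

def encipher(text: str, shift: int = 3) -> str:
    """Shift each letter by `shift` places; leave other characters alone."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

if __name__ == "__main__":
    # Prompting at runtime means no editing the script between runs,
    # which is the UX difference the post is describing.
    plaintext = input("Enter the text to encipher: ")
    print(encipher(plaintext))
```

The ChatGPT version would instead have had something like `plaintext = "my hardcoded clue"` baked into the file.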

-- I ran this locally on my Mac using @LMStudioAI, and I'm actually using one of the more heavily quantized versions of it.

(note: funny tho, neither model could do a simple cipher natively. If I said "can you replace each letter with the letter of the alphabet in reverse," both understood it, but neither could perform the task without it falling apart after just a few letters)

Will talk about this on @ai_unchained in this week's show, follow/sub if you are curious.


Discussion

8x7B rules.

Check it out here guys: Hf.co/chat