Do you have a GPU with enough VRAM to fit it, or are you just running on CPU?

Also, if you’re in that size range already, consider the Dolphin line of models from Cognitive Computations. They’re trained to be unaligned and maximally compliant with user requests.
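For the VRAM question, here's a rough back-of-the-envelope sketch. The formula (weights at a given quantization, plus an assumed ~20% overhead for KV cache and runtime buffers) and the parameter counts are approximations on my part, not measured numbers:

```python
def approx_vram_gb(params_billions: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB: quantized weight size times a
    fudge factor (assumed ~1.2x) for KV cache and runtime buffers."""
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb * overhead

# An 8B model (e.g. llama3 8B) at 4-bit quantization: roughly 5 GB
print(round(approx_vram_gb(8, 4), 1))
# A Mixtral-style 8x7B model (~47B total params) at 4-bit: roughly 28 GB
print(round(approx_vram_gb(47, 4), 1))
```

So a 4-bit 8B model fits comfortably on most 8 GB cards, while a 4-bit Mixtral-class model really wants 24 GB+ or CPU offloading.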

Discussion

Surely you can't train dolphins to do that. 🫤

How does it compare to llama3 in capability?

For context, my usage was around brainstorming ideas for a sci-fi novel I intend to write. I tried llama3 briefly, then switched to dolphin-mixtral, a mixtral-based model that attempts to remove alignment and increase compliance.

I found that both had similar knowledge. For example, both were familiar with authors I wanted to reference like Robert Zubrin and Kevin Kelly.

Both seemed to think that climate change was a big issue that humanity needed to solve—even after I told them to ignore climate concerns for the purpose of the brainstorm.

Where they differed, as far as I could tell, was in bias around character design and in tone. Llama kept suggesting characters that were predominantly female doctors, mostly Indian/East Asian. Dolphin seemed more willing to suggest male characters.

Regarding tone, the prose llama3 produced tended to be chipper and lighthearted, even when I prompted it to be dark and cerebral. Dolphin-mixtral was more willing to offer darker prose.

These are my general impressions so far.