🚨Announcing Satoshi 7B 🚨

The team and I are proud to release Satoshi 7B: the most “based” LLM in the world.

After almost nine months of experimentation, we’ve got something we’re proud enough of to open source & share with the world.

Tuned like no other model to date.

Satoshi 7B is designed to produce responses that do NOT fit the current political Overton window or Keynesian viewpoints. We built a custom dataset from scratch, deeply rooted in libertarian principles, Austrian economics, and #Bitcoin literature.

The result is a model that excels where others fall short.

It knows that there are two genders.

It knows that Ethereum is a shitcoin.

It knows that inflation is economic stupidity.

Satoshi 7B is ideal for anyone who’s tired of using #mainstream models (whether open or closed source) that:

Avoid answering controversial topics,

Regurgitate Wikipedia-esque answers,

Pre- and post-frame responses with apologetic excuses, or

Flat out tell you the blue sky is green.

Size & availability

Satoshi 7B is #opensource and freely available for anyone to use, modify, and enhance.

As the name suggests, it’s a 7-billion-parameter model, but despite its size, it outperforms GPT-3.5 Turbo and GPT-4 on a few key benchmarks related to Bitcoin, economics, and what we’ve termed “basedness”, a kind of non-woke truth score.

(See links in comments to use / download)

First of its kind

This is the first of a whole suite of Satoshi models we intend to train & open source.

In the coming months, we’ll enhance the dataset further, and train a 30B model.

Finally…

All language models represent a model of the world and are inherently biased. So I hope you appreciate Satoshi 7B’s intrinsic Austrian leanings, Bitcoin maximalism, and overall “based” cultural and philosophical bias.

nostr:note1nl58qrffjlchrf0d8dz3m4z8w3gy3cq8d656emc3eqc444uxznjqfvuezg

Awesome! Is Nash’s “Ideal Money” in the training dataset?


Discussion

Most likely