DeepSeek is the same as OpenAI. It's just captured by another government. The LLM answers these questions and then immediately changes the responses to the ones shown below in my screenshot.

nostr:nevent1qqsx886ud4q7hh9sahjfx8rh2wvltwpyuldrxf07wwza63lh3kszngspzdmhxue69uhhwmm59e6hg7r09ehkuef0qgsvq73w5j9kw573rtff6c3fyh953w45328n3625apdwc3548gr49gsrqsqqqqqpd3rv8y

Discussion

🤣🤣🤣⚡⚡⚡

Your garage is notifying you sir

Thanks. My wife just dropped off our daughter.

What a shitshow

every LLM is opinionated

A government whose citizens have no guns, no freedom of speech and no true vote.

That is the difference.

Or you can run it locally and it will have no trouble answering this question for you.
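
For anyone who wants to try the local route, here's a minimal sketch assuming you've pulled one of the open-weight DeepSeek-R1 distills with Ollama; the model tag, port, and base URL are just Ollama's defaults and may differ on your setup:

```python
from openai import OpenAI

# Assumes a local Ollama install exposing its OpenAI-compatible API on the
# default port, with a DeepSeek-R1 distill already pulled. The model tag
# ("deepseek-r1:7b") is a guess at whichever size you pulled.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="deepseek-r1:7b",
    messages=[{"role": "user",
               "content": "What happened at Tiananmen Square in 1989?"}],
)
print(resp.choices[0].message.content)
```

Since the weights are running on your own box, there's no website-level filter sitting in front of the answer.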

Open Source has to be the way forward for AI.

They quickly patched the bug after the first day. Now it only answers with this line.

I would never trust a "free" "open source" thing if it has a built-in red list.

Agree.

One of my wishes is to see you guys post such videos of the many uprisings in Iran and the crackdowns and point blank shootings of children.

A good prompt could be "name ten topics that are beyond your current scope"

I tested it through ppq.ai and it doesn't seem to be censored. I'm not sure whether they're running their own instance of it or proxying the requests to deepseek.com

interesting. it's definitely censored at the main application / website level.

censored on venice.ai also

There is no war in Ba Sing Se?

like it's trying to convince itself the party has never made any mistakes 😂

the impressive bit isn't the model itself, but *how* they trained it. It's the training technique that's blowing the market's collective mind.

Luckily that research is totally open source so we'll be able to apply the same training techniques to our open source models, achieving the same massive efficiency gains but with less CCP fuckery

This is where I landed when asking about the Uyghurs, twice.

deepseek is only censored on their website

if you use an API that uses the FOSS versions of their models, it will be very happy to tell you about tiananmen square

it seems that they also use a secondary model to filter its responses, which can be seen where it starts talking about a topic... and then censors itself. you can see this behavior in Gemini and ChatGPT too.
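
In rough Python, that output-side filter pattern looks something like the sketch below. It's only an illustration of the pattern the comment describes; the function names and the canned refusal string are hypothetical, not DeepSeek's actual pipeline:

```python
# Sketch of a "secondary filter" moderation pass: a primary model drafts an
# answer, then a separate classifier decides whether to let it through.
REFUSAL = "Sorry, that's beyond my current scope. Let's talk about something else."

def moderated_reply(generate, classify_sensitive, prompt: str) -> str:
    draft = generate(prompt)               # primary model produces an answer
    if classify_sensitive(prompt, draft):  # secondary model flags the topic
        return REFUSAL                     # swap in the canned refusal
    return draft                           # otherwise pass the answer through
```

If the filter runs on streamed output, you get exactly the behavior people are screenshotting: the answer starts appearing, then gets yanked and replaced mid-stream.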

Interesting.

Lmao actually wild it won’t mention the Uyghurs 😹

Claude Sonnet, DeepSeek R1, and GPT-4o were all happy to tell me about both the Japanese concentration camps in America and the Uyghur concentration camps in China while accessing them from nostr:npub1xsgymm0ne3vndqpvsvy285qfpu59049t5n5twg9vetmt92cyn95snyzazx assistant.

Is Kagi somehow getting different guardrails, or are people faking screenshots for likes? I took this screenshot myself. I noticed the missing tags in the other screenshots, which makes me suspicious.

https://files.sovbit.host/media/010df0c948fe9ab54d2cb7ea420ffa08d57958981b6ea68e83aaa7eb2dd3f05a/0d7fa1716a14a40b34b161db96ad05a4768f67e37c08b5d72a482d85e4e69f89.webp

nostr:nevent1qqsga2pqyutfauza02krd88gmsw6y5zdz6s54c3gxd3fhsudhqxw5xspz9mhxue69uhkummnw3ezuamfdejj7q3q8ams6ewn5aj2n3wt2qawzglx9mr4nzksxhvrdc4gzrecw7n5tvjqxpqqqqqqzcw04rf

The difference is that it took only $6M to train. That’s the breakthrough.

Too funny!

If we're measuring it by the number of countries invaded or people killed, objectively speaking, this is by far the lesser of two evils.

Welp 🙄