deepseek gives nostr clients a huge advantage…if they choose to use it

Discussion

Great pod with Odell the other day. GM

AI should not be run on NOSTR.

Why not?

Because users have not given permission for their data to be ingested and/or used by AI.

yes it does! would love to discuss with you on my podcast. how about in person in costa rica soon? https://www.youtube.com/@GreenCandle

🫡

Absolutely. Developers should be joyous.

Any tips for those newer to the space and how they can get started?

or directions on how they may be able to use this?

Just start using the tools

Great advice. Use an ultra-centralized Chinese platform

This isn't the real JD

It is if you seek deep in your 🫶

If he isn't, he's still an asshole.

nostr:npub1hgvtv4zn2l8l3ef34n87r4sf5s00xq3lhgr3mvwt7kn8gjxpjprqc89jnv my continued giggling is on you! I cannot unsee it!! Damn deepsteak!

😂😂 I can't either and it's everywhere. Getting steak today, hopefully that will fix it 🤞🤞

I’ve given up trying to read any serious articles tonight. Too funny 😂

Time for sleep. At this rate I’ll be giggling in my dreams!

Night night steak dreams 😴🥩

GN Fren 🌃 🥩

Just tried to close my eyes and my brain took me straight to the “And don’t forget the gravy” cartoon. Super old cartoon the steak emoji has now linked to! 🙄 Oh God my dreams tonight!!!!

Market close: $NVDA: -16.91% | $AAPL: +3.21%

Why is DeepSeek great for Apple?

Here's a breakdown of the chips that can run DeepSeek V3 and R1 on the market now:

NVIDIA H100: 80GB @ 3TB/s, $25,000, $312.50 per GB

AMD MI300X: 192GB @ 5.3TB/s, $20,000, $104.17 per GB

Apple M2 Ultra: 192GB @ 800GB/s, $5,000, $26.04(!!) per GB

Apple's M2 Ultra (released in June 2023) is 4x more cost efficient per unit of memory than AMD MI300X and 12x more cost efficient than NVIDIA H100!
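The $/GB figures above are just price divided by memory capacity; a quick sketch to reproduce them:

```python
# Sanity-check the cost-per-GB figures: price divided by memory capacity.
chips = {
    "NVIDIA H100":    {"mem_gb": 80,  "price_usd": 25_000},
    "AMD MI300X":     {"mem_gb": 192, "price_usd": 20_000},
    "Apple M2 Ultra": {"mem_gb": 192, "price_usd": 5_000},
}

def usd_per_gb(chip):
    return chip["price_usd"] / chip["mem_gb"]

for name, chip in chips.items():
    print(f"{name}: ${usd_per_gb(chip):.2f} per GB")
# NVIDIA H100: $312.50, AMD MI300X: $104.17, Apple M2 Ultra: $26.04
```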

Why is this relevant to DeepSeek?

DeepSeek V3/R1 are MoE models with 671B total parameters, but only 37B are active each time a token is generated. We don't know exactly which 37B will be active when we generate a token, so they all need to be ready in high-speed GPU memory.

We can't use normal system RAM because it's too slow to load the 37B active parameters (we'd get <1 tok/sec). On the other hand GPUs have fast memory but GPU memory is expensive. Apple Silicon, however, uses Unified Memory and UltraFusion to fuse dies - a tradeoff that favors a large amount of medium-fast memory at a cheaper cost.
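The "<1 tok/sec" intuition follows from a rough roofline-style bound: to generate each token, the 37B active parameters must be streamed from memory once, so decode speed is capped by bandwidth divided by bytes read per token. This ignores compute, KV-cache traffic, and overlap, and the ~64GB/s DDR5 figure is an assumed typical value:

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound MoE model:
# each generated token must stream the active parameters from memory once.
def max_tokens_per_sec(bandwidth_gb_s, active_params_b, bytes_per_param):
    gb_per_token = active_params_b * bytes_per_param  # GB read per token
    return bandwidth_gb_s / gb_per_token

ACTIVE_PARAMS_B = 37  # DeepSeek V3/R1 active parameters, in billions

# Typical dual-channel DDR5 system RAM (~64 GB/s, assumed), fp16 weights:
print(max_tokens_per_sec(64, ACTIVE_PARAMS_B, 2.0))   # the "<1 tok/sec" case
# Apple M2 Ultra unified memory (800 GB/s), 4-bit weights:
print(max_tokens_per_sec(800, ACTIVE_PARAMS_B, 0.5))  # ~43 tok/s ceiling
```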

Unified memory shares a single pool of memory between the CPU and GPU rather than having separate memory for each. There's no need to have separate memory and copy data between the CPU and GPU.

UltraFusion is Apple's proprietary interconnect technology for connecting two dies with a super high speed, low latency connection (2.5TB/s). Apple's M2 Ultra is literally two Apple M2 Max dies fused together with UltraFusion. This is what enables Apple to achieve such a high amount of memory (192GB) and memory-bandwidth (800GB/s).

Apple M4 Ultra is rumored to use the same UltraFusion technology to fuse together two M4 Max dies. This would give the M4 Ultra 256GB(!!) of unified memory @ 1146GB/s. Two of these could run DeepSeek V3/R1 (4-bit) at 57 tok/sec.
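The arithmetic behind that claim checks out under the same bandwidth-bound model (a sketch under the rumored M4 Ultra specs; real multi-device setups lose some throughput to interconnect overhead):

```python
# Does DeepSeek V3/R1 (671B params, 4-bit) fit in two rumored 256GB M4 Ultras?
TOTAL_PARAMS_B = 671
BYTES_PER_PARAM = 0.5  # 4-bit quantization

weights_gb = TOTAL_PARAMS_B * BYTES_PER_PARAM  # weights only, no KV cache
print(weights_gb, weights_gb <= 2 * 256)  # ~335.5 GB fits in 512 GB

# Bandwidth-bound ceiling for one M4 Ultra (rumored 1146 GB/s), 37B active params:
ceiling = 1146 / (37 * BYTES_PER_PARAM)
print(ceiling)  # ~62 tok/s, roughly consistent with the 57 tok/s figure above
```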

On top of all this, Apple has managed to package it in a small form factor for consumers, with great power efficiency and great open-source software (uncharacteristic of Apple!). MLX has made it possible to leverage Apple Silicon for ML workloads, and exolabs has made it possible to cluster multiple Apple Silicon devices to run large models, demonstrating DeepSeek R1 (671B) running on 7 M4 Mac Minis.

It's unclear who will build the best AI models, but it seems likely that AI will run on American hardware, on Apple Silicon.

nostr will come to secondary brain database

I would love to see more Nostr+AI.

Having something like DeepSeek on Nostr would be very useful. I got used to using Grok on X, and it would be awesome to have our own open version of it here

Viable business models are key 🔑

It already knows more about Nostr than ChatGPT does, in my experience.

It's deanonymizing all of our interactions, as we speak, getting around encryption.

nostr isn't obfuscated, it's just looking, gpt is ignoring us because pepe and such

ChatGPT has been under heavy scrutiny and has to obey EU personal privacy laws, etc.

This can just do whatever.

Nostr devs will probably put it into the mobile apps.

i think it will become The Oracle if it just is fed on nostr

It needs to happen.

It was bound to happen. Nostr makes for great spyware, if you tweak it right.

Everything can be a spyware if you tweak it right. Everything. Especially cats. Never trust cats.

Cats are generally sus, agreed, but very cute.

Nah, they are very hard to train. No way they can be spies. And folklore has held for millennia that they are household guardians. At least on par with canines.

Mousers

catching rats is the opposite of spies, right?

IT needz

like the BLOB/old movie, follywood

blob/Blob & i live with A BOB*/)_____lolz*

If ChatGPT collects all the information from users and is able to put together profiles on them, I assume DeepSeek does as well. Is giving China all that data a wise thing to do as an individual?

At least it's better than giving your data to US or EU

possibly true!!

Run it locally on your computer instead.

Sounds interesting. Could you please give an example?

Some examples off the top of my head, given a self-hosted Deepseek model:

- provide a summary of notes posted to your feed over the last X hours

- get notes about a specific topic

- generate a note about a topic, link or attachment

- language translation

- zap all notes based on some rule. “Zap everyone who’s shitting on Sam Altman 21 sats”
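The first idea above can be sketched in a few lines. The endpoint URL and model name are assumptions (any OpenAI-compatible local server, e.g. Ollama, exposes a similar route); adjust them to whatever your self-hosted setup actually serves:

```python
# Sketch: summarize recent feed notes with a locally hosted DeepSeek model.
# LOCAL_ENDPOINT and MODEL are assumed values for an OpenAI-compatible
# local server; change them to match your own setup.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed
MODEL = "deepseek-r1"  # assumed local model name

def build_summary_prompt(notes: list[str], hours: int) -> str:
    """Pack plain-text notes into a single summarization prompt."""
    joined = "\n".join(f"- {n}" for n in notes)
    return (f"Summarize the main topics in these Nostr notes "
            f"from the last {hours} hours:\n{joined}")

def summarize(notes: list[str], hours: int = 6) -> str:
    """Send the packed notes to the local model and return its summary."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user",
                      "content": build_summary_prompt(notes, hours)}],
    }).encode()
    req = urllib.request.Request(LOCAL_ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Since everything runs against localhost, the notes never leave your machine.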

blah, blah/BLOB n_n*/

🧐🤔🧐🤔🧐Interesting. Yes. 🙌

Keep AI out of here and let it be a last place of heaven.

Short selling on nostr? 😂

Do any?

Like how?