“Some drivers”? Don’t worry, it’s safe to mention Nvidia here.
It had a good run. Now please retire.
https://www.theregister.com/2024/06/21/x_window_system_is_40/
Please don’t pipe the curl into your bash, guys.
I’m sorry.

A company that writes games for kids just accidentally opened a portal to the good timeline where browsers run Lua instead of JavaScript.

I heard it’s hard to go beyond 8 TB on M.2.
U.2 NVMe supports more than 8 TB, but it’s costly and enterprisey.
Spinning rust is probably in the 24 TB range. Rotational velodensity go brrr.
Maybe secretly a reptile.
That’s big if true. Do you have a source on the Wikileaks page?
Why is it controversial to point out that the SeedSigner, by using a Raspberry Pi, runs on top of a GPU binary blob from Broadcom?
- the GPU loads the firmware
- the GPU then turns on the ARM core
- the ARM core loads the kernel
Without the proprietary firmware, the Raspberry Pi can’t boot (sketch below if you want to see the blobs yourself). Am I wrong here?
Why does pointing that out make me a paid shill of the hardware wallet cartel?
People are laughing at the Microsoft Recall fiasco, but if you think about it, Microsoft just signaled to those crazy crooks in the EU government that on-device scanning is possible using NPUs.
People in lefty wing nut communities will discuss mental illness as some kind of group identity badge.
People in right wing nut communities will display obvious signs of acute mental illness while acting like they’re the only sane person in the world and everyone else is insane.
¯\_(ツ)_/¯
You can read about their entire stack; it’s not just belief, they are opening it to researchers:
https://security.apple.com/blog/private-cloud-compute/
It’s extremely impressive.
Did nostr:npub1az9xj85cmxv8e9j9y80lvqp97crsqdu2fpu3srwthd99qfu9qsgstam8y8 steal Will’s nsec?
Don’t you sometimes want to off people who go to the gym just to sit on a machine, chat on their phones, and watch TikToks?
Apple’s on-device AI model is 3B params with 2 to 4 bit quantization:
“On this benchmark, our on-device model, with ~3B parameters, outperforms larger models including Phi-3-mini, Mistral-7B, and Gemma-7B. Our server model compares favorably to DBRX-Instruct, Mixtral-8x22B, and GPT-3.5-Turbo while being highly efficient.”
Interesting! This was the size of model I was considering for Damus mobile. Looks like I can just use the Apple Intelligence APIs instead 🤔. These small local models are pretty good at summarization, which I’m guessing is why they showcased that a lot in notifications, Mail, iMessage, etc.
https://machinelearning.apple.com/research/introducing-apple-foundation-models
Apple loves to exaggerate their benchmarks.
Can we stop writing code using a website disguised as a program, please?


