Booted. Now he can focus on WorldCoin 🤣
https://openai.com/blog/openai-announces-leadership-transition
To think of the little donations I ask for via nostr:npub16fcy8ynknssdv7s487nh4p2h4vr3aun64lpfea45d7h4sts9jheqevshgh 🤣🤣🤣
2016: "By end of year BTC $1million"
.
.
.
.
.
.
2023: "By end of year BTC $1million"
Yes, mine nostr:npub16fcy8ynknssdv7s487nh4p2h4vr3aun64lpfea45d7h4sts9jheqevshgh does, 24x7 😉
A zapathon is a good stress test.
Relayable is fine and well.
Why would a client need stress testing? I can see relays and other services needing it.
Don't forget to share #nostr invites!
If you hold your own keys (which you should), be sure to make a way for family/friends to get them in the event of your untimely death.
Or don't and make all our #bitcoin worth all that much more. 💀🧡😉
It's a morbid discussion but needed.
Do you have a basement? 🤣
How politicstr works it seems.
REMINDER: Encrypt everything.
She is a National Security risk.
Why we have not seen the Kool-Aid man lately. In hindsight it was inevitable.

Good morning!

Accurate.

I'm with Mick.

Tis almost the season...

👀🤣

Holiday must have: Snoop on a Stoop.
#weedstr

Good night!

This is what I do. 🤙
Test. Does pls work?
Everyone who lives in an area with many neighbors should run a Part 15 low-power FM (or AM) station. It's good for sharing information and events, and it can save lives in an emergency. If you are not in the US, check your local laws. You can buy the equipment for 0.01 BTC or less.
https://www.fcc.gov/media/radio/low-power-radio-general-information
Ramon Allones Superiors LaCasa Del Habano are top notch. 🤙
I partly agree. It really depends on what one is using the models to accomplish. With LocalAI and vLLM, it's pretty easy to use an open-source model as a drop-in replacement for the OpenAI API. Companies like OpenAI are working towards AGI, while many challenges can be solved using a Mixture of Agents (Experts) approach to AI agents without the need for expensive hardware, with MemGPT, AutoGen, and many others leading the way toward autonomous agents. NousResearch beat OpenAI to announcing a 128k context window using YaRN. A year from now, I'd say it will be a non-issue, or we'll have context windows of 3M+. The Law of Accelerating Returns rings very true in the LLM and generative AI space. If one is looking for unaligned (uncensored) models, Dolphin is most likely the best in terms of size versus performance at 7B parameters. Many 7B models are now equaling the performance of much larger 70B models like LLaMA2. We can already overcome a lot of hardware limitations by quantizing models (see GGUF and AWQ). At the current exponentially growing rate, in a year, we'll more than likely have AGI.
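The "drop-in replacement" point is concrete: LocalAI and vLLM both serve an OpenAI-compatible API, so the same chat-completions request works against a local model. A minimal stdlib-only sketch — the URL, port, and model name below are assumptions, not real defaults you can count on; point them at whatever your own server exposes:

```python
import json
from urllib import request

# Hypothetical OpenAI-compatible endpoint served by LocalAI or vLLM.
LOCAL_API = "http://localhost:8080/v1/chat/completions"

# Same payload shape the OpenAI API expects; the model name is whatever
# your local server has loaded (this one is just an example).
payload = {
    "model": "dolphin-2.2.1-mistral-7b",
    "messages": [{"role": "user", "content": "Hello from nostr!"}],
}

req = request.Request(
    LOCAL_API,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment with a server actually running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape is identical, swapping a local model in (or out) is a one-line base-URL change, which is exactly what makes the App Store comparison below worth taking seriously.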
My company, nostr:npub14hujhn3cp20ky0laq93e4txkaws2laxp80mfk3rv08mh35qnngxsg5ljyg (proudly on nostr!), is releasing some very cool stuff in the near future related to AI agents and domain-specific, fine-tuned, lightweight, and efficient models that will run on edge, mobile, and IoT devices. There are a lot of non-OpenAI projects out there that are open source and transparent, with more every day.
Time and space are figments of the finite mind.
Developers are becoming too dependent on the OpenAI APIs. They are the Apple App Store of AI.
🤣 Asked an LLM for its suggestions:
Cards Against Conformity
Relays of Rascality
Notes of Notoriety
Decentralized Depravity
Pings of Perversity
Satirical Signals
Networked Naughtiness
Broadcasts of Blasphemy
Messages of Mischief
Communications of Chaos
First principles: Notes Against Humanity (NAH)
New keyboard buddy nostr:npub18ams6ewn5aj2n3wt2qawzglx9mr4nzksxhvrdc4gzrecw7n5tvjqctp424 🤙
Telling me to clean this shit up. 🤣

Sounds like an Apple Dev to me. 🙉🙈🙊🤣
