jascha
2479739594ed5802a96703e5a870b515d986982474a71feae180e8ecffa302c6
Run Relayable.org #nostr relay Bitcoin Class of 2009 🧡 Founder of @npub1tpy5sj0wc4txn8fdx02y7lrq33yxmwcrupfgw9jxzunmf9ypfhhs837gzc Cybersecurity pro with head in Clouds. By day I also build massively scalable and redundant data center/cloud architectures⚡🫂💜 9367 9961 90C9 B785 889D 276E A61B 8390 B08D CD40

2016: "By end of year BTC $1million"

.

.

.

.

.

.

2023: "By end of year BTC $1million"

Why would a client need stress testing? I can see relays and other services needing it.

Don't forget to share #nostr invites!

https://nostr.do

If you hold your own keys (which you should), be sure to leave a way for family/friends to get them in the event of your untimely death.

Or don't, and make all our #bitcoin worth that much more. 💀🧡😉

It's a morbid discussion but needed.

Replying to HoloKat

No

You're safe then. 🤙

REMINDER: Encrypt everything.

Why we have not seen the Kool-Aid man lately. In hindsight it was inevitable.

Good morning!

I'm with Mick.

'Tis almost the season...

Holiday must-have: Snoop on a Stoop.

#weedstr

Everyone who lives in an area with many neighbors should run a Part 15 low-power FM (or AM) station. It's good for sharing information and events, and it can save lives in an emergency. If you are not in the US, check your local laws. For 0.01 BTC or less you can buy the equipment.

https://amzn.to/3GcehV3

https://www.fcc.gov/media/radio/low-power-radio-general-information

I partly agree. It really depends on what one is using the models to accomplish. With LocalAI and vLLM, it's pretty easy to use an open-source model as a drop-in replacement for the OpenAI API. Companies like OpenAI are working towards AGI, while many challenges can be solved using a Mixture of Agents (Experts) approach to AI agents without the need for expensive hardware, with MemGPT, AutoGen, and many others leading the way toward autonomous agents. NousResearch beat OpenAI to announcing a 128k context window using YaRN. A year from now, I'd say it will be a non-issue, or we'll have context windows of 3M+. The Law of Accelerating Returns rings very true in the LLM and generative AI space. If one is looking for unaligned (uncensored) models, Dolphin is most likely the best in terms of size versus performance at 7B parameters. Many 7B models are now equaling the performance of much larger 70B models like LLaMA 2. We can already overcome a lot of hardware limitations by quantizing models (see GGUF and AWQ). At the current exponential rate of growth, in a year we'll more than likely have AGI.
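As a minimal sketch of the drop-in replacement idea: vLLM can serve an open model behind an OpenAI-compatible endpoint, and the standard openai Python client only needs its base_url pointed at that server. The model name, port, and prompt below are illustrative placeholders, not anything from the post above.

```python
# Assumes a local vLLM server was started with something like:
#   python -m vllm.entrypoints.openai.api_server \
#       --model teknium/OpenHermes-2.5-Mistral-7B --port 8000
# (model name and port are placeholder assumptions)
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local vLLM endpoint instead of api.openai.com
    api_key="not-needed",                 # a local server typically ignores the key
)

response = client.chat.completions.create(
    model="teknium/OpenHermes-2.5-Mistral-7B",  # whichever model the server loaded
    messages=[{"role": "user", "content": "Summarize why local models matter."}],
)

print(response.choices[0].message.content)
```

The point is that the client code is unchanged; swapping providers is just a matter of changing the base URL and model name.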

My company, nostr:npub14hujhn3cp20ky0laq93e4txkaws2laxp80mfk3rv08mh35qnngxsg5ljyg (proudly on nostr!), is releasing some very cool stuff in the near future related to AI agents and domain-specific, fine-tuned, lightweight, and efficient models that will run on edge, mobile, and IoT devices. There are a lot of non-OpenAI projects out there that are open source and transparent, with more appearing by the day.

Time and space are figments of the finite mind.

Developers are becoming too dependent on the OpenAI APIs. They are the Apple App Store of AI.

🤣 Asked an LLM for its suggestions:

Cards Against Conformity

Relays of Rascality

Notes of Notoriety

Decentralized Depravity

Pings of Perversity

Satirical Signals

Networked Naughtiness

Broadcasts of Blasphemy

Messages of Mischief

Communications of Chaos

New keyboard buddy nostr:npub18ams6ewn5aj2n3wt2qawzglx9mr4nzksxhvrdc4gzrecw7n5tvjqctp424 🤙

Telling me to clean this shit up. 🤣