We will not survive this without some sort of information dissemination network. However, the current one has been failing for a while, and this week it seems the shitty brakes are finally giving way. https://www.garbageday.email/p/yeah-it-s-probably-time-to-panic
The problem as I see it is that traditional news reporting requires a trust that has deteriorated. Absent trust, we need reporting that's more transparent, without being a bag of data that everyone assembles into their own incorrect takes. Experts have some of the context that's necessary to assemble the truth, but it's mixed together with other context that's been poisoned by various sources. Without formats and tools for filtering the valuable context from the poisonous, we have to choose between ignorance and deception.
If you want to see realistic prices, look at the bottom of the power law model.
What happens when this guy stops?
https://cryptoquant.com/asset/btc/chart/exchange-flows/exchange-reserve
Though you'll miss the experience of thinking in the language, which has its own value
This is an interesting tool. What if we installed a knowledge graph brain into Dave so it can learn things from nostr over time? (rough sketch after the link)
https://block.github.io/goose/v1/extensions/detail/knowledge_graph_memory
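Roughly the idea, sketched in Swift with an illustrative schema (not the extension's actual one): fold nostr events into triples the agent can query later.

import Foundation

// Illustrative triple store; the real extension's schema will differ.
struct Triple: Hashable {
    let subject: String   // e.g. an npub
    let predicate: String // e.g. "shared"
    let object: String
}

struct KnowledgeGraph {
    private(set) var triples: Set<Triple> = []

    // Learn from a note: a trivial rule that treats any URL in the
    // note as something the author "shared".
    mutating func ingest(author npub: String, note: String) {
        for word in note.split(separator: " ") where word.hasPrefix("https://") {
            triples.insert(Triple(subject: npub, predicate: "shared", object: String(word)))
        }
    }

    func facts(about subject: String) -> [Triple] {
        triples.filter { $0.subject == subject }
    }
}

var graph = KnowledgeGraph()
graph.ingest(author: "npub1example", note: "check out https://block.github.io/goose")
print(graph.facts(about: "npub1example"))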

This is the way
Every movie: AI destroys humanity
Real life: AI is cheap labor
If you have a documentation generator, it can read the output of that too. I'm trying this with nostr / Swift
People write servers for programming languages / project types, and clients for editors and tools
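Under the hood it's just JSON-RPC over stdio. A minimal sketch of the client side in Swift, assuming sourcekit-lsp at the path shown (any LSP server speaks the same framing):

import Foundation

// The editor/tool launches a server binary and speaks JSON-RPC over
// stdin/stdout with Content-Length framing. The sourcekit-lsp path is
// an assumption; substitute whatever server your toolchain ships.
let server = Process()
server.executableURL = URL(fileURLWithPath: "/usr/bin/sourcekit-lsp")
let toServer = Pipe(), fromServer = Pipe()
server.standardInput = toServer
server.standardOutput = fromServer
try server.run()

// Every session starts with an `initialize` request.
let request = #"{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"processId":null,"rootUri":null,"capabilities":{}}}"#
let body = Data(request.utf8)
toServer.fileHandleForWriting.write(Data("Content-Length: \(body.count)\r\n\r\n".utf8) + body)

// The server answers with its capabilities (completion, hover, etc.).
print(String(decoding: fromServer.fileHandleForReading.availableData, as: UTF8.self))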
That's what I mean. Lots of environments support LSP: even Emacs
Until there's LSP support (see VS Code), I have a new session take notes on key classes. Context management is key right now
It's called a Language Server Protocol extension. There was talk of integrating LSPs pre-1.0, but MCP was coming. Someone will probably start one soon.
https://learn.microsoft.com/en-us/visualstudio/extensibility/adding-an-lsp-extension?view=vs-2022
Ask it to write down its unfinished goals and then conduct a postmortem. Check that the output docs look good, then start a new session with those
I started with a P40, then added a 3090. 48GB is enough to run 70B models, but it might be time to add a 4090 as well.
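For anyone checking the fit: 70B parameters × 0.5 bytes (4-bit quant) ≈ 35 GB of weights, and KV cache plus overhead pushes that toward 40 GB, which two 24GB cards just clear.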
That's correct. Tough to run a 400B+ model locally though
Mistral Small just dropped, and the 22B is supposed to support function calling
It's still not magical. I also think the reasoning part is wasted on goose since there's already a feedback loop
It's not magical though: you still have to ensure there's sufficient context for it to succeed. If you called up John Carmack, would he have the tools to answer your question?
