ynniv
576d23dc3db2056d208849462fee358cf9f0f3310a2c63cb6c267a4b9f5848f9
epistemological anarchist follow the iwakan scale things

You can branch in the ChatGPT interface by editing your message or regenerating the response. There's more flexibility in the API playground "chat" interfaces. I'm not sure how you'd merge conversations though. Obviously you can submit any sequence of messages as the conversation history, but there's internal state that won't transfer by simply merging the messages
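A minimal sketch of the "submit any sequence of messages" point, using the OpenAI Python SDK. The model name and branch contents here are placeholders: you can splice two branches into one replayed history, but any state the server tracked outside the messages themselves won't come along.

```python
# Sketch: "merging" two conversation branches by concatenating their
# message histories and replaying them as one. Assumes openai>=1.0;
# the model name and messages are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

branch_a = [
    {"role": "user", "content": "Summarize the plan."},
    {"role": "assistant", "content": "1. Collect data. 2. Train. 3. Evaluate."},
]
branch_b = [
    {"role": "user", "content": "What are the risks?"},
    {"role": "assistant", "content": "Overfitting and data leakage."},
]

# The "merge" is just concatenation plus a prompt asking the model
# to reconcile the two threads. Server-side state (memory, tool
# context) is not carried over by this.
merged = branch_a + branch_b + [
    {"role": "user", "content": "Combine both threads into one summary."}
]

response = client.chat.completions.create(model="gpt-4o", messages=merged)
print(response.choices[0].message.content)
```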

Replying to fiatjaf

Comment needs to check for ECC RAM

Might want to brush up on your ConLaw: there's no mention of "citizens" here:

"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances."

I understand your sentiment, but that isn't how a nation of laws works. But perhaps we are no longer a nation of laws

Once you see that synthetic intelligence provides a near-universal incremental step, every functional programmer knows what comes next
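One reading of that, sketched in Python (the step function is a placeholder): if you have an incremental improvement step that applies almost everywhere, the functional programmer's move is to iterate it to a fixed point.

```python
# Sketch: iterate a near-universal "incremental step" until it
# stops changing the result. The step used below is a toy stand-in.
def fixpoint(step, x, max_iters=100):
    """Apply step repeatedly until the value stabilizes."""
    for _ in range(max_iters):
        nxt = step(x)
        if nxt == x:
            return x
        x = nxt
    return x

# Toy example: "improving" an integer toward a ceiling.
print(fixpoint(lambda n: min(n + 1, 10), 0))  # -> 10
```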

AI is like brute force: if it isn't working, you aren't using enough of it

nostr:note1nhdv4yrn6ct6vdc34gtqd2ghdkmjnfcwdevr7uu8pxea5h7alc2qkk9ml7

Vibe coding is a universal Rails that can write its own tests. How you wield it is on you

Everyone who's used Cursor can't stand goose

... until they go back to Cursor

Freedom of Speech is highly uneven in the US. I have no doubt that you or I could do this, but others are not so well protected

Crazy? Could easily be said of Somerville or Manhattan

"Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal “language of thought.” We show this by translating simple sentences into multiple languages and tracing the overlap in how Claude processes them.

"Claude will plan what it will say many words ahead, and write to get to that destination. We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so."

It's always been latent space

"We find that the shared circuitry increases with model scale, with Claude 3.5 Haiku sharing more than twice the proportion of its features between languages as compared to a smaller model"[https://www.anthropic.com/research/tracing-thoughts-language-model]

Evidence that capability comes from the number of things a model "groks", not from more parameters. Larger models have more room for memorization, yet they're actually using *less* space? Once we figure out how to get smaller models to grok more, we'll get Claude 3.7-level capabilities out of local models

The NSA already

- has your cloud photos

- knows everyone you meet with

- knows who Satoshi is

Know your threat model