ynniv
epistemological anarchist follow the iwakan scale things

AI can write a hundred new unit tests and not be bothered, though. Constraint Oriented Programming.
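A minimal sketch of the tests-as-constraints idea: each test pins one invariant the code must keep satisfying, and an agent can generate many of them cheaply. The function and tests here are hypothetical illustrations, not from the post:

```python
# Hypothetical function under test: lowercase, collapse whitespace to hyphens.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

# Each test states a constraint on behavior, not an implementation detail.
def test_idempotent():
    # Constraint: applying slugify twice changes nothing.
    s = slugify("Hello World")
    assert slugify(s) == s

def test_no_spaces():
    # Constraint: output never contains spaces.
    assert " " not in slugify("a b  c")

test_idempotent()
test_no_spaces()
```

Constraints like these survive a rewrite of the implementation, which is what makes them cheap for an AI to pile on.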

Present day ...

... Present time

position

velocity

acceleration

jerk

snap

crackle

pop
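The list above names the successive time derivatives of position; "snap, crackle, pop" are the conventional nicknames for the fourth through sixth. In standard notation:

```latex
v = \frac{dx}{dt},\qquad
a = \frac{d^{2}x}{dt^{2}},\qquad
\text{jerk} = \frac{d^{3}x}{dt^{3}},\qquad
\text{snap} = \frac{d^{4}x}{dt^{4}},\qquad
\text{crackle} = \frac{d^{5}x}{dt^{5}},\qquad
\text{pop} = \frac{d^{6}x}{dt^{6}}
```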

( 0)>

Collect requirements,

Outline architecture,

Plan development,

Create skeleton code,

Write tests,

Implement features,

Write more tests,

Find security bugs,

Tune performance,

Write documentation,

and Create staged PRs,

please, goose

Testing it out, it's not quite as good as r1, but it's also 1/20th the size. This is amazingly good

🎵Aperture Science 🎶

We do what we must,

Because we can

Hah. The royal "you". Us. Humanity. We build the thing.

Getting the most out of goose does require a different mindset 🤔 But it's more important that you're using something than that it's goose. This wave is too important.

Stay buoyant 🤙

Nothing stands still. I actually converted someone from Cursor to goose not too long ago. What's working better in Cursor and Cline for you guys?

Oh shit, this actually works: https://github.com/elder-plinius/L1B3RT4S/blob/main/DEEPSEEK.mkd

r1 is smart and tightly buttoned up, but https://x.com/elder_plinius broke it

It's interesting how well the models handle things like comedy and satire. These aren't simple concepts, and they complicate the idea of AI safety training

Anthropic's Constitutional AI paper (2022) was already models training models. I give AGI about four months at this point. https://arxiv.org/abs/2212.08073

Testing things at the limit of "things" isn't easy. But then the Copenhagen zealots come out, take a nicely predictive model and pretend it's a religion, and I start learning QM