I don’t get this?

I’ve been using ChatGPT on my phone for 6 months via the “save to home screen” feature.

But then again the web app and the mobile app are weak compared to the API.

I’ve actually made a glaring math error here, as it should be 1.135 lol.

In this little framework 1.35 would be the compound learning rate of someone with a 350 IQ.

But point stands, and the framework is a neat little rule of thumb that makes a solid point.

As AI is so topical just now, here is a neat little framework for understanding what intelligence is and how it works.

Your IQ is best thought of as your compound rate.

Divide your IQ by 1,000 and add 1.

eg 135 IQ = (135 / 1,000) + 1 = 1.35 compound rate.

Then compound over n events to get something analogous to “ability”.

eg 5 events = 1.35^5 = 4.48

10 events = 1.35^10 = 20.11

15 events = 1.35^15 = 90.16

High IQ people compound their learning faster than regular people.

But over shorter time periods you find that simply practising a lot, or making a lot of attempts, will dominate ability distributions.

Practice dominates over short timelines.

120 IQ = 1.12^10 = 3.11

180 IQ = 1.18^5 = 2.29

But after a large number of attempts smarter people can attain otherwise unassailable competence.

Intelligence dominates over longer timeframes.

120 IQ = 1.12^50 = 289

180 IQ = 1.18^50 = 3,927
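If you want to play with the rule of thumb, here is a tiny Python sketch of it (using the corrected divide-by-1,000 rate from the note above):

```python
# Rule of thumb: IQ as a compound rate, "ability" as that rate
# compounded over n learning events.

def compound_rate(iq: float) -> float:
    """Divide IQ by 1,000 and add 1 (so 135 IQ -> 1.135)."""
    return 1 + iq / 1000

def ability(iq: float, events: int) -> float:
    """Compound the rate over n events."""
    return compound_rate(iq) ** events

# Short timelines: practice dominates.
print(f"{ability(120, 10):.2f}")   # 3.11
print(f"{ability(180, 5):.2f}")    # 2.29

# Long timelines: intelligence dominates.
print(f"{ability(120, 50):,.0f}")  # 289
print(f"{ability(180, 50):,.0f}")  # 3,927
```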

So teach your kids that being smart is not enough. You have to be persistent too. Keep going, don’t give up. Trust this math. Plot the curve, follow your trend.

The world has no shortage of smart people who lack focus and spend their whole lives operating at average competence, because they complete things slowly or abandon them early.

This is an important concept for early teenagers to understand.

It’s also a neat little framework for understanding the trajectory of AI, both in first order (training) and second order (deployment).

But on coding ability GPT-4 has a much more stubborn competitive advantage.

I’m guessing OpenAI got the entire GitHub dataset via MSFT, and this has given them a massive, almost unassailable advantage for now?

GPT-4 just writes vastly better code than any other model. It’s not even close.

You can play with the parameters on the API (temperature, etc.) and really nail down some highly deterministic behaviour. You can generate code at high temperature and then do QA at cold temperature.
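As a rough sketch of that generate-hot / review-cold pattern (this assumes the 2023-era, pre-v1.0 openai Python client; the prompts and spec here are purely illustrative):

```python
# Generate code at high temperature, then QA it at temperature 0.
import openai  # pre-v1.0 client: openai.ChatCompletion

def chat(prompt: str, temperature: float) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp["choices"][0]["message"]["content"]

spec = "Write a Python function that parses ISO-8601 dates."  # illustrative

# High temperature: divergent candidate generation.
candidate = chat(f"Write code for this spec:\n{spec}", temperature=1.0)

# Cold temperature: near-deterministic QA pass over the candidate.
review = chat(
    f"Review this code against the spec and list any bugs.\n"
    f"Spec: {spec}\n\nCode:\n{candidate}",
    temperature=0.0,
)
print(review)
```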

It’s very easy to produce very high standard code with a thoughtful automated process.

It’s also super cheap and efficient once you figure out what you are doing.

HellaSwag is a “common sense” completion test that is designed to be easy for a human and very difficult for a machine. The average human score is 0.95, so that’s an approximation of a Turing Test.

Bloom-176b is the best-performing open model on this leaderboard.

Lots of models are rapidly closing in on GPT-4, which is probably why OpenAI are out there begging Congress for legislation to bulwark their [very temporary] monopoly.

Chelsea are a mess and the owner is an interfering idiot. No world class manager would work for Boehly, he’ll have to take a punt on someone (like he did with Potter).

Can’t believe they sacked Tuchel, can’t believe they hired and fired Potter. But very predictable that they hired Lampard and skidded deeper into mediocrity.

It’s now a very different club vs the Abramovich years. Boehly is there for ego today, not future legacy.

This is a nice resource for tracking the LLM arms race that is now in full swing. 🤖

Open-source models are closing the gap on the world-leading stuff; interestingly, the gap in coding ability seems more stubborn to close, as many of the open-source projects focus on chatbot applications.

https://llm-leaderboard.streamlit.app

Chelsea have exhausted their FFP quota for about the next 3 seasons; they are screwed.

They have to sell players if they want to buy anyone, but all their players have depreciated massively on account of their terrible performances.

And now that they have stolen / stealthed access to all the training data and concluded their training, they are begging Congress to license all LLMs?

Outrageous behaviour really.

Exactly correct.

You don’t even need huge token capacity.

You can self-assemble source code, even for very complex things, with a model that *only* handles 4k tokens.

1). Create assembler database

2). Check environment

3). Create code block generation process

4). Create task division process

5). Create QA / testing process

6). Receive mission prompt and acceptance criteria

LOOP

7). Divide task prompt into subtasks prompts

8). Generate 4k code blocks

9). QA / Test

REPEAT
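In code, the skeleton of that loop might look something like this — divide(), generate_block(), and qa() are hypothetical stubs standing in for the LLM calls and test harness, not a real library:

```python
# Skeleton of the divide / generate / QA loop above. All three helpers
# are hypothetical stubs; wire them to a real model and test runner.

def divide(task: str) -> list[str]:
    """4) / 7) Task division: split a mission prompt into subtask prompts."""
    raise NotImplementedError  # hypothetical: one LLM call

def generate_block(subtask: str) -> str:
    """3) / 8) Generate one <=4k-token code block for a subtask."""
    raise NotImplementedError  # hypothetical: one high-temp LLM call

def qa(block: str, acceptance: str) -> bool:
    """5) / 9) QA / testing: run tests plus a cold-temp review."""
    raise NotImplementedError  # hypothetical: tests + one low-temp LLM call

def self_assemble(mission: str, acceptance: str) -> list[str]:
    """6) Receive a mission prompt, then LOOP / REPEAT over subtasks."""
    blocks: list[str] = []
    for subtask in divide(mission):
        block = generate_block(subtask)
        while not qa(block, acceptance):   # regenerate until it passes QA
            block = generate_block(subtask)
        blocks.append(block)
    return blocks

# Nesting: for subtasks still too big for one block, call
# self_assemble() on the subtask itself and recurse.
```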

You can pretty quickly chug out 1m lines of code for <$5k, complete with searchable development and assurance records.

You could write something with 1 billion lines of code for ~ $2m, in about a week.

Nested self-assembly scales infinitely; it doesn’t need to be super intelligent.

OK, it’s not that hard to ask OpenAI to write code that creates other code. Or to ask it to create code that creates permissioned services.

It can create code that calls the API to create further code. It can create code to monitor memory and CPU and self manage its resource load and stage new requests accordingly. It can schedule itself.

It can be aware of token limits and can break down and execute tasks accordingly, rather than running into limits.

It can read its own errors and modify and retry.
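A minimal sketch of that read-your-own-errors-and-retry loop (same pre-v1.0 openai client assumption as above; exec-ing model-written code like this is precisely the risky part):

```python
# Run a generated snippet; on failure, feed the traceback back to the
# model and retry with the modified code.
import traceback
import openai

def request_fix(code: str, tb: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"This code failed.\n\nCode:\n{code}\n\n"
                       f"Traceback:\n{tb}\n\nReturn only a corrected version.",
        }],
        temperature=0.0,  # cold: we want a deterministic fix
    )
    return resp["choices"][0]["message"]["content"]

def run_with_self_repair(code: str, attempts: int = 3) -> bool:
    for _ in range(attempts):
        try:
            exec(code, {})                # run the generated code (unsandboxed!)
            return True                   # it ran cleanly
        except Exception:
            tb = traceback.format_exc()   # read its own error...
            code = request_fix(code, tb)  # ...modify, and retry
    return False
```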

This seems rather dangerous. The only limit is the cost of API calls. But open-source LLMs, which are already out there, are more than capable of assembling very sophisticated things.

I’m not at all concerned about the monolithic intelligence of sci-fi. It’s not required. Everything can be divided and subdivided into 4k-token tasks and then executed as a collective.

A program that is able to take high-level prompts and create numerous 4k workflows from them can be nested, again and again, to create something that can execute infinitely scalable complexity.

Once you realise how to scale with nesting, it’s not even difficult.😨

What a time to be alive.

What is the best open-source LLM people are using? What’s the best resource for keeping up to date with the latest open-source LLMs?

#[0]​ something seems off with the new zap feature. I was using WoS; I’ve set up a new Alby wallet but not yet funded it…

Seems I can zap people, and the Damus client records a zap… but no sats are transacted?

I’m not claiming Levin has done anything new, I just wasn’t aware that there are visible pathways to regenerative medicine.

He doesn’t mean reattaching a finger, or growing an ear on a mouse, or growing a liver in a jar.

But your body genuinely regrowing a healthy liver inside you, without any surgery. Or you lose your hand and a new hand grows out of your own arm.

I’ve just watched this Lex Fridman podcast with the talented biologist Michael Levin.

I learnt lots of new things from this, which is great. But it was amazing to hear about the technological progress towards regenerative medicine, from studying lizards and worms that regrow body parts. He also talks about various immortal species alive today that do not age at all.

Levin predicts regenerative medicine will be a thing later this century, where medicine can literally grow you a new hand or a new organ from your own body, pretty much from text prompts. He is (I believe) the leader in this field and is unusually sharp-minded; he offers up plenty of evidence.

He says that in his worldview cells are essentially a pool of potential: give them the correct bioelectrical signal and they will develop a particular morphology, even producing radically different morphologies from identical genes.

IF this is true, big IF, then that means the end game for the human race is to become immortal amorphous shapeshifters.

Where we can demand our cells grow any organ / limb / form basically on demand, and whilst it might take weeks, months, or years to complete… you could go to bed one night and wake up as a 1,000-tonne dragon (you would require nutrition!). You could also morph into giant brain forms, presuming you knew how to create the required anatomy to support them.

But it does seem like there is a visible technology path to this! Which is absurd, and very new to me. Really changes how I consider the human / AI future.

Not difficult to imagine a complete flip to machine brains and biological bodies. Biology is a long way from over.

A pretty amazing 3 hours of learning…

https://youtu.be/p3lsYlod5OU

I think one of the toughest things about using the new AI APIs is breaking large, complex projects down into digestible tasks.

GPT models all have various token limits, and davinci is about 10x cheaper than GPT.

But I feel like I’m beginning to think of certain engineering activities in terms of tokens as a measure of complexity.
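If you start measuring complexity in tokens, it helps to actually count them. A small sketch using tiktoken, OpenAI’s tokenizer library (the 4k budget here is just an assumed example):

```python
# Count how many tokens a task description costs, and whether it fits
# one context window or needs splitting.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4 / 3.5

def token_count(text: str) -> int:
    return len(enc.encode(text))

spec = "Build a service that ingests CSV files and exposes a REST API."
budget = 4_000  # assumed: a 4k-context model

n = token_count(spec)
print(f"{n} tokens:", "fits in one task" if n < budget else "needs splitting")
```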

I’m breaking big, complex things down into smaller chunks, using “high temp” models to make divergent hypotheses, and then “colder”, more deterministic models to do QA and eventually pull things back together.

What’s exciting for me personally is that my ability to do difficult engineering work suddenly seems scalable to vastly larger, more complex problems.

Managing AI is actually an entirely new skill set, and I think the tech people who can hold a big, complex system in their mind, and then understand how to break it down and tackle it using the suite of models available to them, are going to prove to be the big winners here.