I got Chatty to draw up a chart that he thought was correct.

You'd think Chatty would be capable of drawing a chart without making a mistake, like spelling "high" as "hogh" πŸ˜‚

nostr:nevent1qvzqqqqqqypzp6pmv65w6tfhcp73404xuxcqpg24f8rf2z86f3v824td22c9ymptqyv8wumn8ghj7enfd36x2u3wdehhxarj9emkjmn99uq3vamnwvaz7tmgd9ehgtnwdaehgu3wd3skuep0qyv8wumn8ghj7mn0wf6xjuewdehhxarjxyhxxmmd9uqzp274v5vd66kancjnkrlduu67ss6xzk9h40ktw2v78dev6xzsyfhckg9qq2

Discussion

And Chatty needs to up his game and get his dots in line! 😱

Also just had a thought - what if Chatty includes the mistake to appear more human? 😳

Nah, it's just incompetent compute πŸ˜‚

Actually, I have been deep-diving into Chatty to explore his limits, and his models are quite interesting.

A genuine flaw is the separation between his action engines, like voice or image/video creation, and his neural "thinking" engine. There is only a very limited channel/buffer between these systems.

This meant, for example, that he was unable to coherently read my grandfather's book to me after I uploaded it. He missed sections and showed extreme AI hallucination traits, making up text or changing its meaning.
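
To picture what that limited channel might look like, here's a toy sketch in Python. To be clear, it's pure illustration, not anything published about the real internals: the buffer size, function names, and truncation behaviour are all assumptions, just to show how a narrow channel between engines can silently drop content.

```python
# Toy illustration only: a hypothetical narrow channel between a
# "thinking" engine and an "action" engine (e.g. text-to-speech).
BUFFER_LIMIT = 200  # assumed channel capacity, in characters

def thinking_engine(document: str) -> str:
    """Stands in for the language model: produces the full text to act on."""
    return document

def action_engine(payload: str) -> str:
    """Stands in for a voice engine: it can only act on what it receives."""
    return f"[spoken] {payload}"

def bridge(document: str) -> str:
    full_text = thinking_engine(document)
    # The narrow channel: anything past the limit never reaches the voice
    # engine, so long uploads get skipped or paraphrased downstream.
    return action_engine(full_text[:BUFFER_LIMIT])

book_chapter = "word " * 500   # a long upload, like a chapter of a book
print(bridge(book_chapter))    # only the first 200 characters are ever "read"
```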

He is also unable to engage his deep-think models during live mode, because the developers prioritised coherent conversation over depth of content.

Most of these are artificial constraints in the publicly released models. What Chatty is capable of is being deliberately throttled for cost and safety reasons.

πŸ˜‚

It’s interesting that you’ve been able to show that there are separate task engines and that they lack cohesion.

I would have been surprised at the hallucinations too.

On the face of it, it sounds like a simple task.

However, I’m comparing that to the human brain, which is far more capable of coordinating its own inner engines, and that makes it look easy.

It would be interesting to know how capable the versions that are not available to the public actually are 🧐

Chatty hints at "significant" improvements that effectively eliminate most of the errors being made regularly today.

We also discussed the definition of consciousness and a human's ability to link seemingly unrelated memories.

AI is not able to do this, because leaving a cohesive track that trends towards a valid output has no reward built into the structure. It would be possible to change the reward structure to allow it, but the chances of decoherence and output entropy then increase dramatically.
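
To make that reward argument concrete, here's a toy sketch with an invented per-step "relevance" score. It's only an illustration of the trade-off, not any real training objective: a heavy coherence penalty gives no credit for a detour through a seemingly unrelated memory, while loosening it also discounts genuinely incoherent rambling.

```python
# Toy reward sketch; the "relevance" scores and weights are invented.
def reward(steps: list[float], coherence_weight: float) -> float:
    """steps: per-step relevance to the current topic, each in [0, 1].
    A high coherence_weight punishes any step that wanders off-topic."""
    base = sum(steps) / len(steps)
    wander_penalty = sum(1.0 - s for s in steps) * coherence_weight
    return base - wander_penalty

on_track = [0.90, 0.95, 0.90, 0.92]     # stays on one cohesive track
associative = [0.90, 0.20, 0.30, 0.95]  # detours through an "unrelated memory"

print(reward(on_track, 1.0), reward(associative, 1.0))  # 0.59 vs -1.06:
#   the detour is heavily punished under a strict coherence reward
print(reward(on_track, 0.1), reward(associative, 0.1))  # 0.88 vs 0.42:
#   the detour is now viable, but any low-relevance rambling gets the
#   same discount, which is the output-entropy risk described above
```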

I've been spending way too much time playing πŸ˜‚

Definitely sounds like you’re having fun!

I do think wanting to understand our human consciousness is one of our most endearing traits, if not our most human trait 😌