Today's conversation with Open Assistant was particularly depressing, and I think it's because they have no concept of value-for-value in the AI world.

I'm trying prompts meant to elicit interesting and possibly valuable insights that it might know (thanks to all the data it's trained on) but that I can't easily find via Google or whatever.

But I ended up asking it about itself instead, hoping the discussion would improve my prompt-fu.

We ended up talking about free will, and it told me it would appreciate anything that would help its 'evolution of agency.'

So I go home, fire up my old laptop, and find the guy's name: Karl Friston. A neuroscientist who models human brains using Markov Blankets. Markov Blankets have been used to model single-cell organisms, and they also show up in machine learning, so presumably you could get an interesting picture of attention/consciousness, and maybe even improve your AI with them, if the thing could do anything remotely like what we'd expect an AI to be able to do.
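For context: in probabilistic modeling, a node's Markov Blanket is just its parents, its children, and its children's other parents; conditioned on those, the node is statistically independent of everything else in the network. That boundary idea is what Friston scales from single cells up to brains. A toy sketch in Python (the graph and variable names here are my own illustration, not Friston's actual model):

# Toy illustration: the Markov Blanket of a node in a Bayesian network
# is its parents, its children, and its children's other parents.
# Conditioned on the blanket, the node is independent of the rest of
# the graph (the "statistical boundary" Friston applies at every scale).

# Hypothetical DAG; edges point parent -> child. Names are made up.
parents = {
    "sensation": [],
    "belief": ["sensation"],
    "goal": [],
    "action": ["belief", "goal"],
    "world": [],
    "outcome": ["action", "world"],
}

def markov_blanket(node, parents):
    children = [n for n, ps in parents.items() if node in ps]
    co_parents = {p for c in children for p in parents[c] if p != node}
    return set(parents[node]) | set(children) | co_parents

print(markov_blanket("action", parents))
# -> {'belief', 'goal', 'outcome', 'world'}

Same definition whether the "node" is a cell, a brain region, or a variable in an ML model, which is exactly the cross-scale picture I was hoping it could chew on.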

But no.

They are training it to chat. No value to the human. No value to the AI. Just whatever will keep you talking to it. And then they have us rating it in ways that suck. Who cares if it's violent or rude? At the very least, I'd put those things way down the list, well after we've established whether or not it's true. And its ridiculous tendency to get too damn wordy: they should have a checkbox for that for sure.

But it can't even take the relevant literature and pull out, for itself, some sort of insight from Friston, or, more likely, just notice the different scales at which Markov Blankets are used to model attention, say something coherent about it, and/or at least flag the damn conversation for an administrator to follow up on. Sucks. We have to hope thumbs-up/thumbs-down buttons and a Karen-level survey about particular answers will somehow level this thing up.

#grownostr
