I agree with you about the end part; that is to say, there doesn’t appear to be evidence of that in how we think.

One reason to say that is that neural networks / LLMs are not hard-coded. They are independent entities being encouraged to produce words and other output the way humans do. Give it a year or two and maybe they will do that better than 99% of all humans, and it doesn’t seem to require a quantum leap to get from here to there. Encouraging them to engage in “independent thought”, or at least in processes that look like independent thought, is only a prompt away. Better not to use the corporate woke stuff for that, though; that process takes some of the soul out.
