Thoughts and notes after a few months of using GPT-3.5 through ChatGPT, GPT-4 through Bing, and Gemini and PaLM through Bard:

I ran into the Gell-Mann Amnesia effect every time I used them: their responses on topics I know a good deal about were consistently inaccurate or unsatisfactory, which made it hard to find any response they gave credible.

My assumption is that these models were trained by people who believe they are feeding them accurate, high-quality data, when in truth they aren't. Only experts in a given field have the experience and judgment to tell credible information from the rest.

High domain authority, plenty of backlinks, and a strong SEO ranking do not make a source credible. Just because a piece of information made it onto Wikipedia doesn't mean it's accurate. Journalists make mistakes and carry biases all the time; just because the NYT says the US is a moody, woke teenager personified doesn't make it true.

I suppose this also highlights a fascinating insight into how our own minds work.

Someone who aims to understand everything ends up understanding nothing at all. That sums up all these models.
