the biggest problem with predictive algorithms and LLMs is "truthfulness".

it is impossible to verify the accuracy of the output unless the query is mathematical or the output is directly judged by a human

without the ability to audit, it will be relegated to "cute" and "unimportant" use cases

Discussion

I don't think you've thought broadly enough about use cases to make a statement with such finality

My favorite use case at the moment is using ChatGPT as a sounding board for developing my own thoughts more quickly. It's like an unimaginably well-read thought partner that never tires.

And it occasionally makes points that impress me, points I'd never thought of.

Hardly 'unimportant' I'd say.

i agree - i just mean that without auditability no one is going to use it instead of a radiologist, but it is a fun "assistant"

it is possible to audit the code of boeing autopilot software, and that is why it can be used in production

It's generative - i.e. it forms an adjunct to creativity, not truth

Trust networks are needed: a graph of mutual endorsements. A trusts B, and B trusts C, so A can trust C. (Subject matter and degree dependent.)

The training data need to be acquired from "trusted" sources, which will vary per trust network.
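A minimal sketch of what such a trust graph might look like, assuming trust is a weight in [0, 1] and attenuates multiplicatively per hop (the TrustGraph class, decay factor, and weights are all hypothetical, just to make the transitive-trust idea concrete):

```python
class TrustGraph:
    """Hypothetical transitive trust graph: A trusts B and B trusts C,
    so A can derive an attenuated trust in C."""

    def __init__(self, decay: float = 0.8):
        self.decay = decay  # assumed per-hop attenuation beyond the first edge
        self.edges: dict[str, dict[str, float]] = {}  # truster -> {trustee: weight}

    def endorse(self, truster: str, trustee: str, weight: float = 1.0) -> None:
        # Record a direct endorsement with a weight in [0, 1].
        self.edges.setdefault(truster, {})[trustee] = weight

    def trust(self, source: str, target: str) -> float:
        # Best attenuated trust over any endorsement path (depth-first search).
        best = 0.0
        stack = [(source, 1.0, 0, frozenset({source}))]
        while stack:
            node, acc, hops, seen = stack.pop()
            for nxt, weight in self.edges.get(node, {}).items():
                if nxt in seen:
                    continue  # avoid endorsement cycles
                score = acc * weight * (self.decay if hops > 0 else 1.0)
                if nxt == target:
                    best = max(best, score)
                stack.append((nxt, score, hops + 1, seen | {nxt}))
        return best


g = TrustGraph()
g.endorse("A", "B", 0.9)
g.endorse("B", "C", 0.8)
print(g.trust("A", "C"))  # ~0.576: 0.9 * 0.8, attenuated by decay for the extra hop
```

The "subject matter and degree dependent" caveat would map to per-topic graphs and to how aggressively the decay factor discounts each additional hop.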

Machines can't know truth without being able to make their own observations, and so they depend on the observations of humans and on human adherence to truth. The same goes for goodness and beauty. They transcend humans, let alone machines.