Replying to Guy Swann

Yeah, the problem is that it’s not really designed to be accurate. It’s designed to “predict language”; that’s all its neural network of connections does. It maps the connections between words. It just so happens that this very often means it *is* correct when predicting an answer to something, but you usually have to get very, very specific, or several layers deep into your prompting, for it to reach the place where the predictions have been narrowed to a highly specific scope of conversation.
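
To make that concrete, here’s a toy sketch of “predicting language” in the crudest possible form: a bigram counter that picks the most common next word. The tiny corpus and the `predict` helper are made up for illustration; a real LLM learns these connections with a neural network over a huge context, but the objective is the same kind of thing: continue the text plausibly, not verify that it’s true.

```python
# Toy illustration: "predict the next word" from counted word-to-word connections.
from collections import Counter, defaultdict

corpus = ("the model predicts the next word "
          "the model maps connections between words "
          "the next word is often correct").split()

# Map each word to a distribution over the words that follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the continuation seen most often in the 'training' text."""
    return following[word].most_common(1)[0][0]

print(predict("the"))    # whichever word followed "the" most often
print(predict("model"))  # e.g. "predicts" or "maps"
```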

What LLMs are headed toward, however, is being a means of *interaction* with a separate knowledge graph, math functions, a collection of other software tools, etc.

It might not be able to answer your math problem itself, but it can recognize that it has been given a math problem, reach out to a math program to execute it, and return the answer.
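
Here’s a rough sketch of that routing idea. Everything in it (the regex check, the `route` and `evaluate_math` helpers) is made up for illustration rather than taken from any particular tool-calling API; the point is that the language model only has to recognize the *kind* of request, and a deterministic tool does the exact work.

```python
# Toy "LLM as router": recognize the task, hand the actual work to an exact tool.
import ast
import operator
import re

# Whitelisted arithmetic operators for the stand-in "math tool".
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def evaluate_math(expr: str):
    """Exact arithmetic via the AST, instead of asking a model to guess digits."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def route(prompt: str) -> str:
    """Crude stand-in for the model's 'recognize the task, pick the tool' step."""
    if re.fullmatch(r"[\d\s\.\+\-\*\/\(\)]+", prompt.strip()):
        return f"math tool says: {evaluate_math(prompt)}"
    return "no tool matched; fall back to plain language prediction"

print(route("12345 * 6789"))            # -> math tool says: 83810205
print(route("who wrote Moby Dick?"))    # -> falls back to the model
```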

This is where the next 10-100x improvement in AI will come from: broad, multi-layered integrations and curated, crowd-sourced knowledge graphs.

Komi_Hartman 2y ago

GPT output for 'a' repeated 1000 or 4000 times.

A leak, or a model-layer error?

Try it now! Only works on 3.5.

https://pastebin.com/JeQV1VAE
