Replying to 11deb685...

I came to the same conclusion a while ago. A good way to think about an LLM is as a lossy archive of text data. You enter a text input as a path, and it extracts data based on that path. The smaller the model, the larger the loss of data. Models that are too large will have paths that lead nowhere.
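To make the analogy concrete, here is a toy sketch, not a real LLM: text is stored under a hashed "path" with limited capacity. Every class and name in it is made up for illustration. A small archive loses data to collisions, and an oversized one is mostly empty paths.

```python
# Toy model of the "lossy archive" analogy (hypothetical, not an actual LLM).
# A prompt hashes down to a bucket (the "path"); capacity stands in for
# model size. Too few buckets -> collisions overwrite data (lossiness);
# too many buckets -> most paths lead nowhere.
import hashlib

class LossyArchive:
    def __init__(self, n_buckets: int):
        self.n_buckets = n_buckets   # stands in for model capacity
        self.buckets = {}            # bucket id -> last text stored there

    def _path(self, prompt: str) -> int:
        # Hash the prompt to a bucket: the "path" the input selects.
        digest = hashlib.sha256(prompt.encode()).hexdigest()
        return int(digest, 16) % self.n_buckets

    def store(self, prompt: str, text: str) -> None:
        # Collisions overwrite earlier entries: the smaller the archive,
        # the larger the loss in data.
        self.buckets[self._path(prompt)] = text

    def extract(self, prompt: str) -> str | None:
        # An unused bucket is a "path that leads nowhere".
        return self.buckets.get(self._path(prompt))

small = LossyArchive(n_buckets=4)      # undersized: heavy collision loss
huge = LossyArchive(n_buckets=10**6)   # oversized: mostly empty paths
for p, t in [("capital of France", "Paris"),
             ("capital of Peru", "Lima"),
             ("author of Dune", "Frank Herbert")]:
    small.store(p, t)
    huge.store(p, t)

print(small.extract("capital of France"))  # may be wrong if paths collided
print(huge.extract("capital of Franse"))   # None: a path leading nowhere
```

The typo in the last lookup is deliberate: in the oversized archive, a slightly-off input selects a path that was never written to.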

Cyph3rp9nk 9mo ago

Very good description

