It's the same for a lot of my work. Either the content is a confidently wrong interpretation, coherent but vacuous, or it just puts forward the standard narrative and propagates biases that are already in the field. Aspects like that are definitely problematic, because entry-level users can't verify the truthfulness of the statements, and that dilutes knowledge acquisition.

I am curious how vector embeddings and other techniques that limit responses to a specific set of literature might help LLMs be more accurate in specialized areas.
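For illustration, here is a minimal sketch of what that kind of restriction could look like with embedding-based retrieval, where the model is only allowed to answer from passages pulled out of a curated corpus. The corpus, model name, and retrieve function below are placeholders, not any particular system:

```python
# Minimal sketch of embedding-based retrieval over a curated corpus
# (retrieval-augmented generation). Assumes the sentence-transformers
# package; the model name and corpus contents are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Excerpt 1 from the curated literature...",
    "Excerpt 2 from the curated literature...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_vecs = model.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = corpus_vecs @ q  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

# The retrieved passages would then be placed in the LLM prompt as the
# only permitted sources, rather than relying on the model's own recall.
print(retrieve("What does the literature say about this topic?"))
```

The point of the design is that the embeddings only select evidence; the generation step is then constrained to cite or summarize those retrieved passages, which is where the accuracy gain in a specialized area would have to come from.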
