Biases tend to come from the training material, but Claude demonstrates an ability to reason beyond its training. More self-supervision should lead to less bias.


Discussion

A model training on itself just repeats the same biases.

Not when it can reason beyond its training.

AI is, in the end, a glorified next-word prediction function.
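
A minimal sketch of what "next-word prediction" means: the model repeatedly picks the next word from a probability distribution conditioned on what came before. The word table below is made up for illustration; a real model computes these probabilities with a neural network over a huge vocabulary.

```python
import random

# Toy next-word probability table (invented for illustration).
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.4, "ran": 0.6},
    "model": {"sat": 0.2, "ran": 0.8},
    "sat": {".": 1.0},
    "ran": {".": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break
        # Sample the next word from the (here: hard-coded) distribution.
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
        if words[-1] == ".":
            break
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat ."
```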

Even when you optimize it in a way that produces something like reasoning, the path it takes is still biased by the base model.
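
One way to read "still biased by the base model" is the common RLHF-style setup, where the tuned policy earns a task reward but is penalized (via a KL-like term) for drifting away from the base model's distribution. This is a hedged sketch of that idea; the reward values, probabilities, and the beta coefficient below are all invented for illustration.

```python
import math

def shaped_reward(task_reward: float,
                  policy_logprob: float,
                  base_logprob: float,
                  beta: float = 0.1) -> float:
    # Per-token KL estimate: how far the policy drifted from the base model.
    kl_term = policy_logprob - base_logprob
    return task_reward - beta * kl_term

# A completion the base model finds unlikely is penalized even if the
# task reward is high, so the optimized "reasoning" stays anchored to
# whatever the base model (and its training data) preferred.
print(shaped_reward(task_reward=1.0,
                    policy_logprob=math.log(0.4),
                    base_logprob=math.log(0.05)))
```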

This is not my experience with Claude.