The really sucky thing about using AI is that you can’t verify what’s actually generating your responses. You cannot trust it.
They can tell you the base model, but you have to take their word for it; and even if they could prove it, most #LLM providers run orchestration layers on top of the #AI, feeding it extra data and letting it fetch its own external data (by proxy). So there’s no telling what a response is actually built from.
The safest option would be to self-host. Then you can verify the base models you’re running, and layer your own tooling/orchestration on top of them.
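As a minimal sketch of what “verify the base model” can mean in practice when self-hosting: hash the downloaded weights and compare against the digest the model’s publisher lists. The file path and expected digest below are hypothetical placeholders, not values from any real model release.

```python
import hashlib
from pathlib import Path

# Hypothetical placeholders: substitute your own weights file and the
# SHA-256 digest published by the model's maintainers.
MODEL_PATH = Path("models/my-base-model.gguf")
EXPECTED_SHA256 = "replace-with-the-publishers-sha256-digest"


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-GB weight files
    don't need to fit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    actual = sha256_of(MODEL_PATH)
    if actual == EXPECTED_SHA256:
        print("OK: weights match the published digest")
    else:
        print(f"MISMATCH: got {actual}")
```

This only proves the file you have is the file the publisher shipped; it doesn’t audit what’s inside the weights. But combined with your own tooling around the model, it removes the “just trust the provider” step entirely.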
For now we don’t have a better solution, so we just need to be cautious.
But that seems like the only sane and safe future to me.
Let’s make it happen