Ahh. Got it. So because it (ChatGPT) depends on public info/access to make decisions, it'd be more difficult for it to be entirely secure?
Essentially we want to vet the information - so I think it's possible, but we obviously want to keep security at the forefront. Maybe we find ways to obfuscate the data before it's sent to any service outside of our system?
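For illustration, a minimal sketch of what that obfuscation step could look like, assuming Python: sensitive values are swapped for placeholder tokens before the prompt ever leaves our system, and swapped back into whatever the external model returns. The regex patterns and the ACCT- ID format are hypothetical stand-ins, not anything from our actual system, and a real deployment would use a proper PII/DLP library rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for data we would not want leaving our system.
# Illustrative only; real deployments would use a vetted PII/DLP library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT_ID": re.compile(r"\bACCT-\d{6,}\b"),  # assumed internal ID format
}

def obfuscate(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with placeholder tokens and return the
    scrubbed text plus a mapping so the placeholders can be restored
    in the model's response."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Swap the placeholder tokens back after the external service responds."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

if __name__ == "__main__":
    prompt = "Summarize the dispute for jane.doe@example.com on ACCT-004821."
    scrubbed, mapping = obfuscate(prompt)
    print(scrubbed)  # Summarize the dispute for [EMAIL_0] on [ACCOUNT_ID_0].
    # Only `scrubbed` would be sent to the external LLM, never the raw prompt;
    # restore() would then be applied to the text that comes back.
```

The nice part of this approach is that the external service only ever sees tokens, so the model can still reason about the structure of the request ("a dispute on an account") without holding any real identifiers.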
I imagine a private LLM is a much larger undertaking (to put it mildly)