In the case where GPT needs to understand the semantics of the private info and act on it, I think it's difficult to maintain security without spinning up a private LLM.

If the data could be "masked" deterministically, in a way that doesn't change the model's decisional behaviour… I'd say you have some possibilities even using OpenAI.
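To make that concrete, here's a minimal sketch in Python of what deterministic masking could look like. Everything in it is an illustrative assumption, not a vetted PII pipeline: the regex patterns, the salt, and the placeholder format are all stand-ins you'd replace with something appropriate to your data.

```python
import hashlib
import re

# Hypothetical salt; in a real system this stays secret on your side.
SALT = b"replace-with-a-secret-salt"

def mask(text: str, patterns: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with deterministic placeholders.

    The same input value always yields the same placeholder, so the
    model can still reason over the structure of the text, but the raw
    values never leave your system.
    """
    mapping: dict[str, str] = {}

    def make_sub(label: str):
        def sub(match: re.Match) -> str:
            value = match.group(0)
            digest = hashlib.sha256(SALT + value.encode()).hexdigest()[:8]
            placeholder = f"<{label}_{digest}>"
            mapping[placeholder] = value
            return placeholder
        return sub

    for label, pattern in patterns.items():
        text = re.sub(pattern, make_sub(label), text)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

# Illustrative patterns only; real PII detection is harder than regexes.
patterns = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.\w+",
    "PHONE": r"\b\d{3}-\d{3}-\d{4}\b",
}

masked, mapping = mask("Email jane@acme.com or call 555-123-4567.", patterns)
# masked is what you'd send to the OpenAI API; unmask() restores the
# original values in whatever the model sends back.
```

The catch is the point above: if the model genuinely needs the semantics of the masked values, not just their shape, to make the right decision, this kind of masking breaks down and you're back to the private-LLM option.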

Discussion

Ahh, got it. So because it (ChatGPT) depends on public info/access to make decisions, it'd be more difficult for it to be entirely secure?

Essentially we want to vet the information before it leaves, so I think it's possible, but we obviously want to keep security at the forefront. So maybe the answer is finding ways to obfuscate the data before handing it to a service outside of our system?

I imagine a private LLM is a much larger undertaking (to put it mildly).