If GPT needs to understand the semantics of the private info and act based on it, I think it's difficult to maintain security without spinning up a private LLM.
But if the data could be masked deterministically, in a way that doesn't change the model's decision behaviour, I'd say you have some options even using OpenAI.
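To sketch what I mean by deterministic masking: here's a minimal example (my own illustration, not a library API) that replaces emails with stable HMAC-derived placeholders before the text leaves your boundary, keeping a local mapping so you can restore the originals in the model's response. The same value always maps to the same token, so the model can still reason about "the same person appearing twice" without ever seeing the real value.

```python
import hmac
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_emails(text: str, secret: bytes):
    """Replace each email with a stable placeholder derived via HMAC.
    Deterministic: the same email under the same secret always yields
    the same token. Returns the masked text plus the token->value map
    needed to restore the original values locally."""
    mapping = {}
    def repl(match):
        value = match.group(0)
        digest = hmac.new(secret, value.encode(), hashlib.sha256).hexdigest()[:8]
        token = f"<EMAIL_{digest}>"
        mapping[token] = value
        return token
    return EMAIL_RE.sub(repl, text), mapping

def unmask(text: str, mapping: dict) -> str:
    """Restore original values in text returned by the model."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text
```

You'd send the masked text to the API, then run `unmask` on the completion before showing it to the user. The hard part in practice is deciding which entities actually carry decision-relevant semantics: masking works well for identifiers (emails, names, account numbers) but not for fields whose *content* the model must reason over.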