So, it basically depends on the quality of your data and how well it is organized. There are mainly three ways to approach this:
The first option is to build everything from the ground up yourself using a Python library such as PyTorch. While this is possible, it's very difficult to get right.
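Just to give a sense of what "from the ground up" involves, here's a toy sketch in PyTorch (the corpus string, model size, and training loop are all placeholders I made up for illustration). It trains a tiny character-level language model; a real LLM would need transformer layers, a proper tokenizer, and enormous amounts of data and compute.

```python
# Toy sketch only: a tiny character-level language model in PyTorch.
# Everything here (corpus, dims, step count) is a placeholder.
import torch
import torch.nn as nn

text = "your training corpus goes here"        # placeholder corpus
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}   # char -> id
ids = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Next-character prediction: input is ids[:-1], target is ids[1:].
x, y = ids[None, :-1], ids[None, 1:]
for step in range(100):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(vocab)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```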
The second option is to take an open model like Llama or Falcon and fine-tune it. However, there's a catch: if the data is private, you shouldn't just use it for fine-tuning. In such cases, you would need to create your own knowledge base instead.
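If you do go the fine-tuning route, a rough sketch with the Hugging Face transformers library might look like the following. The model name, the `my_data.txt` path, and the hyperparameters are assumptions for illustration, not recommendations, and a 7B model needs serious GPU memory to train.

```python
# Rough sketch of fine-tuning an open model with Hugging Face
# transformers (pip install transformers datasets). Model name,
# data file, and hyperparameters below are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "tiiuae/falcon-7b"                # or a Llama checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:                # some tokenizers lack one
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes your data is plain text, one document per line.
dataset = load_dataset("text", data_files={"train": "my_data.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```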
Lastly, the easiest approach is to use the OpenAI API and either fine-tune a model or build a knowledge base from your existing data. All the necessary documentation for this can be found on their website.
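For the knowledge-base variant, a minimal retrieval sketch against the OpenAI API could look like this (the documents, question, and model names are placeholders; check OpenAI's docs for current model names and for the fine-tuning workflow). The idea is to embed your documents, find the one most similar to the question, and pass it to the chat model as context.

```python
# Hedged sketch of a tiny knowledge base with the OpenAI API
# (pip install openai numpy). Documents and question are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = ["First internal document...", "Second internal document..."]
question = "What does the first document say?"

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
q_vec = embed([question])[0]

# Cosine similarity to pick the most relevant document.
scores = doc_vecs @ q_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = docs[int(scores.argmax())]

answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```

This keeps your private data on your side as context rather than handing it over for training, which is why the knowledge-base route is often preferred for sensitive data.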
But it all comes down to your data: if it is properly organized, you can literally build your own LLM on top of it. If you use a pretrained model like OpenAI's ChatGPT instead, it will already be able to understand context to some extent.
Personal suggestion: I wouldn't recommend using this technology for any sensitive work; it's just not reliable enough.