Hey nostr:npub1cmmswlckn82se7f2jeftl6ll4szlc6zzh8hrjyyfm9vm3t2afr7svqlr6f and anyone else familiar with LLMs… nostr:npub1hudtuvkqr970j7s4lgsqf937wjme7khntgt99zdzrsxd05ax3lyqzlvgug and I were interested in setting up a tool that allows us to query our current local policies.

For example, if I have a patient that needs to stay intubated from the operating room (breathing tube), it would be helpful to ask: “Am I allowed to bring this patient to the Post Anesthesia Care Unit, and if so what are the specific monitoring requirements that we have in our policies?”

Is this something very challenging to set up? We have all the policies available, and it’s always extremely time consuming to go through them to find the answer. #asknostr

So, it basically depends on the quality of your data and how well it is organized. There are mainly three ways to approach this:

The first option is to build everything from the ground up with Python libraries. While this is possible, it's very difficult to get right.
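To give a feel for what "from the ground up" means, here is a minimal sketch of crude policy search: score each policy snippet by how many words it shares with the question. It's pure Python with no external libraries; the policy snippets are invented placeholders, not real hospital policies.

```python
# Naive policy search built "from the ground up": rank each policy
# snippet by word overlap with the question.
# The policy snippets below are invented placeholders, not real policies.

def tokenize(text):
    return set(text.lower().split())

def search(question, policies):
    """Return policy snippets ranked by shared-word count with the question."""
    q = tokenize(question)
    scored = [(len(q & tokenize(p)), p) for p in policies]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored if score > 0]

policies = [
    "Intubated patients in the PACU require continuous pulse oximetry.",
    "Visitors must sign in at the front desk before entering the unit.",
]

results = search("monitoring requirements for intubated patients in the PACU", policies)
print(results[0])  # prints the intubated-patient snippet first
```

This works for toy examples, but real policy language needs synonym handling, chunking of long documents, and ideally embeddings, which is exactly why getting it right from scratch is hard.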

The second option is to use an open model like Llama or Falcon and fine-tune it. However, there's a catch: if the data is private, you shouldn't just feed it into fine-tuning. In such cases, you would create your own knowledge base instead and have the model read from it at question time.

Lastly, the easiest approach is to use the OpenAI API and either fine-tune a model or ground it in a knowledge base built from your existing data. All the necessary documentation for this can be found on their website.
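For the OpenAI route, the usual pattern is the same retrieval idea with their chat completions endpoint doing the answering. Below is a sketch of the request payload against the standard `/v1/chat/completions` REST endpoint; the model name is a placeholder (use whatever your account offers), and actually sending the request requires a real API key.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(question, policy_excerpts, model="gpt-4o-mini"):
    """Build a chat-completions request grounded in retrieved policy text."""
    context = "\n---\n".join(policy_excerpts)
    return {
        "model": model,  # placeholder; pick the model available to your account
        "messages": [
            {"role": "system",
             "content": "Answer only from the policy excerpts provided. "
                        "If they don't cover the question, say you don't know."},
            {"role": "user",
             "content": f"Policy excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    }

def ask(question, policy_excerpts, api_key):
    """Send the request (needs a real API key; not executed here)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question, policy_excerpts)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

One caveat worth flagging for the hospital use case: with this approach the policy excerpts you retrieve do get sent to OpenAI's servers, so whether that is acceptable depends on what the policies contain and your organization's rules.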

But it all comes down to the data: if your data is properly organized, you can effectively build your own LLM-backed system on top of it. If you use a pretrained model like OpenAI's ChatGPT, it will already be able to understand context to some extent.

Personal suggestion: I wouldn't recommend using this technology for any sensitive work yet; it's just not reliable enough.
