Fine-tuning means taking an existing ML model and continuing its training on data from a specific use case or domain, so it gets better at that narrow task without being built from scratch.
I’m proposing we take a small LLM designed for answering questions and fine-tune it on the specification material we currently use, so devs can adopt our standards more easily. For example, a dev could ask “what does the d mean in…” and get a plain-language answer. A rough sketch of what the fine-tuning step could look like follows.
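This is only a sketch, not a worked-out implementation. It assumes we’ve extracted question/answer pairs from the spec into a JSONL file (spec_qa.jsonl is a made-up name) and uses a small open model (Qwen/Qwen2.5-0.5B-Instruct here purely as a placeholder) with Hugging Face Transformers:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Placeholder choices: any small instruction-tuned model would do, and
# spec_qa.jsonl stands in for a file of {"question": ..., "answer": ...}
# pairs extracted from our spec material.
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for batch padding

dataset = load_dataset("json", data_files="spec_qa.jsonl", split="train")

def to_features(example):
    # One training string per Q/A pair; the EOS token marks where the answer ends.
    text = (f"Question: {example['question']}\n"
            f"Answer: {example['answer']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

dataset = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="spec-qa-model",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal language modelling) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("spec-qa-model")
```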
If we pick a small LLM (we wouldn’t need a large one for this purpose anyway), it could run locally on dev machines, with no data leaving the machine and no hosting cost.
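For illustration, querying the fine-tuned model locally could be as simple as the snippet below (the model directory comes from the sketch above, and the question is just a made-up example of the kind of thing a dev might type):

```python
from transformers import pipeline

# Load the fine-tuned model saved to "spec-qa-model" and run it locally.
qa = pipeline("text-generation", model="spec-qa-model")

prompt = "Question: What does the duration field mean?\nAnswer:"
print(qa(prompt, max_new_tokens=120, do_sample=False)[0]["generated_text"])
```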