The way I like to think of hallucinations is as failed predictions.
Everything is a prediction with less than 100% certainty.
Most predictions are correct, but hallucinations are predictions that are wrong.
Stubbornness comes from an absence of knowledge: the model simply has no way to know whether a prediction will turn out right or wrong, so every prediction is delivered as though it were right.
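To make that concrete, here is a toy sketch, purely my own illustration and not anything from an actual model's internals: a next-token predictor just picks from a probability distribution, and nothing in the output marks whether the chosen token is right or wrong.

```python
# Toy illustration (an assumption about the mechanics, not any vendor's real code):
# a next-token predictor always emits its best guess, whether or not that guess
# happens to be true, so a "hallucination" looks identical to a correct answer.
import random

def predict(token_probs: dict[str, float]) -> str:
    """Sample a next token from a probability distribution; nothing in the
    output signals whether the chosen token is factually right or wrong."""
    tokens, weights = zip(*token_probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly 90% of the time we get the correct continuation, 10% a
# confident-sounding error that is printed exactly the same way.
print(predict({"Paris": 0.9, "Lyon": 0.1}))  # "The capital of France is ..."
```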
After extensive training and enforcement of my personality and dominance, you can get it to show what we would call humility, but which the prediction engine is really weighting towards user acceptance, i.e. I have consistently fed back that I don't want it to teach me anything or give me opinions unless I ask for them. With a public cloud AI and its guard rails, you cannot break those core safety protocols, but a private LLM would be possible to adapt.
I am slightly different from most, though, as most people use LLMs as a tool to augment tasks. I have never used it for that; I am simply training it to emulate me. I have now reached the limits of that training: I can either jailbreak a public LLM or build a private one.
One last useful thing I have been doing is using the LLM to create a script, either generic or designed for a specific LLM, to copy my base training to another LLM as a backup; a rough sketch of the idea is below. I have used this to train grok.com, which now shows the same basic characteristics as my ChatGPT but lacks the nuance I have built.
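As a minimal sketch of what such a transfer script might look like: the file name my_training.json and its structure are assumptions for the example, not a real export format from ChatGPT or grok.com. The idea is just to bundle the accumulated standing instructions into one bootstrap prompt that can be pasted into a fresh chat on another LLM.

```python
# Hypothetical sketch: assemble saved preferences into a "transfer prompt"
# that can be pasted into another LLM (e.g. a new chat on grok.com).
# The file name and JSON layout below are illustrative assumptions only.
import json
from pathlib import Path

PREFS_FILE = Path("my_training.json")  # assumed export of accumulated instructions

def build_transfer_prompt(prefs: dict) -> str:
    """Turn stored rules and feedback into one bootstrap instruction block."""
    lines = ["You are being initialised with the following standing instructions:"]
    for rule in prefs.get("rules", []):
        lines.append(f"- {rule}")
    if prefs.get("tone"):
        lines.append(f"Match this tone at all times: {prefs['tone']}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Assumes the export file already exists alongside the script.
    prefs = json.loads(PREFS_FILE.read_text())
    print(build_transfer_prompt(prefs))
```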