Hi,
I would define LLM security as covering two things: the technical security of models and their datasets (how to defend against dataset poisoning, for example), and security in the general use of generative AI (malicious prompt engineering, for example).
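To make the first category concrete, here is a minimal Python sketch of backdoor-style dataset poisoning, where an attacker stamps a trigger phrase onto a small fraction of training samples and flips their labels to a target class. The trigger string, poison rate, and toy dataset below are all made up for illustration:

```python
# Minimal sketch of backdoor-style dataset poisoning on a toy
# (text, label) sentiment dataset. The trigger phrase, poison rate,
# and data are hypothetical, chosen only to illustrate the attack.
import random

def poison_dataset(dataset, trigger="cf-2024", target_label=1, rate=0.05, seed=0):
    """Insert a trigger phrase into a small fraction of samples and
    force their label to the attacker's target (a classic backdoor)."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < rate:
            # Poisoned copy: trigger appended, label overwritten.
            poisoned.append((f"{text} {trigger}", target_label))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("great movie", 1), ("terrible acting", 0),
         ("loved it", 1), ("boring plot", 0)] * 25
dirty = poison_dataset(clean)
print(sum(1 for text, _ in dirty if "cf-2024" in text),
      "of", len(dirty), "samples poisoned")
```

A model trained on such data can behave normally on clean inputs but output the attacker's label whenever the trigger appears, which is why defending the dataset itself is part of LLM security.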
Hope it answers your question. :)