Try adding --medvram or --lowvram to the launcher args, since the 1080 is low on VRAM, or use Forge, a fork of Automatic1111.
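If you're launching through the stock scripts, the flags go into the webui-user file (a sketch; the filenames match the standard Automatic1111 repo layout):

```shell
# webui-user.bat (Windows) -- add the flag to the launcher args:
set COMMANDLINE_ARGS=--medvram

# webui-user.sh (Linux/macOS) -- same idea:
export COMMANDLINE_ARGS="--medvram"
```

Then launch as usual; the script picks the args up automatically.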
Got 16GB RAM and a GTX 1080. Can run SDXL base models, but using any LoRA with them crashes my system.
--medvram is what I've been using, because without it I get an out-of-memory error before anything can be generated with SDXL. --lowvram seems to crash the same way as when I try a LoRA with --medvram. Haven't tried Forge yet.
I think SDXL might just be too much for a 1080, and I'll have to make do with SD 1.5 until I'm able to upgrade my GPU.
Keep --medvram, and probably enable fp8 weights in the settings too. Or, instead of --medvram, try --opt-split-attention: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Optimizations
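For reference, swapping the attention optimization in would look something like this (just a sketch; whether it helps on a 1080 is trial and error, and fp8 weight storage is a UI setting under Settings → Optimizations rather than a launch flag):

```shell
# webui-user.sh: use the split-attention optimization instead of --medvram
export COMMANDLINE_ARGS="--opt-split-attention"

# or combine both if split-attention alone still runs out of memory:
# export COMMANDLINE_ARGS="--medvram --opt-split-attention"
```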