1 - They use only premium, high-quality photography to train their AI, while others train on very low-quality material, for example:
https://drive.google.com/drive/folders/1d2UXkX0GWM-4qUwThjNhFIPP7S6WUbQJ
Bad photography, illustrations, 3D = Bad AI.
2 - They continually correct any errors for better results, and it is all built in. Stable Diffusion users rely on external Textual Inversion embeddings (952 community-made concepts) to compensate, as in the sketch after the link below.
https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer
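For context, this is roughly what that workaround looks like in practice. A minimal sketch using the diffusers library, assuming the public "sd-concepts-library/cat-toy" concept and the Stable Diffusion v1-5 checkpoint as illustrative examples:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (example checkpoint, not prescriptive).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Pull in one of the ~950 community-trained Textual Inversion concepts
# from the sd-concepts-library to patch something the base model never learned.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The concept's placeholder token <cat-toy> can now be used in prompts.
image = pipe("a photo of a <cat-toy> on a beach").images[0]
image.save("cat_toy.png")
```

The point is that every such fix lives outside the model and has to be downloaded and loaded per concept, rather than being built into the model itself.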