1 - they train their AI only on premium, high-quality photography, while others train on very low-quality material like:

https://drive.google.com/drive/folders/1d2UXkX0GWM-4qUwThjNhFIPP7S6WUbQJ

Bad photography, illustrations, 3D = Bad AI.

2 - they continually correct errors for better results. It is all built in. Stable Diffusion users rely on external Textual Inversion embeddings (952 helpers) to compensate for that.

https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer
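For what it's worth, loading one of those helpers can be done with a few lines of the Hugging Face diffusers library. This is only a minimal sketch: the base model and the "<cat-toy>" concept from the sd-concepts-library are assumed examples, not a specific recommendation.

```python
# Minimal sketch: loading a Textual Inversion "helper" with diffusers.
# Assumes diffusers and torch are installed and the Hub IDs below
# ("runwayml/stable-diffusion-v1-5", "sd-concepts-library/cat-toy")
# are available; swap in whichever concept you actually want.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull one concept from the sd-concepts-library and register its token.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The learned token can now be used directly in the prompt.
image = pipe("a photo of a <cat-toy> on a wooden table").images[0]
image.save("cat_toy.png")
```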


Discussion

I see, that makes sense. So you have to load these helpers into your Stable Diffusion install? My one-click installer can't do that; it can only load models.

So it might be worth paying for Midjourney then, unless you want to go very technical?

Yes - your AI software must support loading these helpers (Textual Inversion embeddings).

Yes - Midjourney users don't need that technical understanding. For just-for-fun use it's too expensive, but if you want to become a prompt engineering professional, Midjourney is a good choice.

Or create high-quality AI images for free here with one or two clicks:

https://huggingface.co/spaces/phenomenon1981/DreamlikeArt-PhotoReal-2.0

Yes, it may actually help me with generating assets for work, so it's good to learn. For now I just have DiffusionBee and some custom models. I'm building up my personal prompt library and things are looking quite good already. You come a long way with a good custom model, some prompts that boost quality, and negative prompts that get rid of the junk and errors.
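As a rough sketch of that combination (custom model + quality prompt + negative prompt) outside of DiffusionBee, here is what it might look like with the diffusers library. The Dreamlike PhotoReal 2.0 Hub ID and the exact prompt wording are assumptions for illustration only.

```python
# Minimal sketch: custom model + quality prompt + negative prompt.
# Assumes diffusers and torch are installed and that
# "dreamlike-art/dreamlike-photoreal-2.0" is the checkpoint you want;
# replace it with whatever custom model you actually use.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "dreamlike-art/dreamlike-photoreal-2.0", torch_dtype=torch.float16
).to("cuda")

prompt = (
    "photo of a city street at dusk, sharp focus, detailed, "
    "natural lighting, high quality photography"
)
# Negative prompt filters out the usual junk and errors.
negative_prompt = (
    "blurry, low quality, illustration, 3d render, "
    "deformed, watermark, text"
)

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("street.png")
```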