LLMs made Pinterest mostly useless. I used to go there to get ideas for interior design and to get a sense of where architecture is at any given time, but now it's completely polluted with generated garbage. Sometimes it's impossible to tell whether the image I'm looking at is something that actually exists or is totally made up.

What happens when the new models train on this “fake” data? It’s gonna be fakes all the way down. This can’t be good.


Discussion

Good for content, bad for reality?

Good if you’re fine with fakes, but I think it only ruins those platforms.

We all stay on nostr, where we are in control of the algorithm. Any hint of regularly posting AI-generated content will mean I remove you from my WoT.

Filter bubbles are going to get a lot worse, though...

I'm not sure how much WoT would help me in filtering niche content that most people probably don't care about.

Yes

AI INBREEDING

For AI this is not going to be a problem. It is essentially reinforced training. Only good pictures/content will end up getting attention, so for future training this (human-curated) content will be as good as any other user-generated content.

Uhhh I don’t think so.

Why not? Our attention separates good from bad pretty effectively, so I can really see the nonsense, garbage, hallucinations, badly generated images, and all the other things that could potentially poison future models being filtered out by humans.

Humans are already unable to tell the difference between generated and real. You get compounding effects from training on ever more fake data. You don't see this as problematic?

I don't see it as problematic for future AI training, not at all. And for us as people? You talk about data as real or fake, but what does that even mean? There was plenty of unreal data before AI ever went mainstream. In architecture, for example, renders have been used for decades.

I really don't see a problem with the source of the data. If it is good, inspirational, sane, realistic, creative, pretty, etc., I'm going to use it, and I really don't see a reason why I shouldn't. Just because it was generated by AI?

Fake = generated. Even architectural renders are based on real objects. AI-generated fakery can do all sorts of unrealistic things that have no parallel in anything that exists. I already run into search results that are unlikely to be real. Feed a bunch of this idiotic data into your model and you get idiotic output. It's not limited to images either; the written word will be even worse. Trying to identify what's factual and what's hallucinated must be a nightmare of a job.

Even architects' plans change when applied to the actual build on the ground; they just can't see everything the builder does.

Such models are probably just bad and will therefore be used less. Among the LLM competitors, I think it's a race over who can best filter the real from the fake.

True for text content too.