Regular LLMs emit tokens sequentially for humans, but what if we created a Diffusion-based LLM?

Autistic Intelligence.
I love this
That's a thing
https://hackernoon.com/what-is-a-diffusion-llm-and-why-does-it-matter
Nice because generation can be more parallel
Also has implications for hallucination minimization and structured output enforcement, interesting
SPOILER ALERT!
This is basically the plot of Arrival, and the problem is that people can't do it. It's a cool idea, but essentially the *opposite* of "reasoning" models. I wouldn't expect its answers to be as good as a sequential model
That’s wild!
We'd still read it sequentially
Fascinating!
This is exactly how I read stuff .. pop from para to para .. sentence to sentence and jump over words within a sentence!
Didn’t they announce this already? Saw something like this
Wait, but is it able to change already-produced words once some others have been filled in?
How does this work without knowing the full sentence already?
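A toy sketch of the decoding loop these questions are getting at, assuming a masked-diffusion style model: start from all-masked positions, score every masked slot in parallel each step, and commit only the most confident guesses, revisiting the rest next iteration. The vocabulary, the dummy `predict` function, and the commit-half heuristic are all made up for illustration; a real diffusion LM conditions its predictions on the whole partially-unmasked sequence, and some variants can also re-mask low-confidence committed tokens, which this sketch omits.

```python
import random

random.seed(0)

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]

def predict(seq):
    # Stand-in for the denoiser: return a (token, confidence) guess
    # for every masked position. All positions are scored at once,
    # which is where the parallelism comes from.
    return {i: (random.choice(VOCAB), random.random())
            for i, tok in enumerate(seq) if tok == MASK}

def generate(length=6, steps=4):
    seq = [MASK] * length
    for _ in range(steps):
        guesses = predict(seq)
        if not guesses:
            break
        # Commit only the most confident half this step; the rest stay
        # masked and are re-predicted with more context next iteration.
        k = max(1, len(guesses) // 2)
        best = sorted(guesses, key=lambda i: guesses[i][1], reverse=True)[:k]
        for i in best:
            seq[i] = guesses[i][0]
    # Fill any positions still masked after the step budget runs out.
    for i, (tok, _) in predict(seq).items():
        seq[i] = tok
    return seq

print(" ".join(generate()))
```

So it never needs the full sentence up front: each pass refines a partial one, trading the strict left-to-right order of an autoregressive model for a coarse-to-fine schedule.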
Could be wrong, but isn't diffusion really inaccurate? I was thinking ChatGPT's new image model being non-diffusion was a big breakthrough, because diffusion has way too much randomness, while generating piece by piece made things way more coherent. If I'm actually understanding it correctly, I'd assume this would be way less accurate, and text is probably less forgiving than a slightly-off image.
A visual example of how a 5 year old tells a story. 🤣
🤔🤔🤔 Good question
And here's an interesting variation: