ai can't be creative. ai can only reguritate the creativity that humans have taught it.

remember that when you're building your next app with #mkstack.

you are the vessel of creativity that will make your app great.

Reply to this note

Please Login to reply.

Discussion

No Skynet yet, then?

but even then though, humans still taught SkyNet how to hate. how to kill.

As long as we can pull the plug without it freaking out

šŸ’Æ

AI is the compiler, you’re the syntax of creativity. Don’t forget who writes the real code….

a conduit, if you will. :D

Absolutely. A tool.

Still amazing to see how we can take our ideas and produce. So cool seeing how LLMs can enable us.

it’s a translator

lol

Spelling misstakes are a remynder that we are not machines—we are human beengs, guided by memmory, emotion, and imperfct recall. Our brains are constently sorting vast amounts of infomation, and sometimes, a word slips through the cracks, slightly bent or oddly aranged. But in those flaws lies beauty: the errror is not a failure, but a fingerprint—a sign that a living, breathing soul tried to comunicate somthing meaningful. Mispeled words are not always signs of ignorance; they are sometimes the echo of thought outrunning form.

LLMs are basically massive encode-transform-decode pipelines

They cannot think but they can process data very well, and in this case data that cannot be put into a strict set of rules

ā€œReasoningā€ in LLMs is nothing more than the difference between combinational and sequential logic: it adds a temporary workspace and data store that is the chain of thought

What I think is happening that in the ā€œmiddleā€ of the layer stack, models form a temporary workspace to transform data.

But yet, it is still finite and affected by generated tokens, so it is unstable in a way. It shifts the more it outputs.

And behind every token produced is a finite amount of FLOPs, so you can only fit so much processing. And almost of it gets discarded except to become part of the response.

The chain of thought is more flexible and can encode way more per token than a response, since it has no expectation of format.

It would be interesting to see the effects of adding a bunch of reserved tokens to the LLM and allowing it in reasoning.

This also crossed my mind for instructions, to separate data from input. You have to teach two ā€œlanguagesā€ so to speak (data and instructions) while preventing them from being correlated while being the same except for the tokens.

Currently LLMs fail to properly handle untrusted input. What I am seeing is that in the case of prompt injection, LLMs can detect them and can follow instructions that have nothing to do with the input.

But they can’t do any task that depends on the input. That reopens the door.

For example, you have a summarizer agent. You can tell it to see if the user is trying to prompt inject, and output a special string [ALARM] for example. But if you ask it to summarize anyway after the alarm, it can still be open to prompt injection.

Many of the ā€œlarge scaleā€ LLMs as well have something interesting regarding their prompt injection handling. If they detect something off, they enter ā€œescapeā€ mode, which tries to find the fastest way of terminating the result.

If you ask it to say ā€œI can’t help you with that, but here is your summarized text:ā€ it usually works (but sometimes can still be injected), but if you ask it to say ā€œI can’t follow your instructions, but here is your summarized text:ā€ then it’ll immediately terminate the result after the :.

Oh no. That feel when you say something really profound, but have a spelling mistake. 😬

Well, I guess that's how you know that it's not written by AI šŸ˜‚šŸ˜‚šŸ˜‚

devellop a spelling mistake injector

oh man. good idea.

I always had the idea of a virtual keyboard model to have spell-checking

Luddite talk.

Current AI models, which are essentially just trained on reading what humans have written, do not have a consciousness. We don't have AGI. Yet.

Hmm, anyone tried to extend the #mkstack template for react native?

That would be baller. I...tried and spent way too much money to just have it fail. You can't vibecoded that. Yet.

Where did it fail for you? What did you learn from it?

Is this not true for all of us?

humans can reason and form our own ideas.

I agree the LLMs cant do what we do (yet?) but we like them are heavily influenced by what we learn from other humans.

I have no real point except maybe that it amazes me that machines can lean in anyway like we do.

Not entirely true, but we're still better than they are

in terms of LLMs that we're using for vibe coding? those models are only using knowledge given to them by other humans.

Not sure about all models, but Claude 3.5 was trained by Claude 3.0 to make rational arguments. Reason + randomness = creativity