I keep having to turn down work, including for Nostr projects, because I'm swamped. I understand that there's a chronic worker shortage and that vibe-coding is meant to solve it. I get it. We also don't have enough developers on the team, which is why I had to switch roles.

I just think vibe-coding will create a lot of new problems and professionals are going to have to code and architect and test and DevOps defensively, to avoid getting crushed under the sheer ubiquity of this mysterious code.

Also: I suspect that some of the most popular Nostr libraries are vibe-coded, because we're slowly replacing them with our own hand-rolled stuff and the quality of that functionality immediately jumps.

The problem Nostr is now facing is that the vibe-coded apps are using vibe-coded app-building tools built with vibe-coded libraries...

Discussion

AI slop FTW! 😂🤣😂🚀🚀🚀

A fool with a tool will always be a fool.

Those tools are meant to help as programming buddies, to break the ice and do boring programming tasks with decent quality.

They weren't meant for fools who ask them to write code without understanding how the problems are being solved underneath.

It is just the latest promise of expressive programming that interprets developer intent. This is the ethos behind languages like COBOL, BASIC, Smalltalk, and Ruby.

When it doesn't work, we package it into a framework like Rails or Zend or Flask.

We build plugin architectures that use XML or JSON so non-programmers can play too.

But the truth is, we are trying to apply logic operators to structured data with crystallized sand. You can't get sand to do what you want unless you understand what it is you want. It is possible to build tools that manage some of the complexity, but when they're used by people who don't understand them, you just get more complexity without getting more functionality.

i hate expressive programming. is there *really* a need for complex iterator syntax when you can just use for loops? Go's for syntax is incredibly flexible: it range-loops over slices and maps, simple zero-to-X loops are just "for x := range X", and you can still do the regular C-style "for initializer(); test(); iterate()" (badger uses a syntax like this, with functions sitting in those 3 spots). and objects, ugh. give me structural typing over objects any day. why have 5 billion different fancy operators for slicing up arrays when two or three flexible constructs are easier? work queues are much easier to do with channels and goroutines, and that's pretty efficient at maxing the CPU load for throughput, though i do sometimes wish i could reach for kernel threads specifically for throughput.
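
here's a minimal sketch of those loop forms plus the channel work queue (the zero-to-X range form needs Go 1.22+; the worker count and job values are just made up for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	xs := []int{10, 20, 30}

	// range over a slice: index and value
	for i, v := range xs {
		fmt.Println(i, v)
	}

	// zero-to-N counting loop (range over an int, Go 1.22+)
	for i := range 3 {
		fmt.Println(i)
	}

	// classic C-style three-clause form
	for i := 0; i < 3; i++ {
		fmt.Println(i)
	}

	// work queue: goroutines pulling jobs off a channel
	jobs := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs { // exits when jobs is closed
				fmt.Println("job", j, "->", j*j)
			}
		}()
	}
	for j := 1; j <= 8; j++ {
		jobs <- j
	}
	close(jobs)
	wg.Wait()
}
```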

I don't actually care what language people use as long as it allows them to clearly specify what they want the hardware to do.

It can be ok to depend on some runtime IF you understand what it is doing for you. My main beef with garbage collection, for instance, has less to do with the collector and more to do with the "programmers" who don't know that they are using one.
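
A minimal sketch of the difference in Go (illustrative names, not from any real codebase): both functions do the same work, but the first quietly creates garbage for the collector on every call, while the second reuses a buffer the caller controls.

```go
package main

import "fmt"

// allocates a fresh slice each call; the GC cleans up behind us
func doubledAlloc(xs []int) []int {
	out := make([]int, len(xs))
	for i, v := range xs {
		out[i] = v * 2
	}
	return out
}

// writes into a caller-supplied buffer; no per-call garbage
func doubledInto(dst, xs []int) {
	for i, v := range xs {
		dst[i] = v * 2
	}
}

func main() {
	xs := []int{1, 2, 3}
	fmt.Println(doubledAlloc(xs))

	buf := make([]int, len(xs))
	doubledInto(buf, xs)
	fmt.Println(buf)
}
```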

At some point the developer has to be responsible for what the machine does.

Another pet peeve of mine is GUIs. They aren't a problem in themselves, but most people don't understand that they are a step backwards. We build GUIs to give people the language equivalent of grunting and pointing (point and click), which is far less versatile than learning to speak.

Shell scripting, on the other hand, is a language. The fluent can instruct the computer to do precisely what they intend, instead of relying on a limited number of pre-defined behaviors.

I have nothing against GUIs but I am forever skeptical of "engineers" who don't speak a shell language fluently.

Code bloat, yeah.

worse is syntax and grammar bloat

It is more than bloat, it is lossy. It is like making wishes to a malicious genie.

Them: "I want to be happy for the rest of my life."

Genie: delivers a lethal dose of morphine.

After enough abstraction, we no longer know what we are asking the machine to do. It may be what we intended, or it may not. Most of the time it isn't exactly what we intended, but it's close enough.

Close enough is great, until you switch timezones, or character encodings, or trip over one of a zillion assumptions you had no idea were implicit in the request you made.
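
A minimal Go sketch of one such implicit assumption (the date and zone here are made up): the same wall-clock string parses to two different instants depending on a timezone the request never stated.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05"
	const stamp = "2024-06-01 12:00:00" // no zone info in the string

	// time.Parse assumes UTC when the input carries no zone
	asUTC, err := time.Parse(layout, stamp)
	if err != nil {
		panic(err)
	}

	// time.ParseInLocation pins the same digits to a named zone
	ny, err := time.LoadLocation("America/New_York")
	if err != nil {
		panic(err)
	}
	asNY, err := time.ParseInLocation(layout, stamp, ny)
	if err != nil {
		panic(err)
	}

	// same "time" as written, four hours apart as instants
	fmt.Println(asUTC.Unix() - asNY.Unix()) // -14400
}
```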

i like to broaden the nature of evil to include stupidity and retardation as minor forms of it. the AIs are retarded. as such, their judgement can be far less competent than even an average human's.

there has been some hay made about AI being "summoning Leviathan", as Musk once said. no, this is a minor demon, all the way down at the bottom of the scale, alongside the demons of stupidity and its sources: drugs, disease, and deliberately crafted bad logic that average-to-low-intelligence people can't untangle, and that causes them to unconsciously behave in ways that are effectively evil.

part of the problem is that the memory/encoding system our brains use is capable of storing a shitload more data than even a trillion-plus-parameter LLM could possibly hold in its few dozen terabytes. we have billions of brain cells, their signals and modulations are analog, and the encoding is holographic, meaning that changing the carrier wave changes the "storage area" in use. what LLMs use is a type of hash function that approximates the paths a query's result generation will produce, and it requires a random seed to start the process. entropy is wonderful for blocking unwanted access, but it's a foundationally bad start for something you want to have human-like intelligence.

LLMs are to a cow brain what a teletype is to an 8K, 240Hz OLED monitor: pixellated, blurry, unclear, and quick to produce confusing garbage.

IMO real AI would require a different kind of hardware, one that uses analog signals and varying voltages, probably a bit like current flash storage systems, except that instead of trying to flatten it to a binary system you exploit its analog nature and put gates on it that connect to neighbours. then you would get something more like a real brain.

the 37% hallucination rate of LLMs is literally caused by rounding errors and the insufficient precision of even 256 bits. the genius of brains and nervous systems is truly epic, and i personally just question the benefit of it all. only people who WANT slaves think it's great. the rest of us, well, i'm happy to plod along with my grey matter, though sometimes these pixellated, blurry and confusing synthetic brains do actually help me work faster. but i think that working with other good and competent people to fill the gaps in my skills, instead of using an LLM for it, would be better.

it's notable that there is a particular mindset and stereotypical person who is obsessed with machine learning and LLMs and all that junk. at base it's misanthropic. and 100% atheistic.