Hot take, maybe? LLMs and AI are not inherently bad. It really comes down to:

The companies that run them and how they act.

The insane impact that blind trust in them has on multiple financial markets.

The horrible environmental impact of unchecked, unregulated growth with minimal thought put into solutions.

Hallucination is also a bad side effect, but I don't really consider it a real downside as long as you know how to use the tool (and recognize that it is a tool in the first place).

Discussion

The intellectual property monopolies also play a role. If IP were abolished, it would be much easier to reverse engineer LLMs and AI, make them more transparent and efficient, and allow people to self-host them.

IANAL, but note that patents don't need to be reverse engineered (they have to be public) and that reverse engineering doesn't necessarily infringe copyright.

So do reverse engineer them: what you'll find is not patented (if it were, it couldn't be secret), and as long as it's limited to ideas rather than implementations, it is not subject to copyright.

Reverse engineering isn't illegal in and of itself. In fact, some (mild) legal exceptions even exist to support it (though they may not be as strong as they should be).

AI is a tool.

I can use a hammer to frame a house; I can also use it to kill people.

A company can clear-cut an entire forest for timber to make handles for hammers.

Another company could have regenerative policies.

An individual can sustainably harvest timber themselves to make handles.

Should we blame hammers and shovels when a company clear-cuts a forest to make handles for tools?

The printing press made scribes unemployed.

The camera made portrait painters unemployed.

The printing press was used by kings and churches to spread propaganda and brainwash people, just as scribes had been before it.

Over time, it became more and more accessible.

What matters is who has access to the tools and how they use them.