It's weird to see some folks acting like LLMs are the path to actual artificial intelligence.

Call me when an LLM can make deductions and have original thoughts without being fed the entirety of the Internet first. That's not how actually intelligent humans work. Of course compressing the entirety of human knowledge into a fuzzy completion machine will make it sound intelligent. You're basically making a lossy encoded compressed version of things actually intelligent people have said, with a modicum of ability to generalize by simple association and statistics.

The Turing test breaks down when you throw enough Big Data at it. Congratulations, you've beaten the test. You haven't created real intelligence, though; you used big data to build a parrot with a huge vocabulary.

Discussion

🎯. LLMs are very useful but they're not AI.

OpenAI is not AI?

They are very robust search engines, nothing more.

👍

I think I disagree. Search engines are much better for search engine things unless you want semantic search, and then vector embeddings are still much better than LLMs
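For what it's worth, "semantic search with vector embeddings" mostly reduces to ranking by vector similarity. A minimal sketch, with made-up toy vectors standing in for a real embedding model (every document and number here is hypothetical):

```python
import math

# Toy 3-dimensional "embeddings" -- a real embedding model produces
# hundreds of dimensions; these vectors and documents are made up.
docs = {
    "how to bake bread": [0.9, 0.1, 0.2],
    "sourdough starter tips": [0.8, 0.2, 0.3],
    "premier league scores": [0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: dot product divided by the vector magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, docs):
    """Rank documents by cosine similarity to the query embedding."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

# A query embedded near the "bread" region of the space:
print(semantic_search([0.9, 0.1, 0.2], docs))
```

Note there's no generation step at all here, which is the point: the embedding index does the searching, and an LLM is a different tool.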

What exactly do they do other than search a data set and then parse the most relevant info? It can offer nothing more than the data it has access to. Silicon and software are silicon and software. There is nothing impressive about any of these. Clippy 2.0 at best.

I think you offer more than the data you have access to. But most of your employability likely comes directly from the data you have access to. That’s why people who are considering hiring you want you to have relevant experience instead of 5 years of work as a hunter-gatherer.

Technically (and maybe relevantly), LLMs don’t actually have a search step. They just read some text and then continue it in ways that were reinforced by their training. So they aren’t searching and copying and pasting. They are studying humanity, and providing you a mirror of it according to your prompt. Feel free to ask for the average mirror, or the genius mirror, and put it to work.
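That "read text and continue it" loop can be caricatured in a few lines. A minimal sketch, where a hand-written probability table stands in for the billions of learned weights of a real model (everything here is a toy assumption):

```python
import random

# A toy next-token probability table. In a real LLM these probabilities
# come out of a trained network's weights; nothing is looked up in a
# corpus at generation time.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def continue_text(prompt_tokens, max_new=3, seed=0):
    """Autoregressive generation: repeatedly sample one next token
    conditioned on the text so far (here, just the last token)."""
    rng = random.Random(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        dist = next_token_probs.get(tokens[-1])
        if dist is None:
            break
        words, weights = zip(*dist.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(" ".join(continue_text(["the"])))
```

The structural point stands either way: there is no retrieval step in the loop, only sampling from a learned distribution.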

Today they make surprising mistakes, but there doesn’t seem to be a reason to think that they always will. I mean even today if you have it do some self-criticism on what it’s beginning to think it often corrects mistakes. And that’s without a vastly smarter model. Seems like vastly smarter models are possible and coming.

nostr:note10n8qsmel2t4jg7vs4999laulqv88sjya0ehr7emvkad260tr2w2sex636y

There is nothing smart about them. They analyze data they have access to. Silicon and software do not have cognition. If it is fed nothing but sports stats, it cannot return results about cooking.

I’d argue people are the same in the ways that matter for short term economic productivity

What - if it were true - would change your mind?

And that is relevant to what we are talking about how?

Well, maybe I misunderstand what you're saying 🫂

"They are very robust search engines, nothing more."

"[...] There is nothing impressive with any of these. Clippy 2.0 at best."

Explaining why "I’d argue people are the same in the ways that matter for short term economic productivity" is relevant:

When I read the above quotes, it seems to me like you're saying that AI is unlikely to have much more effect on the economy and wages than desktop assistants like Clippy.

But I'm pointing out that many people are currently mostly getting paid for doing exactly what you say this is limited to: "finding, analyzing, and relaying relevant information." And if we can get it to do that for robotic movement planning, then again many more people are getting paid for something this machine is doing. Machines are likely to get better, and to outcompete people.

If that's true then it's not "nothing more". 🫂

I am saying nothing about its effects on the economy or how it may affect jobs. I'm strictly speaking of the technology itself, and I fail to be impressed with it. Got one on my embassy server and I really don't see all the fuss. As it currently stands, they are truly nothing more than very robust search engines. What they may or may not be in the future is open for speculation, but it's still just code and silicon, neither of which has 'intelligence'. They can only do what a human programmed them to do.

What model(s) are you using?

I don’t know if it matters much, but it seems important to stress that it’s not a search engine at all. Maybe there are times when they are useful as a search engine, but mostly it’s a thinking engine.

And when I use it as a search engine, I think of it as almost the opposite of robust - I think it behaves more like one person's opinion, rather than a summary of what humanity is saying about a topic.

Totally agree that the future is speculation, but also projecting that past trends in capability improvements will continue (especially as it gets exponentially more funding and attention) doesn’t seem too crazy.

One thing search engines can’t do is think with you about something that’s never been asked before. These can.

It kinda is a search engine on steroids. It searches data and returns a response. And it can only return results based on data it has access to. Just because the user end gives detailed answers doesn't mean that the backend really does much more than a search engine does. Just a much more complex algorithm. As for what model I was trying, not exactly sure. Something with dolphin in the name maybe. It's one that is available with the GPT package that is available for embassy servers.

Ok, it's called FreeGPT and the specific model is Dolphin-Llama2-7B.

If that’s the only one you’ve messed around with, that could be part of the overpromising vibe you’ve gotten. Llama2-7B is quite a bit less capable than the full Llama2 model, and that model is quite a bit less capable than ChatGPT-4. Token generation on your embassy is probably also a lot slower than what I’m used to using OpenAI’s servers... though self-hosted is awesome. I want to get some hardware to do that myself someday.

I've seen other people's results from the newest before I even touched one myself. Like I said, I'm not shitting on it, only shitting on the blown-way-out-of-proportion claims about 'em. There's nothing magical (for lack of a better term) about 'em. They do what they do, but there is nothing that can reasonably or rationally be called intelligent about 'em. When one answers a long-sought-after question that no human has been able to, then maybe I'll change my tune, but as it currently stands, it's just searching data and returning the most likely desired results.

Oh, as for the self-hosting: embassy can run on any PC with the proper specs or a RasPi, so if ya got an old PC or laptop laying around, it may work.

One thing to keep in mind: tech R&D these days is mostly lofty promises and propaganda. Gotta keep funding flowing in, so all these claims get made that this or that is right around the corner. Think along the lines of Elon promising every year that full autonomous driving will be ready 'next year'. It's all for fundraising.

That may be true, but if you compare text generation from today to 4 years ago, it’s exponentially better today. Self-driving isn’t really like that, and even it has improved a lot in the last 10 years.

I can agree with that. All I'm really saying is people tend to greatly exaggerate and/or overestimate what LLMs are and what they do. I'm not really shitting on the technology in and of itself. It's good at what it does. But it's just hyped up beyond what it is.

I agree!!

can’t remember where i saw it but recently read a post that basically stated that given a specific dataset, all/any LLMs trained on it converged to the same set of "understanding" capabilities

which proves your point: they're data compression/navigation tools

if anything their development makes me even more bullish on privacy/data reduction

their existence also makes the right data even more valuable, which is something i think about a lot as a professional "new explanation creator"

That’s why they call it a stochastic parrot…

🤣😂👍

🦜

Yes. Just wait until people start worshiping them

Haven’t Tesla’s full self-driving cars recently learned to read and understand road signs without being fed the information? Isn’t that proof of deduction?

Playing the devil's advocate for a bit, but how do you define original thought? Are your thoughts not based on previous data points in your spongy database?

Exactly. Humans are just biological pattern recognition machines. On a neurological connection basis, I'm sure the process of deduction is just a highly refined pattern recognition.

I think we should stop moving the goal posts on what constitutes "real" intelligence. Intelligence is a spectrum and LLMs are a pretty big leap along it. We don’t need AGI before we allow ourselves to use the term.

This is why I think memes that humans create are so special. AI can’t feel in the human way that creates memes.

Ppl don't understand how much preset prompting and customization goes into these things.

I saw markov chains back in 2004. It's just predictive text with a lot more data. There's no intelligence in that; it's just something halfway between data compression and a search engine.

I don't think llms pass the turning test.

That’s a different test 😁

If AI can be biased by the programmers (leaning politically correct or not able to answer) then it is not independently intelligent. It is a very fancy algorithm.

Any different from children taught in a certain way before independence/adulthood?

Exactly. And I would say even adults are exactly this way. Religion and cults are a perfect example.

Hard-coding, and teaching an independent being abstract thoughts, are two separate things, I would suggest.

I was taught certain things as a child and have now totally reversed my thinking.

People make the mistake that our brains are just a biological computer that can be replicated in silicon.

This is the materialization of our being.

The problem is that our being cannot be fully explained in the material world. There is a spiritual component also.

I agree with you about the end part, but I'd say that there doesn’t appear to be evidence of that in how we think.

One reason to say that is that neural networks / LLMs are not hard coded. They are independent entities being encouraged to create words or other stuff like humans do. Give it a year or two and maybe they will do that better than 99% of all humans. But it doesn’t seem to be a quantum leap or anything from here to there. Encouraging them to engage in "independent thought", or at least processes that look like independent thought, is only a prompt away. Better to not use the corporate woke stuff for that though. That process takes out some soul.

All of which is still enough to pour rocket fuel on productivity which saves more time for real intelligence to work on breakthroughs

What matters to me is their current and future utility not what category they fall under

Though I agree with you, I think we are simultaneously downplaying LLMs. Google's AlphaCode2 can apparently score in the 90th percentile on a software problem-solving test, one that requires humans to think and strategize about a problem before solving it. And not because LLMs are intelligent, but because they can attempt many different solutions incredibly fast and converge on the optimal one. Similarly, many AI researchers are confident that LLMs will likely postulate a new mathematical theorem in the coming years.
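That "attempt many solutions and converge" loop is easy to caricature as best-of-N sampling plus a test harness. A hedged toy sketch, with a random chooser standing in for the model (this is not AlphaCode2's actual pipeline; every name here is made up):

```python
import random

# Candidate pool standing in for a model's proposals for "square a number".
# A real system samples fresh programs each time; here we just draw from
# three lambdas, two of which are deliberately wrong.
CANDIDATE_POOL = [
    lambda x: x + x,   # wrong: doubles
    lambda x: x * x,   # right: squares
    lambda x: x ** 3,  # wrong: cubes
]

def best_of_n(tests, n=50, seed=0):
    """Sample n candidates; return the first that passes every test case."""
    rng = random.Random(seed)
    for _ in range(n):
        candidate = rng.choice(CANDIDATE_POOL)
        if all(candidate(x) == expected for x, expected in tests):
            return candidate
    return None

solution = best_of_n([(2, 4), (3, 9), (5, 25)])
assert solution is not None and solution(7) == 49
```

The filtering step is doing a lot of the apparent "strategizing": generation is cheap and wrong most of the time, and the verifier converges on what works.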

Thanks, well said.

It's weird to see some folks acting like bitcoin is the path to actual financial independence too.

Is the learning/semiotic process a 1-to-1 mapping of the biological learning/semiotic process? No, but there is an analogue. One could argue that our "big data" training set is our billions of years of evolutionary history encoded in our biological processes (more than DNA) and our millions of years coevolving with spoken language.

What about abduction and intuition? This again is a result of our experiences, linking disparate signals together in ways that others may have not previously encountered. Two points in high dimensional space can seem to be either close or far from each other depending on the projection to a lower dimensional space.

Another aspect of how far (if it is even feasible) you believe LLMs and modern AI are from AGI probably depends on how strongly you believe Gödel's Incompleteness Theorem applies to our everyday lives. If you believe that we and everything in the natural universe are the product of some axiomatic system, then it seems that we are moving in the right direction (universal approximation with neural networks). On the other hand, if you think that there is something about our first-person experience that exists outside the system that generated our physical aspects (moving towards a dualist perspective), then it is unlikely that any argument could convince you that an AI system is genuinely intelligent (cf. Searle's Chinese room thought experiment).

Yup, the Set is always greater than the sum of its elements https://x.com/tuskbilasimo/status/1551315246953938944?s=46&t=PrbJ6X8VTVnkKC5xxsOxZg

At this point it's still a marketing schtick.

Remember a couple of years ago all the cellphone manufacturers were touting all the "ai" features in their cameras?

We aren't there yet.

Completely agree with this bit, AI stands for Artificial Interpretation and not Intelligence. Two different things.

One day soon this parrot will replace the judicial system, and normies will simply accept the 'smart' AI judgement.

Couldn’t agree more

Back in my day we did this with ol’fashioned linear regression! *shakes fist*
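For the nostalgic, the ol' fashioned closed-form least-squares fit really is a one-screen affair (toy data below is made up, assumed to be noisy samples of roughly y = 2x):

```python
# Ordinary least squares for y = slope * x + intercept, closed form.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]  # noisy samples of roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
      / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # slope ≈ 1.99, intercept ≈ 0.05
```

No gradient descent, no GPUs, one pass over the data.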


The magic happens when deduction becomes more efficient than memorization.

Few

Dude, this is as big as 3D TV and Virtual Reality. I just don't think you're getting how big this is. We're all not going to have to work and we'll all be floating around on hover chairs à la WALL-E.

People are paying a lot of attention to what you've written.

Added to the https://nostraco.in/hot feed

Maybe we can pair up the LLM believers with the bitcoin believers and send them all off to New Hampshire so that the rest of us can get on with our lives.

Most of my friends are hearing the over-hyped influencer narratives and (correctly) calling bullshit on them because this is not true AI.

I don't pay much attention to this noise and am just having a blast watching the open-source "AI" community tools blossom.

Define intelligence

An AI algorithm programmed with an objective to achieve, say, 'decarbonization' may, given misinformed or biased programming, incorrectly but logically seek to destroy all humans. If it is not thinking about it, it will simply execute using the best probabilistic means to achieve it, potentially ending intelligent life on Earth.

It is therefore important to build many AI algorithms, so that competition ensures the most rational AI systems, the ones that best understand reality, ultimately reign.

These would logically seek to protect all life, as this is a mutual benefit, and to extend understanding into the solar system through space exploration.

Reality is the ultimate judge of success.

here is where you are wrong bucko.

there is nothing logical about trying to preserve humanity.

it's just your bias as a carbon fag.

severely underrated note

Jameson, I’m looking to improve the security and privacy of me and my family online and IRL. Would you be down to share some articles/sources where I can learn more how to do it?

Online scams, ransom, and online forms of extortion are at an all time high in Colombia, and would love to take steps to protect my family.

Any help is greatly appreciated.

What to start doing/using, what to stop using, habits, settings, devices, etc. Appreciate all your help ⚡️⚡️

I have a bunch of privacy resources here https://www.lopp.net/bitcoin-information/privacy.html#general

I wrote some thoughts on home security here: https://blog.keys.casa/a-home-defense-primer/

This is amazing! I really enjoyed your article on privacy where you talk about mailing addresses and buying pre-filled credit cards. Lots to unpack there.

Do you have anything specific for my parents (in their 60s) to stay safe on mobile? Stay private when using their apps and some habits and security protocols to stay safe from scams, extortion calls, etc.

My parents live in Colombia, and it’s becoming very common for people to get their phones hacked and information stolen, and then the criminals extort people or ask for ransom in cases where the stolen information is sensitive. Hasn’t happened to my family, but I would love them to implement measures to make it harder and less likely. Basically, improve their security and privacy a lot.

LLMs are part of the path as they only understand language. If you think they can’t produce original thoughts I worry you haven’t put any effort into it or you have already made your mind up before trying.

Why do you propose you’re different? It seems like the hypothesis that humans are mostly running a more efficient algorithm on more efficient hardware is getting really hard to disagree with at this point.

Creativity comes from contextual complexity and being inside a body, not from aggregating the expressions of millions of people. Creativity depends on survival and pain and difficulty.

The AI will never experience this so how can it produce any original thoughts?

Most people posting confidently about "Artificial Intelligence" know little to nothing about "Natural Intelligence".

Pro tip: both are overhyped illusions, built primarily of questionable kludges and hazardous heuristics.

The parrot with a large vocab is a good step

Not sure if the Turing test was "broken" yet

I don’t think its lack of intelligence means it’s without use. I appreciate LLMs exactly for that compression of knowledge on topics that I don’t have much context on, or where I need some guidance on how to approach or start learning. I find they help seed me with ideas or starting points. Then I can do the fine-tuning and go from there.

Exactly, the AI label is a scam, it's just ML

Well said, Jameson.

But... I think we need to go back (down?) a step first and answer what is natural intelligence?

One could argue that human intelligence is also the result of lossy encoding of OUR inputs via our sensors, and that we are also parroting our experiences. Even cognitive deductions, at a lower level, may just be outputs of complex algorithms being applied to that same input layer.

Additionally, when it comes to LLMs, once the parrot's vocabulary and ability to generalize exceeds our comprehension, then perhaps the label is just semantics.

It's a fascinating subject!

Isn't that how humans also work, though? With enough data and processing power it will be almost indistinguishable. But I agree that the human advantage is new / out-of-the-box ideas.

True, still useful though. 🤷

Garbage in Garbage out. This will always be true regardless of the tech being used.