a big part of why i'm skeptical about AI's actual utility is fairly simple

i am something of a specialist in data encoding, and secondarily, computational cryptography, and in these areas you get to learn a lot about combinatorial complexity

creating highly compressed representations of data that fit historical records is easy

jpeg, mpeg, aac and similar media encoders do precisely this: they analyse a data set and can massively reduce the number of bits needed to represent a given piece of raw binary encoded image or sound data
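a quick way to see the underlying principle is with a lossless compressor (a rough analogy only; jpeg and aac use lossy transform coding, but the rule that redundancy equals compressibility is the same). a minimal sketch:

```python
import os
import zlib

# Data with a repeating pattern collapses to almost nothing;
# patternless noise barely compresses at all.
patterned = b"0123456789" * 10_000   # 100 kB of pure structure
noise = os.urandom(100_000)          # 100 kB of randomness

print(len(zlib.compress(patterned)))  # a few hundred bytes
print(len(zlib.compress(noise)))      # still roughly 100 kB
```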

many of the new swarm of trading bots use these new AI systems, and the mathematics of training predictive-text and machine-learning neural networks is very closely related to that of data compression; both suffer the same kind of error-ratio problem

but on top of this, you simply cannot encode a pattern that you have never seen before

you can literally feed every bit of data about price movements and related information into an AI system, and it's not gonna help you in that 0.001% of the time when something that has never happened before, happens
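the failure mode is easy to demonstrate with a toy predictor. this is a hypothetical sketch, not any real trading system: a frequency model trained on a history of symbols simply has no answer for a symbol it has never seen.

```python
from collections import Counter, defaultdict

def train(history):
    # Count, for each observed symbol, what followed it.
    model = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, context):
    counts = model.get(context)
    if not counts:
        return None  # never-before-seen context: the model is blind here
    return counts.most_common(1)[0][0]

history = "up up down up up down up up down".split()
model = train(history)
print(predict(model, "up"))     # a guess based on seen history
print(predict(model, "crash"))  # None: "crash" never happened before
```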

for perspective, 0.001% of a week comes to about 6 seconds every week that this system will not see coming
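checking that back-of-envelope number (a week is 604,800 seconds, and 0.001% is a fraction of 0.00001):

```python
# 0.001% of a week, expressed in seconds
seconds_per_week = 7 * 24 * 60 * 60       # 604800
novel_fraction = 0.001 / 100              # 0.001% as a fraction
print(seconds_per_week * novel_fraction)  # about 6 seconds per week
```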

so, long story short, AI tech is not going to substantially change the real effects of "the hand of God" or "Fate" on our lives, in those moments that He throws a new thing into the mix

a machine is less likely to be able to account for this than a human who occasionally gets a whisper from Him that gives the hint how to work with the novelty

a society that entirely depends on the function of thinking machines will become frozen in time, and will inevitably decay into entropy, without the ability to account for the new entropy that flows into our universe. no matter how fast it can accumulate the novelty, it will always be the defender, not the attacker. and that is all assuming you can keep feeding it the energy it needs to do this

GPT models are good at speech to text and text to speech. I read GPT was designed for those kinds of things.


the name literally tells you: Generative Predictive Text

it's a type of path discovery algorithm that is built from grammar trees; it uses a special type of hash function that produces an approximation of the Hamming distance between two nodes on the semantic graph, which changes as you walk it, based on a random seed

the technology is really primitive, it's just a moderately functional natural language processing system, i prefer to call it "text mangler" because it literally eats a pile of input and then does things to it that let you pull a spaghetti string out of it that makes sense

its predecessor was the Markov chain. back in 2004, in my customary IRC chat at the time, the sysadmin deployed a markov chain bot that fed on our messages and spat back mangled versions of our text. it was quite funny because i could recognise it, and because i was one of the most prolific writers, it regurgitated a lot of expressions that i used
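for flavour, a word-level markov chain bot of that sort fits in a few lines. this is an illustrative sketch, not the 2004 bot itself:

```python
import random
from collections import defaultdict

def build_chain(text):
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def babble(chain, start, length=10, seed=4):
    # Walk the chain from `start`, picking a random observed follower each step.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the bot eats our words and the bot spits our words back at us"
chain = build_chain(corpus)
print(babble(chain, "the"))  # mangled but locally plausible text
```

the output only ever recombines fragments it was fed, which is exactly why the bot's text was recognisable.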

that's also why if you look at the list of npubs that have muted me among them are two GPT bots :D

Generative Pre-trained Transformer, but your point remains.

And also to your point, anything it generates depends on its pre-training. So the more content that gets created with GPTs, the more of that content ends up in the pre-training of later GPTs, and a lot of noise results.

yeah, and it's already bad, as it is

i can pick out a typical 3-paragraph output from it just from the first words of each paragraph. it's that fucking repetitive; anyone who can't see this is just dumb

Surely a model can be trained to recognize text from the most common models out there. Then we can run this model in our browser and use it to auto-block pages that were written by models.

i'm not sure you fully comprehend how much energy budget you need for such inane nonsense

I can hear my fans whirring as the model runs haha. I do have an idea because I run ollama locally. It is quite laggy and whirry.

And that's just running the already trained model. I assume it's something like millions of times more energy to generate the training data and then train.