We only have 3 years left until AI destroys or severely harms humanity, according to a former OpenAI employee. https://futurism.com/the-byte/openai-insider-70-percent-doom
I think this one is serious. AI is really not something we should be racing to develop...prudence and caution are warranted.
"Just because you can doesn't mean you should...."
I just don't think extremely good predictive text can become sentient. I'm not an expert by any means.
Why would you assume that?
Seems clear to me that a) humans process information, using what they already know to make assumptions about new situations, and b) AI does *exactly* the same...
Therefore...
In their current form, these "AIs" are only as good as the data they've been fed.
Agree 100%, and yet they still IMHO present a significant danger...
Just like virus research, AI is being developed without any real understanding of the risks (or, perhaps better stated, while ignoring the risks).
It's that arrogance that gave us COVID, and that same hubris is quite evident in the AI field.
Recklessly chasing profit margins is not compatible with maintaining safety margins...