Researchers are incentivized by citations, and startups by adoption. No one will be incentivized by truth and robustness until the lack of them starts causing real damage.
We should all remember that 🫡
A scientific paper was published about the unexpected emergence of new capabilities in artificial intelligence.
By studying large language models, researchers found that as model size grows, new capabilities appear that smaller models simply do not have, as if something changes inside and the model becomes competent at tasks it could not handle before.
A simple illustration: when you drive a car and press the gas pedal, the speed increases gradually. But imagine that past a certain point of pressing the pedal, the car turns into an airplane and takes off into the sky. The "airplane" was nowhere to be found in the car at low speed. The rise in capability is not gradual but sudden.
Scientists do not yet know why this transition happens, but they are working to understand its causes.
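To make the contrast concrete, here is a tiny Python sketch. The numbers and the threshold are invented for illustration, not taken from the paper; it simply compares a metric that improves smoothly with scale against an "emergent" one that sits near chance until a scale threshold and then jumps:

```python
import numpy as np

# Hypothetical parameter counts, from 1e6 to 1e11 (illustrative only).
scales = np.logspace(6, 11, 6)

# A metric that improves gradually with scale.
smooth = np.log10(scales) / 12

# An "emergent" metric: near-random below a made-up 1e9 threshold,
# then a sudden jump above it -- the car-to-airplane picture in numbers.
emergent = np.where(scales <= 1e9, 0.02,
                    0.02 + 0.9 * (1 - 1e9 / scales))

for n, s, e in zip(scales, smooth, emergent):
    print(f"params={n:>12.0f}  smooth={s:.2f}  emergent={e:.2f}")
```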
The paper 📄: https://t.co/p9Wst957f6