Replying to jb55

The strong scaling hypothesis is that, once we find a scalable architecture like self-attention or convolutions, which like the brain can be applied fairly uniformly (eg. “The Brain as a Universal Learning Machine” or Hawkins), we can simply train ever larger NNs and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks & data. More powerful NNs are ‘just’ scaled-up weak NNs, in much the same way that human brains look much like scaled-up primate brains.

Kevin's Bacon 1y ago

And its truth depends on what “sophistication” means.
