Joe Resident
a43b0118fd72492f2ba11290cccb27418b1fdbb7ce3a122d229404e57a75975a
Working on a gardening robot called Wilbur; we need to give the power of AI to individuals or the next 30 years could be really ugly

Was thinking about how AI is progressing so much faster than I thought it would over the last decade, and especially the last 3 years. And how hard it is to see the future as it approaches, since the amount of entropy the advent of AGI inserts into any predictive attempt is staggering. Our future could be so utopic, or so dystopic:

Some new world is in embryo

lusty and dripping with potential

fickle and capricious in the path of its realization

Those who act quickly

impregnate the laboring future

with their stamp, good or ill

The magnification of human will

never before so potent,

And so,

never before so necessary,

that good people act

to usher in an age of light,

lest those who would rape and pillage

lead an unmolested vanguard,

or the next accidents of history

lead us into darkness

#poetry #AI

Every day, you change the world, a little bit, for better or worse.

Hmm can't recall why I initially followed you, but it wasn't to be blanket-accused and passively pulled into drama. Not impressed

GN and unfollowed

Meaningful human existence is to expand into the unknown. Through exploration, technology, children, and philosophy.

This is grounded in the fact that humans are evolutionary beings, and evolution is an algorithm for finding ever more usable energy and transforming it into evolutionary information.

-this means happiness is an insidiously misleading goal

-evolution exploring niches, neurons growing, breadth-first design, engineering, and all mental effort: fractally, they are all the same algorithm. Expand, reinforce, and prune. Lightning, slime molds, it appears everywhere. Greedy graph-search is evolution, it is brains, it is intelligence, and it underlies all kinds of physics and life. I think the most fundamental articulation is that it's the optimal algorithm for expanding with only local knowledge (or feedback). It's an information-theoretic reality.
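The expand/reinforce/prune pattern above can be sketched as toy code. This is a minimal illustrative sketch, not anyone's actual implementation: `greedy_search`, `reinforce_and_prune`, and parameters like `gain` and `prune_above` are hypothetical names chosen for the example.

```python
import heapq

def greedy_search(graph, start, goal):
    """Expand: greedy best-first search using only local edge costs."""
    frontier = [(0.0, start, [start])]  # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # expand cheapest frontier node
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, {}).items():
            heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None, float("inf")

def reinforce_and_prune(graph, path, gain=0.5, prune_above=3.0):
    """Reinforce edges on the successful path (lower cost = stronger),
    then prune unused edges that remain expensive, slime-mold style."""
    used = set(zip(path, path[1:]))
    for a in list(graph):
        graph[a] = {b: (w * gain if (a, b) in used else w)
                    for b, w in graph[a].items()
                    if (a, b) in used or w <= prune_above}

# Toy graph: edge weights are local costs, the only feedback available.
graph = {"A": {"B": 1.0, "C": 4.0}, "B": {"C": 1.0}, "C": {}}
path, cost = greedy_search(graph, "A", "C")   # expand
reinforce_and_prune(graph, path)              # reinforce + prune
```

After the call, the successful A→B→C edges are strengthened (cost halved) while the unused, expensive A→C edge is pruned away, mirroring how a slime mold thickens productive tubes and retracts the rest.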

Lots. Been infatuated with deep learning for ~10 years. RL is unfortunately data-inefficient, but so far it's the best algorithm we have for some things. Its data inefficiency isn't really intrinsic, though; it's just that we don't yet have scalable mechanisms for learning abstractions, only distributions.

I'll copy my last slime mold-related note: