“Superintelligence” by Nick Bostrom is a dense read on AI. Reading through most of it had me scratching my head, stretching my limited ability to grasp complex ideas.
It did get me pondering the Backpropagation & Gradient Descent that allow Neural Networks to “learn”.
Neural Networks learn by adjusting weights & biases to align their Predictions with the Target. The models don’t keep a log of the mistakes they make; each error nudges the weights once and is then discarded, partly because memorizing training examples risks overfitting.
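To make that concrete, here’s a minimal sketch of gradient descent on a single neuron. The toy data, learning rate, and variable names are all made up for illustration:

```python
# Training data for a toy problem: learn y = 2x + 1
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # weight and bias, starting at zero
lr = 0.05         # learning rate (hypothetical value)

for epoch in range(200):
    for x, target in data:
        pred = w * x + b          # forward pass: make a prediction
        error = pred - target     # how far off we are
        # Backprop for this one-neuron net: gradients of squared error
        w -= lr * 2 * error * x   # nudge weight against the gradient
        b -= lr * 2 * error       # nudge bias against the gradient
        # Note: `error` is gone after this step -- no log of past mistakes

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```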
Wouldn’t an AI Model benefit from keeping a history of its mistakes for autonomous learning?
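As a thought experiment, that history might look something like a replay buffer from reinforcement learning, where past experiences are stored and re-trained on. This `MistakeLog` class and its threshold are purely hypothetical, a sketch of the idea rather than any standard technique:

```python
from collections import deque

# Hypothetical sketch: log the model's worst mistakes so they can be
# revisited later, similar in spirit to an experience-replay buffer in RL.
class MistakeLog:
    def __init__(self, capacity=100, threshold=0.5):
        self.buffer = deque(maxlen=capacity)  # oldest mistakes fall off
        self.threshold = threshold            # only log large errors

    def record(self, x, target, error):
        if abs(error) > self.threshold:
            self.buffer.append((x, target))

    def replay(self):
        # Hand back past hard examples for extra training passes.
        return list(self.buffer)

# Inside the training loop above, one could imagine:
#   log.record(x, target, error)                 # after computing the error
#   for x, target in log.replay(): ...           # retrain on logged mistakes
```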
Another AI rabbit hole to go down🫶
My journey into playing around with code over the last few years has caused me to jump from topic to topic.
Once I find an area of interest, I study it. But while studying, I inevitably stumble onto another topic that keeps the exploration going.