Fun story here: some researchers working on understanding how neural networks even work discovered that they could modify the parameters of a network such that it “forgot” where the Eiffel Tower was! Neural network knowledge and decision-making processes reside in the soup (pun intended 😏) of those parameters in a way that’s very difficult to inspect, but someday that kind of editing may be doable far more broadly than just this Eiffel Tower example. And we’d better hope it’s possible. We need some way to ensure these things are safe!
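To build intuition for what “editing knowledge by modifying parameters” even means, here’s a deliberately toy sketch (not the actual research technique): a tiny “model” whose entire knowledge is a weight matrix mapping subjects to locations. Overwriting one row of the matrix changes what the model “believes” about the Eiffel Tower. All names here are made up for illustration.

```python
import numpy as np

# Toy "knowledge store": row i of W maps subject i to a location.
subjects = ["Eiffel Tower", "Big Ben", "Colosseum"]
locations = ["Paris", "London", "Rome"]

W = np.eye(3)  # initially, subject i is associated with location i

def lookup(subject):
    # One-hot encode the subject, multiply by the weights,
    # and read off the most strongly associated location.
    vec = np.zeros(3)
    vec[subjects.index(subject)] = 1.0
    return locations[int(np.argmax(vec @ W))]

print(lookup("Eiffel Tower"))  # → Paris

# "Edit" the parameters: overwrite the Eiffel Tower row so the
# model now associates it with Rome instead of Paris.
W[0] = np.array([0.0, 0.0, 1.0])
print(lookup("Eiffel Tower"))  # → Rome
```

In a real network the association isn’t stored in one tidy row, it’s smeared across millions of parameters, which is exactly why locating and surgically editing a single fact was such a surprising result.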