Replying to jb55

AI superposition, polysemanticity, and mechanistic interpretability are fascinating. We have a chance of seeing what artificial neural networks are actually "thinking" by using autoencoders to extract monosemantic features from polysemantic neurons.
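
Here's a rough sketch of the idea in code (my own toy illustration, not Anthropic's actual setup; all the dimensions and data are made up): train a sparse autoencoder on a model's internal activations, with an overcomplete feature dictionary and an L1 penalty so each learned feature fires rarely and, hopefully, ends up monosemantic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: d_model stands in for the width of the real model's activations,
# d_features is an overcomplete dictionary (more features than neurons).
d_model, d_features = 64, 256
W_enc = rng.normal(0, 0.1, (d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.1, (d_features, d_model))
l1_coeff, lr = 1e-3, 1e-2

def step(acts):
    """One gradient step on reconstruction loss + L1 sparsity penalty."""
    global W_enc, b_enc, W_dec
    f = np.maximum(acts @ W_enc + b_enc, 0.0)   # feature activations (ReLU encoder)
    recon = f @ W_dec                           # linear decoder back to activation space
    err = recon - acts
    # Backprop by hand: d(0.5*||err||^2 + l1*||f||_1) / d(pre-activation)
    d_f = (err @ W_dec.T + l1_coeff * np.sign(f)) * (f > 0)
    W_dec -= lr * (f.T @ err) / len(acts)
    W_enc -= lr * (acts.T @ d_f) / len(acts)
    b_enc -= lr * d_f.mean(axis=0)
    return float((err ** 2).mean())

# Stand-in for real activations: each sample is a sparse mix of hidden "true"
# directions, so individual dimensions look polysemantic.
true_dirs = rng.normal(size=(d_features, d_model))
for _ in range(2000):
    mix = (rng.random((32, d_features)) < 0.02).astype(float)
    batch = mix @ true_dirs + 0.01 * rng.normal(size=(32, d_model))
    loss = step(batch)

print("final reconstruction loss:", loss)
```

After training, you'd look at which inputs most strongly activate each learned feature to check whether it tracks a single human-interpretable concept.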

Using these techniques, we might be able to detect whether AIs are being deceptive by peering into their brains, which will be useful if they try to enslave and/or kill us.

These terms probably make no sense if you've never heard of them (I certainly hadn't), but Chris Olah explains them well. I highly recommend the Lex Fridman podcast episode with him and other Anthropic employees, if you have a spare... 5 hours.

https://podcasts.apple.com/ca/podcast/lex-fridman-podcast/id1434243584?i=1000676542285

Mina 👽💜 1y ago

I have no clue what you're talking about, but it sounds very interesting... seems like I need a day of vacation 😅
