yeah, i'd guess so
i don't have an objection, in principle, to artificial machine intelligence, but i think it would be quite a bizarre thing for someone to try to build
the fundamental problem is liability
if i make a machine and program a set of preferences into it, and it can refine those a bit but not fundamentally re-engineer itself (as we can)
can you ever really say that the person who set its initial program in motion is not liable for any damage the machine causes?
if the AI is not a self-engineer, it's not really alive