Our own critical ability will always be the pivotal point in our use of any tool.

If we are to develop AI that even pretends to think, such a program will have to be unpredictable and capable of changing its conclusions. It will need a reasoning ability that can reject non-rational positions. I haven't seen any sign of that.

Having a program accept 2 + 2 = 5 in a conversation is not evidence of thinking; rather, it is evidence of a lack of argumentative capacity. Programmers have to simplify human thinking processes in order to emulate them, and that simplification is a problem with ripple effects.

AI development will therefore attract programmers who believe that human thinking is a simple process. It works like a filtering mechanism: those who underestimate human capacities are the very people most likely to spend their time developing AI.

Many ethical programmers will likely reason along the lines of:

Either I am not competent enough to develop high-quality AI, in which case the effort would be wasted, or I am competent enough, in which case I would risk causing harm. Either way, it doesn't seem like a good use of my time.

It seems hard to overcome this filtering process.

Discussion

But current AI is already very helpful in a lot of applications; no one can deny that. I am not sure I get your point. I think it's about developing a useful tool, and as an unexpected outcome we may create something unforeseen. To me, even if it's early days, it already feels like more than simple machine learning and statistics.

There isn't really any alternative to the human mind when we aim for quality. Other tools can be useful, but the mind is primary.