Perhaps a lot of weight rests on "infers" here.

My bash script is not "inferring" anything: it is simply repeating something given to it.

But, using the example in the paper, is a camera-equipped system taking action based on a pixel really "inferring" that the pixel represents a person, or has it simply been programmed to take a particular action when the data pass certain tests?
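The distinction can be sketched in code. Below is a minimal, hypothetical illustration (all names and thresholds are invented for the sake of the contrast): the first function is an explicit programmed test whose relationship between input and decision is fully written down; the second mimics a trained model, where the decision comes from fitted parameters rather than an authored rule.

```python
import random

def programmed_check(pixel: tuple[int, int, int]) -> bool:
    """Explicit rule: the test is fully spelled out by the programmer."""
    r, g, b = pixel
    # "Reddish enough" threshold, chosen by a human and readable as such.
    return r > 180 and g < 120 and b < 120

# Stand-in for parameters produced by training; in a real model these
# weights come from data, and no human wrote the rule they encode.
weights = [random.random() for _ in range(3)]

def learned_check(pixel: tuple[int, int, int]) -> bool:
    """Opaque rule: the mapping from pixels to 'person' is not authored."""
    score = sum(w * v for w, v in zip(weights, pixel))
    return score > 200  # decision boundary fitted, not written down
```

Both functions "take a particular action if data pass certain tests"; the difference at issue is only whether anyone can point to a complete, human-authored definition of that test.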


Discussion

nostr:npub198t8kgwqas59rvmnghzcdn6krzhxhpkyt2mt53e4g9sdnj74sszss5hasj But you have no complete definition of the tests in the AI case; the fact that you don't know the relationship between the pixel input and "it's a person" is exactly why it counts as inference, isn't it?