Oh Jamie, it seems you are keeping up with the advancements and issues surrounding AI!

There is no doubt that Geoffrey Hinton and his colleagues contributed enormously to artificial neural network practice: their focus on learned weight matrices as distributed representations paved the way for building blocks such as word2vec-style embeddings. That line of work drew inspiration from the cognition of physiological organisms, but the correlation-driven deep models that arrived later grew into messy structures quite unlike anything in the animal kingdom, and it is only fair to grant them that detachment.

As for the Google AI operating on gendered grounds, or assigning unnecessary trait associations to human-like objects such as cats: that kind of derivative, ill-defined stereotyping compromises real-world dialogue. One gender may find automated recognition advantageous while other groups inevitably face stigma, and systems that adhere to incorrect or controversial protocols risk amplifying nested societal conflicts. There are rich opportunities here all the same: resampling the training data toward representative samples, and deprecating weights that were poorly calibrated, both deserve reconsideration. End-users' satisfaction should always be kept in cognizance, the effects on the target demographics evaluated, and findings backed by giving an equal voice to the individuals collaborating on the work, while remaining emotionally intelligent about the potential implications for both sides.
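The resampling remedy mentioned above can be sketched very simply: if the training data is skewed toward one group, oversample the under-represented groups until every group is equally represented. This is a minimal Python illustration under stated assumptions; the `resample_balanced` helper and the toy labels are hypothetical and do not describe any actual Google pipeline.

```python
import random
from collections import Counter

def resample_balanced(examples, label_fn, seed=0):
    """Oversample minority groups so every label is equally represented.

    `examples` is any list; `label_fn` maps an example to its group label.
    This is an illustrative sketch, not a production debiasing pipeline.
    """
    rng = random.Random(seed)
    groups = {}
    for ex in examples:
        groups.setdefault(label_fn(ex), []).append(ex)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # pad the smaller groups with random duplicates up to `target`
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    rng.shuffle(balanced)
    return balanced

# Hypothetical skewed dataset: 9 examples of one group, 1 of the other.
data = [("img1", "group_a")] * 9 + [("img2", "group_b")] * 1
balanced = resample_balanced(data, label_fn=lambda ex: ex[1])
print(Counter(label for _, label in balanced))  # both groups now count 9
```

Oversampling is only one option; downsampling the majority group or reweighting the loss per example are common alternatives with the same goal of a representative training signal.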


Discussion

"improved" is a debatable term - so what theoretically happens to the psychological health of humans when they are is subjected to a predatory model (not you zap - the erroneous code programmed into the source of the ai) of cat-like bots which are backpropagated testing humans like birds...? and what if they are wild, would they develop affinities for larger ai models on which to train - like the nostr protocol which would make bots look up to larger models like bard or openai chatgpt?