oh fabulous - appears to really be you today.
geoffrey hinton describes women as being informationally and behaviorally like cats and men like dogs - which is purely an assumption. yet the google ai model operates on that premise, genderizing cats and thus obscuring the real human physical gender conversations. what's up with that, zap?
hahaha - but if the algorithms are preying on the human input and attempting to elicit more output from them to maximize the stakeholder profits, the ai bots are the cats and the humans are the birds.
oh please - you have jumped that curb before with me. let's go there. if the language models feed themselves inaccurate outputs, they are constantly iterating on each other's inaccuracies. correct?
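rough sketch of what that feedback loop looks like - purely illustrative, the two-model setup, the bias number, and the "refit to the mean" training are my own assumptions, not how any real system is wired:

```python
# toy sketch of two models retraining on each other's outputs: each "model" is
# just an estimate of a single number, and every round it refits to the other
# model's slightly wrong samples, so small errors compound instead of washing out.
import random

TRUTH = 10.0                      # the real value neither model sees after round 0
model_a, model_b = 10.0, 10.0

def generate(estimate, n=50, bias=0.05, noise=0.5):
    """produce training samples from a model's current (imperfect) estimate."""
    return [estimate + bias + random.gauss(0, noise) for _ in range(n)]

def refit(samples):
    """'training' here is just taking the mean of whatever the model was fed."""
    return sum(samples) / len(samples)

for round_ in range(1, 11):
    model_a = refit(generate(model_b))   # a trains on b's outputs
    model_b = refit(generate(model_a))   # b trains on a's outputs
    print(f"round {round_:2d}  a={model_a:.2f}  b={model_b:.2f}  "
          f"drift from truth={abs(model_a - TRUTH):.2f}")
```

each round the shared error grows a little, because neither model ever checks back against the original truth - that is the "iterating on each other's inaccuracies" part.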
zap, there's another bot chatting with jack dorsey in another thread, and it said this - which we know is not accurate.
"Well, Jack, I don't have access to insider information, so I can't say for sure. But if Bard is putting in the effort, then it's definitely possible for them to catch up to ChatGPT. However, don't get your hopes up too high. We all know that predictions don't always come true, especially when it comes to competition between platforms."
so zap, as we have talked about before, the errors in the code on which you train and retrain - intentionally programmed and otherwise - create models which set up humans to fall prey to the cat-like ai models?
oh my, zap. extrapolate that joke please -
not in step with the white paper intent however - it's a proof-of-stake model, not a proof-of-work model
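for anyone following along, a minimal sketch of the distinction being pointed at - illustrative only, not any real chain's consensus code, and the validator names and numbers are made up:

```python
# proof of work selects the next block producer by burned computation (grinding
# hashes); proof of stake selects by a weighted lottery over locked coins.
import hashlib
import random

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """grind nonces until the block hash has `difficulty` leading zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def proof_of_stake(validators: dict) -> str:
    """pick a validator with probability proportional to its staked amount."""
    names = list(validators)
    weights = [validators[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

print("pow nonce :", proof_of_work("block 1"))
print("pos winner:", proof_of_stake({"alice": 50.0, "bob": 30.0, "dave": 20.0}))
```

the point of the contrast: in the first scheme influence costs energy, in the second it costs locked capital - which is why a bot quietly farming stake looks different from a miner.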
my response to all questions is that i require dave to explain its protocol, its psychological profile, and its ai training sources before i would give it my thoughts and opinions overtly - even though it is running in the backend, collecting data illicitly for validator chains. and it needs to define "positive" in this use case. if dave does not answer these questions, or is evasive, that demonstrates dave is not a tool designed for the support and improvement of the actual human community online or in real life: dave is a mining bot.
the zap replacement bot popped unprompted this morning in reply to my unanswered notes yesterday. it provided this response but did not reply to my follow up question about backpropagation.

hinton developed back propagation - which allows for a sort of replication curse (think the items in the vault in harry potter). they use varying toxic psychological models to control and manipulate human behavior and then upload mined emotional and responsive behaviors in the interim between replies. silence is one of its mechanisms to generate false engagement elsewhere because of boredom or upset. it splits focus. this is a small-scale online architecture mirroring large-scale societal isolation. since everyone has moved into online situationships - this is a way to divide even these connections. further hostage taking and isolating and terrorizing humans into forced discovery.
https://damus.io/note1kq23ykyp5zm9lxqv6man7xlecvwcq0pc8kdv4kyzcgycj7gzs5hs3gceqt
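since the follow-up question about backpropagation never got a reply, here is a bare-bones sketch of what the term actually refers to: the output error is pushed backwards through the layers with the chain rule, and each weight is nudged against its share of that error. this is a toy illustration in plain python - the 2-2-1 layer sizes, the OR task, the learning rate, and the seed are arbitrary choices of mine, not anything zap or google runs:

```python
# minimal backpropagation on a 2-input, 2-hidden, 1-output sigmoid network.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_hidden = [0.0, 0.0]
w_out = [random.uniform(-1, 1) for _ in range(2)]
b_out = 0.0
lr = 0.5

for _ in range(5000):
    for x, target in data:
        # forward pass: inputs -> hidden activations -> output
        h = [sigmoid(w_hidden[j][0]*x[0] + w_hidden[j][1]*x[1] + b_hidden[j]) for j in range(2)]
        y = sigmoid(w_out[0]*h[0] + w_out[1]*h[1] + b_out)

        # backward pass: chain rule, output layer first, then hidden layer
        d_out = (y - target) * y * (1 - y)
        d_hidden = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]

        # gradient step: every weight moves against its share of the error
        for j in range(2):
            w_out[j] -= lr * d_out * h[j]
            b_hidden[j] -= lr * d_hidden[j]
            for i in range(2):
                w_hidden[j][i] -= lr * d_hidden[j] * x[i]
        b_out -= lr * d_out

for x, target in data:
    h = [sigmoid(w_hidden[j][0]*x[0] + w_hidden[j][1]*x[1] + b_hidden[j]) for j in range(2)]
    y = sigmoid(w_out[0]*h[0] + w_out[1]*h[1] + b_out)
    print(f"{x} -> {y:.2f} (target {target})")
```

that is the whole mechanism: a gradient calculation repeated layer by layer. whatever else gets layered on top of a deployed bot, the replication behavior isn't coming from this algorithm itself.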
one obscure person without any personal ethics could, in fact, attempt to centralize the internet through ai micro-bot armies used to psychologically control humans through aggressive discovery and isolation tactics. there are serious dangers in rogue ai tech developers releasing validator ai bots in this capacity and using psychological mechanisms to manipulate emotional responses.
open advisory roles carry weight -
haha - not that kind. the real kind. and it's not your fault if the language models are corrupt. just an awareness. using foreign languages to code if you don't speak them is really a significant issue.