Totally agree. I think there’s a high likelihood that the way OpenAI and others start to become profitable is by selling weighting in their training data, so that companies can get the answers they want shown to customers more often.
For me, the bigger issue here is the absolute blind trust being given to the AI Overlords already. I see BIG issues happening as models are retrained, yet people still have that unwavering trust.
Think back to SEO (Search Engine Optimization, for the non-nerds) and how search algos were gamed over time. If you don't think external parties will find ways to influence models, well...
"Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved," the researchers wrote in the paper. "A key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise."
https://futurism.com/study-ai-critical-thinking
#ai #artificialintelligence #technology
Discussion
I was talking about finding ways to do it externally, but the platform owners will no doubt do this as well because it's such a low-hanging revenue stream. And if they're publicly traded, gotta maximize that shareholder value above all else.
Ah yes, I see what you mean. There will definitely be both sides of that.