#AI #GenerativeAI #OpenAI #AIBubble #AIHype: "The company expects ChatGPT to bring in $2.7 billion in revenue this year, up from $700 million in 2023, with $1 billion coming from other businesses using its technology.
Roughly 10 million ChatGPT users pay the company a $20 monthly fee, according to the documents. OpenAI expects to raise that price by $2 by the end of the year, and will aggressively raise it to $44 over the next five years, the documents said. More than one million third-party developers use OpenAI’s technology to power their own services.
OpenAI predicts its revenue will hit $100 billion in 2029, which would roughly match the current annual sales of Nestlé or Target.
Like other high-profile tech start-ups of the last few decades, OpenAI is struggling to get its costs under control."
https://www.nytimes.com/2024/09/27/technology/openai-chatgpt-investors-funding.html
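The quoted numbers are easy to sanity-check. A back-of-the-envelope sketch (mine, not the article's): 10 million subscribers at $20 a month is a $2.4 billion annual run rate, roughly consistent with the $2.7 billion ChatGPT figure, and going from $22 at year's end to $44 five years later implies about 15 per cent annual price increases.

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
subscribers = 10_000_000   # "roughly 10 million ChatGPT users"
monthly_fee = 20           # current $20 monthly fee

run_rate = subscribers * monthly_fee * 12
print(f"Subscription run rate: ${run_rate / 1e9:.1f}B/yr")   # $2.4B/yr

# Implied yearly growth to take the price from $22 (end of year) to $44:
growth = (44 / 22) ** (1 / 5) - 1
print(f"Implied annual price growth: {growth:.1%}")          # ~14.9%/yr
```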
#AI #GenerativeAI #OpenAI #AIHype #AIBubble: "If OpenAI does away with the profit cap, it would be redirecting a huge amount of money — prospective billions of dollars in the future — from the nonprofit to investors. Because the nonprofit is there to represent the public, this would effectively mean shifting billions away from people like you and me. As some are noting, it feels a lot like theft.
“If OpenAI were to retroactively remove profit caps from investments, this would in effect transfer billions in value from a non-profit to for-profit investors,” said Jacob Hilton, a former employee of OpenAI who joined before it transitioned from a nonprofit to a capped-profit structure. “Unless the non-profit were appropriately compensated, this would be a money grab. In my view, such a thing would be incompatible with OpenAI’s charter, which states that OpenAI’s primary fiduciary duty is to humanity, and I do not understand how the law could permit it.”
But because OpenAI’s structure is so unprecedented, the legality of such a shift might seem confusing to some. And that may be exactly what the company is counting on."
https://www.vox.com/future-perfect/374275/openai-just-sold-you-out
#Surveillance #AI #GenerativeAI #Privacy #DataProtection: "“I see AI as born out of the surveillance business model . . . AI is basically a way of deriving more power, more revenue, more market reach,” she says. “A world that has bigger and better AI, that needs more and more data . . . and more centralised infrastructure [is] a world that is the opposite of what Signal is providing.”
At Google, where she started her career in 2006, Whittaker witnessed the rise of this new wave of so-called artificial intelligence — the ability to pull out patterns from data to generate predictions, and more recently create text, images and code — as Google began to leverage the precious data trails it was harvesting from its users.
“Suddenly, everywhere, there were little courses in Google that were like, learn machine learning, apply machine learning to your thing,” she says. “We hadn’t decided it was called AI then. The branding was still kind of up in the air.”"
https://www.ft.com/content/799b4fcf-2cf7-41d2-81b4-10d9ecdd83f6
#AI #GenerativeAI #Bots #AISlop #Spam: "Experiences like this — staring at a collection of books written by AI with computer-generated author photos and dozens of reviews written and posted by bots — have become for many people evidence for the “dead-internet theory,” the only slightly tongue-in-cheek idea, inspired by the increasing amount of fake, suspicious, and just plain weird content, that humans are a tiny minority online and the bulk of the internet is made by and for AI bots, creating bot content for bot followers, who comment and argue with other bots. The rise of slop has, appropriately, the shape of a good science-fiction yarn: a mysterious wave of noise emerging from nowhere, an alien invasion of semi-coherent computers babbling in humanlike voices from some vast electronic beyond.
But the idea that AI has quietly crowded out humans is not exactly right. Slop requires human intervention or it wouldn’t exist. Beneath the strange and alienating flood of machine-generated content slop, behind the nonhuman fable of dead-internet theory, is something resolutely, distinctly human: a thriving, global gray-market economy of spammers and entrepreneurs, searching out and selling get-rich-quick schemes and arbitrage opportunities, supercharged by generative AI."
https://nymag.com/intelligencer/article/ai-generated-content-internet-online-slop-spam.html
#AI #GenerativeAI #GeneratedImages: "The idea that multiple truths can be drawn from the same material is radiantly explored in the film Eno, which I saw last week. Based on the life of the multitalented music producer Brian Eno, the documentary is auto-generated by a machine and varies every time it is shown.
According to the film’s makers, there are 52 quintillion possible versions of it, which could make “a really big box set”. This artistic experiment tells us much about the nature of creativity and the plurality of truth in the age of generative media.
To make the film, the producer Gary Hustwit and the creative technologist Brendan Dawes digitised more than 500 hours of Eno’s video footage, interviews and recordings. From this archive, spanning 50 years of Eno’s creative output working with artists including Talking Heads, David Bowie and U2, two editors created 100 scenes. The filmmakers wrote software generating introductory and concluding scenes with Eno and outlining a rough three-act structure. They then let the software loose on this digital archive, splicing together different scenes and recordings to generate a 90-minute film."
https://www.ft.com/content/5cbecda3-f1b5-4b60-9589-c7ef5fdf6696
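The process described in the excerpt is, at its core, a constrained shuffle: fixed software-generated opening and closing scenes, a rough three-act structure, and a pool of 100 human-edited scenes sampled until a 90-minute budget is filled. A minimal sketch of that idea (scene names, durations, and structure are invented for illustration; this is not Hustwit and Dawes's actual software):

```python
import random

random.seed(0)  # fix the hypothetical scene pool; only the cut varies per seed
# Stand-in for the 100 human-edited scenes: (title, duration in minutes).
SCENES = [(f"scene_{i:03d}", random.uniform(2.0, 8.0)) for i in range(100)]

TARGET = 90.0            # the finished film runs about 90 minutes
INTRO, OUTRO = 4.0, 4.0  # software-generated opening and closing scenes

def generate_cut(seed: int) -> list[str]:
    """Assemble one edit: intro, shuffled scenes up to the runtime
    budget, then outro. A different seed yields a different film."""
    rng = random.Random(seed)
    pool = SCENES[:]
    rng.shuffle(pool)
    cut, total = ["intro"], INTRO + OUTRO
    for title, minutes in pool:
        if total + minutes <= TARGET:  # skip scenes that would overshoot
            cut.append(title)
            total += minutes
    cut.append("outro")
    return cut

print(generate_cut(seed=1)[:5])  # each screening gets its own seed
```

The "52 quintillion" claim is plausible at this scale: the number of ordered sequences of just ten scenes drawn from a pool of 100 is 100!/90! ≈ 6.3 × 10^19, already in quintillion territory.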
#AI #GenerativeAI #OpenAI #AIBubble #AIHype #PonziScheme: "[I]t seems that the chaos is only escalating, not diminishing, and that these are not growing pains signaling newfound maturation, but violent spasms brought forth by a consolidation of power. Altman, who was briefly pushed out last year because the board had lost confidence in him, has been called out for manipulating his peers and for potential ethical lapses. Now, essentially, Altman alone controls OpenAI.
A competing explanation for the turmoil is that founders like Sutskever and Murati saw OpenAI becoming so powerful, drawing so near to bringing AGI (artificial general intelligence) into being that they became concerned that Altman wasn’t pumping the brakes to make sure it was safe enough. This too probably contains at least some truth—many OpenAI employees have clearly been concerned that Altman is not behaving ethically. Past departing employees, especially those on the alignment team, have cited this as their reason for quitting.
But my guess is that the core issue animating the exits is not safety, but that consolidation of power, period. It’s been widely reported that the latest investment round will accompany a restructuring; OpenAI is at last fully ditching its nonprofit holding company and becoming a for-profit corporation. Some reports have noted that this could land Altman between $7 billion and $10 billion in equity. (He denies this.) He’s also assuming more and more direct control over the company.
https://www.bloodinthemachine.com/p/is-openai-too-big-to-fail
#LLMs #SLMs #AI #GenerativeAI #Chatbots: "With the growing attention and investment in recent AI approaches such as large language models, the narrative that the larger the AI system the more valuable, powerful and interesting it is, is increasingly seen as common sense. But what is this assumption based on, and how are we measuring value, power, and performance? And what are the collateral consequences of this race to ever-increasing scale? Here, we scrutinize the current scaling trends and trade-offs across multiple axes and refute two common assumptions underlying the ‘bigger-is-better’ AI paradigm: 1) that improved performance is a product of increased scale, and 2) that all interesting problems addressed by AI require large-scale models. Rather, we argue that this approach is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate. Finally, it exacerbates a concentration of power, which centralizes decision-making in the hands of a few actors while threatening to disempower others in the context of shaping both AI research and its applications throughout society."
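The abstract's sustainability argument, that compute demands grow faster than performance, reflects the shape of empirical scaling laws, in which loss falls only as a small negative power of compute. A sketch with invented constants (the exponent below is illustrative, not a figure from the paper):

```python
# Empirical scaling laws take roughly the form  loss ≈ a * compute**(-alpha),
# with alpha small. Both constants here are invented for illustration.
A, ALPHA = 10.0, 0.05

def loss(compute: float) -> float:
    return A * compute ** -ALPHA

for c in (1e3, 1e6, 1e9):
    print(f"compute {c:.0e} -> loss {loss(c):.2f}")
# Each thousand-fold increase in compute shaves the loss only modestly.

# From the power law, the compute multiplier needed to halve the loss:
#   c2 / c1 = (loss1 / loss2) ** (1 / alpha) = 2 ** (1 / alpha)
print(f"{2 ** (1 / ALPHA):,.0f}x more compute to halve the loss")  # 1,048,576x
```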
#AI #GenerativeAI #AITraining #CyberSecurity #Botnets #WebScraping: "In the race to build the world's most advanced AI, tech companies have fanned out across the web, releasing botnets like a plague of digital locusts to scour sites for anything they can use to fuel their voracious models.
It's often high-quality training data they're after, but also other information that may help AI models understand the world. The race is on to collect as much information as possible before it runs out, or the rules change on what's acceptable.
One study estimated that the world's supply of usable AI training data could be depleted by 2032. The entire online corpus of recorded human experience may soon be inadequate to keep ChatGPT up to date.
A resource like the Game UI Database, where a human has already done the painstaking labor of cleaning and categorizing images, must have looked like an all-you-can-eat buffet.
For small website owners with limited resources, the costs of playing host to a swarm of hungry bots can present a significant burden."
https://www.businessinsider.com/openai-anthropic-ai-bots-havoc-raise-cloud-costs-websites-2024-9
#SocialMedia #Telegram #Privacy #Messaging #ContentModeration: "Telegram updated its privacy policy on Monday to say that the company will provide user data, such as IP addresses and phone numbers, to law enforcement agencies in response to a valid legal order.
The news is a significant shift in Telegram’s policies on providing data to law enforcement. It comes after French authorities arrested Telegram’s CEO Pavel Durov in August, in part due to Telegram’s refusal to hand over data in response to lawful orders.
“If Telegram receives a valid order from the relevant judicial authorities that confirms you're a suspect in a case involving criminal activities that violate the Telegram Terms of Service, we will perform a legal analysis of the request and may disclose your IP address and phone number to the relevant authorities,” the privacy policy read on Monday.
A day earlier, the policy only specifically mentioned terror cases. “If Telegram receives a court order that confirms you're a terror suspect, we may disclose your IP address and phone number to the relevant authorities. So far, this has never happened,” an archived version of the policy reads."
https://www.404media.co/telegram-changes-policy-says-it-will-provide-user-data-to-authorities/
RT @IEthics
"If a lot of content was created using images of a particular student, [it] might even be given [its] own room. Broadly labelled 'humiliation rooms' or 'friend of friend rooms', they often come with strict entry terms": https://bbc.com/news/articles/cpdlpj9zn9go #ethics #AI #highered #tech
RT @IEthics
Another "error in properly vetting and fact-checking the phrases provided" by generative #AI: https://variety.com/2024/film/news/megalopolis-trailer-fake-quotes-ai-lionsgate-1236116485/ #ethics #tech h/t
@alexkozak
RT @IEthics
"Red teamers were able to compel the model to generate inaccurate information by prompting it to verbally repeat false information and produce conspiracy theories." #ethics #internet #AI #misinformation #disinformation #voice #tech.
#AI #GenerativeAI #LLMs #AIBias #Mozilla: "What this shows is an organization that lacks scientific rigour and a bit of critical distance to the field it wants to study/work in. This could have been some weird LinkedIn influencer’s post. It’s bad work and it’s not giving me any confidence that the Mozilla Foundation/Mozilla.ai knows what they are doing (aside from following the current hype)."
https://tante.cc/2024/06/26/mozilla-ai-did-what-when-silliness-goes-dangerous/
RT @emilymbender
"a potential gold mine for criminal hackers or domestic abusers who may physically access their victim’s device."
In 2024 it *still* somehow isn't standard practice to ask in the design process: Are we building the killer app for domestic abusers?
RT @matthew_d_green
Several people have suggested that the EU’s mandatory chat scanning proposal was dead. In fact it seems that Belgium has resurrected it in a “compromise” and many EU member states are positive. There’s a real chance this becomes law.
RT @ahcastor
Is there no honor among thieves? Apparently not. Incognito, a darknet narcotics market, is extorting all of its users. Pay $100 to $20,000 or all of your crypto transactions and chat records will be made public. https://krebsonsecurity.com/2024/03/incognito-darknet-market-mass-extorts-buyers-sellers/
"While DALL-E developer OpenAI is busy fighting with Elon Musk, the creators of two other notable image generation AIs, Midjourney and Stability AI, seem to have sparked a beef of their own over the most ironic thing imaginable, considering the nature of the companies involved – image theft." https://80.lv/articles/midjourney-accuses-stability-ai-of-theft-bans-its-employees/
#CyberSecurity #Microsoft #Windows #Rootkits #NorthKorea: "Hackers backed by the North Korean government gained a major win when Microsoft left a Windows zero-day unpatched for six months after learning it was under active exploitation.
Even after Microsoft patched the vulnerability last month, the company made no mention that the North Korean threat group Lazarus had been using the vulnerability since at least August to install a stealthy rootkit on vulnerable computers. The vulnerability provided an easy and stealthy means for malware that had already gained administrative system rights to interact with the Windows kernel. Lazarus used the vulnerability for just that. Even so, Microsoft has long said that such admin-to-kernel elevations don’t represent the crossing of a security boundary, a possible explanation for the time Microsoft took to fix the vulnerability."
#AI #GenerativeAI #GPAI #OpenAI #ChatGPT: "The split between Altman and the board at least partly seemed to fall along ideological lines, with Altman and Brockman in the camp known as “accelerationists” – people who believe AI should be deployed as quickly as possible – and the board aligned with the “decelerationists” – people who believe it should be developed more slowly and with stronger guardrails. With Altman’s return, the former group takes the spoils.
“The people who seem to have won out in this case are the accelerationists,” said Sarah Kreps, a Cornell professor of government and the director of the Tech Policy Institute in the university’s school of public policy.
Kreps said we may see a reborn OpenAI that fully subscribes to the Meta chief executive Mark Zuckerberg’s “move fast and break things” mantra. Employees voted with their feet in the debate between moving more quickly and moving more carefully, she noted.
“What we’ll see is full steam ahead on AI research going forward. Then the question becomes, is it going to be totally unsafe, or will it have trials and errors? OpenAI may follow the Facebook model of moving quickly and realizing that the product is not always compatible with societal good,” she said."
#Germany #Jews #Antisemitism #Palestine: "Germany’s commitment to memory is undeniably impressive; no other global power has worked nearly as hard to apprehend its past. Yet while the world praises its culture of contrition, some Germans—in particular, Jews, Arabs, and other minorities—have been sounding the alarm that this approach to memory has largely been a narcissistic enterprise, with strange and disturbing consequences. The leftist German Jewish writer Fabian Wolff argued in a viral 2021 essay that Germany’s attachment to the past had diminished the space for Jewish life in the present: Germans have no place for “Jewish life [that] exists outside their field of vision and their way of knowing,” he wrote, or for “Jewish conversations about Jewish issues [that] have a meaning beyond and apart from what these Germans themselves think or would like to hear.”"