#UK #Anguilla #TLDs #AI #GenerativeAI #DomainSquatting #CyberSquatting: "The tropical British territory of Anguilla wasn’t known to many outside those who sought out its sun-kissed beaches. That was until the generative AI revolution. Suddenly, the island of around 16,000 people became a key broker in the future of our digital lives.
In the 1990s, it was assigned the country-code domain ending .AI. Back then, top-level domains didn’t go much beyond .com, .org, or .net. The only real envisaged market for .AI domain name endings was local businesses on the island. Until ChatGPT changed everything, and the market for .AI domain names exploded.
Today, more than half a million .AI domain names are registered with Anguillan authorities, who have until now used a local firm called DataHaven.net. The registered names include x.ai and claude.ai but notably not open.ai, which is currently tied up in a long-running dispute with an individual who claims to have invented the name and concept of OpenAI before Sam Altman unveiled it in 2015—and who has a surprising amount of documentary evidence to support his case."
#AI #GenerativeAI #AIBots #Law: "Let’s say you’ve expanded into a new market where a competitor feels threatened and decides to go on the legal offense. They use an AI tool to comb public information about your company and file hundreds of copyright infringement, IP, and trade secret theft cases. The scale means you can’t just ignore it or settle for a nominal amount.
Or perhaps you run a restaurant or a coffee shop. What if every smartphone that entered the restaurant was a few clicks away from capturing your employees’ behavior and filing a claim for discrimination? Your exposure to risk is greatly heightened.
Finally, imagine one of your customers has a claim against your company. Traditionally, they follow your customer-service process because the cost of legal action is prohibitive. But what if they could use a platform to click a few buttons to file a complaint that would lead to a higher payout for them and cost you more in legal fees to defend than it would to settle? We bet they would click that button.
Companies today can often get away with bad behavior because it’s too expensive to enforce against it. Their employees, customers, or competitors have to think hard about going on offense because legal battles are time-consuming, expensive, and distracting. When legal action becomes much easier, many more actions will be taken. It levels the playing field by removing the asymmetric advantage businesses enjoy today: greater legal resources and expertise at their fingertips."
https://hbr.org/2024/10/gen-ai-makes-legal-action-cheap-and-companies-need-to-prepare
#AI #GenerativeAI #Microsoft #OpenAI #AIHype #AIBubble: "Microsoft had already pumped $13 billion into OpenAI, and Mr. Nadella was initially willing to keep the cash spigot flowing. But after OpenAI’s board of directors briefly ousted Mr. Altman last November, Mr. Nadella and Microsoft reconsidered, according to four people familiar with the talks who spoke on the condition of anonymity.
Over the next few months, Microsoft wouldn’t budge as OpenAI, which expects to lose $5 billion this year, continued to ask for more money and more computing power to build and run its A.I. systems.
Mr. Altman once called OpenAI’s partnership with Microsoft “the best bromance in tech,” but ties between the companies have started to fray. Financial pressure on OpenAI, concern about its stability and disagreements between employees of the two companies have strained their five-year partnership, according to interviews with 19 people familiar with the relationship between the companies.
That tension demonstrates a key challenge for A.I. start-ups: They are dependent on the world’s tech giants for money and computing power because those big companies control the massive cloud computing systems the small outfits need to develop A.I."
https://www.nytimes.com/2024/10/17/technology/microsoft-openai-partnership-deal.html
The whole web is based on fair use. If open-source AI startups can't train their models on works available online to improve productivity and advance science, you might as well paywall the entire web. Copyright is theft from the commons and the public domain. Disney knows this very well.
https://www.theguardian.com/commentisfree/2024/oct/18/ai-systems-big-tech-data-ministers
#AI #GenerativeAI #UK #AITraining #Copyright #IP
#USA #DoD #Deepfakes #SocialMedia #AI #Propaganda: "The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.
The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.
The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”
In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans."
https://theintercept.com/2024/10/17/pentagon-ai-deepfake-internet-users/
At last, a great measure proposed by Starmer! After all, if artists and creators cannot opt out of copyright, why not at least this time reverse the scheme, with the actual goal of contributing to the advancement of arts and sciences? People who are against this proposal forget not only that all cultural works stem from the public domain but also that copyright and IP in general are monopolies.
#UK #AI #GenerativeAI #Copyright #IP: "The UK government is set to consult on a scheme that would allow AI companies to scrape content from publishers and artists unless they “opt out”, in a move that would anger the creative industry.
The decision follows months of lobbying from both sides over whether online content should be automatically included in material that AI firms can use to train their algorithms.
Big tech companies including Google owner Alphabet have argued that they should be able to mine the internet freely to train their algorithms, with companies and creators being given the opportunity to “opt out” of such arrangements.
But publishers and creatives have argued that it is unfair and impracticable to ask them to opt out of these arrangements, as it is not always possible to know which companies are trying to mine their content."
https://www.ft.com/content/26bc3de1-af90-4c69-9f53-61814514aeaa
#AI #GenerativeAI #BigTech #BigAI #Competition #Monopolies #Oligopolies: "Artificial intelligence (AI) is shaping how we live, work and connect with the world. From chatbots to image generators, AI is transforming our online experiences. But this change raises serious questions: Who controls the technology behind these AI systems? And how can we ensure that everyone — not just traditional big tech — has a fair shot at accessing and contributing to this powerful tool?
To explore these crucial issues, Mozilla commissioned two pieces of research that dive deep into the challenges around AI access and competition: “External Researcher Access to Closed Foundation Models” (commissioned from data rights agency AWO) and “Stopping Big Tech From Becoming Big AI” (commissioned from the Open Markets Institute). These reports show how AI is being built, who’s in control and what changes need to happen to ensure a fair and open AI ecosystem."
https://blog.mozilla.org/en/mozilla/ai/unlocking-ai-research/
#AI #GenerativeAI #USA #Google #GoogleGemini #NARA #NationalArchives #Archiving: "In December, NARA plans to launch a public-facing AI-powered chatbot called “Archie AI,” 404 Media has learned. “The National Archives has big plans for AI,” a NARA spokesperson told 404 Media. “It’s going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future.”
Employee chat logs given during the presentation show that National Archives employees are concerned about the idea that AI tools will be used in archiving, a practice that is inherently concerned with accurately recording history.
One worker who attended the presentation told 404 Media, “I suspect they’re going to introduce it to the workplace. I’m just a person who works there and hates AI bullshit.”
The presentation was given about a month after the National Archives banned employees from using ChatGPT because it said it posed an “unacceptable risk to NARA data security,” and cautioned employees that they should “not rely on LLMs for factual information.”"
#AI #GenerativeAI #AITraining #NYT #Copyright #IP #OpenAI #Journalism #News #Media: "The New York Times sent a cease and desist letter demanding that Jeff Bezos-backed Perplexity stop accessing and using its content in AI summaries and other output. The Wall Street Journal reviewed the document.
The letter argues that Perplexity has been “unjustly enriched” by using the publisher’s “expressive, carefully written and researched, and edited journalism without a license,” which it says violates copyright laws.
This isn’t the paper’s first tangle with AI companies — it’s suing OpenAI for using content without consent to train ChatGPT. Other publishers have also accused Perplexity of unethical web scraping."
#AI #GenerativeAI #LLMs #Reasoning: "The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding "seemingly relevant but ultimately inconsequential statements" to the questions. For this "GSM-NoOp" benchmark set (short for "no operation"), a question about how many kiwis someone picks across multiple days might be modified to include the incidental detail that "five of them [the kiwis] were a bit smaller than average."
Adding in these red herrings led to what the researchers termed "catastrophic performance drops" in accuracy compared to GSM8K, ranging from 17.5 percent to a whopping 65.7 percent, depending on the model tested. These massive drops in accuracy highlight the inherent limits in using simple "pattern matching" to "convert statements to operations without truly understanding their meaning," the researchers write.
In the example with the smaller kiwis, for instance, most models try to subtract the smaller fruits from the final total because, the researchers surmise, "their training datasets included similar examples that required conversion to subtraction operations." This is the kind of "critical flaw" that the researchers say "suggests deeper issues in [the models'] reasoning processes" that can't be helped with fine-tuning or other refinements."
https://www.wired.com/story/apple-ai-llm-reasoning-research/
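To make the failure mode in that excerpt concrete, here is a minimal, self-contained Python sketch. It is not the researchers' code: the kiwi question and the toy "pattern matching" solver are invented for illustration. The solver blindly turns a "smaller than average" clause into a subtraction, which is exactly the behavior the paper attributes to the tested LLMs:

```python
import re

# Invented GSM8K-style question (not taken from the actual benchmark).
ORIGINAL = ("Oliver picks 44 kiwis on Friday, 58 kiwis on Saturday, "
            "and 24 kiwis on Sunday. How many kiwis does he have?")

# GSM-NoOp-style perturbation: a seemingly relevant but inconsequential
# detail -- kiwi size does not change the count.
PERTURBED = ORIGINAL.replace(
    "How many", "Five of them were a bit smaller than average. How many")

def naive_pattern_matcher(question: str) -> int:
    """Toy stand-in for an LLM that converts surface patterns to
    operations: it adds every quantity it sees, but subtracts any
    quantity in a sentence containing 'smaller'/'fewer'/'less'."""
    total = 0
    for sentence in re.split(r"[.?]", question):
        nums = [int(n) for n in re.findall(r"\d+", sentence)]
        if "five" in sentence.lower():  # crude word-number handling
            nums.append(5)
        sign = -1 if re.search(r"smaller|fewer|less", sentence) else 1
        total += sign * sum(nums)
    return total

print(naive_pattern_matcher(ORIGINAL))   # 126 -- correct
print(naive_pattern_matcher(PERTURBED))  # 121 -- red herring subtracted
```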
#AI #GenerativeAI #AcademicPublishing #AITraining: "So far, several major publishers have announced deals. For them, there is a substantial near-term revenue upside. The basic idea behind these deals is to generate revenue for the publishing house in exchange for easy, reliable, and legal access to the content for the LLM. A number of companies are in the hunt for this content, including not only OpenAI and Google but also Apple and more specialized providers.
Thus far, a standard set of terms or an overall model from which to build these deals has yet to emerge. Pricing, of course, is top of mind for everyone, but there are many other considerations as well. There are technical and reputational questions about how corrections or retractions will propagate through an LLM and whether an author can opt out, and there are business-model issues such as whether provenance will be tracked through an LLM’s output so that a citation or link can be provided back into the scholarly record, to take several straightforward examples.
We will update the tracker periodically. You may also access the tracker as a Google Sheet. If you are aware of other deals that we have not yet documented in this tracker, please contact us using the form below."
https://sr.ithaka.org/our-work/generative-ai-licensing-agreement-tracker/
#AI #GenerativeAI #Productivity #Automation #AIBubble #AIHype: "Training costs for a single frontier AI model are increasing exponentially, from $1,000 in 2017 to nearly $200 million in 2024, driven by constant returns to scale in AI model training data, compute capacity and model complexity. Costs could reach billions of dollars by 2030, despite a rapid fall in unit costs per computing operation over the same period. Global AI infrastructure costs in hardware could exceed $1 trillion by the mid-2030s. Amortizing these huge fixed costs requires business models that can be rolled out across a very large user market.
Estimates about AI’s contribution to productivity growth vary, from a modest 0.5 percent per year to a very optimistic 10 percent per year. Research shows that productivity usually catches up slowly compared to costs. Without significant productivity gains, the current investment cost trajectory is unsustainable. We estimate that 3 percent annual productivity growth across advanced economies would be required to sustain AI model cost extrapolations to 2030. In an optimistic scenario, productivity increases would result in accelerated economic growth."
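As a quick back-of-envelope check (our own arithmetic, not the authors' model), the two cost figures quoted above imply a growth rate that makes the 2030 claim easy to verify:

```python
# Figures from the excerpt: ~$1,000 (2017) to ~$200 million (2024)
# for training a single frontier model.
cost_2017, cost_2024 = 1_000, 200_000_000
years = 2024 - 2017

# Compound annual growth rate implied by those two data points.
cagr = (cost_2024 / cost_2017) ** (1 / years) - 1
print(f"Implied growth: ~{1 + cagr:.1f}x per year ({cagr:.0%})")
# -> roughly 5.7x per year

# Even if growth slowed sharply to a mere doubling per year,
# a 2030 frontier run would still cost billions of dollars:
print(f"2030 cost at 2x/yr: ${cost_2024 * 2**6 / 1e9:.1f} billion")
# -> about $12.8 billion
```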
#AI #AGI #GenerativeAI: "OpenAI’s Sam Altman last month said we could have AGI within “a few thousand days.” Elon Musk has said it could happen by 2026.
LeCun says such talk is likely premature. When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat,” he replied on X.
He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today’s “frontier” AIs, including those made by Meta itself.
Léon Bottou, who has known LeCun since 1986, says LeCun is “stubborn in a good way”—that is, willing to listen to others’ views, but single-minded in his pursuit of what he believes is the right approach to building artificial intelligence.
Alexander Rives, a former Ph.D. student of LeCun’s who has since founded an AI startup, says his provocations are well thought out. “He has a history of really being able to see gaps in how the field is thinking about a problem, and pointing that out,” Rives says."
#AI #GenerativeAI #LinkedIn #Microsoft #DataProtection #Misinformation #Hallucinations: "Microsoft's LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that's inaccurate or misleading.
LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.
LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.
The relevant passage, which takes effect on November 20, 2024, reads:"
https://www.theregister.com/2024/10/09/linkedin_ai_misinformation_agreement/
#AI #ClimateChange #AGI #GlobalWarming: "Look, this is not that hard. Even without AGI, we already know what we have to do. We do not need a complex and all-knowing artificial intelligence to understand that we generate too many carbon emissions with our cars, power plants, buildings, and factories, and we need to use less fossil fuels and more renewable energy.
The tricky part—the only part that matters in this rather crucial decade for climate action—is implementation. As impressive as GPT technology or the most state-of-the-art diffusion models may be, they will never, god willing, “solve” the problem of generating what is actually necessary to address climate change: Political will. Political will to break the corporate power that has a stranglehold on energy production, to reorganize our infrastructure and economies accordingly, to push out oil and gas.
Even if an AGI came up with a flawless blueprint for building cheap nuclear fusion plants—pure science fiction—who among us thinks that oil and gas companies would readily relinquish their wealth and power and control over the current energy infrastructure? Even that would be a struggle, and AGI’s not going to do anything like that anytime soon, if at all. Which is why the “AI will solve climate change” thinking is not merely foolish but dangerous—it’s another means of persuading otherwise smart people that immediate action isn’t necessary, that technological advancements are a trump card, that an all-hands-on-deck effort to slash emissions and transition to proven renewable technologies isn’t necessary right now. It’s techno-utopianism of the worst kind; the kind that saps the will to act."
https://www.bloodinthemachine.com/p/ai-will-never-solve-this
#SocialMedia #Twitter #Musk #Journalism #Censorship #FreedomOfSpeech #Trump: "The Trump campaign coordinated with Elon Musk’s X (née Twitter) to kill circulation of my publication of the J.D. Vance Dossier, The New York Times reported today. Simply put, X colluded with a political campaign to restrict the public’s access to information about a Vice Presidential candidate just weeks before the election.
Millions of Americans like myself rely on social media platforms like X to participate in the political process, whether by active discussion or simply consuming political content. X’s decision to remove my article and permanently suspend my account demonstrates the awesome power concentrated in these platforms and their billionaire owners.
The Trump campaign claims that my publication of the Vance dossier constitutes election interference. The real election interference here is that a social media corporation can decree certain information unfit for the American electorate. Two of our most sacred rights as Americans are the freedoms of speech and assembly, online or otherwise. It is a national humiliation that these rights can be curtailed by anyone with enough digits in their bank account."
https://www.kenklippenstein.com/p/trump-camp-worked-with-musks-x-to
#AI #GenerativeAI #AIRegulation #RuleOfLaw: "In the thick of AI’s promises to secure a better future, there may well be something unique about AI that requires specific, focused regulation when it comes to its application in the justice system. AI’s adaptability, powered by advanced pattern-matching capabilities that draw out novel insights from massive datasets, creates an acceleration effect. Its speed and perceived reliability risk transforming potential answers into concrete outcomes with worrying ease. Deploying AI-powered tools developed by private industry within the legal sphere, especially the administrative state, requires thinking hard about balance. Achieving the appropriate balance will require thinking through the degree of impact we want to allow corporate entities to have within our public institutions. We risk undue corporate influence, coupled with automation bias, through overreliance on AI-powered tools. Those risks are simply too high when it comes to the cornerstone of functioning democracies — a transparent and robust legal system that aims to guarantee just outcomes to its citizens through the Rule of Law."
https://www.justsecurity.org/103777/maintaining-the-rule-of-law-in-the-age-of-ai/
#IA #GenerativeAI #AI: "The experiment led to 6 findings with respect to labels:
- they hurt candidates who used GenAI in campaign material – voters considered them less trustworthy when the label was present;
- GenAI used in attacks on opponents produces a “backfire” effect – the negative evaluation attached to the ad’s creator, not its target;
- they lower evaluations of ad-creating candidates regardless of whether those candidates belonged to respondents’ own political party or had no political affiliation;
- in most conditions, the effects of the disclaimer were small, and a notable minority did not notice it at all;
- the wording of the GenAI label matters, but it is hard to predict how people will interpret it;
- respondents are least supportive of the policy approach most commonly adopted by state governments: when asked what they would support, they express the least support for laws requiring disclaimers on deceptive uses.
The results indicate that the costs of requiring a label may outweigh the benefits. A better understanding of how this measure works in political marketing is needed before enacting more laws that require it, the research concludes."
https://www.poder360.com.br/opiniao/estudo-mostra-que-leis-sobre-ia-em-campanhas-ignoram-eleitor/
#SocialSciences #STS #AI: "How much time should practitioners put into training a model to do the work that they normally do? At what scale of AI adoption does all of this time become an appropriate return on their investments of efforts? Is it reasonable for us to imagine that in the future people will speak with a chatbot first, before talking to a human practitioner? Each profession will have to work through these questions. They will have to manage the scale at which models can align with their practice and who gets left behind in this transformation.
Thinking like a sociotechnical researcher requires that we grapple with these choices. It requires us to cut through the arguments about the inevitability of AI. When we start paying attention to how we make choices about AI, we are all thinking like sociotechnical researchers.
We can ask: What does a given application of AI mean for us as individuals? What does it mean for the communities we identify with? What does it mean for the work we do; for our professions; for our culture; for our places; for our countries; for the world we inhabit; and for the other species that we share our planet with?"
https://datasociety.net/points/how-to-think-like-a-sociotechnical-researcher/
#AI #AITraining #Surveillance #Privacy #DataProtection #Ecovacs #Robots #RobotVacuum: "Ecovacs's privacy policy – available elsewhere in the app – allows for blanket collection of user data for research purposes, including:
- The 2D or 3D map of the user's house generated by the device
- Voice recordings from the device's microphone
- Photos or videos recorded by the device's camera
It also states that voice recordings, videos and photos that are deleted via the app may continue to be held and used by Ecovacs.
An Ecovacs spokesperson confirmed the company uses the data collected as part of its product improvement program to train its AI models.
Critical cybersecurity flaws – allowing some Ecovacs models to be hacked from afar – have cast doubt on the company's ability to protect this sensitive information.
Cybersecurity researcher Dennis Giese reported the problems to the company last year after he found a series of basic errors putting Ecovacs customers' privacy at risk.
"If their robots are broken like that," he asked, "how does their back-end [server] look?
"Even if the company's not malicious, they might be the victim themselves of corporate espionage or nation state actors."
Ecovacs — which is valued at $4.6 billion — said it is "proactively exploring more comprehensive testing methods" and committed to fixing the security issues in its flagship robot vacuum in November."