Miguel Afonso Caetano
Senior Technical Writer @ Opplane (Lisbon, Portugal). PhD in Communication Sciences (ISCTE-IUL). Past: technology journalist, blogger & communication researcher. #TechnicalWriting #WebDev #WebDevelopment #OpenSource #FLOSS #SoftwareDevelopment #IP #PoliticalEconomy #Communication #Media #Copyright #Music #Cities #Urbanism

Here come the rent-seekers, feudal lords who extract rents from everything that moves. I don't like AI-based art because I find it mediocre most of the time. But these rent-seekers only degrade the public's opinion of their work when they get so enraged at people generating works with the help of AI tools. They're telling anyone who wants to listen that they're so afraid of the poor quality of their own creations that they fear something made with the help of machine learning could cost them their careers. All in all, they're an embarrassment to people who really love art.

"People working in the music sector will lose almost a quarter of their income to artificial intelligence within the next four years, according to the first global economic study examining the impact of the emerging technology on human creativity.

Those working in the audiovisual sector will also see their income shrink by more than 20% as the market for generative AI grows from €3bn (A$4.9bn) annually to a predicted €64bn by 2028.

The findings were released in Paris on Wednesday by the International Confederation of Societies of Authors and Composers (CISAC), representing more than 5 million creators worldwide.

The report concluded that while the AI boom will substantially enrich giant tech companies, creators’ rights and income streams will be drastically reduced unless policymakers step in."

https://www.theguardian.com/music/2024/dec/04/artificial-intelligence-music-industry-impact-income-loss

#AI #GenerativeAI #Automation #Music #RentSeeking #Feudalism

"This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector."

https://www.nature.com/articles/s41586-024-08141-1

#AI #OpenSource #OpenAI #GenerativeAI #AITraining #LLMs #PoliticalEconomy

"This takes us to the core problem with today’s generative AI. It doesn’t just mirror the market’s operating principles; it embodies its ethos. This isn’t surprising, given that these services are dominated by tech giants that treat users as consumers above all. Why would OpenAI, or any other AI service, encourage me to send fewer queries to their servers or reuse the responses others have already received when building my app? Doing so would undermine their business model, even if it might be better from a social or political (never mind ecological) perspective. Instead, OpenAI’s API charges me—and emits a nontrivial amount of carbon emissions—even to tell me that London is the capital of the UK or that there are one thousand grams in a kilogram.

For all the ways tools like ChatGPT contribute to ecological reason, then, they also undermine it at a deeper level—primarily by framing our activities around the identity of isolated, possibly alienated, postmodern consumers. When we use these tools to solve problems, we’re not like Storm’s carefree flâneur, open to anything; we’re more like entrepreneurs seeking arbitrage opportunities within a predefined, profit-oriented grid. While eolithic bricolage can happen under these conditions, the whole setup constrains the full potential and play of ecological reason.

Here too, ChatGPT resembles the Coordinator, much like our own capitalist postmodernity still resembles the welfare-warfare modernity that came before it. While the Coordinator enhanced the exercise of instrumental reason by the Organization Man, ChatGPT lets today’s neoliberal subject—part consumer, part entrepreneur—glimpse and even flirt, however briefly, with ecological reason. The apparent increase in human freedom conceals a deeper unfreedom; behind both stands the Efficiency Lobby, still in control. This is why our emancipation through such powerful technologies feels so truncated."

https://www.bostonreview.net/forum/the-ai-we-deserve/

#AI #GenerativeAI #Neoliberalism #Capitalism
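
Morozov's aside about redundant queries is worth making concrete. Here is a minimal sketch of my own (purely hypothetical, not anything OpenAI actually offers): a client-side cache that serves repeated questions locally instead of paying, and emitting carbon, for every identical request. The ask_llm function below is a stand-in for a real API call.

import hashlib

# Hypothetical stand-in for a real LLM API call; in practice this would
# hit a paid endpoint and burn energy on every invocation.
def ask_llm(prompt: str) -> str:
    return f"(model answer to: {prompt})"

_cache: dict[str, str] = {}

def ask_cached(prompt: str) -> str:
    """Serve repeated prompts from a local cache instead of re-querying."""
    key = hashlib.sha256(prompt.strip().lower().encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = ask_llm(prompt)  # only pay (and emit) on a cache miss
    return _cache[key]

# The second call never leaves the machine: no tokens billed, no extra carbon.
print(ask_cached("What is the capital of the UK?"))
print(ask_cached("What is the capital of the UK?"))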

I think Brian Eno's opinion expresses my own view on AI in the best possible way - the whole text is top-notch:

"The drive for more profits (or increasing “market share,” which is the same thing) produces many distortions. It means, for example, that a product must be brought to market as fast as possible, even if that means cutting corners in terms of understanding social impacts; it means social value and security are secondary by a long margin. The result is a Hollywood shootout fantasy, except it’s a fantasy we have to live in.

AI today inverts the value of the creative process. The magic of play is seeing the commonplace transforming into the meaningful. For that transformation to take place we need to be aware of the provenance of the commonplace. We need to sense the humble beginnings before we can be awed by what they turn into—the greatest achievement of creative imagination is the self-discovery that begins in the ordinary and can connect us to the other, and to others.

Yet AI is part of the wave of technologies that are making it easier for people to live their lives in complete independence from each other, and even from their own inner lives and self-interest. The issue of provenance is critically important in the creative process, but not for AI today. Where something came from, and how and why it came into existence, are major parts of our feelings about it."

https://www.bostonreview.net/forum_response/ais-walking-dog/

#AI #GenerativeAI #Creativity #GeneratedImages

"DeepMind, Google’s AI research org, has unveiled a model that can generate an “endless” variety of playable 3D worlds.

Called Genie 2, the model — the successor to DeepMind’s Genie, which was released earlier this year — can generate an interactive, real-time scene from a single image and text description (e.g. “A cute humanoid robot in the woods”). In this way, it’s similar to models under development by Fei-Fei Li’s company, World Labs, and Israeli startup Decart.

DeepMind claims that Genie 2 can generate a “vast diversity of rich 3D worlds,” including worlds in which users can take actions like jumping and swimming by using a mouse or keyboard. Trained on videos, the model’s able to simulate object interactions, animations, lighting, physics, reflections, and the behavior of “NPCs.”"

https://techcrunch.com/2024/12/04/deepminds-genie-2-can-generate-interactive-worlds-that-look-like-video-games/

#AI #GenerativeAI #GeneratedImages #DeepMind #Google #VideoGames #3DWorlds

"Meta’s president of global affairs, Nick Clegg, agreed with Miller. Clegg said in a recent press call that Zuckerberg wanted to play an “active role” in the administration’s tech policy decisions and wanted to participate in “the debate that any administration needs to have about maintaining America’s leadership in the technological sphere,” particularly on artificial intelligence. Meta declined to provide further comment.

The weeks since the election have seen something of a give-and-take developing between Trump and Zuckerberg, who previously banned the president-elect from Instagram and Facebook for using the platforms to incite political violence on 6 January 2021. In a move that appears in deference to Trump – who has long accused Meta of censoring conservative views – the company now says its content moderation has at times been too heavy-handed.

Clegg said hindsight showed that Meta “overdid it a bit” in removing content during the Covid-19 pandemic, which Zuckerberg recently blamed on pressure from the Biden administration."

https://www.theguardian.com/us-news/2024/dec/03/trump-administration-mark-zuckerberg

#USA #Meta #SocialMedia #BigTech #Trump #AI #ContentModeration

RT @IEthics

"The Environmental Impact of Generating Images with #AI: An #ethics case study": https://scu.edu/ethics/focus-areas/internet-ethics/resources/the-environmental-impact-of-generating-images-with-ai-an-ethics-case-study/ #tech #highered #sustainability #environment #education #business

"- The main technology behind the entire "artificial intelligence" boom is generative AI — transformer-based models like OpenAI's GPT-4 (and soon GPT-5) — and said technology has peaked, with diminishing returns from the only ways of making them "better" (feeding them training data and throwing tons of compute at them) suggesting that what we may have, as I've said before, reached Peak AI.

- Generative AI is incredibly unprofitable. OpenAI, the biggest player in the industry, is on course to lose more than $5 billion this year, with competitor Anthropic (which also makes its own transformer-based model, Claude) on course to lose more than $2.7 billion this year.

- Every single big tech company has thrown billions — as much as $75 billion in Amazon's case in 2024 alone — at building the data centers and acquiring the GPUs to populate said data centers specifically so they can train their models or other companies' models, or serve customers that would integrate generative AI into their businesses, something that does not appear to be happening at scale.

- Their investments could theoretically be used for other products, but these data centers are heavily focused on generative AI. Business Insider reports that Microsoft intends to amass 1.8 million GPUs by the end of 2024, costing it tens of billions of dollars.

- Worse still, many of the companies integrating generative AI do so by connecting to models made by either OpenAI or Anthropic, both of whom are running unprofitable businesses, and likely charging nowhere near enough to cover their costs. As I wrote in the Subprime AI Crisis in September, in the event that these companies start charging what they actually need to, I hypothesize it will multiply the costs of their customers to the point that they can't afford to run their businesses."

https://www.wheresyoured.at/godot-isnt-making-it/

#AI #GenerativeAI #PeakAI #OpenAI #AIBubble #Anthropic #LLMs #Claude #ChatGPT #Microsoft #AIHype

"[W]hile Braverman’s Labor and Monopoly Capital served to fill the gap left in Baran and Sweezy’s Monopoly Capital, Braverman at the same time took the description of the Scientific-Technical Revolution developed in Sweezy’s monograph, together with the general analysis of Monopoly Capital, as the historically specific basis of his own analysis. Fifty years after the publication of Labor and Monopoly Capital, the work thus remains the crucial entry point for the critical analysis of the labor process in our time, particularly with respect to the current AI-based automation.

Braverman’s basic argument in Labor and Monopoly Capital is now fairly well-known. Relying on nineteenth-century management theory, in particular the work of Babbage and Marx, he was able to extend the analysis of the labor process by throwing light on the role of scientific management introduced in twentieth-century monopoly capitalism by Frederick Winslow Taylor and others. Babbage, nineteenth-century management theorist Andrew Ure, Marx, and Taylor had all seen the pre-mechanized division of labor as primary, and as the basis for the development of machine capitalism. Thus, the logic of an increasingly detailed division of labor, as depicted in Adam Smith’s famous pin example, could be viewed as antecedent and logically prior to the introduction of machinery.

(...)

It was Braverman, following Marx’s lead, who brought what came to be known as the “Babbage principle” back into the contemporary discussion of the labor process in the context of late twentieth-century monopoly capitalism, referring to it as “the general law of the capitalist division of labor.”"

https://monthlyreview.org/2024/12/01/braverman-monopoly-capital-and-ai-the-collective-worker-and-the-reunification-of-labor/

#Automation #Capitalism #Monopolies #MonopolyCapital #Marx #Marxism #Algorithms #AI #DivisionofLabor

"Canada’s major news organizations have sued tech firm OpenAI for potentially billions of dollars, alleging the company is “strip-mining journalism” and unjustly enriching itself by using news articles to train its popular ChatGPT software.

The suit, filed on Friday in Ontario’s superior court of justice, calls for punitive damages, a share of profits made by OpenAI from using the news organizations’ articles, and an injunction barring the San Francisco-based company from using any of the news articles in the future.

“These artificial intelligence companies cannibalize proprietary content and are free-riding on the backs of news publishers who invest real money to employ real journalists who produce real stories for real people,” said Paul Deegan, president of News Media Canada.

“They are strip-mining journalism while substantially, unjustly and unlawfully enriching themselves to the detriment of publishers.”

The litigants include the Globe and Mail, the Canadian Press, the CBC, the Toronto Star, Metroland Media and Postmedia. They want up to C$20,000 in damages for each article used by OpenAI, suggesting a victory in court could be worth billions."

https://www.theguardian.com/world/2024/nov/29/canada-media-companies-sue-openai-chatgpt

#AI #GenerativeAI #AITraining #RentExtraction #Rentism #Feudalism #Copyright #IP

"Anyone teaching about AI has some excellent material to work with in this book. There are chewy examples for a classroom discussion such as ‘Why did the Fragile Families Challenge End in Disappointment?’; and multiple sections in the chapter ‘the long road to generative AI’. In addition the Substack newsletter that this book was written through offers a section called ‘Book Exercises’. Interestingly, some parts of this book were developed by Narayanan developing classes in partnership Princeton quantitative sociologist, Matt Salganik. As Narayanan writes, nothing makes you learn and understand something as much as teaching it to others does. I hope they write about collaborating across disciplinary lines, which remains a challenge for many of us working on AI."

https://www.lcfi.ac.uk/news-events/blog/post/aisnakeoil

#AI #PredictiveAI #GenerativeAI #STS #SnakeOil

"Google got some disappointing news at a status conference Tuesday, where US District Judge Amit Mehta suggested that Google's AI products may be restricted as an appropriate remedy following the government's win in the search monopoly trial.

According to Law360, Mehta said that "the recent emergence of AI products that are intended to mimic the functionality of search engines" is rapidly shifting the search market. Because the judge is now weighing preventive measures to combat Google's anticompetitive behavior, the judge wants to hear much more about how each side views AI's role in Google's search empire during the remedies stage of litigation than he did during the search trial.

"AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase," Mehta said. "Is that because of the remedies being requested? Perhaps. But is it also potentially because the market that we have all been discussing has shifted?"

To fight the DOJ's proposed remedies, Google is seemingly dragging its major AI rivals into the trial. Trying to prove that remedies would harm Google's ability to compete, the tech company is currently trying to pry into Microsoft's AI deals, including its $13 billion investment in OpenAI, Law360 reported. At least preliminarily, Mehta has agreed that information Google is seeking from rivals has "core relevance" to the remedies litigation, Law360 reported."

https://arstechnica.com/tech-policy/2024/11/google-drags-ai-rivals-into-search-trial-as-judge-entertains-ai-remedies/

#USA #Google #BigTech #Antitrust #Search #SearchEngines #AI #GenerativeAI

"In sum, if the TDM leading up to the model took place outside the EU, then EU copyright law does not require GPAI model providers to ensure that the resulting model complies with Article 4 CDSMD. Therefore, even if this recital is turned into a binding obligation by national law, its violation does not amount to copyright infringement. It would only be a violation of the AI Act. Even then, since this particular obligation refers back to the “policies to respect copyright” obligation, it seems odd to impose a sanction on a provider for failing to comply with EU copyright law when that provider has, in fact, respected the applicable copyright rules. It seems even stranger to recognize such a deviation from the core principles of EU copyright law based on a recital in a legislative instrument that is only tangentially related to copyright."

https://copyrightblog.kluweriplaw.com/2024/11/28/copyright-the-ai-act-and-extraterritoriality/

#AI #EU #AIAct #GenerativeAI #AITraining #Copyright #CDSMD #Extraterritoriality #TDM

"Dr Amy Thomas and Dr Arthur Ehlinger, two of the researchers who worked on the report at the University of Glasgow, said artists were finding their earnings were being squeezed by a combination of funding cuts, inflation and the rise of AI.

One artist interviewed said their rent had risen by 40% in the last four years, forcing them to go on to universal credit, while Arts Council England’s funding has been slashed by 30% since the survey was last conducted in 2010.

Zimmermann said: “AI is a big factor that has started to affect entry level and lower-paid jobs. But it’s also funding cuts: charities are going under, businesses are closing down, the financial pressure on the arts is growing.”

“It’s very tempting to lay the blame at the feet of AI,” said Thomas, “but I think it is the straw that broke the camel’s back. It’s like we’ve been playing a game of KerPlunk where you keep taking out different bits of funding and see how little you can sustain a career with.”

The artist Larry Achiampong, who had a break-out year in 2022 with his Wayfinder solo show, said the fees artists receive have plummeted."

https://www.theguardian.com/artanddesign/2024/nov/25/britain-faces-talent-drain-visual-artists-earnings-fall

#UK #VisualArts #Art #ArtsFunding #Neoliberalism #Austerity #AI #GenerativeAI

"Tech companies Amazon, Google and Meta have been criticised by a Senate select committee inquiry for being especially vague over how they used Australian data to train their powerful artificial intelligence products.

Labor senator Tony Sheldon, the inquiry’s chair, was frustrated by the multinationals’ refusal to answer direct questions about their use of Australians’ private and personal information.

“Watching Amazon, Meta, and Google dodge questions during the hearings was like sitting through a cheap magic trick – plenty of hand-waving, a puff of smoke, and nothing to show for it in the end,” Sheldon said in a statement, after releasing the final report of the inquiry on Tuesday.

He called the tech companies “pirates” that were “pillaging our culture, data, and creativity for their gain while leaving Australians empty-handed.”

The report found some general-purpose AI models – such as OpenAI’s GPT, Meta’s Llama and Google’s Gemini – should automatically default to a “high risk” category, and be subjected to mandated transparency and accountability requirements."

https://www.theguardian.com/technology/2024/nov/27/amazon-google-and-meta-are-pillaging-culture-data-and-creativity-to-train-ai-australian-inquiry-finds

#AI #GenerativeAI #BigTech #Amazon #Australia #Google #Meta #Gemini #OpenAI #Llama

"Workers should have the right to know which of their data is being collected, who it's being shared by, and how it's being used. We all should have that right. That's what the actors' strike was partly motivated by: actors who were being ordered to wear mocap suits to produce data that could be used to produce a digital double of them, "training their replacement," but the replacement was a deepfake.

With a Trump administration on the horizon, the future of the FTC is in doubt. But the coalition for a new privacy law includes many of Trumpland's most powerful blocs – like Jan 6 rioters whose location was swept up by Google and handed over to the FBI. A strong privacy law would protect their Fourth Amendment rights – but also the rights of BLM protesters who experienced this far more often, and with far worse consequences, than the insurrectionists.

The "we do it with an app, so it's not illegal" ruse is wearing thinner by the day. When you have a boss for an app, your real boss gets an accountability sink, a convenient scapegoat that can be blamed for your misery.

The fact that this makes you worse at your job, that it loses your boss money, is no guarantee that you will be spared. Rich people make great marks, and they can remain irrational longer than you can remain solvent. Markets won't solve this one – but worker power can."

https://pluralistic.net/2024/11/26/hawtch-hawtch/#you-treasure-what-you-measure

#Work #WageSlavery #WorkerSurveillance #Bossware #Privacy #AI #DataProtection #FTC #USA

"The authors say that OpenAI’s early access program to Sora exploits artists for free labor and “art washing,” or lending artistic credibility to a corporate product. They criticize the company, which recently raised billions of dollars at a $150 billion valuation, for having hundreds of artists provide unpaid testing and feedback.

They also object to OpenAI’s content approval requirements for Sora, which apparently state that “every output needs to be approved by the OpenAI team before sharing.”

When contacted by The Verge, OpenAI would not confirm on the record if the alleged Sora leak was authentic or not. Instead, the company stressed that participation in its “research preview” is “voluntary, with no obligation to provide feedback or use the tool.”"

https://www.theverge.com/2024/11/26/24306879/openai-sora-video-ai-model-leak-artist-protest

#AI #GenerativeAI #Sora #GeneratedVideos #AITraining

"The familiar narrative is that artificial intelligence will take away human jobs: machine-learning will let cars, computers and chatbots teach themselves - making us humans obsolete.

Well, that's not very likely, and we're gonna tell you why. There's a growing global army of millions toiling to make AI run smoothly. They're called "humans in the loop:" people sorting, labeling, and sifting reams of data to train and improve AI for companies like Meta, OpenAI, Microsoft and Google. It's gruntwork that needs to be done accurately, fast, and - to do it cheaply – it's often farmed out to places like Africa –

Naftali Wambalo: The robots or the machines, you are teaching them how to think like human, to do things like human.

We met Naftali Wambalo in Nairobi, Kenya, one of the main hubs for this kind of work. It's a country desperate for jobs… because of an unemployment rate as high as 67% among young people. So Naftali, father of two, college educated with a degree in mathematics, was elated to finally find work in an emerging field: artificial intelligence."

https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/

#Kenya #AI #GenerativeAI #Fauxtomation #DataLabeling #OpenAI #Meta

"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

https://theconversation.com/ai-harm-is-often-behind-the-scenes-and-builds-over-time-a-legal-scholar-explains-how-the-law-can-adapt-to-respond-240080

#AI #GenerativeAI #AlgorithmicBias #AIRegulation

"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

https://theconversation.com/ai-harm-is-often-behind-the-scenes-and-builds-over-time-a-legal-scholar-explains-how-the-law-can-adapt-to-respond-240080

#AI #GenerativeAI #AlgorithmicBias #AIRegulation