Miguel Afonso Caetano
Senior Technical Writer @ Opplane (Lisbon, Portugal). PhD in Communication Sciences (ISCTE-IUL). Past: technology journalist, blogger & communication researcher. #TechnicalWriting #WebDev #WebDevelopment #OpenSource #FLOSS #SoftwareDevelopment #IP #PoliticalEconomy #Communication #Media #Copyright #Music #Cities #Urbanism

"Amazon’s race to create an AI-based successor to its voice assistant Alexa has hit more snags after a series of earlier setbacks over the past year. Employees have found there is too much of a delay between asking the technology for something and the new Alexa providing a response or completing a task.

The problem, known as latency, is a critical shortcoming, employees said in an internal memo from earlier this month obtained by Fortune. If released as is, customers could become frustrated and the product—a particularly critical one to Amazon as it tries to keep up in the crucial battle to launch blockbuster consumer AI products—could end up as a failure, some employees fear.

“Latency remains a critical issue requiring significant improvements,” before the new version of Alexa could launch, the memo said."

https://fortune.com/2024/11/18/new-ai-alexa-latency-problems-echo-compatibility-uber-opentable/

#AI #GenerativeAI #Amazon #Alexa #AIAgents #AIBubble #AIHype

"Meta’s open large language model family, Llama, isn’t “open-source” in a traditional sense, but it’s freely available to download and build on—and national defense agencies are among those putting it to use.

A recent Reuters report detailed how Chinese researchers fine-tuned Llama’s model on military records to create a tool for analyzing military intelligence. Meta’s director of public policy called the use “unauthorized.” But three days later, Nick Clegg, Meta’s president of public affairs, announced that Meta will allow use of Llama for U.S. national security.

“It shows that a lot of the guardrails that are put around these models are fluid,” says Ben Brooks, a fellow at Harvard’s Berkman Klein Center for Internet and Society. He adds that “safety and security depends on layers of mitigation.”"

https://spectrum.ieee.org/ai-used-by-military

#AI #GenerativeAI #AIWarfare #AISafety #DoD #Meta #Llama

#AI #BigTech #SiliconValley #GenerativeAI #AIBubble #AIHype: "Big Tech’s AI spending continues to accelerate at a blistering pace, with the four giants well on track to spend upwards of a quarter trillion dollars predominantly towards AI infrastructure next year.

Though there have recently been concerns about the durability of this AI spending from Big Tech and others downstream, these fears have been assuaged, with management teams stepping out to highlight AI revenue streams approaching and surpassing $10 billion with demand still outpacing capacity.

Below, I take a look at the growth in AI spending from Big Tech this year as it quickly approaches the quarter-trillion mark, and next week, I’ll discuss exactly what this means for the market’s biggest beneficiary."

https://www.forbes.com/sites/bethkindig/2024/11/14/ai-spending-to-exceed-a-quarter-trillion-next-year/

"The facts on the ground show the reality: 15 years after Bitcoin was created, no-one uses it as a currency. It’s an asset. It’s still far slower and more expensive even for complex international money transfers than the “old world” financial system.

Bitcoin and the other currencies around it are barely even useful for criminals: online drug markets proved much easier for law enforcement to infiltrate than traditional organised crime networks, and the traceability of crypto transactions has brought down crime kingpins across the globe.

The failure of Bitcoin – or any other cryptocurrency – to actually get used as a currency has led to people searching for more outlandish explanations as to why it might have value, rather than just a high price. It was once claimed it would topple the existing financial order, but its embrace by hedge funds, big tech, and other financial-world staples makes those claims ring hollow.

There is a deeply confused claim that because Bitcoin mining requires huge amounts of energy, that means that Bitcoins are themselves a store of energy – almost like a digital battery. This sounds clever to a certain sort of galaxy brained young man, but incredibly dumb to everyone else. The latter group are correct: a forest fire might incinerate a huge number of trees, but that does not make fire a store of wood."

https://www.theneweuropean.co.uk/thanks-to-trump-bitcoin-is-booming-but-it-is-still-useless/

#USA #Trump #Crypto #Cryptocurrencies #Bitcoin

"AI researchers are still grappling for the right metaphors to understand our enigmatic creations. But as we humans make choices on how we deploy and use these systems, how we study them, and how we craft and apply laws and regulations to keep them safe and ethical, we need to be acutely aware of the often unconscious metaphors that shape our evolving understanding of the nature of their intelligence."

https://www.science.org/doi/full/10.1126/science.adt6140?af=R

#AI #GenerativeAI #LLMs #ChatBots #Metaphors #Language #Rhetoric

"Elon Musk might be in charge of the business of Grok, but the artificial intelligence has seemingly gone into business for itself, labeling Musk as one of the worst offenders when it comes to spreading misinformation online.

User Gary Koepnick asked the AI which person spreads the most misinformation on Twitter/X—and the service did not hesitate in pointing a finger at its creator.

“Based on various analyses, social media sentiment, and reports, Elon Musk has been identified as one of the most significant spreaders of misinformation on X since he acquired the platform,” it wrote, later adding “Musk has made numerous posts that have been criticized for promoting or endorsing misinformation, especially related to political events, elections, health issues like COVID-19, and conspiracy theories. His endorsements or interactions with content from controversial figures or accounts with a history of spreading misinformation have also contributed to this perception.”"

https://www.msn.com/en-in/news/techandscience/elon-musk-s-ai-turns-on-him-labels-him-one-of-the-most-significant-spreaders-of-misinformation-on-x/ar-AA1u5Ioe

#AI #GenerativeAI #SocialMedia #Twitter #Musk #Disinformation

I'm not sure these kinds of tools are legal in the European Union...

"Fast-forward to today, and millions of artists have deployed two tools born from that Zoom: Glaze and Nightshade, which were developed by Zhao and the University of Chicago’s SAND Lab (an acronym for “security, algorithms, networking, and data”).

Arguably the most prominent weapons in an artist’s arsenal against nonconsensual AI scraping, Glaze and Nightshade work in similar ways: by adding what the researchers call “barely perceptible” perturbations to an image’s pixels so that machine-learning models cannot read them properly. Glaze, which has been downloaded more than 6 million times since it launched in March 2023, adds what’s effectively a secret cloak to images that prevents AI algorithms from picking up on and copying an artist’s style. Nightshade, which I wrote about when it was released almost exactly a year ago this fall, cranks up the offensive against AI companies by adding an invisible layer of poison to images, which can break AI models; it has been downloaded more than 1.6 million times.

Thanks to the tools, “I’m able to post my work online,” Ortiz says, “and that’s pretty huge.” For artists like her, being seen online is crucial to getting more work. If they are uncomfortable about ending up in a massive for-profit AI model without compensation, the only option is to delete their work from the internet. That would mean career suicide."

https://www.technologyreview.com/2024/11/13/1106837/ai-data-posioning-nightshade-glaze-art-university-of-chicago-exploitation/

#AI #GenerativeAI #WebScraping #AITraining #GeneratedImages
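The "barely perceptible perturbations" idea is easy to illustrate with a toy example. To be clear, this is not the actual Glaze or Nightshade algorithm (those compute targeted adversarial perturbations against a model's feature extractor); it only sketches what a bounded, invisible pixel-level change looks like:

```python
import random

random.seed(0)
# 8x8 grayscale "image" with pixel values 0-255
image = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]

epsilon = 2.0  # maximum per-pixel change, well below what the eye notices
cloaked = [
    [min(255.0, max(0.0, px + random.uniform(-epsilon, epsilon))) for px in row]
    for row in image
]

# To a human viewer the two images are indistinguishable:
max_change = max(
    abs(c - p) for crow, prow in zip(cloaked, image) for c, p in zip(crow, prow)
)
```

The real tools choose each perturbation adversarially so that a scraper's model misreads the artist's style, rather than adding random noise as here.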

"The fact that OpenAI did not publish the absolute values of the above chart’s x-axis is telling. If you have a way of outcompeting human experts on STEM tasks, but it costs $1B to run on a day's worth of tasks, you can't get to a capabilities explosion, which is the main thing that makes the idea of artificial general intelligence (AGI) so compelling to many people.

Additionally, the y-axis is not on a log scale, while the x-axis is, meaning that cost increases exponentially for linear returns to performance (i.e. you get diminishing marginal returns to ‘thinking’ longer on a task).

This reminds me of quantum computers or fusion reactors — we can build them, but the economics are far from working. Technical breakthroughs are only one piece of the puzzle. You also need to be able to scale (not that this will come as news to Silicon Valley).

Smarter base models could decrease the amount of test-time compute needed to complete certain tasks, but scaling up the base models would also increase the inference cost (i.e. the price of prompting the model). It’s not clear which effect would dominate, and the answer may depend on the task. And if researchers really are reaching a plateau, they may be stuck with base models only marginally smarter than what’s published right now."

https://garrisonlovely.substack.com/p/is-deep-learning-actually-hitting

#AI #GenerativeAI #DeepLearning #PeakAI #AGI #AIBubble #AIHype
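The point about the log-scaled x-axis is worth making concrete: if performance grows with the logarithm of compute spend, every tenfold increase in cost buys the same fixed gain in score. A minimal sketch with made-up constants (not figures from the article):

```python
import math

# Hypothetical fit: score = a + b * log10(cost). With a log-scaled x-axis
# and a linear y-axis, this plots as a straight line, which is exactly the
# "exponential cost for linear returns" shape described above.
def score(cost_dollars, a=10.0, b=5.0):
    return a + b * math.log10(cost_dollars)

# Gain from each 10x step in spend: $100 -> $1k -> $10k -> $100k -> $1M
gains = [score(10 ** (k + 1)) - score(10 ** k) for k in range(1, 5)]
```

Every tenfold jump in spend yields the same constant gain, which is why the economics stop working long before the curve flattens on the chart.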

"The companies are facing several challenges. It’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems. Orion’s unsatisfactory coding performance was due in part to the lack of sufficient coding data to train on, two people said. At the same time, even modest improvements may not be enough to justify the tremendous costs associated with building and operating new models, or to live up to the expectations that come with branding a product as a major upgrade.

There is plenty of potential to make these models better. OpenAI has been putting Orion through a months-long process often referred to as post-training, according to one of the people. That procedure, which is routine before a company releases new AI software publicly, includes incorporating human feedback to improve responses and refining the tone for how the model should interact with users, among other things. But Orion is still not at the level OpenAI would want in order to release it to users, and the company is unlikely to roll out the system until early next year, one person said. The Information previously reported some details of OpenAI’s challenges developing its new model, including with coding tasks."

https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai

#AI #GenerativeAI #PeakAI #OpenAI #AIBubble #AIHype #AGI

"OpenAI is preparing to launch a new artificial intelligence agent codenamed “Operator” that can use a computer to take actions on a person’s behalf, such as writing code or booking travel, according to two people familiar with the matter.

In a staff meeting on Wednesday, OpenAI’s leadership announced plans to release the tool in January as a research preview and through the company’s application programming interface for developers, said one of the people, who spoke on the condition of anonymity to discuss internal matters."

https://www.bloomberg.com/news/articles/2024-11-13/openai-nears-launch-of-ai-agents-to-automate-tasks-for-users

#AI #AIAgents #OpenAI #PeakAI #Automation

"Character.AI is an explosively popular startup — with $2.7 billion in financial backing from Google — that allows its tens of millions of users to interact with chatbots that have been outfitted with various personalities.

With that type of funding and scale, not to mention its popularity with young users, you might assume the service is carefully moderated. Instead, many of the bots on Character.AI are profoundly disturbing — including numerous characters that seem designed to roleplay scenarios of child sexual abuse.

Consider a bot we found named Anderley, described on its public profile as having "pedophilic and abusive tendencies" and "Nazi sympathies," and which has held more than 1,400 conversations with users."

https://futurism.com/character-ai-pedophile-chatbots

#AI #GenerativeAI #Chatbots #CharacterAI #Google

"An AI chatbot called “FungiFriend” was added to a popular mushroom identification Facebook group Tuesday. It then told users there how to “sauté in butter” a potentially dangerous mushroom, signaling again the high level of risk that AI chatbots and tools pose to people who forage for mushrooms.

404 Media has previously reported on the prevalence and risk of AI tools intersecting with the mushroom foraging hobby. We reported on AI-generated mushroom foraging books on Amazon and the fact that Google image search has shown AI-generated images of mushrooms as top search results. On Tuesday, the FungiFriend AI chatbot was added to the Northeast Mushroom Identification & Discussion Facebook group, which has 13,500 members and is a place where beginner mushroom foragers often ask others for help identifying the mushrooms they have found in the wild. A moderator for the group said that the bot was automatically added by Meta and that “we are most certainly removing it from here.” Meta did not immediately respond to a request for comment.

The bot is personified as a bearded, psychedelic wizard. Meta recently began adding AI chatbots into specific groups, and has also created different character AIs."

https://www.404media.co/ai-chatbot-added-to-mushroom-foraging-facebook-group-immediately-gives-tips-for-cooking-dangerous-mushroom/

#AI #GenerativeAI #SocialMedia #Facebook #Chatbots #Hallucinations #Disinformation

"To conduct our study, we analyzed 1,388,711 job posts from a leading global online freelancing platform from July 2021 to July 2023. Online freelancing platforms provide a good setting for examining emerging trends due to the digital, task-oriented, and flexible nature of work on these platforms. We focus our analysis on the introduction of two types of gen AI tools: ChatGPT and image-generating AI. Specifically, we wanted to understand whether the introduction and diffusion of these tools decreased demand for jobs on this platform and, if so, which types of jobs and skills are affected most and by how much.

Using a machine learning algorithm, we first grouped job posts into different categories based on their detailed job descriptions. These categories were then classified into three types: manual-intensive jobs (e.g., data and office management, video services, and audio services), automation-prone jobs (e.g., writing; software, app, and web development; engineering), and image-generating jobs (e.g., graphic design and 3D modeling). We then examined the impact that the introduction of Gen AI tools had on demand across these different types of jobs.

We find that the introduction of ChatGPT and image-generating tools led to nearly immediate decreases in posts for online gig workers across job types, but particularly for automation-prone jobs. After the introduction of ChatGPT, there was a 21% decrease in the weekly number of posts in automation-prone jobs compared to manual-intensive jobs. Writing jobs were affected the most (30.37% decrease), followed by software, app, and web development (20.62%) and engineering (10.42%).

A similar magnitude of decline in demand was observed after popular image-generating AI tools (including Midjourney, Stable Diffusion, and DALL-E 2) were introduced."

https://hbr.org/2024/11/research-how-gen-ai-is-already-impacting-the-labor-market

#AI #GenerativeAI #Automation #Unemployment #Freelancing #GigEconomy

"A Home Office artificial intelligence tool which proposes enforcement action against adult and child migrants could make it too easy for officials to rubberstamp automated life-changing decisions, campaigners have said.

As new details of the AI-powered immigration enforcement system emerged, critics called it a “robo-caseworker” that could “encode injustices” because an algorithm is involved in shaping decisions, including returning people to their home countries.

The government describes it as a “rules-based” rather than AI system, as it does not involve machine-learning from data, and insists it delivers efficiencies by prioritising work and that a human remains responsible for each decision. The system is being used amid a rising caseload of asylum seekers who are subject to removal action, currently about 41,000 people.

Migrant rights campaigners called for the Home Office to withdraw the system, claiming it was “technology being used to make cruelty and harm more efficient”."

https://www.theguardian.com/uk-news/2024/nov/11/ai-tool-could-influence-home-office-immigration-decisions-critics-say

#UK #AI #PredictiveAI #Algorithms #PredictiveAlgorithms #Immigration #AsylumSeekers

"The artificial intelligence boom isn't just consuming massive amounts of energy and water: It's also creating an unprecedented tsunami of electronic waste.

According to Stanford University, private investment in AI went from $3 billion in 2022 to $25 billion last year, with companies adopting AI tools faster than ever. This surge is forcing data centers to continually upgrade their hardware, discarding still-functional equipment in a race to maintain competitive edge.

This massive use of components to fuel the hardware that runs AI models is throwing off millions of tons of discarded electronic components. A new study published in Nature by a team of researchers from China, Israel, and the UK estimates that large language models (LLMs) like ChatGPT, Claude or LlaMa alone could generate 2.75 million tons (2.5 million tonnes) of e-waste annually, severely increasing the environmental impact of AI."

https://decrypt-co.cdn.ampproject.org/c/s/decrypt.co/290638/ai-boom-e-waste-toxic-materials-2030?amp=1

#AI #GenerativeAI #ElectronicWaste #EWaste #ToxicMaterials #Hardware #Environment

"I’ve outlined each step of my process to illustrate the iterative nature of working with AI tools, which become like power tools in the hands of skilled writers. Simply asking the AI to generate a final draft of the documentation based only on the original meeting transcript would likely lead to a poor result, prompting the writer to mistakenly conclude that AI tools are unsuitable for documentation tasks like this. But by using an approach that involves multiple processes and iterative drafts, the AI tools produce a more intelligent outcome.

Some might argue that this extensive iterative process — with its numerous drafts, prompts, and reviews — negates the time-saving benefits of using AI tools. Why not just write it yourself from scratch? However, I still think writing the draft from scratch would be more time-consuming and be less accurate. Although it seems like a lot of iterative steps, remember that the AI performs each step in a matter of seconds. Additionally, writing sentences from scratch consumes a lot of mental effort, whereas using AI tools is a more mechanical task with only judgement and editorial acumen needed. For instance, I could likely watch a football game or a TV show while guiding the AI through these steps. In contrast, drafting the content manually would likely exhaust me within two hours."

https://idratherbewriting.com/ai/prompt-engineering-iterative-chain-of-thought.html

#AI #GenerativeAI #PromptEngineering #LLMs #Chatbots #TechnicalWriting #SoftwareDocumentation #APIDocumentation #SoftwareDevelopment
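The iterative workflow he describes boils down to something like the following. This is a hypothetical illustration, not his actual prompts; `complete` is a stand-in for whatever chat-model API a writer would call:

```python
# Each prompt feeds the previous draft back to the model, instead of asking
# for a final draft from the raw transcript in one shot.
def complete(prompt: str) -> str:
    # In practice this would call a chat-model API; here it just tags the
    # text so the shape of the pipeline is visible.
    return f"[draft after: {prompt.splitlines()[0]}]"

transcript = "raw meeting transcript ..."
steps = [
    "Summarize the key decisions in this transcript.",
    "Turn the summary into a documentation outline.",
    "Expand the outline into a full draft, flagging anything uncertain.",
    "Edit the draft for tone, concision, and style-guide compliance.",
]

draft = transcript
for step in steps:
    # The writer reviews and adjusts between each step; each model call
    # takes seconds, which is where the claimed time savings come from.
    draft = complete(step + "\n\n" + draft)
```

The judgement lives in the review between steps, not in the individual calls, which is his argument for why the many iterations don't negate the time savings.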

"Last week, the Vatican unveiled Luce, a Japanese-style cartoon character that will serve as the Catholic Church’s mascot for its upcoming jubilee year, as well as its Expo 2025 in Osaka, Japan.

Archbishop Rino Fisichella, the Vatican's chief organizer for the jubilee year who presented Luce to the world, said that the mascot was "created from the desire to enter into the world of pop culture, so beloved by our young people".

I believe his excellency should have considered his desires more carefully, because there is no clearer sign that Luce has indeed entered pop culture and is beloved by young people than the fact that there are now dozens of AI-generated hardcore pornographic images of her on the internet.

On Civitai, a site for sharing custom AI models and generating images, users have created at least a dozen different Luce-themed AI models specifically for generating images of the Vatican’s mascot."

https://www.404media.co/luce-porn-vatican/

#AI #GenerativeAI #Vatican #CatholicChurch

"Trump is symptom, not cause, of the “crisis of democracy.” Trump did not turn the nation in a hard-right direction, and if the liberal political establishment doesn’t ask what wind he caught in his sails, it will remain clueless about the wellsprings and fuel of contemporary antidemocratic thinking and practices. It will ignore the cratered prospects and anxiety of the working and middle classes wrought by neoliberalism and financialization; the unconscionable alignment of the Democratic Party with those forces for decades; a scandalously unaccountable and largely bought mainstream media and the challenges of siloed social media; neoliberalism’s direct and indirect assault on democratic principles and practices; degraded and denigrated public education; and mounting anxiety about constitutional democracy’s seeming inability to meet the greatest challenges of our time, especially but not only the climate catastrophe and the devastating global deformations and inequalities emanating from two centuries of Euro-Atlantic empire. Without facing these things, we will not develop democratic prospects for the coming century."

https://www.bostonreview.net/articles/the-violent-exhaustion-of-liberal-democracy/

#USA #Democracy #Neoliberalism #Trump #Fascism

"This article explores the intricate intersection of copyright law and large language models (LLMs), a cutting-edge artificial intelligence technology that has rapidly gained prominence. The authors provide a comprehensive analysis of the copyright implications arising from the training, fine-tuning, and use of LLMs, which often involve the ingestion of vast amounts of copyrighted material. The paper begins by elucidating the technical aspects of LLMs, including tokenization, word embeddings, and the various stages of LLM development. This technical foundation is crucial for understanding the subsequent legal analysis. The authors then delve into the copyright law aspects, examining potential infringement issues related to both inputs and outputs of LLMs. A comparative legal analysis is presented, focusing on the United States, European Union, United Kingdom, Japan, Singapore, and Switzerland. The article scrutinizes relevant copyright exceptions and limitations in these jurisdictions, including fair use in the US and text and data mining exceptions in the EU. The authors highlight the uncertainties and challenges in applying these legal concepts to LLMs, particularly in light of recent court decisions and legislative developments. The paper also addresses the potential impact of the EU's AI Act on copyright considerations, including its extraterritorial effects. Furthermore, it explores the concept of "making available" in the context of LLMs and its implications for copyright infringement. Recognizing the legal uncertainties and the need for a balanced approach that fosters both innovation and copyright protection, the authors propose licensing as a key solution. They advocate for a combination of direct and collective licensing models to provide a practical framework for the responsible use of copyrighted materials in AI systems."

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4963711

#AI #GenerativeAI #AITraining #LLMs #Copyright #IP #FairUse #TDM

"It is true that Silicon Valley is home to a horde of reactionaries that pine for a modernity free of liberalism, as well as gripped by a delusion that society would be best served under their dominion. But why? If we are going to offer a theory such as techno-authoritarianism, we have to have some theory about why there is such a break. LaFrance offers us a tautology: these individuals accelerating the pace of our innovation and the triumph of digital technologies owe it to ideas that prioritize accelerating our innovation and the triumph of digital technologies.

The sad truth is this: Silicon Valley and its reactionaries are chickens coming home to roost. Silicon Valley is not some Promethean flame we smuggled from Olympos. Technology is not some primordial force corrupted by libertarian nerds with too much money. It's of the real world, it's material, it's influenced by the forces of history as much as anything else. Those forces come out of choosing to prioritize technology that helps us undertake surveillance for commerce, advertising, imperial adventure, speculation, and political suppression. The existential threat posed by today's tech capitalist overlords will get much worse. The funny ideas they have about their divine right to rule, their suspicion of competition and democracy, and other people who do not look like them—well, that's how we got here in the first place."

https://thetechbubble.substack.com/p/ai-slavery-surveillance-and-capitalism

#AI #Capitalism #Surveillance #SurveillanceCapitalism #TechnoFeudalism #SocialControl #Automation #TechnoAuthoritarianism #Slavery