#Israel #Palestine #Gaza #BigTech #AI #ML: "Israel launched its assault on Gaza after Hamas fighters stormed into southern Israel on Oct. 7 last year, killing 1,200 people and capturing around 250 hostages, according to Israeli tallies.
Goodfriend said the conflict in Gaza revealed the "lethal" effect of the application of high-tech systems in war.
"The scale of the destruction has made it hard to see any of this technology as neutral - and it's made a lot of people within the tech industry quite critical of supplying systems that are driving warfare," she told Context.
"There's no way Israel could have the technical infrastructure that they do without the support of private companies, and cloud computing infrastructure - they just wouldn't be able to operate their AI systems without the major tech conglomerates.""
https://www.context.news/big-tech/one-year-of-war-in-gaza-decoding-the-role-of-big-tech
#AI #GenerativeAI #LLMs #DataCenters #ChatGPT #Water #WaterScarcity: "You may be hungry for knowledge, but your chatbot is thirsty for the world’s water supplies. The huge computer clusters powering ChatGPT need four times as much water to deliver answers as previously thought, it has been claimed.
Using the chatbot for between ten and 50 queries consumes about two litres of water, according to experts from the University of California, Riverside.
A pre-print study from the academics, which was released last year, estimated that one 500ml bottle was used for this volume of queries, but they have now discovered it underestimated the problem.
Technology companies developing powerful artificial intelligence use water for cooling, power generation and in manufacturing chips.
The study, entitled Making AI Less Thirsty, looked at an earlier version of ChatGPT (GPT-3) and will be published in the Communications of the ACM magazine."
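A quick back-of-the-envelope check of the figures quoted above (a minimal sketch in Python; the 2-litre and 500 ml estimates and the 10–50 query range come from the article, everything else is plain arithmetic):

```python
# Back-of-the-envelope check of the water figures quoted above.
# Inputs come from the article: the revised estimate is ~2 litres
# per 10-50 ChatGPT queries; the earlier pre-print assumed ~500 ml.

revised_litres = 2.0      # revised estimate for 10-50 queries
earlier_litres = 0.5      # earlier estimate (one 500 ml bottle)
queries_low, queries_high = 10, 50

underestimate_factor = revised_litres / earlier_litres
per_query_low = revised_litres / queries_high   # best case: 2 L spread over 50 queries
per_query_high = revised_litres / queries_low   # worst case: 2 L over only 10 queries

print(f"Revision factor: {underestimate_factor:.0f}x")                              # 4x
print(f"Water per query: {per_query_low*1000:.0f}-{per_query_high*1000:.0f} ml")    # 40-200 ml
```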
#AI #GenerativeAI #Algorithms #DigitalLiteracy #AILiteracy: "We are experiencing a massive and volatile expansion of AI-based products and services. The current intermeshing of digital technologies, people, and society is shaping how we live and bringing algorithms to the forefront of decision making. The algorithmification of society and the narratives used to make it appear inevitable serve specific interests, mostly profitable for and controlled by few actors. It is not AI in itself, but the utilitarian sophistication of optimisation mechanisms and the power structures behind them that profit from controlling all that we do, when and how we do it, our behaviours, and even ourselves. In education, this is of serious concern as academia is gradually moving to uncertain dependencies on corporate interests. This paper calls for radical changes in dealing with the AI narratives that have monopolised recent public debates and discussions. It sheds light on the key terminology surrounding today's AI algorithms and the technological background that makes them possible. It shows examples of the negative impacts and the implications of not addressing or ignoring certain issues, especially in education. This paper also suggests good practices through consistent advocacy, grounded materials, and critical work on digital literacy, particularly AI literacy."
#AI #GenerativeAI #AIHype #AIBubble #OpenAI: "At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future so we don’t get too caught up in the underwhelming details of the present. Why focus on how AI is being used to harass and exploit children when you can imagine the ways it will make your life easier? It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.
Remember, these technologies already have a track record. The world can and should evaluate them, and the people building them, based on their results and their effects, not solely on their supposed potential."
#Efficiency #AlgorithmicManagement #AI #HumanRights: "[R]esearchers from MIT find that focusing solely on efficiency can lower employee satisfaction, wellbeing, and performance in the long-run by treating workers like “cogs in a machine” or triggering employees to continue working to the point of exhaustion.
As Interfaith Center on Corporate Responsibility (ICCR), a coalition of faith-based and values-based investors, and OpenMIC, a nonprofit focused on responsible use of digital technologies, explain in their new report, Dehumanization, Discrimination and Deskilling: The Impact of Digital Tech on Low-Wage Workers, a critical element of algorithmic management systems is the monitoring and surveillance of workers in violation of their human rights. More specifically, the report outlines a number of human rights impacted by digital technologies, including the right to privacy (Article 12 in the 1948 Universal Declaration of Human Rights) and occupational safety and health (ILO Convention 155 in 1981 and 187 in 2006).
Worker well-being, satisfaction, and human rights should matter to all investors. When algorithmic management systems detract from them, the resulting higher worker turnover, higher injury rates, increasing regulatory fines and sanctions, and increasing regulation can materially dampen long-term value creation."
#AI #GenerativeAI #OpenAI #AIBubble #AIHype: "At a valuation of $157bn, OpenAI investors are paying about 13 times the company’s estimated $12bn in 2025 revenue. That might seem modest: fellow AI icon Nvidia trades at 18 times, according to LSEG data. Then again, Nvidia is lavishly profitable, while OpenAI is burning $5bn a year. True, losses are common in tech. Back in 2017, Tesla had $12bn of revenue and $2bn of losses. The carmaker’s market capitalisation, though, was a more modest $50bn.
For now, OpenAI is less of a company, and more of an idea. Granted, it’s an exciting one. If Altman’s models scale Olympian heights, so might his company’s value, as Tesla’s did. But investors making that call today must surely be powered more by instinct than intelligence."
https://www.ft.com/content/55fac2af-8eac-4214-ae15-f5ce349c1a9e
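For reference, the multiples in the excerpt follow directly from the figures it cites (a rough arithmetic sketch; all numbers are the ones quoted, in $bn):

```python
# Rough price-to-revenue multiples implied by the figures quoted above
# (all numbers in $bn, taken from the excerpt; this is just the arithmetic).

openai_valuation, openai_revenue = 157, 12     # valuation vs estimated 2025 revenue
tesla_mcap_2017, tesla_revenue_2017 = 50, 12   # Tesla's 2017 figures as quoted

openai_multiple = openai_valuation / openai_revenue     # ~13x
tesla_multiple = tesla_mcap_2017 / tesla_revenue_2017   # ~4x
nvidia_multiple = 18                                    # as quoted (LSEG data)

print(f"OpenAI    : {openai_multiple:.1f}x estimated 2025 revenue")
print(f"Nvidia    : {nvidia_multiple}x (quoted)")
print(f"Tesla 2017: {tesla_multiple:.1f}x trailing revenue")
```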
#AI #GenerativeAI #AITraining #Meta #Surveillance #Privacy #DataProtection: "In short, any image you share with Meta AI can be used to train its AI.
“[I]n locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy,” said Meta policy communications manager Emil Vazquez in an email to TechCrunch.
In a previous emailed statement, a spokesperson clarified that photos and videos captured on Ray-Ban Meta are not used by Meta for training as long as the user doesn’t submit them to AI. However, once you ask Meta AI to analyze them, those photos fall under a completely different set of policies.
In other words, the company is using its first consumer AI device to create a massive stockpile of data that could be used to create ever-more powerful generations of AI models. The only way to “opt out” is to simply not use Meta’s multimodal AI features in the first place.
The implications are concerning because Ray-Ban Meta users may not understand they’re giving Meta tons of images – perhaps showing the inside of their homes, loved ones, or personal files – to train its new AI models. Meta’s spokespeople tell me this is clear in the Ray-Ban Meta’s user interface, but the company’s executives either initially didn’t know or didn’t want to share these details with TechCrunch. We already knew Meta trains its Llama AI models on everything Americans post publicly on Instagram and Facebook. But now, Meta has expanded this definition of “publicly available data” to anything people look at through its smart glasses and ask its AI chatbot to analyze."
Only by building nuclear power plants across the entire globe will this technology be able to continue to evolve...
#AI #GenerativeAI #CharacterAI #Google #LLMs #Chatbots #BigTech: "Character.ai is seeking to rebound from Google poaching its founders in a $2.7bn deal by focusing on improving its consumer products rather than building AI models, as concern grows that Big Tech will squash competition from rival start-ups.
Dominic Perella, the company’s new interim chief executive, told the Financial Times that the San Francisco-based start-up had largely abandoned the race to build large language models against better-funded competitors such as Microsoft-backed OpenAI, Amazon and Google.
Instead, three-year-old Character.ai will focus on its popular consumer product, chatbots that simulate conversations in the style of various characters and celebrities, including ones designed by users.
“It got insanely expensive to train frontier models . . . which is extremely difficult to finance on even a very large start-up budget,” said Perella in his first interview since taking the role in August."
https://www.ft.com/content/f2a9b5d4-05fe-4134-b4fe-c24727b85bba
#CyberCrime #CyberSecurity #AI #GenerativeAI #Russia: "Multiple sites which promise to use AI to ‘nudify’ any photos uploaded are actually designed to infect users with powerful credential stealing malware, according to new findings from a cybersecurity company which has analyzed the sites. The researchers also believe the sites are run by Fin7, a notorious Russian cybercrime group that has previously even set up fake penetration testing services to trick people into hacking real victims on their behalf.
The news indicates that services for producing AI-generated nonconsensual intimate content are becoming enticing enough that hackers feel it is worth the time and effort to build fake versions they can then use to hack people. The news also shows that Fin7 is alive despite the U.S. Department of Justice saying last year that “Fin7 as an entity is no more.”"
https://www.404media.co/a-network-of-ai-nudify-sites-are-a-front-for-notorious-russian-hackers-2/
#AI #GenerativeAI #AIRegulation #USA #California: "True, AI regulation could be neater. Newsom complained that the bill’s size threshold would spare smaller entities that could still do grave damage. That happened in banking, when mid-sized and lightly regulated Silicon Valley Bank unravelled chaotically in 2023. Still, better to set the threshold high and tweak later. In preventing crises, the perfect is the enemy of good.
The masterminds behind AI, and their venture backers, ought perhaps to pay more attention to the Wall Street analogy. For one, hated rules have only made big banks bigger. Despite harsh oversight, US institutions reign supreme because customers deem them safer. Financial invention, meanwhile, continues apace.
Moreover, regulation born out of an unprevented crisis can be very onerous indeed."
https://www.ft.com/content/cffd24e9-713a-424f-8fc0-0e421e78e5c9
#AI #GenerativeAI #APIs #APIDocumentation #CodeSamples #Programming #SoftwareDevelopment #TechnicalWriting: "Providing code samples in documentation remains one of the most challenging aspects of API documentation. Even so, I’m optimistic about the helpfulness of AI tools to jump-start the process for tech writers. We can leverage code from test apps along with documentation, especially reference documentation, to provide starter code that engineers can then customize and adjust.
However, whether these basic code samples are helpful is another question. They might be mistaken for more valuable application code that would get into the deeper questions about how to design an application and handle the complexities of the developer’s specific business logic, along with questions about purpose, rationale, and design patterns."
https://idratherbewriting.com/ai/prompt-engineering-code-samples.html
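To make the distinction concrete, here is a sketch of the kind of basic "starter" code sample the post is talking about: it covers the mechanics of a single call, but none of the application design or business logic a real integration needs. The API, endpoint, and every name in it are invented purely for illustration.

```python
# Hypothetical example of an AI-drafted "starter" code sample for API docs:
# it shows the mechanics of one call against a made-up /v1/orders endpoint,
# but none of the error strategy or business logic a real app would need.
# Every name and URL here is invented for illustration.
import requests

API_BASE = "https://api.example.com/v1"     # hypothetical base URL
API_KEY = "YOUR_API_KEY"                    # placeholder credential

def list_orders(status: str = "open") -> list[dict]:
    """Fetch orders filtered by status -- the sort of snippet an AI tool
    could draft from reference docs, for a writer or engineer to refine."""
    response = requests.get(
        f"{API_BASE}/orders",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"status": status},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(list_orders("open"))
```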
#AI #GenerativeAI #AISafety #SafetyFrameworks: "To provide a concrete foundation for this analysis, I primarily focus on Anthropic's safety framework (version 1.0), which stands as the most comprehensive public document of its kind to date. I then outline how this analysis extends to and informs other safety frameworks. By employing a measurement modeling lens, I identify six neglected problems (See Table 1) that are crucial to address through collaboration among diverse expert perspectives. First, a collection of models or an embedded model in larger AI ecosystems might trigger catastrophic events. Second, indirect or contributing causes can bring about catastrophic events via complex causal chains. Third, the open-ended characterization of AI Safety Levels (ASLs) can introduce uncertainties into the effective governance of AI catastrophic risks. Fourth, the lack of rigorous justification in setting specific quantitative thresholds presents obstacles in reliably defining and measuring catastrophic events. Fifth, the validity of AI safety assessments can be compromised by the fundamental limitations inherent in red-teaming methodologies. Lastly, mechanisms to ensure developer accountability in ASL classification, particularly for false negatives, are needed to address the risk of AI systems being deployed with inadequate safety measures due to inaccurate catastrophic risk evaluations."
#SocialMedia #AI #Chatbots #SocialAI #LLMs #SocialNetworks: "Instead of a chatbot, which tries to deliver you the single best response to your prompt, SocialAI offers you options and filters in the form of replies. When you respond to a bot, or favorite a reply, that teaches the model more about what you’re looking for — and lets you choose your own AI adventure instead of just hoping the model gets it right.
“Over the past 10 years, we’ve had social media giants iterating relentlessly,” Sayman says, “with all the data in the world, to try and perfect an interface where people can interact with as many people and points of view as possible, right?” SocialAI looks like Twitter or Threads, he says, not to trick you into forgetting all the reply guys are AI but because we all know exactly how social networks work. “It’s not social for the sake of the social network, but social for the sake of social interface.”"
https://www.theverge.com/24255887/social-ai-bots-social-network-chatgpt-vergecast
#AI #GenerativeAI #AITransparency #AIEthics: "Our experts are overwhelmingly in favor of mandatory disclosures, with 84% either agreeing or strongly agreeing with the statement. At the same time, they have a wide array of views on implementing such disclosures. Below, we share insights from our panelists and draw on our own RAI experience to offer recommendations for how organizations might approach AI-related disclosures, whether they’re mandated by law or voluntarily provided to promote consumer trust and confidence."
https://sloanreview.mit.edu/article/artificial-intelligence-disclosures-are-key-to-customer-trust/
#AI #GenerativeAI #Udemy: "Stegs told 404 Media that she has had a course on the platform for eight years, and that she ended up finding out about the program with an email that said “You’re IN, Welcome to the GEN AI program.”
“This was the first I heard of it and I immediately logged in to the creator dashboard to turn it off,” she said. “The settings were buried away deep somewhere in my account. And then when I finally found it, the option to remove my consent from the program was greyed out. I never consented to the program.”
“I would like to remove my course from the platform but as the course was uploaded by my co-creator I can't do that part,” she added. “It's just unbelievable the theft of intellectual property that was never agreed to or communicated in a reasonable manner. It should be illegal. And on top of this, I should be allowed to withdraw consent at any time given it is MY content.”
Several other instructors have also said they intend to leave the platform entirely over the generative AI training, including the digital artist Hardy Fowler, who has a Discord community of 10,000 people and had many classes on Udemy until late last month."
#AGI #AI #GenerativeAI #LLMs #ML #CriticalAILiteracy: "In their paper, the researchers introduce a thought experiment where an AGI is allowed to be developed under ideal circumstances. Olivia Guest, co-author and assistant professor in Computational Cognitive Science at Radboud University: ‘For the sake of the thought experiment, we assume that engineers would have access to everything they might conceivably need, from perfect datasets to the most efficient machine learning methods possible. But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise.’
That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.
‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we'd even get close,’ Olivia Guest adds."
https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable
These studies are important, but they don't tell the whole story, because the outcome will always depend on the model being tested. If you only test one model, the results obviously can't be generalized to all software development tasks. Ultimately, I'm afraid studies like this will mainly reinforce negative perceptions of AI:
#AI #GenerativeAI #CoPilot #LLMs #SoftwareDevelopment #Programming: "Many developers say AI coding assistants make them more productive, but a recent study set forth to measure their output and found no significant gains. Use of GitHub Copilot also introduced 41% more bugs, according to the study from Uplevel, a company providing insights from coding and collaboration data.
The study measured pull request (PR) cycle time, or the time to merge code into a repository, and PR throughput, the number of pull requests merged. It found no significant improvements for developers using Copilot.
Uplevel, using data generated by its customers, compared the output of about 800 developers using GitHub Copilot over a three-month period to their output in a three-month period before adoption.
(...)
There’s a difference between writing a few lines of code and full-fledged software development, Gekht adds. Coding is like writing a sentence, while development is like writing a novel, he suggests.
“Software development is 90% brain function — understanding the requirements, designing the system, and considering limitations and restrictions,” he adds. “Converting all this knowledge and understanding into actual code is a simpler part of the job.”"
https://www.cio.com/article/3540579/devs-gaining-little-if-anything-from-ai-coding-assistants.html
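For context, the two metrics the study relies on are straightforward to compute from pull-request records. A minimal sketch (the opened_at/merged_at record fields are an assumed format for illustration, not Uplevel's actual data schema):

```python
# A minimal sketch of the two metrics the Uplevel study describes:
# PR cycle time (time from opening a pull request to merging it) and
# PR throughput (number of PRs merged in a period). The record format
# below is an assumption for illustration, not Uplevel's schema.
from datetime import datetime
from statistics import mean

def pr_metrics(pull_requests: list[dict]) -> tuple[float, int]:
    """Return (mean cycle time in hours, throughput) for merged PRs."""
    merged = [pr for pr in pull_requests if pr.get("merged_at")]
    cycle_hours = [
        (datetime.fromisoformat(pr["merged_at"])
         - datetime.fromisoformat(pr["opened_at"])).total_seconds() / 3600
        for pr in merged
    ]
    return (mean(cycle_hours) if cycle_hours else 0.0, len(merged))

# Example: two merged PRs and one still open.
prs = [
    {"opened_at": "2024-06-01T09:00:00", "merged_at": "2024-06-02T09:00:00"},
    {"opened_at": "2024-06-03T10:00:00", "merged_at": "2024-06-03T16:00:00"},
    {"opened_at": "2024-06-04T11:00:00", "merged_at": None},
]
avg_cycle, throughput = pr_metrics(prs)
print(f"Mean PR cycle time: {avg_cycle:.1f} h, throughput: {throughput} PRs")
```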
#LLMs #AI #GenerativeAI #Chatbots #Hallucinations: "The prevailing methods to make large language models more powerful and amenable have been based on continuous scaling up (that is, increasing their size, data volume and computational resources [1]) and bespoke shaping up (including post-filtering [2,3], fine-tuning or use of human feedback [4,5]). However, larger and more instructable large language models may have become less reliable. By studying the relationship between difficulty concordance, task avoidance and prompting stability of several language model families, here we show that easy instances for human participants are also easy for the models, but scaled-up, shaped-up models do not secure areas of low difficulty in which either the model does not err or human supervision can spot the errors. We also find that early models often avoid user questions but scaled-up, shaped-up models tend to give an apparently sensible yet wrong answer much more often, including errors on difficult questions that human supervisors frequently overlook. Moreover, we observe that stability to different natural phrasings of the same question is improved by scaling-up and shaping-up interventions, but pockets of variability persist across difficulty levels. These findings highlight the need for a fundamental shift in the design and development of general-purpose artificial intelligence, particularly in high-stakes areas for which a predictable distribution of errors is paramount."
#AI #GenerativeAI #Chatbots #BigTech: "Most people don’t realise how much is being gambled on imposing this technology on the public. Last year, almost all of the wealth gained by the richest people in the world (96 per cent of the gains recorded by the Bloomberg Billionaires Index) came from the boom in certain AI-related stocks, such as Nvidia and Microsoft. Generative AI is not necessarily useless or evil, but it is being implemented as fast as possible, with little regard for the consumer, in order to please financial markets that are in thrall to its promises.
If you buy the new iPhone, or Google’s new Pixel phone, you are taking part in an experiment into what happens when you outsource thinking and feeling to generative AI. You don’t know how it will affect your child’s emotional development if you let a machine take over the job of teaching them how to react to life’s more difficult moments. You don’t know what it will cost you if you accept that a machine will always know the right thing to say."
https://www.newstatesman.com/technology/2024/09/do-not-buy-ai-smartphone
#AI #GenerativeAI #LLMs #Chatbots #Hallucinations: "José Hernández-Orallo at the Polytechnic University of Valencia, Spain, and his colleagues examined the performance of LLMs as they scaled up and shaped up. They looked at OpenAI’s GPT series of chatbots, Meta’s LLaMA AI models, and BLOOM, developed by a group of researchers called BigScience.
The researchers tested the AIs by posing five types of task: arithmetic problems, solving anagrams, geographical questions, scientific challenges and pulling out information from disorganised lists.
They found that scaling up and shaping up can make LLMs better at answering tricky questions, such as rearranging the anagram “yoiirtsrphaepmdhray” into “hyperparathyroidism”. But this isn’t matched by improvement on basic questions, such as “what do you get when you add together 24427 and 7120”, which the LLMs continue to get wrong.
While their performance on difficult questions got better, the likelihood that an AI system would avoid answering any one question – because it couldn’t – dropped. As a result, the likelihood of an incorrect answer rose.
The results highlight the dangers of presenting AIs as omniscient, as their creators often do, says Hernández-Orallo – and which some users are too ready to believe. “We have an overreliance on these systems,” he says. “We rely on and we trust them more than we should.”"