"While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing a tremendous amount of data, understanding what information it retains and how it arrives at conclusions will all become incredibly central to how the national security state thinks about issues. This means not only will the state likely make the argument that the AI’s training data may need to be classified, but they may also argue that companies need to, under penalty of law, keep the governing algorithms secret as well.
As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.” As the US national security state attempts to leverage powerful commercial AI to give it an edge, there are a number of questions that remain unanswered about how much that ever-tightening relationship will impact much needed transparency and accountability for private AI and for-profit automated decision making systems."
#USA #CyberSecurity #Surveillance #AI #AlgorithmicTransparency
"The most popular writers on Substack earn up to seven figures each year primarily by persuading readers to pay for their work. The newsletter platform’s subscription-driven business model offers creators different incentives than platforms like Facebook or YouTube, where traffic and engagement are king. In theory, that should help shield Substack from the wave of click-courting AI content that’s flooding the internet. But a new analysis shared exclusively with WIRED indicates that Substack hosts plenty of AI-generated writing, some of which is published in newsletters with hundreds of thousands of subscribers.
The AI-detection startup GPTZero scanned 25 to 30 recent posts published by the 100 most popular newsletters on Substack to see whether they contained AI-generated content. It estimated that 10 of the publications likely use AI in some capacity, while seven “significantly rely” on it in their written output. (GPTZero paid for subscriptions to Substack newsletters that are heavily paywalled.) Four of the newsletters that GPTZero identified as using AI extensively confirmed to WIRED that artificial intelligence tools are part of their writing process, while the remaining three did not respond to requests for comment.
Many of the newsletters GPTZero flagged as publishing AI-generated writing focus on sharing investment news and personal finance advice. While no AI-detection service is perfect—many, including GPTZero, can produce false positives—the analysis suggests that hundreds of thousands of people are now regularly consuming AI-generated or AI-assisted content that they are specifically subscribing to read. In some cases, they’re even paying for it."
https://www.wired.com/story/substacks-writers-use-ai-chatgpt/
#AI #GenerativeAI #Substack #Newsletters
"Amazon has invested a further $4bn in artificial intelligence start-up Anthropic, doubling its total investment in the company to $8bn, as Big Tech’s race to dominate the generative AI sector intensifies.
The deal will be Amazon’s biggest-ever venture investment, after it committed an initial $1.25bn to the San Francisco-based group in September last year, increasing that to $4bn at the end of March.
The funding is one of a number of investment partnerships struck between AI start-ups and so-called hyperscalers, or large cloud service providers, over the past year.
Microsoft has invested more than $13bn in OpenAI, while backing French AI start-up Mistral and Abu Dhabi-based G42. Google has a deal with Cohere, where it provides cloud infrastructure to train the Canadian start-up’s AI software."
#BigTech #AI #GenerativeAI #Amazon #Anthropic #Claude #AIHype #AIBubble https://www.ft.com/content/01da4ed8-0371-44b0-a07d-63e07f32939e?accessToken=zwAGJ4gMPHB4kc8B2k7YA3FEsNOgfWPgfzKTng.MEUCIEemZS1iNhS7o9sZbigOlwWOUr2OQ6N0uZNa8a3nZnu0AiEAhqSWGs1Cu7Xbn_YV-Uf25mAVjE1sp8Z5Kw1PATDzp4I&sharetype=gift&token=7bb8c695-211d-415e-aec5-53b5d9dfe018
"I never like to root against fellow reporters, but I’ll admit I was also happy to see them go. While James and Rose did not actively supplant any existing newsroom jobs, I was concerned that the effort diverted resources that could be used on traditional media expenses, like human reporters, photographers, and editors.
The Garden Island was severely underresourced—for much of my time working there, I was one of only two reporters covering an island of 73,000. The paper was purchased earlier this year by the conglomerate Carpenter Media Group, which controls more than 100 local outlets throughout North America.
Caledo, while declining to disclose how much it was paid, said that new ads embedded in the broadcasts would offset the cost of the program. However, it does not appear as though OPI was able to sell a single ad on the videos."
https://www.wired.com/story/the-ai-reporter-who-took-my-old-job-just-got-fired/
#AI #GenerativeAI #Journalism #Media #News #USA #Hawaii
"Companies around the world are rushing to come up with clever fixes to these problems, from more efficient and specialised chips to more specialised and smaller models that need less power. Others are dreaming up ways of tapping new high-quality data sources such as textbooks, or generating synthetic data, for use in training. Whether this will lead to incremental improvements in the technology, or make the next big leap forward affordable and feasible, is still unclear. Investors have poured money into superstar firms like OpenAI. But in practice there is not much difference in performance and capabilities between the flagship models offered by OpenAI, Anthropic and Google. And other firms including Meta, Mistral and xAI are close behind.
For end users of AI, a different kind of struggle is under way, as individuals and companies try to work out how best to use the technology. This takes time: investments need to be made, processes rethought and workers retrained. Already some industries are further ahead in adopting AI than others: a fifth of information-technology firms, for instance, say they are using it.
As the technology becomes more sophisticated—such as with the arrival in 2025 of “agentic” systems, capable of planning and executing more complex tasks—adoption may accelerate.
But culture also matters. Although few firms tell statisticians they are using AI, one-third of employees in America say they are using it for work once a week."
#AI #GenerativeAI #AIBubble #AIHype #AIAgents
"A paper[1] presented at last week's EMNLP conference reports on a promising new AI-based tool (available at https://spinach.genie.stanford.edu/ ) to retrieve information from Wikidata using natural language questions. It can successfully answer complicated questions like the following:
"What are the musical instruments played by people who are affiliated with the University of Washington School of Music and have been educated at the University of Washington, and how many people play each instrument?"
The authors note that Wikidata is "one of the largest publicly available knowledge bases [and] currently contains 15 billion facts", and claim that it is "of significant value to many scientific communities". However, they observe that "Effective access to Wikidata data can be challenging", requiring use of the SPARQL query language.
This motivates the use of large language models to convert natural language questions into SPARQL queries, which could obviously be of great value to non-technical users."
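The difficulty the authors describe is easier to see next to the kind of SPARQL such a tool has to emit. Below is a minimal hand-written sketch (not taken from the paper) of a query answering the example question above: the property IDs P1416 ("affiliation"), P69 ("educated at") and P1303 ("instrument") are real Wikidata properties and Q219563 is the University of Washington, but the School of Music QID is a placeholder that would have to be looked up.

```python
import textwrap

def build_query(affiliation_qid: str, education_qid: str) -> str:
    """Count players per instrument among people affiliated with one
    institution and educated at another (illustrative sketch only)."""
    return textwrap.dedent(f"""\
        SELECT ?instrumentLabel (COUNT(DISTINCT ?person) AS ?players)
        WHERE {{
          ?person wdt:P1416 wd:{affiliation_qid} ;  # affiliated with
                  wdt:P69   wd:{education_qid} ;    # educated at
                  wdt:P1303 ?instrument .           # plays instrument
          SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
        }}
        GROUP BY ?instrumentLabel
        ORDER BY DESC(?players)
        """)

# "Q00000000" stands in for the School of Music item, whose real QID
# would have to be looked up on Wikidata.
query = build_query("Q00000000", "Q219563")
print(query)
```

Writing this by hand requires knowing the property IDs, the `wdt:`/`wd:` prefixes, and the label-service idiom, which is exactly the barrier a natural-language front end removes for non-technical users.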
https://meta.wikimedia.org/wiki/Research:Newsletter/2024/November
#Wikipedia #Wikidata #LLMs #AI #GenerativeAI #SPARQL
"OpenAI is once again lifting the lid (just a crack) on its safety-testing processes. Last month the company shared the results of an investigation that looked at how often ChatGPT produced a harmful gender or racial stereotype based on a user’s name. Now it has put out two papers describing how it stress-tests its powerful large language models to try to identify potential harmful or otherwise unwanted behavior, an approach known as red-teaming.
Large language models are now being used by millions of people for many different things. But as OpenAI itself points out, these models are known to produce racist, misogynistic and hateful content; reveal private information; amplify biases and stereotypes; and make stuff up. The company wants to share what it is doing to minimize such behaviors.
MIT Technology Review got an exclusive preview of the work. The first paper describes how OpenAI directs an extensive network of human testers outside the company to vet the behavior of its models before they are released. The second paper presents a new way to automate parts of the testing process, using a large language model like GPT-4 to come up with novel ways to bypass its own guardrails."
#AI #GenerativeAI #OpenAI #ChatGPT #LLMs #AITraining
"Microsoft, like many companies in recent months, has slyly turned on an “opt-out” feature in Office that scrapes your Word and Excel documents to train its internal AI systems. This setting is turned on by default, and you have to manually uncheck a box in order to opt out.
If you are a writer who uses MS Word to write any proprietary content (blog posts, novels, or any work you intend to protect with copyright and/or sell), you’re going to want to turn this feature off immediately.
I won’t beat around the bush. Microsoft Office doesn’t make it easy to opt out of this new AI privacy agreement, as the feature is hidden through a series of popup menus in your settings:
On a Windows computer, follow these steps to turn off “Connected Experiences”: File > Options > Trust Center > Trust Center Settings > Privacy Options > Privacy Settings > Optional Connected Experiences > Uncheck box: “Turn on optional connected experiences”"
https://medium.com/illumination/ms-word-is-using-you-to-train-ai-86d6a4d87021
#Microsoft #AI #GenerativeAI #AITraining #MSWord #Privacy #Word
"As part of the U.S. pledge to cut its total greenhouse gas emissions in half by the end of the decade, compared to 2005 levels, President Joe Biden has vowed to eliminate all power grid emissions by 2035.
But there are 220 new gas-burning power plants in various stages of development nationwide, according to the market data firm Yes Energy. Most of those plants are targeted to come online before 2032. Each has a lifespan of 25 to 40 years, meaning most would not be fully paid off — much less shut down — before federal and state target dates for transitioning power grids to cleaner electricity.
The trend may continue. President-elect Donald Trump and his advisers have repeatedly vowed to scrap rules on power plant emissions, which could unleash even more fossil plant construction and delay retirements of existing plants.
In several parts of the nation, data centers are the largest factor behind the building boom, according to analysts and utilities, but the precise percentage of new demand attributable to data centers is not known. Power companies have also been bracing for other new demands, including a proliferation of new factories across the country and the transition to electric vehicles and home appliances such as heat pumps."
https://www.washingtonpost.com/climate-environment/2024/11/19/ai-cop29-climate-data-centers/
#USA #ClimateChange #GlobalWarming #FossilFuels #AI #DataCenters #GasEmissions
"Narayanan and Kapoor, both Princeton University computer scientists, argue that if we knew what types of AI do and don’t exist—as well as what they can and can’t do—then we’d be that much better at spotting bullshit and unlocking the transformative potential of genuine innovations. Right now, we are surrounded by “AI snake oil” or “AI that does not and cannot work as advertised,” and it is making it impossible to distinguish among hype, hysteria, ad copy, scams, and market consolidation. “Since AI refers to a vast array of technologies and applications,” Narayanan and Kapoor explain, “most people cannot yet fluently distinguish which types of AI are actually capable of functioning as promised and which types are simply snake oil.”
Narayanan and Kapoor’s efforts are clarifying, as are their attempts to deflate hype. They demystify the technical details behind what we call AI with ease, cutting against the deluge of corporate marketing from this sector. And yet, their goal of separating AI snake oil from AI that they consider promising, even idealistic, means that they don’t engage with some of the greatest problems this technology poses. To understand AI and the ways it might reshape society, we need to understand not just how and when it works, but who controls it and to what ends."
https://newrepublic.com/article/188313/artifical-intelligence-scams-propaganda-deceit
#AI #PredictiveAI #SiliconValley #SnakeOil #Scams #Propaganda #AIHype #AIBubble #PoliticalEconomy
"I'm not here to diminish the need for AI training for educators, or to chastise Common Sense Media's involvement with OpenAI. Rather, it's useful for me to look at what this relationship produced, as a way of making sense of the kind of thinking that OpenAI is engaged in around education.
One criticism of the course’s design is that its videos lack closed captioning. That’s an accessibility problem, and the course barely acknowledges accessibility at all.
In the sections below I want to provide some useful counter-arguments for what the OpenAI course is "teaching." My goal is to offer more nuance to its definitions and highlight the bias of its framing. Much of this will be analyzed through the lens of my piece on "Challenging the Myths of Generative AI," which offers a more skeptical framework for thinking through how we talk about and use AI."
https://mail.cyberneticforests.com/how-does-openai-imagine-k-12-education/
#AI #GenerativeAI #OpenAI #K12 #Education #Schools
"OpenAI tried to recover the data — and was mostly successful. However, because the folder structure and file names were “irretrievably” lost, the recovered data “cannot be used to determine where the news plaintiffs’ copied articles were used to build [OpenAI’s] models,” per the letter.
“News plaintiffs have been forced to recreate their work from scratch using significant person-hours and computer processing time,” counsel for The Times and Daily News wrote. “The news plaintiffs learned only yesterday that the recovered data is unusable and that an entire week’s worth of its experts’ and lawyers’ work must be re-done, which is why this supplemental letter is being filed today.”
The plaintiffs’ counsel makes clear that they have no reason to believe the deletion was intentional. But they do say the incident underscores that OpenAI “is in the best position to search its own datasets” for potentially infringing content using its own tools."
#AI #GenerativeAI #OpenAI #LLMs #AITraining #NYT #Copyright #IP
"Instagram is flooded with hundreds of AI-generated influencers who are stealing videos from real models and adult content creators, giving them AI-generated faces, and monetizing their bodies with links to dating sites, Patreon, OnlyFans competitors, and various AI apps.
The practice, first reported by 404 Media in April, has since exploded in popularity, showing that Instagram is unable or unwilling to stop the flood of AI-generated content on its platform and protect the human creators on Instagram who say they are now competing with AI content in a way that is impacting their ability to make a living.
According to our review of more than 1,000 AI-generated Instagram accounts, Discord channels where the people who make this content share tips and discuss strategy, and several guides that explain how to make money by “AI pimping,” it is now trivially easy to make these accounts and monetize them using an assortment of off-the-shelf AI tools and apps. Some of these apps are hosted on the Apple App and Google Play Stores. Our investigation shows that what was once a niche problem on the platform has industrialized in scale, and it shows what social media may become in the near future: a space where AI-generated content eclipses that of humans."
https://www.wired.com/story/ai-pimping-industry-deepfakes-instagram/
#AI #GenerativeAI #GeneratedImages #Instagram #SocialMedia
"OpenAI Inc. and Microsoft Corp. are demanding the New York Times Co. supply documents detailing the newspaper’s claims that the tech companies’ AI models have decreased subscription, licensing, advertising, and affiliate revenue.
Those details are important to analyze if the tech giants’ use of the Times’ content to train AI systems is “fair use” under the Copyright Act, according to OpenAI and Microsoft’s separate letters requesting a pre-motion conference on the documents filed Monday in the US District Court for the Southern District of New York. The fourth prong of the fair use doctrine tests whether use of copyrighted content affects the market for the original work."
#AI #GenerativeAI #NYT #OpenAI #Microsoft #Copyright #FairUse
LoL - OpenAI: Money-wasting machine :-D
"Business spending on generative AI surged 500% this year, from $2.3 billion in 2023 to $13.8 billion, according to data released by Menlo Ventures on Wednesday.
The report also found that OpenAI ceded market share in enterprise AI, declining from 50% to 34%. Anthropic doubled its market share from 12% to 24%. The results came from a survey of 600 enterprise IT decision-makers from companies with 50 or more employees, per the report.
Menlo is an investor in Anthropic. OpenAI did not immediately respond to a request for comment.
Tim Tully, a partner at Menlo Ventures, told CNBC in an interview that the power shift is thanks in part to the advancement of Claude 3.5 and because the majority of companies are using three or more large AI models. Although OpenAI and Anthropic dominated companies’ AI model use, he said, people are “juggling models” and that habit is “not a well-understood piece of data.”
“Developers are pretty savvy — they know how to go back and forth between models fairly quickly,” Tully explained. “They’re choosing the model that fits their use case best... and that’s likely Claude 3.5.”"
#AI #GenerativeAI #Anthropic #OpenAI #Claude
"In the United States, the approach to governing artificial intelligence (AI) is still in its early stages. As policymakers, developers, and civil society work together to navigate an uncertain digital future, encouraging greater openness in the AI model ecosystem will be critical to upholding democratic values, serving the public interest, and promoting innovation. There are five essential aspects of openness that span both technical and non-technical features of models: (1) open code that can be modified, (2) open licenses that enable third-party use, (3) transparency about model inputs, (4) transparency about possible model misuse, and (5) open standards for interconnection among AI models.
Promoting these five attributes of openness can create the kind of AI ecosystem that better serves public transparency and democratic accountability, innovation and competition, education and research, and security. We explore the relationship between open AI models and each of these benefits and recommend steps that policymakers, researchers, AI companies, developers, and civil society organizations should take."
https://www.newamerica.org/oti/reports/openness-in-artificial-intelligence-models/
#AI #GenerativeAI #LLMs #OpenSource #OpenModels
"Niantic, the company behind the extremely popular augmented reality mobile games Pokémon Go and Ingress, announced that it is using data collected by its millions of players to create an AI model that can navigate the physical world.
In a blog post published last week, first spotted by Garbage Day, Niantic says it is building a “Large Geospatial Model.” This name, the company explains, is a direct reference to Large Language Models (LLMs) like OpenAI’s GPT, which are trained on vast quantities of text scraped from the internet in order to process and produce natural language. Niantic explains that a Large Geospatial Model, or LGM, aims to do the same for the physical world, a technology it says “will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems. As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.”"
https://www.404media.co/pokemon-go-players-have-unwittingly-trained-ai-to-navigate-the-world/
#Niantic #Pokemon #AR #AugmentedReality #AI #LGM
"An AI-generated nude photo scandal has shut down a Pennsylvania private school. On Monday, classes were canceled after parents forced leaders to either resign or face a lawsuit potentially seeking criminal penalties and accusing the school of skipping mandatory reporting of the harmful images.
The outcry erupted after a single student created sexually explicit AI images of nearly 50 female classmates at Lancaster Country Day School, Lancaster Online reported.
Head of School Matt Micciche seemingly first learned of the problem in November 2023, when a student anonymously reported the explicit deepfakes through a school portal run by the state attorney general’s office called "Safe2Say Something." But Micciche allegedly did nothing, allowing more students to be targeted for months until police were tipped off in mid-2024."
#USA #Pennsylvania #AI #GenerativeAI #GeneratedImages #DeepFakes
"Top Justice Department antitrust officials have decided to ask a judge to force Alphabet Inc.’s Google to sell off its Chrome browser in what would be a historic crackdown on one of the world’s biggest tech companies.
The department will ask the judge, who ruled in August that Google illegally monopolized the search market, to require measures related to artificial intelligence and its Android smartphone operating system, according to people familiar with the plans.
Antitrust officials, along with states that have joined the case, also plan to recommend Wednesday that federal judge Amit Mehta impose data licensing requirements, said the people, who asked not to be named discussing a confidential matter.
If Mehta accepts the proposals, they have the potential to reshape the online search market and the burgeoning AI industry. The case was filed under the first Trump administration and continued under President Joe Biden. It marks the most aggressive effort to rein in a technology company since Washington unsuccessfully sought to break up Microsoft Corp. two decades ago."
#USA #DoJ #Google #Chrome #Antitrust #Search #SearchEngines #Monopolies #Oligopolies #AI #Android
MegaLoL - a Portuguese LLM made in Barcelona (Spain)!!! -> ""We will be building on work already developed by these research centres: there are several years of work in this area, both on data for the Portuguese language, done by the research centre of the Nova Faculdade de Ciências e Tecnologia (FCT), and work done at Técnico" and "there is also work that will be transferred from Unbabel, given all the experience" the tech company "has in creating multilingual models and models that are, at this moment, being trained on supercomputers", he says.
In short, "the team that will be working on the creation of this LLM is a team that already has many years of experience in this area", Paulo Dimas stresses.
On top of this work, "it is possible to deliver this LLM in the first quarter", and "to that is added a very close collaboration with the Fundação para a Ciência e Tecnologia, which has created the computing conditions" essential for large-scale models of this kind.
"And the Fundação para a Ciência e Tecnologia has been investing in computing capacity that will be used here", since "in practice we will be using (...) a computer that is in Barcelona, but part of which is Portuguese", he continues.
In other words, "we have a Portuguese computer that is physically in Barcelona, but a percentage of it belongs to the Portuguese State", he sums up."
#IA #AI #GenerativeAI #LLMs #Portugal #Unbabel