Miguel Afonso Caetano
Senior Technical Writer @ Opplane (Lisbon, Portugal). PhD in Communication Sciences (ISCTE-IUL). Past: technology journalist, blogger & communication researcher. #TechnicalWriting #WebDev #WebDevelopment #OpenSource #FLOSS #SoftwareDevelopment #IP #PoliticalEconomy #Communication #Media #Copyright #Music #Cities #Urbanism

#USA #IBM #Hacking #Cybersecurity #DataLeaks #DataProtection: "Millions of Americans had their sensitive medical and health information stolen after hackers exploiting a zero-day vulnerability in the widely used MOVEit file transfer software raided systems operated by tech giant IBM.

The Colorado Department of Health Care Policy and Financing (HCPF), which is responsible for administering Colorado’s Medicaid program, confirmed on Friday that it had fallen victim to the MOVEit mass hacks, exposing the data of more than 4 million patients.

In a data breach notification to those affected, Colorado’s HCPF said that the data was compromised because IBM, one of the state’s vendors, “uses the MOVEit application to move HCPF data files in the normal course of business.”

The letter states that while no HCPF or Colorado state government systems were affected by this issue, “certain HCPF files on the MOVEit application used by IBM were accessed by the unauthorized actor.”"

https://techcrunch.com/2023/08/14/millions-americans-health-data-moveit-hackers-clop-ibm/

#USA #Texas #AgeVerification #PornHub #NannyState: "Pornhub, along with several other members and activists in the adult industry, is suing Texas to block the state’s impending law that would require age verification to view adult content.

The complaint was filed on August 4 in US District Court for the Western District of Texas, and the law will take effect on September 1 unless the court agrees to block it. Governor Greg Abbott signed HB 1181 into law in June.

The plaintiffs, including Pornhub, adult industry advocacy group Free Speech Coalition, and several other site operators and industry members, claim that the law violates both the Constitution of the United States and the federal Communications Decency Act."

https://www.vice.com/en/article/pkazpy/pornhub-sues-texas-over-age-verification-law

#LatinAmerica #AI #GenerativeAI #StartUps: "- As investment in the region slowly picks up, startups feel pressured to use generative-AI tools to increase productivity and draw potential investors.

- Latin America’s startups are wary that the cost of these imported AI tools might put too much strain on their already scarce resources.

- To offset the costs of paying for AI subscriptions in the long run, startups that can afford it are looking to develop their own in-house tools."

https://restofworld.org/2023/latin-america-startups-openai-dependency/

#AI #Labor #Climate: "In this report, I examine the intersection of these two issues in AI: climate and labor. Part I focuses on the relationship between AI labor supply chains and internal corporate workplace practices and hierarchies. How are researchers and developers grappling with the complex problem of calculating carbon footprints in machine learning while assessing potential risks and impacts to marginalized communities? In an industry dominated by OKRs (objectives and key results) and quantifiable success metrics, the importance of carbon accounting or other data collection and analysis tends to take precedence over other forms of action. In other words, even with the introduction of regulations demanding that companies measure and report their carbon emissions, it is unclear if measurement alone is enough to actually reduce carbon emissions or other environmental and social impacts. In Part II, I examine organizing campaigns and coalitions, in historical and contemporary contexts both inside and outside of the tech industry, that seek to connect labor rights to environmental justice concerns. Part I takes stock of the problem and Part II offers some potential steps toward solutions."

https://ainowinstitute.org/general/climate-justice-and-labor-rights-part-i-ai-supply-chains-and-workflows

#AI #Environment #GreenAI: "Even discussions specifically devoted to ethical concerns related to AI (highlighting issues like bias, privacy violations, indiscriminate scraping to build training data sets, etc.) often fail to include environmental issues. When the White House announced last month that seven of the key companies developing AI had agreed to a list of voluntary commitments focused on protecting Americans’ “rights and safety,” the announcement said nothing about sustainability.

It did, however, mention “transparent development of AI technology,” as well as “broader societal effects.” Perhaps within those general terms we can locate some additional commitments to be required beyond those spelled out in the initial fact sheet: that the companies commit to disclosing the energy and water consumption involved in the development and deployment of their products, that they commit to educating the broader public and their direct clients about the environmental costs and that they commit to large investments in research focused on what some are calling “green AI,” which stresses energy efficiency as one of the criteria in evaluating models."

https://www.sfchronicle.com/opinion/openforum/article/ai-chatgpt-climate-environment-18282910.php

#AI #Radioactivity #STS #GenerativeAI: "Historians of science and technology have seen this all before. The details were different, but the hype wasn't. If the past is any guide to the future, the push to create AGI by building ever-larger "language models" — the systems that power ChatGPT and other chatbots — will end up a giant nothingburger despite the grand proclamations all over the media.

Furthermore, there is another important parallel between radioactivity in the early 20th century and the current race to create AGI. This was pointed out to me by Beth Singler, an anthropologist who studies the links between AI and religion at the University of Zurich. She notes that just as the dangers of the everyday uses of radioactivity were ignored, the harmful everyday uses of AI are being ignored in public discourse in favor of the potential AI apocalypse.

Not long after Marie Curie wowed audiences at a major scientific conference in 1900 with vials of radium "so active that they glowed with a pearly light," a physician who studied radioactivity with Marie Curie, Sabin Arnold von Sochocky, realized that adding radium to paint caused the paint to glow in the dark. He co-founded a company that began to manufacture this paint, which was used to illuminate aircraft instruments, compasses and watches. It proved especially useful during World War I, when soldiers began to fasten their pocket watches to their wrists and needed a way to see the time in the dark trenches to synchronize their movements."

https://www.salon.com/2023/08/12/will-godlike-ai-us-all--or-unlock-the-secrets-of-the-universe-probably-not/

#India #HigherEd #Universities #IITs: "IIT Bombay, in Mumbai, is one of India’s most prestigious higher education institutes. It’s one of the five original IIT engineering schools established in the 1950s and ’60s as the Indian government’s attempt to emulate the success of the United States’ Massachusetts Institute of Technology (MIT). Major expansions since 2008 have grown the system to 23 schools in total.

The IITs are now among the world’s top schools. Many young Indians apply for the entrance exams every year. Very few get in. In 2022, the IITs accepted just 1.83% of applicants, a rate more selective than that of U.S. Ivy League universities. The struggle to get into an IIT is so dramatic, Netflix has a multi-season scripted show, Kota Factory, about the country’s many prep schools that have sprung up to help coach applicants to pass the entrance exams.

Becoming an IIT one-percenter is often a ticket to tech success. Many big names in India’s startup scene are IIT alums. Of the country’s 108 unicorns, 68 were founded by at least one IIT graduate, according to data from analytics firm Tracxn.

The schools’ influence expands abroad too. Some of Silicon Valley’s biggest companies are run by “IITians.” Google CEO Sundar Pichai said during a 2016 speech that going to IIT Kharagpur “changed the course of my life.” In 2005, the U.S. House of Representatives honored the graduates of IITs for their contribution to American society.

The IITs’ superpower is their tightly knit network of alums. Graduating from an IIT puts you in a special club."

https://restofworld.org/2023/iit-graduates-dominate-tech/

RT @timnitGebru

“By the end of their talk, Stanford’s ability to sway Washington sounded almost as powerful as any tech giant.”

That’s because it is a powerful tech giant. Stanford is BigTech and BigTech is Stanford.

By @nitashatiku.

https://www.washingtonpost.com/technology/2023/08/11/congressional-bootcamp-stanford/

#Interface #Design #Neoliberalism #Capitalism #BigTech: "It’s at the interface where users can follow Amazon’s insistent prompts to see how they sound. Halo’s interface makes the product seem useful, and it is where the use of technology actually occurs. This exemplifies so much of emerging consumer tech, which relies on the interface to produce the appearance of utility that will ensure further use. It is the form of the interface that enables the function of the technology. Remember that Amazon tells us to “see how you sound.” In other words, you can’t actually know how you sound until you see it. Today, the interface becomes the thing that legitimizes any and all human experience, even emotion.

I’d argue that the interface itself is the ideological bulwark of capitalist technology writ large. The interface serves to naturalize—in Roland Barthes’s parlance—the ideologies, biases, histories, etc., that are embedded within our technologies. The interface is the place where users actually interact with those biases. It legitimizes the ideas and ideals designed into a technology and circulates them in society."

https://www.fastcompany.com/90836114/technology-has-an-interface-problem

#Apple #AirTag #Surveillance: "The type of smartphone you own affects how easily you can discover hidden AirTags. Owners of iPhones running iOS 14.5 or newer should receive a push alert whenever an unknown AirTag is nearby for an extended period of time and away from its owner. Apple’s website does not provide an exact time frame for when this alert is triggered.

When you click on the iPhone alert, you may be given the option to play a sound on the AirTag to help locate the device. Check that you will receive these alerts by going into the Find My app, choosing the Me tab in the bottom-right corner, and making sure Item Safety Alerts is green and toggled to the right under Notifications.

Months after the release of the AirTag, Apple launched the Tracker Detect app for Android phones. Unlike the security features available for the iPhone, the Android app does not automatically look for unknown AirTags. Users must initiate the scan."

https://www.wired.com/story/how-to-find-airtags/

#OECD #Democracy #MassSurveillance: "In democratic states, mass surveillance is typically associated with totalitarianism. Surveillance practices more limited in their scope draw criticism for their potential to undermine democratic rights and freedoms and the functioning of representative democracies. Despite this, citizens living in political systems classed as democratic are increasingly subject to surveillance practices by both businesses and governments. This paper presents the results of a genealogy of OECD digitalisation discourse from the 1970s to the present to show how both harms and benefits of surveillance practices have been problematised. It shows how practices once considered unacceptable are increasingly portrayed as neutral, or even positive. A shift is identified from general agreement over the incompatibility of surveillance practices with democracy to greater acceptance of those practices when rebranded as tools to promote customisation, economic growth or public health. This transformation is significant because it: (1) shows the inherent instability of policies anchored to seemingly fixed or self-evident concepts such as ‘well-being’ or ‘public interest’; (2) highlights the fragility of democratic systems when things deemed harmful to their operation can be repurposed and subsequently permitted; and (3) highlights the contingency of (seemingly inevitable) surveillance practices, thereby opening up a space in which to challenge them."

https://policyreview.info/articles/analysis/transformation-of-surveillance-in-digitalisation-discourse

#AI #GenerativeAI #USA #AIRegulation: "All of the experts I spoke with agreed that the tech companies themselves shouldn’t be able to declare their own products safe. Otherwise, there is a substantial risk of “audit washing”—in which a dangerous product gains legitimacy from a meaningless stamp of approval, Ellen Goodman, a law professor at Rutgers, told me. Although numerous proposals currently call for after-the-fact audits, others have called for safety assessments to start much earlier. The potentially high-stakes applications of AI mean that these companies should “have to prove their products are not harmful before they can release them into the marketplace,” Safiya Noble, an internet-studies scholar at UCLA, told me.

Clear benchmarks and licenses are also crucial: A government standard would not be effective if watered down, and a hodgepodge of safety labels would breed confusion to the point of being illegible, similar to the differences among free-range, cage-free, and pasture-raised eggs."

https://www.theatlantic.com/technology/archive/2023/08/ai-misinformation-scams-government-regulation/674946

RT @yaso

I can't even express the scale of the absurdity here. An authoritarian, invasive and, why not say it, proto-fascist measure, coming from a government with a history of alignment with Bolsonarism; it should lead to compensation at the very least, and why not jail time.

https://www.cartacapital.com.br/educacao/secretaria-da-educacao-instala-aplicativo-em-celular-de-professores-e-alunos-de-sao-paulo-sem-autorizacao/

#AI #GenerativeAI #LLMs #AITraining #ChatGPT #Chatbots: "The people who build generative AI have a huge influence on what it is good at, and who does and doesn’t benefit from it. Understanding how generative AI is shaped by the objectives, intentions, and values of its creators demystifies the technology, and helps us to focus on questions of accountability and regulation. In this explainer, we tackle one of the most basic questions: What are some of the key moments of human decision-making in the development of generative AI products? This question forms the basis of our current research investigation at Mozilla to better understand the motivations and values that guide this development process. For simplicity, let’s focus on text-generators like ChatGPT.

We can roughly distinguish between two phases in the production process of generative AI. In the pre-training phase, the goal is usually to create a Large Language Model (LLM) that is good at predicting the next word in a sequence (which can be words in a sentence, whole sentences, or paragraphs) by training it on large amounts of data. The resulting pre-trained model “learns” how to imitate the patterns found in the language(s) it was trained on.

This capability is then utilized by adapting the model to perform different tasks in the fine-tuning phase. This adjusting of pre-trained models for specific tasks is how new products are created. For example, OpenAI’s ChatGPT was created by “teaching” a pre-trained model — called GPT-3 — how to respond to user prompts and instructions. GitHub Copilot, a service for software developers that uses generative AI to make code suggestions, also builds on a version of GPT-3 that was fine-tuned on “billions of lines of code.”"

https://foundation.mozilla.org/en/blog/the-human-decisions-that-shape-generative-ai-who-is-accountable-for-what/
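The two-phase pipeline the Mozilla explainer describes (pre-train a next-token predictor on generic text, then fine-tune the same model on task-specific data) can be sketched with a deliberately tiny toy. This is a minimal conceptual sketch, not anything resembling a real LLM: a bigram frequency counter stands in for the pre-trained model, and "fine-tuning" simply continues training the same model on new text. Real systems use neural networks trained by gradient descent over billions of tokens.

```python
# Toy illustration of next-token prediction and the pre-train/fine-tune split.
from collections import Counter, defaultdict

def train(corpus, model=None):
    """Count next-token frequencies (a crude stand-in for gradient training).

    Passing an existing model continues training it, which is the essence
    of the fine-tuning phase described above.
    """
    if model is None:
        model = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for cur, nxt in zip(tokens, tokens[1:]):
            model[cur][nxt] += 1
    return model

def predict_next(model, token):
    """Return the most frequently seen next token, or None if unseen."""
    return model[token].most_common(1)[0][0] if model[token] else None

# Phase 1: "pre-training" on generic text.
pretrained = train(["the cat sat on the mat", "the dog sat on the rug"])

# Phase 2: "fine-tuning" — the same model, adapted with task-specific data,
# so task patterns start to dominate its predictions.
finetuned = train(["the assistant answers the user politely"] * 3,
                  model=pretrained)
```

After fine-tuning, `predict_next(finetuned, "the")` prefers the task-specific continuations over the generic ones, which is the behavioral shift fine-tuning is meant to produce.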

#Hardware #Intel #GPU #Telemetry #Surveillance: "Though that sounds innocuous, Intel provides a long list of the types of data it collects, many unrelated to your computer's performance. Those include the types of websites you visit, which Intel says are dumped into 30 categories and logged without URLs or information that identifies you, including how long and how often you visit certain types of sites. It also collects information on "how you use your computer" but offers no details. It will also identify "Other devices in your computing environment." Numerous performance-related data points are also captured, such as your CPU model, display resolution, how much memory you have, and, oddly, your laptop's average battery life.

Though this sounds like an egregious overreach regarding the type of data captured, to be fair to Intel, it allows you to opt out of this program. That is apparently not the case with Nvidia, which doesn't even ask for permission at any point during driver installation, according to TechPowerUp. AMD, on the other hand, does give you a choice to opt out like Intel does, regardless of what other options you choose during installation, and even provides an explainer about what it's collecting."

https://www.extremetech.com/gaming/intels-gpu-drivers-now-collect-telemetry-including-how-you-use-your-computer

#AI #Datasets #FacialRecognition #ML #CC #CreativeCommons #Flickr #Nvidia: "Flickr Faces High-Quality (FFHQ) is a dataset of Flickr face photos originally created for face generation research by NVIDIA in 2019. It includes 70,000 total face images from 67,646 unique Flickr photos. Since its release the dataset has become one of the most widely used face datasets for a wide variety of research and commercial applications ranging from face recognition to oral region gender recognition. The images in FFHQ were taken from Flickr users without explicit consent and were selected because they contained high quality face images with a permissive Creative Commons license. Many of the images contain infants and children, and over 10% of the dataset no longer exists on the original source, yet NVIDIA, a $1T company, continues to use and benefit from the 70,000 face images taken on Flickr.com to develop commercial AI technologies.

(...)

Even though the main dataset and its derivatives mention the Creative Commons licenses associated with the media, of which many require attribution, no human readable attribution was provided for any photo in any dataset. Attribution is only provided in a 256MB JSON file that could not be opened on a standard laptop computer using Sublime text editor, let alone parsed to understand author attribution. This may amount to a large-scale breach of the Creative Commons attribution requirement. For further reading on the exploitation of Creative Commons licensing scheme, read "Creative Commons and The Face Recognition Problem". To further complicate the issue, it may not be possible at all to use non-consensual face images for AI/ML when attribution is required because including the subject or author name can force the face photo to become PII (personally identifiable information), a protected class of data."

https://exposing.ai/ffhq/
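The attribution problem the exposing.ai write-up describes, credit locked inside a machine-only JSON blob, is mechanically simple to undo once the file can be parsed at all. A minimal sketch of generating the human-readable credit lines CC BY-style licenses expect; the field names here (`author`, `license`, `photo_url`) are hypothetical placeholders, since the real FFHQ metadata schema may differ, and for a genuinely huge file a streaming parser (e.g. the third-party `ijson` package) would avoid loading everything into memory at once.

```python
# Sketch: convert a photo-metadata JSON file into per-photo attribution lines.
import json

def attribution_lines(metadata_path):
    """Yield one 'Photo <id> by <author>, <license> (<url>)' line per entry."""
    with open(metadata_path, encoding="utf-8") as f:
        # Assumed layout: a dict mapping image id -> per-photo metadata.
        records = json.load(f)
    for image_id, info in sorted(records.items()):
        yield (f"Photo {image_id} by {info.get('author', 'unknown')}, "
               f"{info.get('license', 'license unknown')} "
               f"({info.get('photo_url', 'source unknown')})")
```

Emitting these lines alongside each published image is roughly what "human readable attribution" would have required of the dataset's distributors.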

#Crypto #Cryptocurrencies #Worldcoin #AI #UBI #FacialRecognition #Biometrics #DataProtection: "As usual, Worldcoin is a power play — and one that needs to be challenged. It’s far less likely the company will gain the prominence of OpenAI, but in the meantime it’s collecting very sensitive information from people based on false promises. If the company is just a means to pump a token and cash out, that’s bad enough; but if it really does strive to achieve its grander ambitions, that would have serious consequences for us all.

Dan McQuillan, author of Resisting AI, has warned that expanding the scope of AI systems and implementing them into more essential services will ultimately shape the lives of people all around the world, and curtail the choices available to them. We’ll all become the Uber drivers kicked off the app by an algorithm who are then unable to even speak to a human to get the problem resolved. These tools take away our agency, and shift power to those who control them. As McQuillan put it, “AI can decide which humans are disposable.”

There’s a similar risk in a project like Worldcoin: if it does become a global identification system with other services built on top of it, there will be major equity problems regardless of whether Altman assures us otherwise. Operators have already told journalists the Orbs frequently break, don’t detect people properly, or even allow people to be scanned multiple times, while a user in Chile told MIT Technology Review that once he lost access to his account, the company couldn’t help him retrieve it."

https://www.disconnect.blog/p/dont-look-into-the-orb

#China #USA #TradeWar #CleanTech #Energy #Renewables: "China is responsible for the production of about 90 per cent of the world’s rare earth elements, at least 80 per cent of all the stages of making solar panels and 60 per cent of wind turbines and electric-car batteries. In some of the materials used in batteries and more niche products, China’s market share is close to 100 per cent.

China’s cornering of the clean tech supply chain has drawn comparisons to the high level of influence that Saudi Arabia enjoys in the oil market. Just as petrochemical production provides an immovable strategic buffer for the Gulf state, China’s dominance over these clean energy sectors is adding to growing geopolitical competition and has the potential to complicate the world’s fight against global warming.

The stakes are incredibly high.

The rise and rise of China’s clean tech companies poses a massive competitive threat to western manufacturing industries, including legacy carmakers and energy giants. But in the context of a worsening technological cold war with the west, those capabilities could become a source of leverage for China."

https://www.ft.com/content/6d2ed4d3-c6d3-4dbd-8566-3b0df9e9c5c6

#AI #OpenSource #Music #GenerativeAI #Copyright #IP: "The surge of open-source AI programs such as So-Vits-SVC — shared online by Chinese programmers on platforms such as GitHub — has allowed internet users to train and build their own deepfake models that mimic celebrity voices. From Singapore to Spain, people have used these Chinese-made AI programs to resurrect dead artists, parody politicians, and bulk-produce songs in the voices of Kanye West, Taylor Swift, and Donald Trump.

But the proliferation of voice-cloning AI has also triggered warnings from authorities and the music industry. Chinese state media have warned creators of AI songs about copyright violations. After an AI-generated song featuring the voices of Drake and the Weeknd went viral, the Universal Music Group, one of the world’s biggest record companies, called deepfake songs a form of “fraud” and a threat to human creative expression."

https://restofworld.org/2023/deepfake-pop-songs-stefanie-sun-ai/

#California #SanFrancisco #SelfDriving #Activism: "The “coning” that Afergan witnessed was part of a campaign launched by Safe Street Rebel, a local activist group previously known for organizing protests in support of bike-lane construction and public-transit funding. Now its members have turned their attention to robotaxis. According to government data reported by the news site Mission Local, Cruise and its rival Waymo—a subsidiary of Google’s parent, Alphabet—together operate 571 self-driving cabs in California. Users can hail them via an app. Service is concentrated in San Francisco, where the companies have been subject to a variety of limits imposed by the California Public Utilities Commission. The two companies now want the CPUC to remove those restrictions, despite objections from San Francisco’s police union and transportation and fire departments about robotaxis’ troubling habit of blocking traffic and obstructing emergency vehicles. The commission has postponed a decision twice but is expected to vote tomorrow."

https://www.theatlantic.com/ideas/archive/2023/08/robotaxis-san-francisco-self-driving-car/674956/