Wim Vanderbauwhede discusses what wide adoption of large language models (LLMs) could mean for global emissions of carbon dioxide:
[W]ith a hundred very popular AI-based services in the entire world, the electricity consumption resulting from the use of these services would lead to unsustainable increases in global CO₂ emissions.
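To make the scale concrete, here is a rough back-of-envelope sketch of the kind of arithmetic behind that claim. The per-query energy, query volume, and grid carbon intensity below are illustrative assumptions, not Vanderbauwhede’s figures:

```python
# Back-of-envelope estimate of annual CO2 from widely used LLM-backed services.
# Every constant here is an illustrative assumption, not a figure from the post.

ENERGY_PER_QUERY_KWH = 0.003               # assume ~3 Wh of electricity per LLM query
QUERIES_PER_SERVICE_PER_DAY = 100_000_000  # assume one "very popular" service handles 100M queries/day
NUM_SERVICES = 100                         # the hundred popular AI-based services in the quote
GRID_KG_CO2_PER_KWH = 0.45                 # assume roughly global-average grid carbon intensity

daily_kwh = ENERGY_PER_QUERY_KWH * QUERIES_PER_SERVICE_PER_DAY * NUM_SERVICES
annual_kwh = daily_kwh * 365
annual_tonnes_co2 = annual_kwh * GRID_KG_CO2_PER_KWH / 1000

print(f"Annual electricity: {annual_kwh / 1e9:.1f} TWh")
print(f"Annual emissions:   {annual_tonnes_co2 / 1e6:.1f} Mt CO2")
```

Under these assumptions, a hundred such services would draw on the order of 10 TWh a year and emit a few megatonnes of CO₂ from inference alone, before counting training or hardware manufacturing; swapping in different per-query figures shows how quickly the total moves.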
In Metropolitics, Jackson Todd examines delivery worker organizing in New York City to understand how today’s "phantom bosses" are shaping the future of labor rights in the United States:
The logistics of the so-called platform economy have reshaped our cities and communities. Urbanites can now get everything from groceries, toiletries and pet supplies to prescription medications, flowers and fast food delivered to their doors in minutes, disrupting the supply chains of a large swath of industries. For the multinational technology companies whose software powers food-delivery applications (Uber Eats, Grubhub, DoorDash), the primary goal is to create a seamless experience for the customer. But in this process, the logistics of on-demand delivery, including the exploitation of New York City’s delivery personnel, or deliveristas (as they have dubbed themselves), is rendered entirely invisible. Gig workers in New York City have become innovators in their own right, pioneering their own ways of utilizing technology in their fight for better working conditions.
Today, March 8, 2023, is Counter Cloud Action Day:
On this day, we will try to withhold from using, feeding, or caring for The Big Tech Cloud. The strike calls for a hyperscaledown of extractive digital services, and for an abundance of collective organising. We join the long historical tail of international feminist strikes, because we understand this fight to be about labour, care, anti-racism, queer life and trans★feminist techno-politics.
Too many aspects of life depend on The Cloud. The expansionist, extractivist and financialized modes of Big Tech turn all lively and creative processes into profit. This deeply affects how we organise and care for resources. Many public institutions such as hospitals, universities, archives and schools have moved to rented software-as-a-service for their core operations. The interests of Big Tech condition how we teach, make accessibility, learn, know, organise, work, love, sleep, communicate, administrate, care, and remember.
For Popular Science, Charlotte Hu visited the Unconventional Computing Laboratory where computer science researchers are developing a unique type of wetware: fungal computers.
Already, scientists know that mushrooms stay connected with the environment and the organisms around them using a kind of “internet” communication. … By deciphering the language fungi use to send signals through this biological network, scientists might be able to not only get insights about the state of underground ecosystems but also tap into them to improve our own information systems.
Mushroom computers could offer some benefits over conventional computers. Although they can’t ever match the speeds of today’s modern machines, they could be more fault tolerant (they can self-regenerate), reconfigurable (they naturally grow and evolve), and consume very little energy.
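For a sense of what “deciphering the language” of fungal spiking can look like in practice, researchers in this area (notably Andrew Adamatzky, who runs the lab) have segmented recorded electrical spikes into word-like groups according to the silences between them. Below is a toy sketch of that idea; the spike timestamps and gap threshold are made up for illustration:

```python
# Toy sketch: group a fungal electrical spike train into "words" by splitting
# wherever the gap between consecutive spikes exceeds a threshold.
# Spike times and the 600-second threshold are illustrative assumptions.

from typing import List

def spikes_to_words(spike_times_s: List[float], max_gap_s: float = 600.0) -> List[int]:
    """Return the length (spike count) of each "word" in the train."""
    if not spike_times_s:
        return []
    word_lengths = [1]
    for prev, curr in zip(spike_times_s, spike_times_s[1:]):
        if curr - prev <= max_gap_s:
            word_lengths[-1] += 1   # spike follows closely: same word
        else:
            word_lengths.append(1)  # long silence: start a new word
    return word_lengths

# Made-up spike timestamps in seconds:
print(spikes_to_words([0, 120, 300, 2000, 2100, 9000]))  # -> [3, 2, 1]
```

The distribution of word lengths extracted this way is what gets compared, loosely, with the statistics of human languages.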
In Dissent, Anna-Verena Nosthoff and Felix Maschewski of the Critical Data Lab analyze the failure of the "metaverse" to materialize, despite Mark Zuckerberg’s considerable efforts.
In all likelihood, the metaverse is an expensive PR stunt to help Meta again bask in the light of profitability and glorious innovation following its spate of recent data scandals. However, not all has gone according to plan. … All this paints a picture of a wider disillusionment pointing to the fraying of solutionism itself.
Julia Velkova and Jean-Christophe Plantin introduce a New Media & Society special issue on data centers:
Most Internet content and data today pass through and get stored on these facilities. As we are writing this text, large and small territories on Earth’s five continents, underground, underwater, and in space are being envisioned, planned, and zoned for the construction and operation of new data centers. From Singapore to Iceland, and from Cape Town through Chile to Northern Ireland, data centers have become critical to large-scale industrial projects that render climate, energy, and the planet “knowable” and exploitable through data. … The timely operation of platform services, computation on demand, streaming video, and social media are thus critically dependent not just on software or data capture but also upon organizing and managing their timely provision from within data centers.
Other contributors to the issue include Devika Narayan, Steven Gonzalez Monserrate, Tonia Sutherland, Mél Hogan, Vicki Mayer, A.R.E. Taylor, and Patrick Bresnihan.
In the NORRAG Blog, Kean Birch asks what material limits the EdTech sector is likely to come up against in its pursuit of data as a value-creating asset:
There’s a lot going on in EdTech, once you start looking; not all of it is going to be good for teaching or learning, and not all of it is going to be good for universities and faculty. In fact, there are a range of unintended or unexpected consequences from this expansion of EdTech that we simply can’t predict.
An important issue that has come up during our fieldwork is the attempts by EdTech companies to find a use for all the personal and user data they are collecting, whether deliberately or accidentally. Most EdTech products and services end up producing data in one form or another, and many assume that there is "gold in data," as one informant told us.
The Internet Archive’s Brewster Kahle summarizes some challenges to preserving digital heritage:
Ever try to read a physical book passed down in your family from 100 years ago? Probably worked well. Ever try reading an ebook you paid for 10 years ago? Probably a different experience. From the leasing business model of mega publishers to physical device evolution to format obsolescence, digital books are fragile and threatened. For those of us tending libraries of digitized and born-digital books, we know that they need constant maintenance—reprocessing, reformatting, re-invigorating or they will not be readable or read.
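One small, concrete piece of that constant maintenance is fixity checking: periodically recomputing checksums of stored files and comparing them against a manifest to catch silent corruption. The sketch below is a generic illustration, not the Internet Archive’s actual tooling, and the manifest layout and path are hypothetical:

```python
# Minimal fixity-check sketch for a digital collection: recompute SHA-256
# digests and flag files that no longer match a stored manifest.
# Generic illustration only; the manifest format and path are hypothetical.

import hashlib
import json
from pathlib import Path
from typing import List

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def check_fixity(manifest_file: Path) -> List[str]:
    """Return relative paths whose current digest differs from the manifest."""
    manifest = json.loads(manifest_file.read_text())  # {"relative/path": "hex digest", ...}
    base = manifest_file.parent
    return [
        rel for rel, expected in manifest.items()
        if not (base / rel).exists() or sha256_of(base / rel) != expected
    ]

if __name__ == "__main__":
    print(check_fixity(Path("collection/manifest.json")))  # hypothetical path
```

Checksums only detect damage, of course; the reprocessing and reformatting Kahle describes is what it takes to repair it.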
Wired has published an excerpt from Sarah Lamdan’s Data Cartels: The Companies That Control and Monopolize Our Information:
You might not be familiar with RELX, but it knows all about you. Reed Elsevier LexisNexis (RELX) is a Frankensteinian amalgam of publishers and data brokers, stitched together into a single information giant. There is one other company that compares to RELX—Thomson Reuters, which is also an amalgamation of hundreds of smaller publishers and data services. Together, the two companies have amassed thousands of academic publications and business profiles, millions of data dossiers containing our personal information, and the entire corpus of US law. These companies are a culmination of the kind of information market consolidation that’s happening across media industries, from music and newspapers to book publishing. However, RELX and Thomson Reuters are uniquely creepy as media companies that don’t just publish content but also sell our personal data.
In Hyperallergic, Marco Donnarumma reflects on the ethical — and artistic — shortcomings of AI image generators:
As an artist and scholar working with open source technology since 2004 and with machine learning and AI since 2012, I’m as fascinated as I am wary of the creative potentials and cultural implications of machine learning. Deep learning and, by extension, AI generators are particularly problematic because their efficiency depends on the exclusive assets of a few extraordinarily wealthy agents in the industry.