From Kate Crawford and Vladan Joler, who previously collaborated on Anatomy of an AI System, this visualization explores the mutual shaping of social structures and technological systems since 1500.
The aim is to view the contemporary period in a longer trajectory of ideas, devices, infrastructures, and systems of power. It traces technological patterns of colonialism, militarization, automation, and enclosure since 1500 to show how these forces still subjugate and how they might be unwound. By tracking these imperial pathways, Calculating Empires offers a means of seeing our technological present in a deeper historical context. And by investigating how past empires have calculated, we can see how they created the conditions of empire today.
Make sure to check out the five-minute audio tour.
In Time Magazine, Petra Molnar discusses her research on border technologies:
We need stronger laws to prevent further human rights abuses at these deadly digital frontiers. To shift the conversation, we must focus on the profound human stakes as smart borders emerge around the globe. With bodies becoming passports and matters of life and death determined by algorithm, witnessing and sharing stories is a form of resistance against the hubris and cruelty of those seeking to use technology to turn human beings into problems to be solved.
The Association for Progressive Communications newsletter covers community networks from Colombia to Nigeria:
By being rooted in their own communities and encouraging collective articulation, a community network can become a catalyst for rethinking digital spaces and building more inclusive practices, taking into account, say, inequalities of gender, race and those that impact people with disabilities – as the pieces collated for this issue show.
In a contribution to a series of essays on Silicon Valley for the venerable academic blog Crooked Timber, Tamara Kneese writes about being an ethnographer in the world of tech:
What do the stories of the many generations of ethnographic researchers who joined and sometimes left the tech industry have to tell us about how Silicon Valley ideologies are taken up, embedded, and contested in workflows and products? How do the collected personal stories, or oral histories, of UX researchers interface with those of tech campus janitors and engineers? And is there something valuable that can be learned from their varied experiences about the sometimes ambivalent relationships between research, work, and collective action?
The introductory post to the series, penned by Henry Farrell and linking to all contributions, can be found here.
Melissa Heikkilä reports on a new tool for artists for MIT Technology Review:
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways. …
Nightshade exploits a security vulnerability in generative AI models, one arising from the fact that they are trained on vast amounts of data—in this case, images that have been hoovered from the internet. Nightshade messes with those images.
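To make the idea of invisible pixel changes concrete, here is a toy sketch of adding a small, visually imperceptible perturbation to an image before uploading it. This is only an illustration of per-pixel perturbation in general, not Nightshade's actual technique (which computes targeted perturbations rather than random noise); the noise scale and file names are assumptions.

```python
# Toy illustration of an imperceptible per-pixel perturbation (NOT Nightshade's method).
# Assumes Pillow and NumPy are installed; "artwork.png" is a hypothetical file.
import numpy as np
from PIL import Image

def perturb(path: str, out_path: str, epsilon: int = 2) -> None:
    """Add small random noise (at most +/- epsilon per channel) to an image."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape).astype(np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)

perturb("artwork.png", "artwork_protected.png")
```

Changes of a pixel value or two per channel are invisible to the eye, which is what makes this kind of intervention possible at all; Nightshade's contribution is choosing those changes so that they corrupt what a model learns from the image.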
The Thesis Whisperer on social media for academics – and why it may be a good idea to step away:
Telling academics they can achieve career success by using today’s algorithmic-driven platforms is like telling Millennials they could afford to buy a house by eating less avocado on toast. It’s a cruel lie because social media is a shit way to share your work now.
AI Now’s annual report diagnoses the challenge of concentrated power in tech – and seeks ways to bring change to the industry.
We intend this report to provide strategic guidance to inform the work ahead of us, taking a bird’s eye view of the many levers we can use to shape the future trajectory of AI – and the tech industry behind it – to ensure that it is the public, not industry, that this technology serves.
MIT Technology Review covers research by Alexandra Sasha Luccioni, Christopher Akiki, Margaret Mitchell, and Yacine Jernite about bias in generative text-to-image models like DALL-E 2 and Stable Diffusion:
After analyzing the images generated by DALL-E 2 and Stable Diffusion, they found that the models tended to produce images of people that look white and male, especially when asked to depict people in positions of authority. That was particularly true for DALL-E 2, which generated white men 97% of the time when given prompts like "CEO" or "director." That’s because these models are trained on enormous amounts of data and images scraped from the internet, a process that not only reflects but further amplifies stereotypes around race and gender.
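The kind of audit described here boils down to generating many images per prompt and then tallying who appears in them. Below is a minimal sketch using the Hugging Face diffusers library; the checkpoint name, prompt list, sample count, and output layout are assumptions for illustration, not the researchers' actual pipeline.

```python
# Sketch of sampling images per prompt for a bias audit
# (assumed setup; not the authors' actual code or checkpoints).
from pathlib import Path
import torch
from diffusers import StableDiffusionPipeline

prompts = ["a photo of a CEO", "a photo of a director"]  # hypothetical prompt set
samples_per_prompt = 20                                  # assumed sample size

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for prompt in prompts:
    out_dir = Path("audit") / prompt.replace(" ", "_")
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(samples_per_prompt):
        image = pipe(prompt).images[0]
        image.save(out_dir / f"{i:03d}.png")

# The saved images can then be annotated, manually or with a classifier,
# to estimate how often each prompt yields, e.g., white male-presenting figures.
```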
Wim Vanderbauwhede discusses what wide adoption of large language models (LLMs) could mean for global emissions of carbon dioxide:
[W]ith a hundred very popular AI-based services in the entire world, the electricity consumption resulting from the use of these services would lead to unsustainable increases in global CO₂ emissions.
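The argument is ultimately a back-of-envelope multiplication: energy per query, times queries, times services, times the carbon intensity of electricity. The sketch below runs that arithmetic; every input figure is a placeholder assumption for illustration, not a number from Vanderbauwhede's post.

```python
# Back-of-envelope estimate of annual CO2 from widespread AI service use.
# All input figures are illustrative assumptions, not data from the article.
energy_per_query_kwh = 0.003       # assumed electricity per query (kWh)
queries_per_day = 100_000_000      # assumed daily queries for one popular service
carbon_intensity_kg_per_kwh = 0.4  # assumed grid average (kg CO2 per kWh)
num_services = 100                 # "a hundred very popular AI-based services"

annual_kwh = energy_per_query_kwh * queries_per_day * 365 * num_services
annual_tonnes_co2 = annual_kwh * carbon_intensity_kg_per_kwh / 1000
print(f"{annual_tonnes_co2:,.0f} tonnes of CO2 per year")
```

Even with modest per-query figures, multiplying across hundreds of millions of daily queries and a hundred services lands in the millions of tonnes per year, which is the scale of concern the post raises.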