See how biased AI image models are

MIT Technology Review covers research by Alexandra Sasha Luccioni, Christopher Akiki, Margaret Mitchell, and Yacine Jernite about bias in generative text-to-image models like DALL-E 2 and Stable Diffusion:

After analyzing the images generated by DALL-E 2 and Stable Diffusion, they found that the models tended to produce images of people that look white and male, especially when asked to depict people in positions of authority. That was particularly true for DALL-E 2, which generated white men 97% of the time when given prompts like "CEO" or "director." That’s because these models are trained on enormous amounts of data and images scraped from the internet, a process that not only reflects but further amplifies stereotypes around race and gender.

Submitted by jboy (via)