I took some text from ‘Feminist City’ by Leslie Kern and translated it into images using text-to-image AI platforms like DALL·E 2 and Midjourney. What is really interesting is that these images suggest a sense of co-living, a concept the author also proposes in her work. Yet it is disappointing to see that most of these cities are painted in pink or pastel hues.
This exercise re-emphasised that these AI platforms carry biases ingrained in their training datasets, and we should be mindful of this when we use these technologies to help us imagine the future.
The realization that ‘biases’ hide behind the datasets used to train these image-based AI models occurred to me while reviewing a seminar taught by the extremely talented Tucker van-Leuwen Hall. The seminar, titled ‘Impressions on the Near-Future of Labor’, asks students to speculate on careers outside of architecture and to use text-to-image AI models to visualize them. The seminar produced beautiful images that intricately depicted the complexity and vastness of the careers proposed. The images the students produced in collaboration with AI were unlike anything any of us had ever seen before.
While zooming in and out of those images, trying to fathom the novelty of the work produced, one of my co-jurors noted how closely the ‘careers’ depicted resembled careers that actually exist in the world at present. This remark reminded me of the biases that we as designers bring to our work, and the biases that are entrenched in these AI models.
These biases make me wonder about the capability of these AI tools to bring novelty to our view of the future. If the AI depends on us for data, can we depend on it to produce the ‘unknown’?