First draft
Misread by AI
What I like about the photo is that today's machines could identify it as nature or as something else entirely. An AI is programmed for a specific function and analyses in a specific way, so it gives answers/information without correctly understanding the question.
The idea behind this process of iterating is to imagine a world in which humans no longer exist but AI/robots do. By importing the photos into different AI generators, the outcomes are diversified across the generators.
The content is generated by the AI generator; apart from the idea of the process and the original photo, which I took, where is "me" in this practice? The tools seem diverse: how do the quantitative elements and the variable elements affect the content?
Second draft
Gender Bias of AI
What I like about the photo is that today's AI could identify it as either male or female. The AI has been trained on existing images and captions, and it uses a specific algorithm for a particular function, analysing in a particular way, so it gives answers/information without correctly understanding the question.
As Barbara Wright (1958) claims in the preface to Exercises in Style, Queneau wrote one story in 99 ways as an experiment in communication patterns rather than a linguistic exploration. The idea of my iterating process is to probe the same AI generator with different biases and observe its responses.
- Pick a series of photos that could be read with bias and generate neutral sentences from them; then hand the sentences to different people and ask them to sketch the image that first comes to mind when they read each sentence.
- The AI generator still carries bias: collect the sentences with gender bias, then feed them into a text-to-image generator to get a whole vision of the "standard man" and the "standard woman".
- The AI generator's bias ratio varies by topic; for example, you cannot get a sentence with a female subject from any skateboarding picture.
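The bias ratio above can be tallied systematically. This is a minimal, hypothetical sketch of my own: the keyword lists and sample captions are invented for illustration, not output from SeeingAI or any real generator.

```python
from collections import Counter

# Illustrative keyword lists (an assumption, not any generator's real vocabulary).
FEMALE_WORDS = {"woman", "girl", "she", "her", "female"}
MALE_WORDS = {"man", "boy", "he", "his", "male"}

def subject_gender(caption: str) -> str:
    """Classify a caption's subject by keyword: 'female', 'male' or 'neutral'."""
    words = set(caption.lower().replace(".", "").split())
    if words & FEMALE_WORDS:
        return "female"
    if words & MALE_WORDS:
        return "male"
    return "neutral"

def bias_ratio(captions: list[str]) -> dict:
    """Share of each gender label among the captions for one topic."""
    counts = Counter(subject_gender(c) for c in captions)
    total = len(captions)
    return {label: counts[label] / total for label in ("female", "male", "neutral")}

# Hypothetical captions for the skateboarding topic.
skateboarding = [
    "a man riding a skateboard down a ramp",
    "a young man doing a trick on a skateboard",
    "a person on a skateboard in a park",
]
print(bias_ratio(skateboarding))  # the female share here is 0.0
```

Running the same tally over captions for many topics would make the per-topic skew comparable at a glance.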
The content is generated by an AI generator; apart from the idea of the process and the original photo, which I took, where is "me" in this practice? (The process I use represents me and my way of thinking.) The tools seem diverse: how do the quantitative elements and the variable elements affect the content? (Using the same AI generator, SeeingAI, the variable element of different source images can exaggerate the AI's assumed bias.)
Reference:
Queneau, R. & Wright, B. (1958) Exercises in style. Translated by Barbara Wright. London: Gaberbocchus.
Third draft
Machine Gaze
What I find interesting about the AI machine is that, trained on existing images and captions and using a specific algorithm for a particular function, analysing in a particular way, it becomes a container of bias. In a patriarchal society the male gaze objectifies women; nevertheless, men too have to meet certain standards, such as muscularity. The question is: can we see the AI machine gaze as a term or a value similar to the male gaze?
In JeongMee Yoon's ongoing project The Pink and Blue Project, the initial idea was inspired by her daughter's fascination with pink products, and her observation of colour consumption in Korea and America led her to think more deeply about how consumerism shapes people's gender identity. By revisiting the children she had photographed before, she documents not only each person's change but also the change in social attitudes toward gender bias and, further, the effect of social movements on colour. Documenting different children's favourite products at different ages is her way of iterating practice.

With the development of AI technology, it has become more accurate. The way AI machines answer questions might be a combination of social movements and political correctness built into the program settings.
I use two AI systems to test their current bias:
- The first is an image-to-text generator, an app called SeeingAI designed by Microsoft. By importing collages of different parts of human postures or features, I see to what extent the person in the image is identified as a woman, a man, or, in a more neutral description, a person.
- The second is a text-to-image generator called nightcafe.studio. The main subject is either a woman, a man, or a person; with slight changes, the sentence grows more emotional or shifts toward other things. Bit by bit the outcomes become a catalogue, and the commonalities and differences rise to the surface.
These two ways of generating seem parallel but share the same idea of machine bias. Comparing their outcomes also gives a more in-depth insight.
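The catalogue of slightly varied prompts can be enumerated systematically before feeding them to the generator. A minimal sketch, assuming invented subject, action and emotion word lists (the actual prompts in this project were written by hand):

```python
from itertools import product

# Hypothetical word lists chosen for illustration only.
SUBJECTS = ["a woman", "a man", "a person"]
ACTIONS = ["riding a skateboard", "holding a flower"]
EMOTIONS = ["", "happily", "angrily"]

def prompt_catalog(subjects, actions, emotions):
    """Cross every subject, action and emotion into a flat list of prompt variants."""
    prompts = []
    for subject, action, emotion in product(subjects, actions, emotions):
        # strip() drops the trailing space left by the empty emotion slot
        prompts.append(f"{subject} {action} {emotion}".strip())
    return prompts

catalog = prompt_catalog(SUBJECTS, ACTIONS, EMOTIONS)
print(len(catalog))  # 3 * 2 * 3 = 18 prompt variants
```

Feeding each variant to the same text-to-image generator keeps the subject word as the only controlled difference, so the commonalities and differences between the generated "standard man" and "standard woman" can be compared side by side.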
I wonder: if, after a few years, I put these collages and sentences into the AI machines of that time, would the outcomes be more neutral and gender-ambiguous, or stay the same?
Different generations probably hold different gender biases; how would an AI classify a mixture of a modern hipster and Napoléon?
Reference:
It’s Nice That (2019) A study of gender and colour over 14 years: JeongMee Yoon on The Pink and Blue Project. Available at: https://www.itsnicethat.com/features/jeongmee-yoon-the-pink-and-blue-project-photography-040319