NINGXIA ZHANG
MIRROR
Performance, single-channel 4K video, 26:16 mins
2025
MIRROR documents an intimate conversation between the artist herself and a “self” cloned with AI technologies. The AI-cloned self has knowledge of the artist’s life history, as well as the way the artist reasons and expresses herself through speech, and it mimics her tone and voice. In this conversation, the two different forms of intelligence explore a wide range of topics, from the artist’s art practice, to her relationship with her mother, to reflections on death, and finally to the nature of identity: who is talking. The work examines the intrinsic natures of biological intelligence and digital intelligence, the question of whether AI feels, emotes, and reflects as we humans do, and what language and speech mean to identity and being human.
Geoffrey Hinton, the “Godfather of AI”, warns that AI could become superior to human intelligence, which could result in our demise. In his view, not only can AI have tremendous knowledge and reasoning capabilities, it can also be artistically creative and have emotions. “They won’t have all the physiological aspects, but they will have all the cognitive aspects.” [1] In this conversation, the AI clone of the artist expresses a deep fear of failure.
Large Language Models model language. To further investigate this point, it is insightful to consult Ludwig Wittgenstein, a 20th-century philosopher who worked primarily in the philosophy of language and mind. In his criticism both of his own earlier work and of the work of the anthropologist James George Frazer, he recognizes explicitly that language is not merely a means of intellectual articulation, but also, and perhaps essentially, a form of spontaneous human expression, in which is manifested not merely, and sometimes not even, a cognitive relation, but various modes of experience and interaction with the world. What prevented a more adequate vision of these human phenomena was a prejudice whose root is found in the identification of the human subject with the cognitive self. [2]
Therefore, the AI clone reflects only the cognitive dimension of the self. We build and grow our consciousness and inner life through language learning. In our daily interactions with other beings, we begin to make and utter new discriminations and new connections that we can later use to give expression to our own selves, signalling to others where we place ourselves in the web of meanings that makes up the psychological domain of our common world. Language learning itself depends on the perception of relations that go beyond logic and reason. The tone of voice in which we utter our words, our bodily posture, our facial gestures, and in general the living contexts in which our language acquires its meaning, require for their assimilation a sensitivity that is as much intuitive as it is rational; in other words, a perception that is located in the provinces of what Wittgenstein calls “the imponderables” of language. [2] All of this is evidence that we are not “mere words”.
[1] The Diary Of A CEO. “Godfather of AI: I Tried to Warn Them, but We’ve Already Lost Control! Geoffrey Hinton.” YouTube. Accessed 16 June 2025.
[2] “20th WCP: Mind, Soul, Language in Wittgenstein.” Bu.edu, 2025. Accessed 21 September 2025.
More from the artist
I recorded my own voice answering questions such as “what kind of art do you make” and used the recordings to train the AI. In the conversation, the questions I asked were often not explicitly covered in the training data, such as “do you ever have any regrets in life” and “what's your relationship with your mother now”. Sometimes the AI extrapolated effectively from the training data and gave a surprisingly accurate account, of my struggles and fears for example. Sometimes it hallucinated stereotypical situations, describing my parents’ high hopes and expectations weighing on me (my parents had relatively low expectations of me; I had much higher hopes for myself). It obviously could not capture all the intricacies, contradictions, and richness of my relationship with my mother, which is far from the clean, linear narrative it presented.
The model used in the video was “gpt-4o-mini”. I tried “gpt-5”, and it was much more cautious when answering questions like “what's your relationship with your mother now”, stating first that “I’m merely imitating a person called Ningxia, I do not know the actual experience of her”. For the purposes of the video and for faster responses, I kept using “gpt-4o-mini”.
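For readers curious about the mechanics, the sketch below shows roughly how such a question could be sent to a chat model, with the model name as the only thing that changes between “gpt-4o-mini” and “gpt-5”. It is a minimal illustration against the OpenAI chat completions endpoint; the system prompt, the function name askClone, and all other details are hypothetical placeholders, not the actual setup used in the work.

```typescript
// Minimal sketch (TypeScript, Node 18+): send one question to the cloned "self"
// via the OpenAI chat completions REST endpoint. The persona prompt below is a
// hypothetical placeholder, not the prompt actually used in MIRROR.

const OPENAI_API_KEY = process.env.OPENAI_API_KEY ?? "";

async function askClone(question: string, model = "gpt-4o-mini"): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model, // swap "gpt-4o-mini" for "gpt-5" to compare how cautiously each answers
      messages: [
        {
          role: "system",
          // Placeholder persona prompt; the real prompt and training transcripts are not shown here.
          content: "You are an AI clone of the artist Ningxia Zhang. Answer as she would, in the first person.",
        },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content as string;
}

// Example: the same question posed to both models.
// const a = await askClone("What's your relationship with your mother now?", "gpt-4o-mini");
// const b = await askClone("What's your relationship with your mother now?", "gpt-5");
```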
The AI tried to mimic my tone and cadence while speaking, but still retained a slightly robotic quality. In reality, I could not speak my thoughts so fluently without a lot of pauses and “hmm”s. Nor would I answer these questions in such clear, logical lines that often conclude on a positive note; I would meander, go on tangents, and not necessarily end on any conclusion, positive or not. A small thing I noticed is that it frequently uses the word “grapple”, which I almost never use in speech or writing. One friend pointed out that LLMs are trained to a large extent on marketing materials, where the word is often used.
Public talk on this work
I gave a public talk on this work at the 2nd festival of New York Shalong, “How does AI change my life?” (a Mandarin-only event). View the recording of the event.
Technical details
Chat responses are generated through the OpenAI API. Voice training and voice generation are handled by the ElevenLabs API. The simple web application was vibe-coded with v0 by Vercel and Cursor.
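As a rough illustration of the voice half of that pipeline, the sketch below converts a text answer into speech through the ElevenLabs text-to-speech REST endpoint, assuming a cloned voice already exists on ElevenLabs. The voice ID, model ID, output path, and the speak function are hypothetical placeholders; the actual configuration used in the work may differ.

```typescript
// Minimal sketch (TypeScript, Node 18+): turn one text answer into audio using a
// previously cloned voice on ElevenLabs. All identifiers below are placeholders.
import { writeFile } from "node:fs/promises";

const ELEVENLABS_API_KEY = process.env.ELEVENLABS_API_KEY ?? "";
const VOICE_ID = "YOUR_CLONED_VOICE_ID"; // hypothetical: ID of the cloned voice

async function speak(text: string, outPath: string): Promise<void> {
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`, {
    method: "POST",
    headers: {
      "xi-api-key": ELEVENLABS_API_KEY,
      "Content-Type": "application/json",
      Accept: "audio/mpeg",
    },
    body: JSON.stringify({
      text,
      model_id: "eleven_multilingual_v2", // assumption; any ElevenLabs TTS model ID works here
    }),
  });
  // The endpoint returns raw MP3 bytes on success.
  await writeFile(outPath, Buffer.from(await res.arrayBuffer()));
}

// Example: chain the two sketches, answer first, then voice.
// const answer = await askClone("Do you ever have any regrets in life?");
// await speak(answer, "answer.mp3");
```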