We gaze intently at our machines all the time, spending the majority of our waking hours with computers, cell phones, and TVs. But how do machines see us? In the future, children and adults alike will need to understand how machines work and how machines understand us. This project investigates what happens behind the scenes when a machine recognizes us and communicates with us. We created an interactive installation that shows how Computer Vision tracks and recognizes faces and projects its knowledge base onto a 3D face sculpture. We then created a workshop that uses activities like costumes and disguises to help families understand machine vision.



We refurbished a surveillance camera from a supermarket in Washington and fitted it with Computer Vision to track human faces. Its two axial motors and one rotational motor move the camera, following a human face as it meanders around the gallery.
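The face-following behavior can be sketched as a simple proportional control loop: each frame, compare the detected face's center to the center of the image and nudge the pan/tilt motors to shrink the offset. The sketch below is an illustration, not the installation's actual code; the `(x, y, w, h)` box format, frame size, and gain are assumptions, and in practice a detector such as OpenCV's `cv2.CascadeClassifier` would supply the bounding box.

```python
# Proportional pan/tilt tracking sketch: steer the camera so a detected
# face drifts toward the center of the frame.
# Frame size, box format, and gain are illustrative assumptions.

FRAME_W, FRAME_H = 640, 480
GAIN = 0.05  # fraction of the pixel error converted to a motor step (degrees)

def tracking_step(face_box):
    """Return (pan_step, tilt_step) in degrees for one detected face.

    face_box is (x, y, w, h) in pixels, the format returned by detectors
    such as OpenCV's Haar cascade (cv2.CascadeClassifier.detectMultiScale).
    """
    x, y, w, h = face_box
    face_cx = x + w / 2
    face_cy = y + h / 2
    # Positive error means the face sits right of / below the frame center.
    err_x = face_cx - FRAME_W / 2
    err_y = face_cy - FRAME_H / 2
    return GAIN * err_x, GAIN * err_y

# A face already centered in the frame requires no movement:
assert tracking_step((270, 190, 100, 100)) == (0.0, 0.0)
```

A small gain keeps the motion slow and deliberate, which is part of what gives the camera its lifelike, searching quality.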


We used a CNC mill to make a 3D form of a face, with one side smooth like a real face and the other a low-poly surface akin to a digital version of a real face. A projector shows looping images of the faces used to train the face tracker's algorithm.


As the camera looks around, if a face is detected, a scaled version of the face is projected onto the 3D face sculpture. The audience can see their own faces from the point of view of the camera as they are detected.
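Mapping the detected face onto the sculpture amounts to scaling the camera's bounding box into the projection area while preserving its aspect ratio. A minimal sketch; the projection dimensions are made-up numbers, not the exhibit's actual values.

```python
# Fit a detected face crop into the projection area on the sculpture,
# scaling uniformly so the face is never stretched.
# The projection area size is a hypothetical value for illustration.

PROJ_W, PROJ_H = 800, 1000  # assumed projection area, in pixels

def fit_face(w, h):
    """Return the (width, height) of a w x h face crop scaled uniformly
    to the largest size that still fits inside the projection area."""
    scale = min(PROJ_W / w, PROJ_H / h)
    return round(w * scale), round(h * scale)

# A square face crop is limited by the narrower projection dimension:
assert fit_face(100, 100) == (800, 800)
```

Uniform scaling matters here: stretching the crop to fill both dimensions would distort the visitor's features on the sculpture.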


To show families how Computer Vision works, we created a workshop in conjunction with the exhibit. Participants dress up in disguises to fool the machine into categorizing their faces as new faces, a game they can then play at the exhibit.


After spending decades tediously picking out shoplifters at a Whole Foods supermarket in Richmond, Washington, this surveillance camera has been refitted with Computer Vision and Machine Learning and given a new lease on life: showing off how machines see people. When it detects a face, we project what it sees through its camera alongside some of the images used to train it. Audiences can watch the camera's view on a monitor and work out how to get their faces detected, or how to avoid detection, using other body parts, (un)favorable lighting, and angles of view. Current computer technologies are opaque to most people; by taking the perspective of a refurbished security camera, we show how Computer Vision recognizes faces.
This is how we built it, and how we ran the workshops.


As the security camera moves about its three axes, audiences begin to perceive emotional qualities like curiosity, caution, and shyness in the machine's movements. When their faces are detected, the projection reveals their own features on the large sculpture, prompting curiosity about how they got there. Audiences then examine the details, noticing their own faces on the monitor next to the camera, which shows the machine's point of view and how face detection works. The exhibit aims to foster the development of self-awareness through interaction with how machines see us.
To see how we built it, see the photos above, the design document, the museum label, and this video.


To allow audiences to engage hands-on with our work, we ran workshops for young people and their families focused on understanding how Computer Vision and Machine Learning work. We had them draw faces and discuss what it means for computers to see us, then let everyone train an online image classifier using their own faces. Next, everyone went wild putting on disguises and trained the classifier again on these new faces. They then removed pieces of their disguise one by one, playing with the classifier as it flipped between recognizing their face and the new disguised face. Finally we took them to the exhibit, where they continued exploring image recognition with our projection-coupled camera. Many families kept their disguises on at the exhibit and explored what self-awareness means in the context of computer vision.
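The classifier game's core idea can be shown with a toy nearest-centroid classifier: average the pixel values of each class ("plain" vs. "disguised"), then label a new image by whichever class mean it sits closest to. This is a stand-in sketch, not the online tool the workshops used; real trainable classifiers work on camera frames and learned features, and the tiny 4-pixel "images" here are invented for illustration.

```python
# Toy nearest-centroid image classifier. Each "image" is a flat list of
# pixel intensities; the 4-pixel examples below are made up.

def centroid(images):
    """Per-pixel mean of a list of equally sized images."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def train(classes):
    """classes: {label: [image, ...]} -> {label: class centroid}."""
    return {label: centroid(imgs) for label, imgs in classes.items()}

def classify(model, image):
    """Return the label whose centroid is nearest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(image, model[label]))
    return min(model, key=dist)

model = train({
    "plain":     [[0.9, 0.8, 0.9, 0.8], [0.8, 0.9, 0.8, 0.9]],
    "disguised": [[0.1, 0.2, 0.1, 0.2], [0.2, 0.1, 0.2, 0.1]],
})
# An image resembling the undisguised training photos is labeled "plain":
assert classify(model, [0.85, 0.85, 0.85, 0.85]) == "plain"
```

Removing pieces of a disguise moves the image gradually between the two class means, which is why participants can find the exact point where the classifier flips its answer.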
See this video for the workshop interactions.

This work was supported by the New York Hall of Science Designer-in-Residence 2019 program, with additional materials and facilities support from the Design and Technology department at Parsons School of Design and the Mechanical Engineering department at NYU.
Thanks to support and assistance from Liz Slagus, Erin Thelen, Michael Cosaboom, Nolan Quinn, Sean Walsh, Karl Szilagi, Jeffrey Geiringer, Philipp Schmitt, and Truck McDonald.
Further data and description of how we produced the exhibit for the development of self-awareness can be found in this paper in Frontiers in Robotics and AI: Human-Robot Interaction.