We gaze intently at our machines all the time, spending the majority of our time with computers, cell phones, and TVs. But how do machines see us? In the future, children and adults alike will need to understand how machines work and how machines understand us. This project investigates what happens behind the scenes when a machine recognizes us and communicates with us. We created an interactive installation that shows how Computer Vision tracks and recognizes faces, projecting its knowledge base onto a 3D face sculpture. We then created a workshop that uses activities like costumes and disguises to help families understand machine vision.
We refurbished a surveillance camera from a supermarket in Washington and equipped it with Computer Vision software to track human faces. Its two axial motors and one rotational motor move the camera, following a visitor's face as it meanders around the gallery.
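The tracking loop behind such a camera can be sketched as a simple proportional controller: a face detector returns a bounding box, and the motors are nudged until the face sits at the center of the frame. The frame size, gain, and deadband below are illustrative assumptions, not the installation's actual parameters, and the detector and motor driver are stand-ins.

```python
# Sketch of the pan/tilt logic that keeps a detected face centered in frame.
# Assumes a face detector (e.g. OpenCV's Haar cascade) has already returned
# a bounding box (x, y, w, h) in pixel coordinates.

FRAME_W, FRAME_H = 640, 480   # assumed camera resolution
DEADBAND = 20                 # pixels of tolerance before the motors move
GAIN = 0.05                   # degrees of motor travel per pixel of error

def pan_tilt_command(face_box):
    """Return (pan_deg, tilt_deg) adjustments that re-center the face."""
    x, y, w, h = face_box
    face_cx = x + w / 2
    face_cy = y + h / 2
    err_x = face_cx - FRAME_W / 2   # positive: face is right of center
    err_y = face_cy - FRAME_H / 2   # positive: face is below center
    pan = -GAIN * err_x if abs(err_x) > DEADBAND else 0.0
    tilt = -GAIN * err_y if abs(err_y) > DEADBAND else 0.0
    return pan, tilt
```

In a real loop, the returned angles would be sent to the stepper drivers each frame; the deadband keeps the motors from jittering when the face is already roughly centered.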
We used a CNC machine to mill a 3D form of a face, with one side smooth like a real face and the other side given a low-poly texture akin to a digital version of a face. A projector shows looping images of the faces used to train the face-tracking algorithm.
As the camera looks around and detects a face, a scaled version of that face is projected onto the 3D face sculpture, letting the audience see their own faces from the camera's point of view as they are detected.
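Scaling the detected face to the projection surface amounts to fitting one rectangle inside another while preserving aspect ratio, so the face is not distorted on the sculpture. This is a minimal sketch of that mapping; the target dimensions and the fit-inside policy are assumptions, since the installation's actual projection mapping is not described here.

```python
# Sketch: scale a detected face crop (face_w x face_h pixels) to fit inside
# the projection target on the sculpture (target_w x target_h pixels),
# preserving the face's aspect ratio.

def fit_face_to_target(face_w, face_h, target_w, target_h):
    """Return the projected (width, height) of the face crop."""
    # Use the smaller of the two scale factors so the crop fits entirely
    # inside the target rectangle without stretching.
    scale = min(target_w / face_w, target_h / face_h)
    return round(face_w * scale), round(face_h * scale)
```

For example, a 100 x 150 pixel face crop aimed at a 300 x 300 target would be scaled by the height-limited factor of 2 rather than the width factor of 3.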
To show families how Computer Vision works, we created a workshop in conjunction with the exhibit. Participants dress up in disguises to fool the machine into categorizing their faces as new faces, a game they can also play at the exhibit.
This work was supported by the New York Hall of Science Designer-in-Residence 2019 program, with additional materials and facilities support from the Design and Technology department at Parsons School of Design and the Mechanical Engineering department at NYU.
Thanks for the support and assistance of Liz Slagus, Erin Thelen, Michael Cosaboom, Nolan Quinn, Sean Walsh, Karl Szilagi, Jeffrey Geiringer, Philipp Schmitt, and Truck McDonald.
Further data and a description of how we produced the exhibit for the development of self-awareness can be found in this paper submitted to Frontiers in Robotics and AI: Human-Robot Interaction.