This “empathic interactive portrait” explores the possibilities of empathy as a meta-language through our most powerful physical interface: the face. The face is the one part of ourselves we cannot see without a mirror or an external tool (photography, video, etc.). Dedicated entirely to the “other”, it becomes our window to the world and the world’s window to us.
“This Is Not Private” uses the privacy-invasive language of biometric recognition to explore a deeply private matter: the quantification of emotion. Among the questions this work raises is whether a tool of social control, such as emotion recognition, can instead be used to connect people and to strengthen the sense of belonging to a global community.
Eight people were asked to share a personal episode from their lives that they could connect to one of the six basic emotions: anger, fear, sadness, joy, disgust, surprise (Ekman, 1975). They were asked to speak in their first language. This minimises the role of verbal understanding and emphasises the emotional communication between the actor and the viewer.
The challenge of this work is to induce in the viewer a sort of “identity displacement”, recalling the phenomenon of empathy. An algorithm tracks and calculates the empathic level between the actor and the viewer: the more the viewer empathises with the actor, the more their faces merge into a new identity that is neither the actor’s nor the viewer’s, but something new.
This piece has been shown at:
06/16 – XXI Triennale International Exhibition, Call Over35 – Milano, IT (Currently Showing)
I wrote the original software in C++, using openFrameworks. Among other features, the program uses the ofxFaceTracker addon by Kyle McDonald, based on Jason Saragih’s FaceTracker library.
at the opening night – Except/0n 2015
This is what really happens “behind the scenes”. The viewer’s and the actor’s facial features are evaluated and compared in real time. At the same time, the program evaluates the emotions using a small custom database of facial expressions. Finally, the algorithm calculates the empathy level between the actor and the viewer, which drives the merging of the two faces into a new one: the more the viewer syncs and empathises with the actor, the more the two faces merge.
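The comparison-and-morph step can be sketched in plain C++, without the openFrameworks dependencies. The landmark representation, the distance-based empathy score, and the linear morph below are illustrative assumptions for the sake of the sketch, not the installation’s actual code:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// A face is a list of 2D landmark points, as produced by a tracker
// such as ofxFaceTracker. The metric and function names here are
// hypothetical, chosen only to illustrate the idea.
struct Point { float x, y; };
using Face = std::vector<Point>;

// Hypothetical empathy score: 1 when the two landmark sets coincide,
// falling towards 0 as the mean landmark distance grows.
float empathyLevel(const Face& actor, const Face& viewer, float scale = 100.f) {
    assert(actor.size() == viewer.size() && !actor.empty());
    float meanDist = 0.f;
    for (std::size_t i = 0; i < actor.size(); ++i) {
        float dx = actor[i].x - viewer[i].x;
        float dy = actor[i].y - viewer[i].y;
        meanDist += std::sqrt(dx * dx + dy * dy);
    }
    meanDist /= static_cast<float>(actor.size());
    return 1.f / (1.f + meanDist / scale);  // value in (0, 1]
}

// Linear morph between the two faces: with weight 0 the result is the
// viewer's face, with weight 1 the actor's; intermediate weights yield
// the merged "new identity" described above.
Face mergeFaces(const Face& actor, const Face& viewer, float w) {
    assert(actor.size() == viewer.size());
    Face merged(actor.size());
    for (std::size_t i = 0; i < actor.size(); ++i) {
        merged[i].x = (1.f - w) * viewer[i].x + w * actor[i].x;
        merged[i].y = (1.f - w) * viewer[i].y + w * actor[i].y;
    }
    return merged;
}
```

In a live loop one would feed the current empathy level (or a smoothed version of it) into the morph weight each frame, so that stronger empathy pulls the composite face further away from the viewer’s own.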
Some of the results obtained.