
Through the Looking-Glass: Building an Interactive AI Experience

Using AI and style transfer to immerse viewers inside a neural network.
By Cross Labs | May 13, 2020
A young woman's silhouette against a screen with a computer-rendered brain projected onto it.

When Associate Professor Yuta Ogai from Kougei University approached us about exhibiting an interactive installation on neural networks, we knew we wanted to create an immersive experience that was both artistic and educational. Since the audience of ColoLab's exhibition "Talking with Color" would be high school and university students, we wanted an experience that promoted a sense of wonder and curiosity, one that might spark a deeper interest in the sciences long after visitors left the exhibition.

Artificial intelligence is quickly closing the gap to human-level performance in a range of tasks. Abilities once considered uniquely human are now handled with ease by algorithms, thanks to the advent of deep learning techniques. These bio-inspired models aim to reproduce the variety of cognitive functions supported by the brain – vision, attention, decision-making, control, etc. – and as such, they are just as hard to understand and interpret as their biological counterparts. Yet it doesn't have to be this way: the mechanics of these models are fully specified. Our goal was to show the inner workings of what we understand about neural networks.

Through the Looking-Glass lets viewers immerse themselves in a neural network's inner mechanics in a dynamic and interactive way. We built our AI to apply multi-style transfer to a live video feed, producing bursts of activity, color, and movement as visual input is received. The viewer can see the image deconstructed and passed through the neural network, through style filters, and finally reconstructed in a completely new style, all in real time.
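
To make that concrete, here is a minimal sketch of the per-frame loop, assuming stylize is any pre-trained feed-forward stylization network (in the spirit of [2, 3]) that maps a 1x3xHxW tensor in [0, 1] to a stylized tensor of the same shape. The names, capture settings, and resolution are placeholders, not the installation's actual code.

```python
# Minimal sketch of the per-frame loop (placeholder code, not the installation's).
# `stylize` is assumed to be a pre-trained feed-forward stylization network
# mapping a 1x3xHxW tensor in [0, 1] to a stylized tensor of the same shape.
import cv2
import torch

def run_live_style_transfer(stylize, camera_index=0, size=(512, 288)):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    cap = cv2.VideoCapture(camera_index)
    with torch.no_grad():
        while True:
            ok, frame = cap.read()                      # BGR uint8 webcam frame
            if not ok:
                break
            frame = cv2.resize(frame, size)
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = torch.from_numpy(rgb).float().div(255)  # HxWx3 in [0, 1]
            x = x.permute(2, 0, 1).unsqueeze(0).to(device)
            y = stylize(x).clamp(0, 1)                  # stylized frame, 1x3xHxW
            out = (y[0].permute(1, 2, 0).cpu().numpy() * 255).astype("uint8")
            cv2.imshow("stylized", cv2.cvtColor(out, cv2.COLOR_RGB2BGR))
            if cv2.waitKey(1) & 0xFF == ord("q"):       # press q to quit
                break
    cap.release()
    cv2.destroyAllWindows()
```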

Each screen acts as a different kind of looking-glass. The left screen directly reflects what we see (the video feed) and perceive (convolutional layers) of the real world. In the center is the "latent space", an integration layer for all the signals that contribute to the goal of the task; here, that means merging the different style transfers. In our brains, the analogue of this space is the "connectome", the neural wiring between the various areas of our cortex. On the right, reconstruction (deconvolution) layers perform the multi-style transfer and the merge, producing an imaginary, distorted mirror of reality that one might call a hallucination.
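
As a schematic sketch of the three stages the screens visualize, the toy PyTorch model below pairs a convolutional encoder with a deconvolutional decoder and blends several per-style latent codes in between. The layer sizes, and the idea of blending by a simple weighted sum, are illustrative assumptions rather than the exhibit's actual architecture; multi-style networks such as [3] condition the decoder on style features in more elaborate ways.

```python
# Toy encoder / latent / decoder, mirroring the three screens (illustrative only).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Left screen: convolutional layers that 'perceive' the incoming frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)          # latent feature maps

class Decoder(nn.Module):
    """Right screen: deconvolution layers that rebuild the 'distorted mirror'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 9, padding=4), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

def blend_styles(latents, weights):
    """Center screen: merge several per-style latent codes into one."""
    z = torch.zeros_like(latents[0])
    for zi, wi in zip(latents, weights):
        z = z + wi * zi
    return z
```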

The "Talking with Color" exhibition (Through the Looking-Glass appears from 3:05).

Implementation

Real-Time Multi-Style Transfer

Starting with state-of-the-art models in multi-style transfer [1, 2, 3], we made several improvements to the algorithm for real-time efficiency. We also trained the model on many styles so we could select the most colorful, visually appealing, and contrasting transfers. We wanted to ensure that the selected neural layers provided a visually compelling and accurate exchange of information (inputs/outputs) across these internal states, and offered the viewer a smooth and intuitive experience.
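
For reference, the sketch below shows the kind of perceptual objective these networks are trained with [1, 2]: a content loss on deep VGG-style features and a style loss on their Gram matrices. The layer selection and loss weights are illustrative assumptions, not the values used for the installation.

```python
# Sketch of the perceptual losses used to train such networks [1, 2].
# Layer choice and loss weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: BxCxHxW feature maps -> BxCxC channel-correlation (Gram) matrices
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_loss(out_feats, content_feats, style_feats,
                    content_weight=1.0, style_weight=1e5):
    # Each argument is a list of VGG activations extracted at the same layers
    # for the stylized output, the content image, and the style image.
    content = F.mse_loss(out_feats[-1], content_feats[-1])
    style = sum(F.mse_loss(gram_matrix(o), gram_matrix(s))
                for o, s in zip(out_feats, style_feats))
    return content_weight * content + style_weight * style
```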

Taking inspiration from the work of Adam Gazzaley and his team [4] at the Neuroscape lab (UCSF) on the connectome [5], we recreated from scratch 3D models of a glass brain and its connection pathways, based on real neuro-imaging data. Whereas Neuroscape's goal was to monitor brain activity measured in neuro-imaging studies with MRI and EEG, our tool is instead meant to visualize a deep neural network's latent space in an abstract yet meaningful way.
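
One heavily simplified way to drive such a visualization, sketched below under our own assumptions (this is not the exhibit's code), is to summarize each latent channel's activation energy and map contiguous groups of channels to regions of the glass-brain mesh, so that each region's glow scales with network activity.

```python
# Illustrative sketch only (not the exhibit's code): summarize latent activity
# per channel and map contiguous channel groups to glass-brain regions.
import numpy as np

def latent_to_region_intensities(latent, n_regions):
    # latent: CxHxW activations taken from the network's integration layer
    energy = np.abs(latent).mean(axis=(1, 2))      # one value per channel
    groups = np.array_split(np.arange(energy.shape[0]), n_regions)
    intensities = np.array([energy[g].mean() for g in groups])
    # Normalize to [0, 1] so each region's glow scales with its activity.
    span = np.ptp(intensities)
    return (intensities - intensities.min()) / (span + 1e-8)
```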

Each screen tells the viewer a different story and is designed to elicit a different experience, from curiosity to interest to fun. Our team loved working on this project and hopes to share it with audiences around the world in the future.

Team

Antoine Pasquali, Corentin Risselin, Daniel Majonica, Javier Fernandez, and Steven Weigh

References

[1] Gatys, L. A., Ecker, A. S., and Bethge, M. (2016). Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

[2] Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.

[3] Zhang, H., and Dana, K. (2018). Multi-style generative network for real-time transfer. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW), 2018.

[4] Mishra, J., and Gazzaley, A. (2015). Closed-loop cognition: the next frontier arrives. Trends in Cognitive Sciences, 19(5), 242-243.

[5] The Human Connectome Project (2013). NIH Blueprint for Neuroscience Research, National Institutes of Health.