In late 2017, researchers at the ATR Computational Neuroscience Laboratories and Kyoto University in Japan demonstrated an AI capable of visualising human thoughts.
The paper ‘Deep Image Reconstruction from Human Brain Activity’ builds on previous research concerning the representation of simple images.
Machine learning analyses of fMRI patterns have previously enabled machines to visualise ‘low-level’ images, such as basic shapes, or to match a thought to an example photo.
“Whereas it has long been thought that the externalization or visualization of states of the mind is a challenging goal in neuroscience, brain decoding using machine learning analysis of fMRI activity nowadays has enabled the visualization of perceptual content”, the authors state.
Recent work found that the “hierarchical features of a deep neural network” can be decoded or translated into more complex images, providing the foundations for studies of this kind.
The study unveils a new reconstruction method that optimises an image pixel by pixel, so that its features across multiple layers of a deep neural network match those decoded from brain activity.
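A toy sketch can illustrate the idea of multi-layer feature matching. Everything here is illustrative rather than taken from the paper: the "network" is just two random linear layers standing in for a pretrained deep convolutional network, the "decoded" features are simulated from a known image rather than from fMRI data, and the layer sizes and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained vision DNN: two linear "layers".
# The actual study used a deep convolutional network; these shapes and
# scales are illustrative only.
W1 = rng.normal(size=(32, 64)) / 8.0   # layer 1: 64 "pixels" -> 32 features
W2 = rng.normal(size=(16, 32)) / 6.0   # layer 2: 32 -> 16 features

def features(x):
    """Return the image's feature vectors at both layers."""
    h1 = W1 @ x
    return h1, W2 @ h1

def feature_loss(x, targets):
    """Summed squared feature mismatch across all layers."""
    return sum(np.sum((h - t) ** 2) for h, t in zip(features(x), targets))

# Pretend these multi-layer features were decoded from fMRI activity
# recorded while a subject viewed `true_image`.
true_image = rng.normal(size=64)
targets = features(true_image)

# Reconstruction: start from noise and repeatedly nudge the pixels so
# the image's own multi-layer features move toward the decoded ones.
x = rng.normal(size=64)
init_loss = feature_loss(x, targets)
lr = 0.05
for _ in range(3000):
    h1, h2 = features(x)
    # Analytic gradient of the squared feature loss for linear layers.
    grad = 2 * W1.T @ (h1 - targets[0]) + 2 * (W2 @ W1).T @ (h2 - targets[1])
    x -= lr * grad

final_loss = feature_loss(x, targets)
print(init_loss, final_loss)  # the feature mismatch shrinks substantially
```

In the real method the same loop runs over a nonlinear network, so the gradient comes from backpropagation rather than a closed form, but the principle is the same: the reconstructed image is whatever picture makes the network "see" the features decoded from the brain.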
It found that for both natural images (e.g. those of animals) and for artificial shapes, the AI was able to interpret fMRI data to reconstruct an image from scratch.
The AI was trained using images from nature, so the application to artificial shapes was significant in suggesting that the algorithms were genuinely interpreting thought data rather than recalling and matching images from past inputs.
“The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images”, the authors note.
The algorithm is applicable to both seen and imagined images.
Read the full study here.