A research paper from Google’s DeepMind has sought to develop an abstract reasoning test for Artificial Intelligence systems.
The paper, ‘Measuring abstract reasoning in neural networks’, was discussed at the International Conference on Machine Learning in Stockholm, Sweden this week.
Abstract reasoning tests typically measure IQ via a series of verbal, numerical and spatial tests. Neural networks could undermine classical IQ tests, however, because they are easily able to memorise answers to questions and exploit superficial patterns.
The researchers were able to circumvent this by designing more abstract tests built around unfamiliar attribute values, so that answers could not simply be memorised.
Researchers at DeepMind said the AI “needed to induce and detect from raw pixel input the presence of abstract notions such as logical operations and arithmetic progressions, and apply these principles to never-before-observed stimuli.”
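One of the abstract notions mentioned, an arithmetic progression, can be illustrated with a toy sketch (this is not DeepMind's code; the attribute values and function below are hypothetical simplifications, since the real model must infer such attributes from raw pixels):

```python
# Toy illustration of one relation type from the paper's matrix puzzles:
# an arithmetic progression in a panel attribute such as shape count,
# checked across a row of three panels.

def is_arithmetic_progression(values):
    """Return True if the values step by a single constant difference."""
    diffs = {b - a for a, b in zip(values, values[1:])}
    return len(diffs) == 1

# Each panel in a row is summarised here by one attribute value, e.g. the
# number of shapes it contains (a simplification of the raw-pixel task).
row = [2, 4, 6]
print(is_arithmetic_progression(row))  # constant step of 2
```

A solver that has only memorised seen answers fails when the same rule appears with new values; a system that has induced the rule itself generalises to them.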
The Wild Relation Network (WReN) performed better than other AI systems on the reasoning tests, but researchers noted that it had some limitations.
The system reportedly showed a high degree of proficiency in some domains but struggled in others. It could only generalise the abstract idea of a logical ‘progression’ in certain circumstances.
They said: “Our results show that it might be unhelpful to draw universal conclusions about generalization: the neural networks we tested performed well in certain regimes of generalization and very poorly in others”.