Google’s DeepMind Using AI to Explore the Role of Dopamine in Learning


Deep learning algorithms can outdo humans in numerous areas, including predicting future events, reading lips, and classifying images. Yet despite this superhuman proficiency, they learn slowly: some machine learning algorithms need hundreds of hours of play to master classic video games that an average person can pick up in an afternoon.

According to a paper published by Google’s DeepMind in Nature Neuroscience, the slow learning rate of machine learning algorithms may be related to the neurotransmitter dopamine. Meta-learning, the process of learning quickly from individual examples while extracting general rules from them over time, is considered one of the reasons human beings acquire new knowledge more efficiently than their computer counterparts. Nevertheless, the underlying mechanisms of meta-learning are still poorly understood.

To shed more light on the process, DeepMind’s researchers in London used a recurrent neural network to model aspects of human physiology. A recurrent network is a type of neural network that can internalize past observations and actions and draw on those experiences, especially during training. The network’s reward prediction error, a signal that mathematically refines the algorithm over time through trial and error, stood in for dopamine. Dopamine is a chemical released in the brain that influences sensations of pleasure and pain, movement, and emotion; it is also thought to play a crucial role in learning.
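To make the reward-prediction-error idea concrete, here is a minimal sketch of a temporal-difference update, the standard formulation of that signal in reinforcement learning. The function name, learning rate, and values are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of a reward prediction error (RPE) update, the signal the
# article says played the role of dopamine in DeepMind's model.
# Names and constants here are illustrative, not taken from the paper.

def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-difference step: the RPE (delta) nudges the value estimate."""
    delta = reward + gamma * next_value - value  # reward prediction error
    return value + alpha * delta, delta

# A value estimate climbs toward a repeatedly delivered reward of 1.0,
# and the prediction error shrinks as the reward becomes expected.
v = 0.0
for _ in range(50):
    v, delta = td_update(v, reward=1.0, next_value=0.0)
```

After enough trials the estimate `v` sits close to the true reward and `delta` is near zero, mirroring how dopamine responses fade once a reward is fully predicted.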

DeepMind’s researchers put the system through six neuroscientific meta-learning experiments, comparing its performance to that of animals that had undergone similar tests. One of these, the Harlow Experiment, tasked the algorithm with choosing between two random images, one of which was associated with a reward. In the original experiment, the subjects (monkeys) quickly learned a strategy for selecting objects: choose an object at random on the first trial, then pick the rewarded object on the second and every subsequent trial.
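The monkeys’ strategy can be sketched as a tiny simulation. The setup below is an illustrative simplification of a Harlow-style problem (two images, one fixed reward), not code from the study:

```python
import random

# Illustrative Harlow-style problem: on each trial the agent picks one of
# two novel images, and one image (fixed per problem) carries the reward.
# The meta-learned strategy: guess on trial 1, then exploit the feedback.

def run_problem(trials=6, rng=random):
    rewarded = rng.choice(["A", "B"])  # which image pays off this problem
    known = None                       # rewarded image, once identified
    rewards = []
    for _ in range(trials):
        choice = known if known is not None else rng.choice(["A", "B"])
        reward = 1 if choice == rewarded else 0
        rewards.append(reward)
        # One trial of feedback reveals the rewarded image either way:
        # a win confirms the choice, a loss implicates the other image.
        known = rewarded if reward else ("A" if choice == "B" else "B")
    return rewards

rewards = run_problem()
```

Whatever happens on the first guess, every later trial earns the reward, which is exactly the one-shot learning curve the article describes.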

In animals, dopamine is believed to strengthen behaviors by reinforcing synaptic connections, particularly in the prefrontal cortex. However, the consistency of the neural network’s behavior suggests that the chemical also conveys and encodes information about rules and task structure.

The idea that artificial intelligence systems can imitate human biology is not entirely new. In fact, a study by researchers from the Netherlands-based Radboud University revealed that recurrent neural networks could forecast how the human brain processes sensory information, particularly visual stimuli. Nonetheless, such discoveries have largely informed machine learning rather than neuroscientific research.

Last year, DeepMind used complementary algorithms to create a partial anatomical model of the human brain. The model comprised a neural network that imitated the behavior of the prefrontal cortex and a memory network that functioned as the hippocampus. The result was an artificial intelligence (AI) system that considerably outperformed most neural networks. More recently, DeepMind has shifted its focus to rational machinery, producing synthetic neural networks that apply human-like logic and reasoning to problem solving.

Source: VentureBeat
