The hypothesis that neural network models of language processing share the same structure as the workings of the brain
How AI Transformers Mimic Parts of the Brain | Quanta Magazine
https://www.quantamagazine.org/how-ai-transformers-mimic-parts-of-the-brain-20220912/
Relating transformers to models and neural representations of the hippocampal formation | OpenReview
https://openreview.net/forum?id=B8DVo9B1YE0
The Transformer was announced in 2017 as a new way for AI to process language. It operates on sequential data and is used to generate convincing song lyrics and to write messages that could pass for a customer-service representative. The key point of the Transformer is its 'attention mechanism': in conventional neural networks, each input word or phrase was connected only to certain other specific inputs, whereas the Transformer connects every input (words, numbers, and so on) to every other input. Although it was originally designed for language tasks, the Transformer also excels at tasks such as image classification and is now being used to model the brain as well.
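To make the "every input attends to every other input" idea concrete, here is a minimal sketch of scaled dot-product self-attention in Python with NumPy. The function name, shapes, and random toy data are illustrative assumptions, not code from the 2017 paper or from any of the studies mentioned here.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of inputs.

    X          : (n_tokens, d_model) input embeddings
    Wq, Wk, Wv : (d_model, d_head) projection matrices
    Each token's query is scored against every other token's key, so
    every position can pull information from all other positions.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # (n_tokens, n_tokens)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over all inputs
    return weights @ V                                 # weighted mix of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```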
In 2020, the research group of Sepp Hochreiter, a computer scientist at Johannes Kepler University Linz in Austria, used the Transformer to rework a memory-retrieval model called the 'Hopfield network'. Hochreiter noticed that Hopfield networks, which follow the general rule that neurons that fire together form stronger connections with each other, are related to the Transformer's attention mechanism, which relates every input to every other input. The 'Transformer-based Hopfield network' that Hochreiter's group proposed was also recognized as 'biologically plausible' by John Hopfield, who devised the original Hopfield network.
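The connection Hochreiter's group drew can be illustrated with the update rule of a modern continuous Hopfield network, which has the same form as attention: a query pattern is compared against all stored patterns and replaced by a softmax-weighted mixture of them. The sketch below is an illustrative toy, assuming random stored patterns and hand-picked values for `beta` and the number of update steps; it is not the group's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hopfield_retrieve(patterns, query, beta=8.0, steps=3):
    """Continuous Hopfield-style retrieval (illustrative toy).

    patterns : (d, N) matrix with one stored pattern per column
    query    : (d,) noisy or partial cue
    The update xi <- patterns @ softmax(beta * patterns.T @ xi) has the
    same form as Transformer attention, with the cue acting as a query
    and the stored patterns acting as keys and values.
    """
    xi = query.copy()
    for _ in range(steps):
        xi = patterns @ softmax(beta * (patterns.T @ xi))
    return xi

# Toy usage: store 5 random patterns, then recall from a noisy cue.
rng = np.random.default_rng(1)
patterns = rng.normal(size=(16, 5))
cue = patterns[:, 2] + 0.3 * rng.normal(size=16)
recalled = hopfield_retrieve(patterns, cue)
print(int(np.argmax(patterns.T @ recalled)))  # usually 2: the closest stored pattern
```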
Then, in early 2022, cognitive neuroscientist James Whittington and neuroscientist Tim Behrens showed that, with a small modification, a Transformer can reproduce models of the hippocampal formation and the neural representations they produce. 'A Transformer can more easily reproduce the neural firing patterns of the hippocampal model, which plays an important role in memory,' Whittington said, emphasizing that the Transformer enables advanced, brain-like behavior. According to Whittington, Transformer models that can mimic the workings of the brain may help us better understand not only how artificial neural networks work, but also how memory and computation are carried out in the brain.
Meanwhile, Martin Schrimpf, a computational neuroscientist at the Massachusetts Institute of Technology, analyzed 43 neural network models to see how accurately each could predict measures of human neural activity. According to Schrimpf, the Transformer is currently the most advanced neural network available and could predict almost all of the variation found in functional brain imaging data.
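The kind of comparison Schrimpf's analysis performs can be sketched as fitting a regularized linear map from a model's activations to recorded brain responses and scoring the predictions on held-out data. The snippet below is an assumed stand-in for such a pipeline (the function name, the choice of ridge regression, and the random toy data are all illustrative), not the actual code used in the study.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def neural_predictivity(model_acts, brain_resps, train_frac=0.8, seed=0):
    """Illustrative stand-in for a model-to-brain comparison.

    model_acts  : (n_stimuli, n_features) model activations per stimulus
    brain_resps : (n_stimuli, n_voxels) measured responses per stimulus
    Fits a ridge regression from activations to responses on a training
    split, then returns the mean per-voxel correlation on held-out data.
    """
    n = model_acts.shape[0]
    idx = np.random.default_rng(seed).permutation(n)
    n_tr = int(train_frac * n)
    tr, te = idx[:n_tr], idx[n_tr:]
    reg = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(model_acts[tr], brain_resps[tr])
    pred = reg.predict(model_acts[te])
    pm = pred - pred.mean(axis=0)
    bm = brain_resps[te] - brain_resps[te].mean(axis=0)
    corr = (pm * bm).sum(axis=0) / (
        np.linalg.norm(pm, axis=0) * np.linalg.norm(bm, axis=0) + 1e-12
    )
    return float(corr.mean())

# Toy usage with random stand-in data (real inputs would be a language
# model's hidden states and recorded brain responses for the same stimuli).
rng = np.random.default_rng(2)
acts = rng.normal(size=(200, 50))
resps = acts @ rng.normal(size=(50, 30)) + 0.5 * rng.normal(size=(200, 30))
print(round(neural_predictivity(acts, resps), 3))
```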
While this is great progress for 'network models that imitate the brain', Behrens cautioned, 'The Transformer is just a first step toward an accurate model of the brain; it is not the end of the quest. Even if it is the best model of sentence understanding that we have, I don't think it reflects how we process language in our brains. Even the best-performing Transformers work well with words and short phrases, but not with large-scale language tasks like telling a story. I think we are in a good position to understand the structure of the brain, and we can improve on it with further training. This is a good trend, but it is still an extremely complex field.'
in Science, Posted by log1e_dh