It has been barely two years since OpenAI’s ChatGPT became publicly available, yet it has already become an integral part of digital communication and business processes. This large language model (LLM) is now one of several leading AI systems that convincingly mimic human intelligence on basic queries. But can artificial intelligence also show signs of cognitive decline, as humans do?
Israeli scientists from Hadassah Medical Center, working with data experts, tested the cognitive abilities of AI models and came to intriguing conclusions. Neurologists Roy Dayan and Benjamin Uliel, together with data scientist Gal Koplewitz, conducted a series of cognitive tests on publicly available chatbots, including versions 4 and 4o of ChatGPT, two versions of Alphabet’s Gemini, and version 3.5 of Anthropic’s Claude. They concluded that the level of “cognitive decline in AI is comparable to neurodegenerative processes in the human brain,” reports Novi List, without citing additional sources.
Test Results
Using the Montreal Cognitive Assessment (MoCA), a tool commonly employed by neurologists to measure cognitive abilities such as memory, spatial skills, and executive functions, the team found that AI models exhibit signs of decline in specific cognitive areas. ChatGPT 4o achieved the highest score of 26 out of 30 possible points, which in humans would indicate mild cognitive impairment. It was followed by ChatGPT 4 and Claude, both scoring 25 points, while Gemini scored only 16, a result that, in humans, would indicate severe cognitive impairment.
The worst performance was observed in visuospatial skills, including tasks such as copying simple shapes or drawing a clock, where many AI models failed or required additional guidance. Scientists also noticed that some answers to questions about a subject’s location in space were almost identical to those given by dementia patients. However, the researchers emphasize that LLMs are not human brains and cannot be diagnosed in the same way. Instead, they compare them to advanced forms of predictive text, which operate based on statistical patterns rather than actual cognition.
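The researchers’ comparison of LLMs to “advanced forms of predictive text” can be made concrete with a toy sketch: a model that merely counts which word follows which in its training data and then proposes the statistically most frequent follower. This minimal bigram model is purely illustrative (the corpus and function names are assumptions, not from the study), but it captures the idea of pattern prediction without any actual cognition:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "previously learned data" (hypothetical text).
corpus = "the clock shows the time the clock has hands the time is late".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word seen in the corpus, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("is"))    # the only word ever seen after "is" in this corpus
print(predict("late"))  # "late" never precedes anything here, so no prediction
```

The model has no notion of what a clock is; it only replays frequency statistics, which is the distinction the researchers draw between prediction and cognition.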
Key Differences
These findings raise intriguing questions about the future of artificial intelligence. If AI models indeed show some form of “cognitive decline,” can this be compared to the way humans lose cognitive abilities over time? Another critical question is how AI could be improved to avoid such issues—will future systems require continuous “refreshing,” or will entirely new architectures be developed to address these challenges?
Moreover, the study highlights a fundamental difference between artificial and human intelligence. While humans build knowledge through experience, emotions, and context, AI remains constrained to pattern prediction based on previously learned data. Perhaps such studies are key not only to better understanding AI but also to understanding our own minds.
Will AI ever fully achieve human-like cognition, or will it remain at the level of an advanced predictive model? That remains to be seen.