Psychosis, Dreams, and Memory in AI
by Henry Wilkin
figures by Rebecca Clements
The original dream of research in artificial intelligence was to understand what it is that makes us who we are. Because of this, artificial intelligence has always been close in spirit to cognitive science, even if the two fields have remained somewhat far apart in practice.
Functional AIs have tended to do best at quickly finding ‘good-enough’ approaches to problems that are easy to state but whose solutions are difficult or tedious to describe explicitly. A more modest definition of artificial intelligence might read as ‘computer programs that can learn how to perform tasks rather than require specific hardwired instructions.’ It turns out this encompasses a lot—think language processing in Amazon’s Alexa, or Google’s AlphaGo—and AI has recently even been able to produce art. At least until this point, the ‘art’ of computer science has been more in how the answers are reached than in what the answers turn out to be. As research in AI advances, it has become possible to glimpse parallels between certain features of AI and human cognitive functions, including in some cases a sort of primitive capacity to dream.
Most AIs that dream, however, have very limited control over what they can dream about. Currently, there seem to be essentially three ways in which computers dream. First, a computer can ‘dream’ by accident (these are sometimes called computer hallucinations). Second, a computer can dream through programs like Google’s DeepDream, which gives a window into the inner workings of an AI and which I’ll describe in more detail below. Third, a computer can dream through a process called experience replay or one of its offshoots. This process can improve the rate at which AIs learn and arguably bears the closest resemblance to actual dreaming. These different types of ‘computer dreams’ seem to arise naturally from balancing sensitivity to new experience against the robustness and usefulness of old memories.
To learn, an AI tries out several behaviors and chooses the one that seems to work best. The problem is, the AI can’t prove that the behavior it settles on is ‘best’ or even that the behavior will always produce sensible answers. A ‘computer hallucination’ is when an AI gives a nonsensical answer to a reasonable question or vice versa. For example, an AI that has learned to interpret speech accurately may also attribute meaning to gibberish. Training an AI is in some ways like making a good map of the world: the map will inevitably be distorted and might even suggest the existence of sea monsters, but it can still be useful. Just as there are many possible maps of the world, each with its own advantages and disadvantages, there are often many possible ‘best’ behaviors of an AI, each with its own advantages and disadvantages. The behavior of most AIs that dream is determined by a kind of artificial neural network, which is essentially the AI’s brain.
DeepDream as a window into neural networks
One challenge of using artificial neural networks is that it is nearly impossible to understand exactly what goes on inside such a network. To address this, researchers at Google devised a way to probe the inner workings of an artificial neural network, which they call DeepDream. DeepDream is most relevant for programs that recognize structure in images, often using a type of artificial neural network known as a deep convolutional neural network. The idea is to relieve tension between what the AI is given as input and what it might want to receive as input. That is, an image is distorted slightly toward one that would better match the AI’s original interpretation of the image. While this sounds innocent enough, it can lead to some pretty bizarre images. This is mainly because an artificial neural network can often function well enough without complete confidence in its own answers, or even without really knowing what it is looking for in the image it is given. Real images always look at least somewhat strange or ambiguous to an AI, and distorting the image to forcibly reduce uncertainty from the AI’s point of view causes it to look strange to us. The images produced by DeepDream are a way of probing the uncertainty or tension in an artificial neural network, which is otherwise hidden (especially when the artificial neural network can only give binary yes or no answers).
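For readers who want to see the mechanics, here is a minimal sketch of that distortion step, written in Python with the PyTorch library. It is not Google’s implementation: the names deep_dream_step, model, and layer_index are placeholders, and model is assumed to be any pretrained convolutional network whose layers can be iterated over in order.

```python
# Minimal sketch of one DeepDream-style step (assumes PyTorch is installed
# and `model` behaves like an nn.Sequential of layers).
import torch

def deep_dream_step(image, model, layer_index, step_size=0.01):
    """Nudge `image` so that a chosen layer responds to it more strongly."""
    image = image.clone().detach().requires_grad_(True)

    # Run the image through the network up to the chosen layer.
    activation = image
    for i, layer in enumerate(model):
        activation = layer(activation)
        if i == layer_index:
            break

    # 'Relieve the tension': amplify whatever the layer already sees.
    activation.norm().backward()

    # Gradient ascent on the pixels themselves, scaled to a gentle step.
    with torch.no_grad():
        image += step_size * image.grad / (image.grad.abs().mean() + 1e-8)
    return image.detach()
```

Repeating this step many times exaggerates the network’s interpretation of the picture, which is what produces the characteristic dream-like imagery.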
In a paper published in the journal Schizophrenia Research this past winter, Matcheri Keshavan from Harvard Medical School and Mukund Sudarshan of Cornell proposed a connection between Google’s DeepDream program and hallucinations caused by psychedelic drugs or by conditions like schizophrenia. While DeepDream always creates strange images, the most interesting ones come from cases where the AI has made a mistake. For example, if an artificial neural network happens to mistake a cat’s ear for a butterfly wing, DeepDream will distort the original image and impose something that resembles a butterfly wing where the ear should be.
Keshavan and Sudarshan note a possible connection through the fact that everyone carries an internal representation of their environment, one that is distorted from its true state to a degree that depends on how well regulatory components of the brain can counter bias and correct for both random errors and the limited granularity of memory. As a memory is repeatedly summoned, the level of distortion may either grow or stay fixed, depending on the amount of uncontrolled error during recall and the brain’s ability to ‘connect the dots’ and regulate with context. Keshavan and Sudarshan suggest that this feedback mechanism, which could explain the disconnect between reality and hallucinations, can be modeled effectively by repeatedly applying programs similar to DeepDream to an image. The resulting sequence of images either morphs into something totally different from the initial state, mirroring psychosis, or converges onto a fixed representation close to the original image. By varying the distortion introduced at each DeepDream step, it may be possible to build ‘simple’ models for various kinds of psychosis.
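Reusing the hypothetical deep_dream_step sketch above (again, placeholder names, not the authors’ code), the feedback loop they describe might look something like this:

```python
# Iterate the distortion step to model repeated, imperfectly regulated recall.
# `initial_image` and `model` are assumed to be defined as in the sketch above.
images = [initial_image]
for _ in range(50):
    images.append(deep_dream_step(images[-1], model, layer_index=7))

# If the sequence drifts far from `initial_image`, the internal picture has
# detached from reality; if it settles close by, regulation has held.
```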
Learning faster with experience replay
Experience replay was introduced in 1991 by then Carnegie Mellon Ph.D. student Long-Ji Lin. It is a way of helping AI learn tasks in which meaningful feedback either comes rarely or comes at a significant cost. The AI is programmed to reflect on sequences of past experiences in order to reinforce any possibly significant impressions those events may make on its behavior. In its original form, experience replay can be viewed as an ‘unregulated’ policy of encouraging an AI to approach nearby ‘feasible’ solutions and reject poor behaviors more rapidly. The idea is that significant events will naturally reinforce each other and make large impressions on the network, while the impressions of individual incoherent events tend to cancel out. As long as the replay memory of the neural network is large enough, experiences of arbitrarily high significance can make appropriately large impressions on the state of the neural network.
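As a rough illustration of the core idea (not Lin’s original code, and with made-up names), the heart of experience replay is simply a memory of past transitions that the AI samples from while it learns:

```python
# A minimal replay buffer: store (state, action, reward, next_state)
# transitions and replay random mini-batches of them during learning.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        # Old experiences are forgotten once capacity is exceeded.
        self.memory = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        # Uniform sampling: every remembered experience is equally
        # likely to be 'replayed', significant or not.
        return random.sample(self.memory, min(batch_size, len(self.memory)))
```

Each sampled batch is then fed to whatever learning update the agent uses, so that a single significant experience can shape the network many times rather than once.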
Last year, researchers from Google’s DeepMind group developed an AI that uses a variant of experience replay to play the video game Labyrinth. In the game, the player must traverse a maze in search of tasty food (apples and melons) while avoiding unpleasant food (in this case, lemons). To expedite learning, the AI is encouraged to recall events associated with immediate, large-magnitude rewards or punishments more often than relatively insignificant events. This helps ensure that important memories make appropriate impressions on the AI before they are forgotten, i.e., replaced by more ordinary events. The AI was also given additional goals that encouraged it both to explore the environment and to preferentially use more of the ‘neurons’ in its network. Combined with the modified form of experience replay, these changes improved the AI’s performance significantly.
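A toy sketch of that prioritization idea, again with invented names: here memories are weighted by the size of their reward, which is a simplification of DeepMind’s scheme (the real agents prioritize by how surprising an experience was to the network).

```python
# Replay experiences tied to large rewards more often than ordinary ones.
import random

def prioritized_sample(memory, batch_size=32):
    # memory: a list of (state, action, reward, next_state) tuples.
    # Add a small constant so that zero-reward experiences still get replayed.
    weights = [abs(reward) + 0.01 for (_, _, reward, _) in memory]
    # Weighted sampling with replacement: big surprises come up more often.
    return random.choices(memory, weights=weights, k=batch_size)
```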
There are many other variations of experience replay, each with its own way of combining multiple memories to help AI learn effectively. The relationship between memory and dreams has been acknowledged for many years, but the role of memory in dreams is still an active area of research in psychology. Although most people can learn from consciously replaying memories while awake, a similar process may be happening naturally during sleep. In a paper published in Behavioral and Brain Sciences, Sue Llewellyn of the University of Manchester proposed that the surreal images of dreams may be an unconventional but efficient way of linking individual memories into something more meaningful. The idea is that our brain may acknowledge unusual associations between events that our conscious mind does not, similar to how creative and sometimes bizarre image associations can improve memory by linking memories with emotional or logical salience. Perhaps variants of artificial neural networks will provide pathways toward testing some of the current hypotheses about dreams.
Although the nature of dreams is a mystery and may always remain one, artificial intelligence could play an important role in unraveling whatever can be discovered about them.
Henry Wilkin is a 4th year physics student studying self-assembly.
This article is part of a Special Edition on Artificial Intelligence.
For more information:
Hindsight Experience Replay: https://arxiv.org/abs/1707.01495
Prioritized Experience Replay: https://arxiv.org/abs/1511.05952
DeepDream: https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
Computer generated art: https://www.technologyreview.com/s/608195/machine-creativity-beats-some-modern-art/
Computer hallucinations: https://www.americanscientist.org/article/computer-vision-and-computer-hallucinations
Dreams and memory: https://www.psychologytoday.com/blog/dream-catcher/201505/new-evidence-dreams-and-memory
https://www.psychologytoday.com/blog/dream-catcher/201312/dreams-and-memory
http://www.sciencemag.org/news/2010/04/dreams-linked-better-memories