Tuesday, June 16, 2020

The Stream of Thought is a Looping Vector


In 2005 I was waiting for the bus, wondering about consciousness, and doodling on the back of a journal article I had printed out. In doing so, I convinced myself that the stream of thought can be described by a single line that loops every few centimeters. A rational line of reasoning that stays on topic loops and then continues to run in the same direction (vector) it was headed originally. A series of loose associations, on the other hand, exits each loop at an unpredictable tangent and starts off in a new direction, irrespective of the direction of the line before it. In this model, the direction of the line indicates the direction of the stream of thought. Thus a line that continues in the same direction will progress toward a goal, whereas a line that is constantly changing direction might be making interesting new associations, but has no long-term intention or goal. The pictures below illustrate the difference between these two strategies, or modes of operation.



This idea, and these doodles, strongly influenced my understanding of the thought process. It went on to influence my model of working memory and consciousness (Reser, 2011; 2016). However, I never wrote anything about it or elaborated on it. Let’s do so here. Specifically, let’s consider this model relative to the current cognitive neuroscience of working memory, and my 2016 model of the stream of thought.
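To make the geometry concrete before going further, here is a toy simulation (written for this post, not taken from the 2005 doodles) that treats each loop as a decision point: keep the current heading with probability p_retain, or exit on a random tangent. The parameter names and step sizes are arbitrary.

import math
import random

def simulate_stream(steps, p_retain, step_len=1.0):
    # Trace a line of thought as a 2D path. After each loop the heading is
    # kept with probability p_retain (on-topic reasoning) or replaced with
    # a random tangent (loose association).
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        if random.random() > p_retain:
            heading = random.uniform(0, 2 * math.pi)  # exit the loop on a new tangent
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

def progress(path):
    # Net displacement: how far the stream actually traveled toward anything.
    (x0, y0), (x1, y1) = path[0], path[-1]
    return math.hypot(x1 - x0, y1 - y0)

print(progress(simulate_stream(100, p_retain=0.95)))  # goal-directed: travels far
print(progress(simulate_stream(100, p_retain=0.2)))   # associative: wanders in place

With high retention the path makes steady progress away from its origin; with low retention it behaves like a random walk, covering lots of conceptual ground but arriving nowhere in particular.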

A line that loops is changing course and taking a detour, perhaps fleshing out a related problem. This means that the items or representations coactive in the focus of attention have changed and are iterating through a different scenario. But if, after the loop, the line of thought regains its original course, it will return to the items it was previously holding and continue to iterate. The detour (loop) may have introduced important new representations that will help it toward its goal. For example, you may be thinking about what will be involved in your bus trip to another city. Once you board the bus in your imagination you realize that you have to pay, and you model what it will be like to purchase a ticket to ride. Your interaction with the bus driver and your wallet pulls you away from your thoughts about the trip itself. Now you could forget about the trip and go on thinking about your wallet, how old and worn it is getting, and what kind of wallet you want to replace it with. This would involve a change of course for your vector, or line of thought. Or you could imagine yourself paying the fare and then resume the next step of your trip, such as finding out where you need to get off. This would involve your line of thought looping around another set of mental representations, but then returning to the original representations (bus, trip, destination, etc.).

Working memory is thought to have two major components: 1) the focus of attention, and 2) the short-term store. As you transitioned from thinking about your trip, to thinking about paying the fare, and then back to thinking about the trip again, the items related to the trip were transferred between stores. They went from the focus of attention, to being saved temporarily in the short-term store, and then back into the focus of attention. In other words, the short-term store is a holding depot for lines of thought that are deemed important and that we may need to return to. If instead you had just kept thinking about your wallet, that would not have necessitated the short-term store, and it would have amounted to a loose association with no connection to the recent past. Schizophrenia, Alzheimer’s disease, and many other brain disorders are characterized by a reduction in the capacity and duration of the short-term store, and that is why thought is so often derailed in people who have them.
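Here is a minimal sketch of that depot idea, using the bus example. This is a toy data structure of my own, not a claim about how the brain implements the two stores:

class WorkingMemory:
    # Toy model: a small focus of attention backed by a short-term store.
    def __init__(self, focus_capacity=4):
        self.focus = []   # items currently being iterated on
        self.store = []   # parked lines of thought we intend to return to
        self.capacity = focus_capacity

    def detour(self, new_items):
        # A loop: park the current focus and load a new set of items.
        self.store.append(list(self.focus))
        self.focus = list(new_items)[:self.capacity]

    def resume(self):
        # Return the most recently parked line of thought to the focus.
        if self.store:
            self.focus = self.store.pop()

wm = WorkingMemory()
wm.focus = ["bus", "trip", "destination"]
wm.detour(["driver", "wallet", "fare"])  # the fare-paying loop
wm.resume()                              # back to the original line of thought
print(wm.focus)                          # ['bus', 'trip', 'destination']

Sticking with the wallet instead would simply mean never calling resume(): the parked items decay in the store, and the original line of thought is lost.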

In my 2016 model of working memory (Reser, 2016) I use uppercase letters to denote items of thought. When two successive thoughts share a large amount of content, they share a large proportion of letters (e.g. thought one = A, B, C, D and thought two = B, C, D, E). When two successive thoughts share less content, they share fewer letters, and thus carry less continuity (e.g. thought one = A, B, C, D and thought two = D, C, E, F). In the first example the two states held most of their active representations in common (B, C, and D). In the second example, though, the two states shared only two representations (C and D). The next figure applies this general model to the lines and loops discussed earlier.



As you can see, the first figure has each loop introduce a single new representation, but then returns to A, B, and C. This represents a prolonged process of thinking about the same concepts in different, connected contexts. In the second figure none of the concepts under consideration are maintained after the loop. There is still continuity between two successive states, but not between three. This is clearly more chaotic. I think that these two modes of operation represent two ends of a continuum, and that they correspond to type one (Kahneman’s thinking fast) and type two (thinking slow) processing, with intermediate rates of updating between the two.
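The letter notation lends itself to a toy continuity index. The states below are my guesses at what the figure shows (A, B, C, D → A, B, C, E → A, B, C, F for the first mode; A, B, C, D → D, E, F, G → G, H, I, J for the second):

def continuity(prev, nxt):
    # Proportion of the previous thought's items carried into the next one.
    prev, nxt = set(prev), set(nxt)
    return len(prev & nxt) / len(prev)

print(continuity("ABCD", "BCDE"))  # 0.75 -> high continuity, one line of thought
print(continuity("ABCD", "DCEF"))  # 0.5  -> looser association

def maintained_across(*states):
    # Representations held in common across a whole run of states.
    common = set(states[0])
    for s in states[1:]:
        common &= set(s)
    return common

print(maintained_across("ABCD", "ABCE", "ABCF"))  # {'A','B','C'}: persistent theme
print(maintained_across("ABCD", "DEFG", "GHIJ"))  # set(): pairwise links only

The second mode scores above zero between any two adjacent states, but the intersection across three states is empty: continuity without a persistent theme.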

What do you think? Can you see how thought might take constant detours that temporarily interrupt continuity so that new content can be introduced into a persistent line of thought? Do you ever notice that your train of thought breaks away for a few seconds only to come back more inspired and informed? How about the opposite: can these respites and detours be distracting?


Reser, J. 2016. Incremental change in the set of coactive cortical assemblies enables mental continuity. Physiology and Behavior. 167: 222-237.

Why Having Right and Left Cortical Hemispheres Might Be Important for Superintelligent AI


The cerebral cortex can be divided right down the middle (sagittally) into two nearly identical hemispheres. Over the last few decades it has become clear to neuroscientists why our brain has right and left halves. Far from being redundant, the two hemispheres each process much of the same information in slightly different ways, leading to two complementary and cooperating worldviews. I will explain here why this organization (called hemispheric lateralization) would be beneficial for AI.

Scientists who design AI systems are interested in how to implement important features of the brain inside a computer. Today’s neural networks simulate many of these features, including neurons, axons, and their hierarchical structure. However, neural networks are still missing many of the human brain’s key information processing principles. Hemispheric laterality may be one such engineering principle, and it could give current AI systems the boost they need to reach artificial general intelligence. The main benefit would be that, over developmental time, these two innately different networks would reprogram each other by being exposed to each other’s outputs. In essence, two dissimilar heads are better than one.



Getting a neural network architecture (e.g. Reser, 2016) to benefit from hemispheric laterality would actually be straightforward. All you have to do is duplicate the network you already have and then connect the two copies via a large number of high-bandwidth weighted links that act as the corpus callosum (the bundle of tracts that connects the right and left hemispheres in mammals). The connections between them should respect the brain’s natural bilateral symmetry, connecting similar and dissimilar areas across both hemispheres. If you had a million-dollar supercomputer running your AI system and you wanted to create a whole other hemisphere, you would have to buy another million dollars’ worth of hardware, but it may well be worth it. Next, let’s talk about how the two hemispheres would be different.
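Here is a minimal PyTorch sketch of the duplication idea. The Hemisphere module is just a placeholder for whatever network you already have (not the architecture from the 2016 paper), and the linear “callosal” layers are one assumption about how the cross-links might be realized:

import copy
import torch
import torch.nn as nn

class Hemisphere(nn.Module):
    # Placeholder for whatever network architecture you already have.
    def __init__(self, width=256):
        super().__init__()
        self.core = nn.GRU(width, width, batch_first=True)

    def forward(self, x, h=None):
        out, h = self.core(x, h)
        return out, h

class Bicameral(nn.Module):
    # Two copies of the same network joined by learned "callosal" links.
    def __init__(self, width=256):
        super().__init__()
        self.left = Hemisphere(width)
        self.right = copy.deepcopy(self.left)        # duplicate the existing network
        self.callosum_lr = nn.Linear(width, width)   # left-to-right tract
        self.callosum_rl = nn.Linear(width, width)   # right-to-left tract

    def forward(self, x):
        l, _ = self.left(x)
        r, _ = self.right(x)
        # Each hemisphere's output is modulated by the other's, via the links.
        return l + self.callosum_rl(r), r + self.callosum_lr(l)

left_view, right_view = Bicameral()(torch.randn(1, 10, 256))  # (batch, time, width)

Starting from deepcopy means the two halves begin identical and only diverge through training, which is one way to let them “reprogram each other” over developmental time.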

Each hemisphere of the brain processes information slightly differently. Although the macrostructure of the two hemispheres is almost identical, they differ microstructurally. Many researchers believe that this is because the right hemisphere’s average axonal length is slightly (microscopically) longer than the left’s. This means that the right brain has relatively more white matter (axons), and the left hemisphere has relatively more grey matter (cell bodies). It also means that, on average, the cells of the right brain are farther away from one another. Many theories propose that this longer-range wiring is responsible for the right hemisphere’s tendency toward broad generalization and a holistic perspective. The left brain is more densely woven, and this might underlie its capacity for detailed work and close, quick cooperation between neurons.

People with a left hemisphere injury may have impaired perception of the high-resolution, detailed aspects of an image, whereas those with a right hemisphere injury may have trouble seeing the low-resolution, big-picture aspects; in other words, they miss the forest for the trees. Moreover, attending to the Ds in the figure below activates the left hemisphere, whereas concentrating on the “L” that these Ds form activates the right.


D
D
D
D
D      D      D      D      D     


It takes messages just a tiny bit longer to travel from one neuron to another in the right hemisphere. Neuroscientists think that this causes the two hemispheres to process the same inputs slightly differently. Each side learns during development while organizing its thoughts according to a different algorithm, and this leads each hemisphere to become a master of its unique way of perceiving and responding to the world. This discrepancy in temporal processing parameters makes the feedback and crosstalk between these two non-identical specialists meaningful. They are akin to two specialists that check, balance, reconcile, compare, and contrast their approaches. If they processed information in exactly the same way it would be unnecessary to have two, but because they don’t, they provide a kind of stereoscopic view of the world, similar to the view provided by our two offset eyes. This enables them to form their own perceptions and opinions and then reconcile them with one another. Right now, no AI system has anything like this.


It would be easy for an AI architect to take two identical neural networks and then alter each one so that each is capable of generating different, but equally valid, perspectives on reality. There are countless parameters that could be fine-tuned to do this, but a good starting point would probably be the average connectional distance between nodes in the network. If the hardware were neuromorphic, one could literally increase the length of the axons; if the network were rendered in software, brief variable pauses could be introduced on the links. In a computer these variables could be changed at any time according to processing priorities. In other words, if the AI system anticipated that its left hemisphere should “lean” even further left to help it solve a particular problem, it could alter these “temporal weights” in real time. Finding the optimal parameters would be too complex for human trial and error, so genetic algorithms would have to be used.
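A minimal sketch of that search, assuming the “temporal weights” reduce to a single extra propagation delay per hemisphere, and with fitness() standing in for whatever task score the system reports (both assumptions are mine, for illustration):

import random

def fitness(left_delay, right_delay):
    # Stand-in objective: peak performance at some unknown pair of delays.
    # The optima below are invented purely for illustration.
    target_left, target_right = 0.5, 2.0
    return -((left_delay - target_left) ** 2 + (right_delay - target_right) ** 2)

def evolve(pop_size=20, generations=50, sigma=0.3):
    # Minimal genetic algorithm over the two per-hemisphere delay parameters.
    pop = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(*p), reverse=True)
        parents = pop[: pop_size // 4]                    # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = tuple(
                max(0.0, (pa + pb) / 2 + random.gauss(0, sigma))  # crossover + mutation
                for pa, pb in zip(a, b)
            )
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(*p))

best_left_delay, best_right_delay = evolve()

In a real system the fitness function would be an expensive evaluation of the whole AI under a candidate pair of delays, which is exactly why an automated search beats hand tuning.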

Our brain’s two hemispheres also differ in their value systems. Our left hemisphere is dedicated to approach behaviors, and our right hemisphere is dedicated to withdrawal behaviors. Stimulating the left hemisphere of a rat will make it go toward a new object, whereas stimulating the right side will make it back away from that object. The fact that vertebrate brains have maintained this fundamental differentiation between approach and withdrawal for hundreds of millions of years suggests that it might represent an organizing principle that should be used in AI. One way to do this would be to wire the AI’s “subcortical” appetitive and motivational centers (like the ventral tegmental area and the nucleus accumbens) involved in reinforcement learning to the left hemisphere, and to wire the threat detection centers (like the amygdala) involved in punishment learning to the right hemisphere. Can you think of a more important distinction between two fundamentally important behavioral influencers? I can’t. AI needs two functionally equivalent, dedicated processors: one for approach/liking/curiosity, and one for withdrawal/disliking/fear. As I have explained elsewhere, both hemispheres should influence the dopaminergic system and sustained firing so that important rewards and threats can be maintained in mind; however, the left should focus on approaching those rewards, and the right should focus on withdrawing from the threats.
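Building on the Bicameral sketch above, one hedged way to picture this wiring is to let a scalar reward signal shape only the left pathway and a scalar threat signal shape only the right. The surrogate losses below are invented stand-ins for real appetitive and aversive learning rules:

class ValencedBicameral(Bicameral):
    # Route appetitive signals to the left network and aversive to the right.
    def valence_loss(self, x, reward, threat):
        l, _ = self.left(x)    # pre-callosal approach pathway
        r, _ = self.right(x)   # pre-callosal withdrawal pathway
        # Crude surrogates: reward recruits left activity, threat recruits
        # right activity. Real appetitive/aversive learning would be far
        # richer than a mean-activation objective.
        approach_loss = -reward * l.mean()
        withdraw_loss = -threat * r.mean()
        return approach_loss + withdraw_loss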

In the mammalian brain, at any moment in time one hemisphere will be more active than the other (dominant), and the animal’s behavior at that point in time will be influenced by the dominant hemisphere. Approach and withdrawal form a pivoting scale for how we act in the world. They also structure our conscious attention, even when we aren’t moving or behaving, by allowing us to pivot between interest and disinterest. An AI agent would likewise have to continually select a dominant hemisphere, giving priority to either approach or withdrawal at every point in time.
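A toy arbitration rule, reusing the left_view and right_view outputs from the sketch above and assuming that raw activity magnitude is a reasonable proxy for dominance (my simplification, not an established mechanism):

def select_dominant(left_view, right_view):
    # Grant control to whichever hemisphere is currently more active.
    if left_view.abs().mean() > right_view.abs().mean():
        return "left"    # approach gets priority this timestep
    return "right"       # withdrawal gets priority this timestep

mode = select_dominant(left_view, right_view)  # re-evaluated at every timestep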

As in vertebrate animals, the left hemisphere could be used to control the right half of the AI’s robot body, and the right hemisphere could be used to control the left half. This could be a good way to ensure that approach and withdrawal have equal potential behavioral outputs. It might seem that this would create a robot whose two sides pull it in different directions, but this is how it works in the brain, and you and I aren’t pulled in two directions. The fact that the two networks are densely connected through the corpus callosum, and that they develop together side by side, probably plays a big role in their cooperativity.

Certainly, the widespread hemispheric lateralization seen in vertebrate animals indicates that laterality confers an evolutionary advantage. It is easy to see that its unique features could also contribute to consciousness. This dichotomous organization might be similarly advantageous in the creation of human-like intelligence, and superintelligence, in computers.


Reser, J. 2016. Incremental change in the set of coactive cortical assemblies enables mental continuity. Physiology and Behavior. 167: 222-237.