Wednesday, October 30, 2019

Computer Short-term Memory is Not Working Memory, But It Could Be


This post aims to explain the most fundamental difference between computers and brains: how the two use and instantiate memory. As you will see, there are really two differences. First, both computers and brains have a form of short-term memory that is updated iteratively, but in the brain the set of items in short-term memory is bound and integrated to create composite representations. Computers don’t do this; their short-term memories are composed of completely disconnected items that do not form any kind of context. Second, short-term memory in the brain is used for search, while in the computer it is not. A computer’s short-term memory only speeds up retrieval from long-term memory. The next state of a computer is not determined by its short-term memory; it is determined by the code in the program it is running. The next state of the brain is determined by what is currently active in short-term memory. This is what qualifies the brain’s short-term memory as working memory. Computers don’t have working memory, and until they do they cannot be conscious or display human-like intelligence.


The Differences Between Computer and Human Short-term Memory at a Glance:

1) The memory hierarchy in computers is composed of separate modules and information must be transferred between them. In the brain these modules are embedded within one another.

2) Short-term memory in the brain is bound and integrated to create context. In a computer, short-term memories appear as a list of items that are disconnected from each other.

3) In the brain, short-term memory is used to search for the next update. In a computer, it merely remains available in case it is called upon.


To retrieve information from long-term memory, a computer copies bytes of information (such as 01100101) from the storage drive (HDD or SSD) to the CPU. These bytes (which comprise data or program instructions) are processed inside the CPU. If the computer is likely to need this information again soon, it will save it in a form of short-term memory such as CPU cache or RAM. That is, it saves temporary copies of information that will be used again in smaller stores closer to the CPU, because retrieval from large-capacity long-term memory is very slow. Oversimplifying, you can think of data that is expected to be needed on the order of milliseconds to seconds as being stored in CPU cache (L1, L2, L3). Data expected to be needed on the order of seconds to minutes is stored in RAM. The rest of the data, which may or may not be needed, is stored in long-term memory.
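To make the lookup pattern concrete, here is a minimal sketch of a two-level hierarchy in Python: a small, fast cache sitting in front of a large, slow long-term store. The class, sizes, and addresses are illustrative assumptions, not a model of any real CPU.

```python
# Toy memory hierarchy: a small, fast cache in front of slow long-term storage.
# Names, sizes, and "addresses" are illustrative only.

class ToyMemoryHierarchy:
    def __init__(self, cache_size=4):
        self.cache_size = cache_size
        self.cache = {}       # small, fast short-term store (like CPU cache or RAM)
        self.long_term = {}   # large, slow store (like an SSD or hard drive)

    def write_long_term(self, address, value):
        self.long_term[address] = value

    def read(self, address):
        if address in self.cache:                  # fast path: cache hit
            return self.cache[address], "hit (fast)"
        value = self.long_term[address]            # slow path: fetch from long-term store
        if len(self.cache) >= self.cache_size:     # cache full: evict an entry
            self.cache.pop(next(iter(self.cache)))
        self.cache[address] = value                # keep a temporary copy near the "CPU"
        return value, "miss (slow)"

mem = ToyMemoryHierarchy()
mem.write_long_term(0x65, 0b01100101)
print(mem.read(0x65))  # miss (slow) the first time
print(mem.read(0x65))  # hit (fast) once cached
```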

Brains do something very similar. Neurons hold information in long-term memory, and when certain memories are needed the neurons that encode that information become active. Just as in the computer, retrieval from long-term memory is slow, so the brain uses short-term memory to potentiate information that is likely to be used again soon. In the brain there are also at least two levels of activation. The first level of short-term memory is sustained firing: a neuron exhibits sustained firing when it continues to physically fire at the neurons it is connected to at an elevated rate. The second level is synaptic potentiation, where a chemical change to the neuron’s synapses makes the information that the neuron encodes more readily available, even after it has stopped firing. Information in sustained firing is very similar to information in the CPU’s cache (the store is small but fast), and information maintained through synaptic potentiation is similar to information in RAM (the store is larger but slower).
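As a rough illustration of these two timescales (and nothing more; this is not a biophysical model), you can treat sustained firing and synaptic potentiation as two activation traces that decay at different rates. The time constants below are arbitrary assumptions chosen only to contrast fast and slow decay.

```python
import math

# Toy illustration of two short-term stores with different decay rates.
# Time constants are arbitrary, chosen only to contrast fast vs. slow decay.
FIRING_TAU = 2.0         # seconds: sustained firing fades quickly
POTENTIATION_TAU = 60.0  # seconds: synaptic potentiation fades slowly

def availability(seconds_since_use, tau):
    """Exponential decay of a memory's availability after its last use."""
    return math.exp(-seconds_since_use / tau)

for t in (1, 5, 30):
    print(f"t={t:>2}s  firing={availability(t, FIRING_TAU):.3f}  "
          f"potentiation={availability(t, POTENTIATION_TAU):.3f}")
```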

Computer           | Brain                    | Time Scale
-------------------|--------------------------|-----------------------------------------------
CPU Register       | Cortical Binding         | Very short-term working memory (milliseconds)
CPU Cache (SRAM)   | Sustained Firing         | Short-term working memory (seconds)
RAM                | Synaptic Potentiation    | Short-term memory (seconds to hours)
Virtual Memory     | Short-term Potentiation  | Short-term memory (minutes to hours)
SSD                | Commonly used LTM        | Accessible long-term memory
Hard Drive         | Long-term Memory         | Long-term storage (days to lifetime)

Computers are often described as having a “memory hierarchy,” expressed as a pyramid with different levels of memory storage. This is also referred to as a caching hierarchy, because there are several levels of short-term memory with different associated speeds. The levels at the top are faster, smaller, and more energetically expensive. The same could be said for human memory: it also has several levels, and the levels at the top are faster and more metabolically expensive. The table above is my attempt to compare the levels of these two hierarchies.




Sustained firing and synaptic potentiation together are referred to as working memory, a psychological construct that overlaps with attention and consciousness. Working memory is composed of concepts that are persistently coactive for a period of time. When you are in the middle of a task, information about the current step in that task is activated by sustained firing, and information about the previous steps you just took is activated by synaptic potentiation. What you are reading right now is made available for several seconds due to sustained firing. The information in the beginning of this post, and your thoughts about it, are made available due to synaptic potentiation. These two stores interact and cooperate, just like CPU cache and RAM. In both cases they allow the coordination of information-processing resources over time.




A large proportion of the information in short-term memory (in both brains and computers) is not just data from long-term memory, but also newly minted information. The information processing applied to existing long-term memories creates new intermediate results. In a computer the intermediate results are often mathematical; the result of a multiplication, for example, might be stored in RAM. In the brain the intermediate results are the new ideas that intervene between the starting point of a line of thought and its ending point.

There are many differences between the short-term memory of computers and brains, though. Information held in a computer’s CPU cache or RAM is not united or integrated in any way. It is simply made available in case the instructions in the program being run call for it; the entries amount to items in an unconnected list. In contrast, all of the information that is currently active in the brain is bound together in a process called neural binding. This turns a list of items into a composite scene, a scenario, or a situation. It makes for a conceptual blending of information. This creates mental content to attend to and be conscious of: declarative, semantic context.
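One way to picture the difference, purely schematically, is that a cache holds items as isolated entries, while binding links every pair of coactive items into a single composite structure. The items and pairwise links below are hypothetical stand-ins for neural binding, not a model of it.

```python
from itertools import combinations

items = ["dog", "ball", "park", "rain"]  # hypothetical coactive items

# Computer-style short-term memory: a flat, unconnected collection of items.
cache = set(items)

# Brain-style binding (schematic only): every pair of coactive items is linked,
# turning the list into one composite "scene."
bound_scene = {frozenset(pair) for pair in combinations(items, 2)}

print(cache)        # four isolated items, no structure between them
print(bound_scene)  # six pairwise links forming a single connected whole
```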

There is no context in computer short-term memory, just a group of disconnected items. For instance, if I give you four words you will try to create a dynamic story out of them, but a computer will merely hold those four words statically. There is very little information integration in the short-term memory of a computer. Because the items active in each of a computer’s states are not bound together in any way, there is no connective tissue within a state. This is the first reason that computers do not have consciousness, and that it doesn’t “feel” like anything to be a computer. The second reason has to do with the integration and connective tissue between states.

Both a computer program and the human mind progress serially: one state of short-term memory leads to the next. Both usually start with a problem state and progress through a chain of intermediate states toward a goal state. To do this they must update their short-term memory iteratively, which means that new items must be continually added, others must be subtracted, and still others must remain.
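A minimal sketch of such iterative updating, with hypothetical items and an arbitrary capacity, might look like this: each step adds new items, evicts the stalest, and carries the rest forward.

```python
# Iterative updating of a small short-term store: each step adds new items,
# evicts the oldest, and carries the rest forward. Capacity is arbitrary.
CAPACITY = 4

def update(store, new_items):
    store = store + [x for x in new_items if x not in store]
    return store[-CAPACITY:]  # keep only the most recent items

state = []
for step_items in (["problem"], ["idea_1"], ["idea_2", "idea_3"], ["goal"]):
    state = update(state, step_items)
    print(state)
# The overlap between successive states is what chains them into one line of thought.
```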

All computers using the von Neumann architecture routinely use iteration when updating cache memory. Because these stores are constantly updated with new information, they must evict old information. There are many replacement policies for determining which information to evict from cache, including first-in first-out (FIFO) and least recently used (LRU). LRU is based on the observation that the most recently activated information has the highest probability of being needed again in the near future (all else being equal).
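A minimal LRU cache can be sketched in a few lines of Python using OrderedDict (real hardware implements the policy very differently): every access moves an item to the “most recent” end, and the item at the “least recent” end is evicted first.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used item when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def access(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)  # touching an item makes it "recent" again
        self.items[key] = value
        if len(self.items) > self.capacity:
            evicted, _ = self.items.popitem(last=False)  # drop least recently used
            print(f"evicted {evicted!r}")

cache = LRUCache(capacity=3)
for key in ["a", "b", "c", "a", "d"]:  # "a" is touched again, so "b" is evicted
    cache.access(key, value=None)
```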

If a computer program accesses an instruction once, there is a high probability that it will need it again very soon. Computer scientists refer to this as “temporal locality.” Similarly, if a mammal recalls information once, there is a high probability that it will need it again soon, and LRU may also structure the caching behavior of the various stores of working memory. LRU seems to be inherent in sustained firing and synaptic potentiation in the sense that activity in the least recently used neurons is the first to decay. There is a long history in cognitive psychology of studies showing that humans and animals prioritize recently accessed items, and that retention diminishes with passing time (Averell & Heathcote, 2011). The LRU policy may capture something fundamental about how physical systems change over time, so it should not be surprising that it is an organizing principle in both computers and animals.
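In neurons there is presumably no explicit eviction step; a soft, decay-based version of LRU simply falls out of fading activation. The sketch below uses a generic exponential forgetting curve with made-up recency values (not the specific model from the cited study): the least recently used item naturally has the lowest activation and drops out first.

```python
import math

# Decay-based "soft LRU": activation fades with time since last use, so the
# least recently used item is naturally the first to drop out. Values are made up.
last_used = {"word": 1.0, "face": 12.0, "route": 45.0}  # seconds since last recall
TAU = 20.0  # arbitrary decay constant

activation = {k: math.exp(-t / TAU) for k, t in last_used.items()}
first_to_decay = min(activation, key=activation.get)
print(activation, "->", first_to_decay)  # "route" fades first
```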

The various memory stores in the modern computer’s memory hierarchy (static RAM,  dynamic RAM, virtual memory, etc.) are all incrementally updated where, second by second, some data remains and the least used data is replaced (Comer, 2017). Thus, the updating of computer cache memory is iterative in the same sense it is in the brain. However, modern computers do not demonstrate self-directed intelligence, so there must be more to the human thought process than LRU, and iterative updating. This unexplained factor may be found, not in the way cached memory is updated, but in the way it is used while active.

What links successive processing states together is very different in brains and computers. Digital, rule-based computing systems use cache to speed up delivery of data to the processor. However, the next bytes of data needed by the CPU are not determined by the contents of the cache itself. The instruction sequence of a computer’s operations is determined by its program. The next instruction is dictated by the next line of executable code.
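A stripped-down fetch-execute loop makes the point. The instructions below are hypothetical placeholders; what matters is that the program counter, not the cache, chooses each next step.

```python
# Stripped-down fetch-execute loop: the program counter, not the cache,
# decides what happens next. The cache only speeds up each fetch.
program = ["LOAD x", "ADD y", "STORE z", "HALT"]  # placeholder instructions
cache = {}  # contents never influence the control flow below

pc = 0  # program counter
while program[pc] != "HALT":
    instruction = program[pc]  # the next step is dictated by the code itself
    print("executing:", instruction)
    pc += 1                    # advance to the next line of the program
```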

Then what determines the instruction sequence of thought? Mammalian brains use the contents of their cache (working memory) as parameters to guide the search for the long-term memories to be accessed next. The next instruction used by the brain is determined by what is currently in its short-term memory. In the brain, all of the active neurons fire at the inactive neurons they are connected to, and whatever is activated most strongly becomes part of the next state. This means that the content active in short-term memory is used, in a process called spreading activation, to search for the next update to short-term memory. The various bytes of data within computer cache memory can certainly be considered coactive, but they are not “cospreading.” That is, they do not pool their activation energy to search long-term memory for relevant data as they do in the brain.
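Here is a toy version of that search, with hypothetical concepts and link weights: every currently active item “fires” along its weighted links, the contributions pool, and the most strongly activated inactive item joins the next state.

```python
# Toy spreading activation: active items pool their "firing" along weighted
# links, and the most strongly activated inactive item joins the next state.
# Concepts, strengths, and weights are hypothetical.
links = {
    "coffee": {"morning": 0.8, "mug": 0.6, "sleep": 0.3},
    "mug":    {"kitchen": 0.7, "morning": 0.2},
}
active = {"coffee": 1.0, "mug": 0.5}  # current short-term memory, with strengths

pooled = {}
for source, strength in active.items():
    for target, weight in links.get(source, {}).items():
        if target not in active:  # only inactive items compete for activation
            pooled[target] = pooled.get(target, 0.0) + strength * weight

next_item = max(pooled, key=pooled.get)
print(pooled, "->", next_item)  # "morning" wins because coactive items cospread
```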

The next state of a computer is determined deterministically by its code, irrespective of what is held in cache. But the next state of a brain is determined stochastically by the cache itself. Computers do not use the contents of their short-term memory to search for the updates that will be added in the next state; brains do. Until computers or computer programs are designed to do this, they will not be able to be conscious, and will not demonstrate human-level intelligence.
