Thursday, April 18, 2013

Artificial Intelligence Programmed to Simulate Mental Continuity Between Processing States


I have developed a cognitive architecture for a form of artificial intelligence that I think could become conscious and could exhibit capabilities beyond those of humans. If implemented and trained correctly, I think this system could exhibit the qualities of “strong AI.” To find out more, read below, or visit www.jaredreser.com/ai to see the complete patent application.

 Abstract

A modular, hierarchically organized, artificial intelligence (AI) system that features reciprocating transformations between a working memory updating function and an imagery generation system is presented here. This system features an imagery guidance process implemented by a multilayered neural network of pattern-recognizing nodes. Nodes low in the hierarchy are trained to recognize and represent sensory features and are capable of combining individual features or patterns into composite, topographical maps or images. Nodes high in the hierarchy are multimodal, module independent, and have a capacity for sustained activity allowing the maintenance of pertinent, high-level features through elapsing time. The higher nodes select new features from each mapping to add to the store of temporarily maintained features. This updated set of features, maintained by the higher nodes, is fed back into lower-order sensory nodes, where it is continually used to guide the construction of successive topographic maps.

Background

Typical AI systems are designed to perceive the environment, evaluate objects therein, select an action, act, and record the action, its efficacy, and its results to memory. No existing form of artificial intelligence does this using a succession of maps guided by a continually updating buffer of salient features. The present invention will do so with a novel information-processing approach based on the architecture of the human brain, but implemented with available computer hardware and input/output devices.

It seems that the fundamental units of representation in the brain are cortical assemblies, which are perhaps congruent with cortical minicolumns. This is the case because all of the cells in an assembly share tuning properties and constitute “coincidence detectors” or “pattern recognizers.” Because cortical assemblies are essentially pattern recognition nodes organized in a hierarchical system, they should be able to be modeled by computers. The best way to do this with modern technology is to use an artificial neural network. An artificial neural network is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. It is generally an adaptive system capable of complex global behavior that alters its own structure based on the nonlinear processing of either external or internal information that flows through the network. Neural networks are usually software, generally require a massively parallel, distributed computing architecture, and are ordinarily run on conventional computers (Russell et al., 2003). The neural network ordinarily achieves intelligent behavior through parallel computations, without employing formal rules or logical structures, and thus can be used for pattern matching, classification, and other non-numeric, nonmonotonic problems (Nilsson, 1998).
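As a purely illustrative aside (not code from the patent application), a single pattern-recognizing node of the kind described above can be sketched as a weighted sum passed through a nonlinearity; the weights, bias, and input patterns here are assumptions chosen only for demonstration:

```python
import math

def pattern_node(weights, bias, inputs):
    """A single pattern-recognizing node: a weighted sum of its inputs
    passed through a logistic nonlinearity. It fires strongly when the
    input pattern matches the pattern encoded in its weight vector."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A node tuned to the pattern [1, 0, 1] responds far more strongly to a
# matching input than to a mismatched one.
matching = pattern_node([2.0, -2.0, 2.0], -1.0, [1, 0, 1])
mismatched = pattern_node([2.0, -2.0, 2.0], -1.0, [0, 1, 0])
```

Hierarchies of such nodes, with the outputs of lower nodes feeding the inputs of higher ones, give the coincidence-detecting structure the passage attributes to cortical assemblies.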

To create a strong form of AI it is necessary to have an understanding of what is taking place that allows intelligence, thought, cognition, consciousness or working memory to move through space and time, or in other words, to “propagate.” Such an understanding must be grounded in physics because it must explain how the physical substrate of intelligence operates through space and time (Chalmers, 2010). The human brain is just such an intelligent physical system, one that AI researchers have attempted to understand and replicate using a biomimetic approach (Gurney, 2009). Features of the biological brain have been key in the evolution of neural networks, but the brain holds other information processing principles that have not yet been harnessed by AI efforts. The present device will be constructed to mimic this biological system.

Description of the Device

The device is a modular, hierarchically organized, artificial intelligence (AI) system that features reciprocating transformations between a working memory updating function and an imagery generation system. This device features a recursive, algorithmic, imagery guidance process to be implemented by a multilayered neural network of pattern recognizing nodes. The software models a large set of programming constructs or nodes that work together to continually determine, in real time, which from their population should be newly activated, which should be deactivated and which should remain active to best inform imagery generation.

The device requires a highly interconnected neural network that features a hierarchically organized collection of pattern recognizers capable of both transient and sustained activity. These pattern recognition nodes mimic assemblies (minicolumns) of cells in the mammalian neocortex and are arranged with a similar connection geometry. Like neural assemblies, the nodes exhibit a continuous gradient from low-order nodes that code for sensory features to high-order nodes that code for temporally or spatially extended relationships between such features. The lower-order nodes are organized into modules by sensory modality. In each module, nodes work both competitively and cooperatively to create topographic maps. Nodes are grouped according to the feature they are being trained to recognize. These maps can be generated by external input, by internal input from higher-order nodes, or by a mix of the two. The architecture will feature backpropagation, self-organizing maps, bidirectionality, and Hebbian learning, as well as a combination of principal-components learning and competitive learning. The program will have an embedded processing hierarchy composed of many content feature nodes between the input modalities and its output functions.
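The competitive map-forming behavior mentioned above can be hinted at with a toy winner-take-all update. This is a standard competitive-learning step, not the patented mechanism; the prototype values and learning rate are illustrative assumptions:

```python
def competitive_step(prototypes, stimulus, lr=0.5):
    """One winner-take-all competitive-learning step: the node whose
    prototype lies closest to the stimulus wins, and its prototype is
    pulled toward the stimulus. Repeated over many stimuli, the nodes
    come to tile the input space, approximating a topographic map."""
    dist = lambda p: sum((pi - si) ** 2 for pi, si in zip(p, stimulus))
    winner = min(prototypes, key=lambda name: dist(prototypes[name]))
    prototypes[winner] = [pi + lr * (si - pi)
                          for pi, si in zip(prototypes[winner], stimulus)]
    return winner

# Two feature nodes compete for a stimulus that lies near node "b";
# "b" wins and its prototype moves halfway toward the stimulus.
protos = {"a": [0.0, 0.0], "b": [1.0, 1.0]}
won = competitive_step(protos, [0.9, 1.0])
```

Self-organizing maps extend this idea by also moving the winner's neighbors, which is what produces the spatial (topographic) ordering the description calls for.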

Nodes lower in the hierarchy are trained to recognize and represent sensory features and are capable of combining individual features or patterns into metric, topographical maps or images. Lower-order nodes are unimodal, and organized by sensory modality (visual, auditory, etc.) into individual modules. Nodes high in the hierarchy are multimodal, module independent, and have a capacity for sustained activity allowing the conservation of pertinent, high-level features through elapsing time. The higher nodes are integrated into the architecture in a way that makes them capable of identifying a plurality of goal-relevant features from both internal imagery and environmental input, and temporarily maintaining these as a form of prioritized information. The system is structured to allow repetitive, reciprocal interactions between the lower, bottom-up, and higher, top-down nodes. The features that the higher nodes encode are utilized as inputs that are fed back into lower-order sensory nodes where they are continually used for the construction of successive topographic maps. The higher nodes select new features from each mapping to add to the store of temporarily maintained features. Thus the most salient or goal-relevant features from the last several mappings are maintained. The group of active, higher-order nodes is constantly updated, where some nodes are newly added, some are removed, yet a relatively large number are retained. This updated list is then used to construct the next sensory image, which will be necessarily similar, but not identical, to the previous image. The differential, sustained activity of a subset of high-order nodes allows thematic continuity to persist over sequential processing states.
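The updating scheme just described — retain most maintained features, add a few newly salient ones, drop the least relevant — can be sketched as a capacity-limited buffer. The capacity, decay rate, and salience values below are illustrative assumptions, not parameters from the patent:

```python
def update_buffer(buffer, new_features, capacity):
    """Working-memory updating sketch: merge newly salient features into
    the maintained set, then keep only the `capacity` highest-salience
    items. Salience of retained features decays slightly each cycle, so
    stale items eventually drop out while most of the set persists."""
    DECAY = 0.9
    merged = {name: s * DECAY for name, s in buffer.items()}
    for name, salience in new_features:
        merged[name] = max(merged.get(name, 0.0), salience)
    ranked = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:capacity])

# Feature "E" arrives with high salience; the weakest maintained
# feature ("D") is displaced, while A, B, and C carry over.
buf = {"A": 0.9, "B": 0.8, "C": 0.7, "D": 0.6}
buf = update_buffer(buf, [("E", 0.85)], capacity=4)
```

Because only one item changes per cycle, consecutive buffer states overlap heavily, which is the thematic continuity the paragraph describes.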

The agent discussed here would be capable of integrating multiple specialized AI programs into a larger composite of coordinated systems. To do this, it would be necessary to interface these systems with the input side of the imagery generation system. Existing AI technology could be integrated with the system described here in order to more quickly expand its behavioral repertoire and knowledge base. For example, databases and encyclopedic content could be used as sensory input, and the functions of other AI, adaptive control and robotics systems could be added to its repertoire of available motor outputs and premotor representations. The system should have open access to a memory bank of text including dictionaries, thesauri, newswire articles, literary works and encyclopedic entries. The system should be able to integrate with multiple applications such as rule-based systems, expert systems, fuzzy logic systems, genetic algorithms, and archived digital text. The present system would benefit from the integration of existing programs for input and output, e.g., visual perception programs and robotic movement programs. This patent does not laboriously describe these components because they already exist in well-developed forms.

Claims:

1.      A modular, hierarchically organized, artificial intelligence (AI) system that features a working memory updating function and the capacity for imagery generation. The system comprises an algorithmic, imagery guidance process to be implemented by neural network software that will simulate the neurocognitive functioning of the mammalian prefrontal cortex.

2.      The network’s connectivity allows reciprocating cross-talk between fleeting bottom-up imagery in early sensory networks and lasting top-down priming in association and PFC networks. The features that are maintained over time by sustained neural firing are used to create and guide the construction of topographic maps (imagery). The PFC and other association area neural networks direct progressive sequences of mental imagery in the visual, auditory and somatosensory networks.

3.      Cognitive control stems from the active maintenance of features/patterns in the PFC module that allow the orchestration of processing and the generation of imagery in accordance with internally selected priorities.

4.      The network contains nodes that are capable of “sustained firing,” allowing them to bias network activity, transmit their weights, or otherwise contribute to network processing for several seconds at a time (generally 1-30 seconds).

5.      The network is an information processing system that has the ability to maintain a large list of representations that is constantly in flux: new representations are continually added, some are removed, and still others are maintained. This distinct pattern of activity, in which some individual nodes persist during processing, ensures that particular features of the overall pattern are uninterrupted, or conserved, over time.

6.      Because nodes in the PFC network are sustained, and do not fade away before the next instantiation of topographic imagery, there is a continuous and temporally overlapping pattern of features that mimics consciousness and the psychological juggling of information in working memory. This also allows consecutive topographic maps to have related and progressive content.

7.      If this sustained firing is programmed to happen at even longer intervals, in even larger numbers of nodes, the system will exhibit even more mental continuity over elapsing time. This would increase the ability of the network to make associations between temporally distant stimuli and allow its actions to be informed by more temporally distant features and occurrences.
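Claims 4 through 7 hinge on nodes that keep firing after being triggered. A minimal stand-in is sketched below, with discrete processing ticks replacing the 1-30 second window; the class and its parameters are illustrative assumptions, not the patented implementation:

```python
class SustainedNode:
    """Sketch of 'sustained firing' (claim 4): once triggered, the node
    keeps contributing activation for a fixed number of processing
    ticks before fading out, letting it bias network activity across
    several successive processing states."""

    def __init__(self, name, hold_ticks):
        self.name = name
        self.hold_ticks = hold_ticks  # how long activity is sustained
        self.remaining = 0

    def trigger(self):
        self.remaining = self.hold_ticks

    def tick(self):
        """Advance one processing state; report whether the node is
        still contributing activation during this state."""
        active = self.remaining > 0
        if active:
            self.remaining -= 1
        return active

node = SustainedNode("F", hold_ticks=3)
node.trigger()
history = [node.tick() for _ in range(5)]
# active for three ticks, then silent: [True, True, True, False, False]
```

Raising `hold_ticks`, as claim 7 suggests, directly lengthens the span over which a feature can influence subsequent maps.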

 
FIG. 1 is a diagram depicting how high-level features are displaced, newly activated, and coactivated in the neural network to form a “stream” or “train” of thought. Each feature is represented by a letter. 1) Shows that feature A has already been deactivated and that now B, C, D and E are coactivated. When coactivated, these features spread their activation energy, resulting in the convergence of activity onto a new feature, F. Once F is active it immediately becomes a coactivate, restarting the cycle. 2) Shows that feature B has been deactivated, that C, D, E and F are coactivated, and G is newly activated. 3) Shows that feature D, but not C, has been deactivated. In other words, what is deactivated is not necessarily what entered first, but what has proven, within the network, to receive the least converging activity. C, E, F and G coactivate and converge on H, causing it to become active.
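The convergence step in FIG. 1 can be sketched as spreading activation over weighted links: the coactive features each project activation outward, and the inactive feature receiving the most converging input becomes the newly active one. The link weights below are hypothetical, chosen only to reproduce the figure's first transition:

```python
def converge(coactive, links):
    """FIG. 1 sketch: each coactive feature spreads activation along its
    weighted links; the inactive feature that accumulates the most
    converging activation is the one that becomes newly active."""
    totals = {}
    for src in coactive:
        for dst, weight in links.get(src, {}).items():
            if dst not in coactive:
                totals[dst] = totals.get(dst, 0.0) + weight
    return max(totals, key=totals.get)

# Hypothetical weights: B, C, D and E all project onto F more strongly
# than onto any alternative, so activity converges on F.
links = {
    "B": {"F": 0.4, "G": 0.1},
    "C": {"F": 0.3, "G": 0.2},
    "D": {"F": 0.5},
    "E": {"F": 0.2, "H": 0.1},
}
winner = converge({"B", "C", "D", "E"}, links)
# → "F"
```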






FIG. 2 is a diagram depicting the reciprocal transformations of information between lower-order sensory nodes and higher-order PFC nodes. Sensory areas can only create one sensory image at a time, whereas the PFC is capable of holding the salient or goal-relevant features of several sequential images at the same time.





FIG. 3 is a diagram depicting the behavior of features that are held active in the PFC. 1) Shows that features B, C, D and E which are held active in the PFC all spread their activation energy to lower-order sensory areas where a composite image is built that is based on prior experience with these features. 2) Shows that features involved in the retinotopic imagery from time sequence 1 converge on the PFC neurons responsible for feature F. Feature B drops out of activation, and C, D, E and F remain active and diverge back onto visual cortex. 3) Shows that this same process leads to G being activated and D being deactivated.





FIG. 4 is a list of processes involved in the central AI algorithm implemented by the present device.

1)      Either sensory information from the environment, or top-down internally held specifications, or both are sent to low-order sensory neural network layers that contain feature-extracting cells. This input is either feedforward sensory information from sense receptors (experiential perception) or downstream retroactivation from higher-level nodes (internally guided imagery).

2)      A topographic sensory map is made by each low-order, sensory neural network. These topographic maps represent the network's best attempt at integrating and reconciling the disparate stimulus and feature specifications into a single composite, topographic depiction. The map that is created is based on prior probability and training experience with these features.

3)      In order to integrate the disparate features into a meaningful image, the map-making neurons will usually be forced to introduce new features. The salient or goal-relevant features that have been introduced are extracted through a perceptual process where active, lower-order nodes spread their activity to higher-order nodes. As the new features pass through the neural networks, some are given priority and are used to update the limited-capacity, working memory storage buffer that is composed of active high-level nodes.

4)      Salient features that cohere with features that are already active in the higher-order nodes are added to the active features there. The least relevant, least fired upon features in higher-order areas are dropped from activation. The sustained firing of a subset of higher-order nodes allows the important features of the last few maps to be maintained in an active state.

5)      At this point the system must decide whether to continue operating on the previously held nodes or redirect its attention to the newly introduced ones. Each time the new features garnered from the topographic maps are used to update the working memory store, the agent must decide what percentage of previously active higher-order nodes should be deactivated in order to reallocate processing resources to the newest set of salient features. Prior probability with saliency training will determine the extent to which previously active nodes will continue to remain active.

6)      The updated subset of higher-order nodes will then spread its activity backwards toward lower-order sensory nodes in order to activate a different set of low-order nodes, culminating in a different topographic sensory map.

7)      A. The process repeats.

B. Salient sensory information from the actual environment interrupts the process. The lower-order nodes and their imagery, as well as the higher-order nodes and their priorities, are refocused on the new incoming stimuli.


FIG. 5 demonstrates the architecture of the interfacing neural networks.






 
FIG. 6 illustrates how relevant features can be maintained through time using nodes with sustained firing. The figure compares the number of past nodes that remain active at the present time period (“now”) in a normal human, a human with PFC dysfunction, and the hypothetical AI agent. The AI agent is able to maintain a larger number of higher-order nodes through a longer time span, ensuring that its perceptions and actions, now, will be informed by a larger amount of recent information. Note how the lower-order sensory and motor features are the same in each graph with respect to their number and duration, yet those in association areas are highest in both number and duration for agent C.



 
FIG. 7 depicts an octopus within a brain in an attempt to communicate how continuity is made possible in the brain and in the present device. When an octopus exhibits seafloor walking, it places most of its arms on the sand and gradually repositions arms in the direction of its movement. Similarly, the mental continuity exhibited by the present device is made possible because even though some representations are constantly being newly activated and others deactivated, a large number of representations remain active together. This process allows the persistence of “cognitive” content over elapsing time, and thus over machine processing states.


Read the full article that I wrote on this topic here:

http://www.sciencedirect.com/science/article/pii/S0031938416308289
