Friday, April 26, 2013

What Autism Tells Us About Artificial Intelligence

As I think more and more about the creation of artificial intelligence, I can see how the study of autism might have very important implications for the design of intelligent agents. My ideas about how to create an artificially intelligent being all revolve around a learning system that can model and systemize its environment. At one point, though, I realized that what I was designing would, after training and learning, amount to an autistic agent. The agent might learn how to systemize its environment, but it wouldn’t have the social inclinations necessary to develop social skills, empathy and theory of mind. For this reason, I think that “strong AI” necessitates a computer equivalent of the mammalian social modules. This means that AI researchers will need to acquaint themselves with concepts like oxytocin and vasopressin signaling, their effects on the nucleus accumbens, the endogenous opioid system, the HPA axis, the cingulate and orbitofrontal cortices, and the way these are all affected by social encounters, social constraints and social expectations. Research into the neurological, cellular and molecular basis of mammalian social neuroscience may provide tremendous insight into how best to organize AI efforts such as pattern recognition, analytics, prediction, adaptive control, decision making, and response to query.

Brain Representations Are Reflections of Past Environmental Input

I think that brain cells create a theatre of the mind because they have “taken on” specific external properties. I assume that they take on experiential qualities because they have become highly correlated with the actual experience. For example, I believe that activity in visual cortex allows the creation of vibrant and captivating internal imagery simply because activity here has become correlated with the appearance of this imagery in the environment. Like the neurons responsible for the sensations in a phantom limb, early visual neurons “hold” the experiential properties of experiences that they have been correlated with in the past. But imagery is held throughout the cortex, in association areas as well as sensory areas, because each part of the brain has become correlated with some type of environmentally induced experience. Surely anterior association areas have been similarly correlated with experiences, albeit more highly abstract ones. The firing of neurons is not just correlated with sensory experience, it practically IS sensory experience. When you imagine something, you experience it again: you fire the same neurons that fire when it is experienced in the environment. At first it is hard to appreciate that what feels like a novel thought is actually a de novo conglomeration of many memory fragments from the past. Our brains do a fantastic job of mixing preexisting microrepresentations from a variety of previous experiences into striking composites of never-before-seen imagery and sensations.


A Preventive Method for Scalp Soreness and Hair Loss

A major contributor to hair loss is reduced blood supply to the scalp. Stress and aging can reduce circulation and vasculature in subcutaneous tissues all over the head. Does your scalp feel sore to the touch? If so, you may be losing hair due to reduced blood flow and the accompanying reductions in oxygen and nutrients.

Try a few things first:

1)      Poke your scalp with a knuckle in different places

2)      Massage your scalp pressing firmly

3)      Hold an electric massager to different points on your scalp

If any of these feel sore or painful, you should do them more often to improve vascularity and circulation to your poor, deprived hair follicles. The soreness itself is a sign of poor circulation. A sore scalp has been closely associated with alopecia (androgenic and other types), male pattern baldness, diminished microcirculation, reduced microcapillary perfusion, and the resulting miniaturization of follicles. Some doctors recommend scalp massage, and there are other methods meant to increase circulation and promote angiogenesis, but these are usually performed in a less-than-optimal way and do not demonstrate clinically significant results. I think that there is one good way to effectively stimulate increased circulation.

My intervention is very simple and quick to perform, and largely reduces scalp soreness in only one week. Do the following:

1)      Take the heel of your palm and press firmly down on the top of your head.

2)      Move your hand in a circular motion while pressing very firmly.

3)      Try to move your hand in very wide circles, attempting to stretch the scalp as far as it will go in each direction.

4)      Take about one second to complete each 360-degree circle. It should feel sore and achy.

5)      Do this all over the scalp, focusing on the areas that are the most sore, for a total of 2-5 minutes.

6)      Be sure to focus on the hairline and forehead.

This area is tight in most people because the forehead muscles become strained and tense from social signaling. When we constantly raise and furrow our eyebrows, these muscles remain tonically active, eating up the blood supply to the surrounding area. Relaxing the muscles of the face and head, especially in the forehead, can mitigate this problem.

My forehead, hairline and temples went from being very sore and painful to massage to being pain-free within two weeks of the palm-heel massage method described above.


Anything that induces that sore, achy feeling in your scalp will provide relief, stimulating the formation of new blood vessels and increasing circulation.

For other advice and activities to combat stress, see these posts:

Breathing for Calmness

Posture for Calmness

Breathing Deeply

Look Upwards

Thursday, April 18, 2013

Artificial Intelligence Programmed to Simulate Mental Continuity Between Processing States

I have developed a cognitive architecture for a form of artificial intelligence that I think could become conscious and could exhibit capabilities above and beyond what humans are capable of. If implemented and trained correctly, I think that this system could exhibit qualities of “strong AI.” To find out more, read below or see the complete patent application.


A modular, hierarchically organized, artificial intelligence (AI) system that features reciprocating transformations between a working memory updating function and an imagery generation system is presented here. This system features an imagery guidance process implemented by a multilayered neural network of pattern-recognizing nodes. Nodes low in the hierarchy are trained to recognize and represent sensory features and are capable of combining individual features or patterns into composite, topographical maps or images. Nodes high in the hierarchy are multimodal, module independent, and have a capacity for sustained activity allowing the maintenance of pertinent, high-level features through elapsing time. The higher nodes select new features from each mapping to add to the store of temporarily maintained features. This updated set of features, which the higher nodes maintain, is fed back into lower-order sensory nodes where it is continually used to guide the construction of successive topographic maps.


Typical AI systems are designed to perceive the environment, evaluate objects therein, select an action, act, and record the action, along with its efficacy and the results thereof to memory. There are no forms of artificial intelligence that do this using a succession of maps guided by a continually updating buffer of salient features. The present invention will do this with a novel information processing approach based on the architecture of the human brain, but implemented with available computer hardware and input/output devices.

It seems that the fundamental units of representation in the brain are cortical assemblies, which are perhaps congruent with cortical minicolumns. This is the case because all of the cells in an assembly share tuning properties and constitute “coincidence detectors” or “pattern recognizers.” Because cortical assemblies are essentially pattern recognition nodes organized in a hierarchical system, it should be possible to model them with computers. The best way to do this with modern technology is to use an artificial neural network. An artificial neural network is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. It is generally an adaptive system capable of complex global behavior that alters its own structure based on the nonlinear processing of either external or internal information that flows through the network. Neural networks are usually software, generally require a massively parallel, distributed computing architecture, and are ordinarily run on conventional computers (Russell et al., 2003). A neural network ordinarily achieves intelligent behavior through parallel computations, without employing formal rules or logical structures, and thus can be used for pattern matching, classification, and other non-numeric, nonmonotonic problems (Nilsson, 1998).
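To make the “coincidence detector” idea concrete, here is a minimal sketch of a pattern-recognition node: a unit that fires only when enough of its preferred input features are simultaneously active. The class name, feature names and threshold are my own illustrative inventions, not part of the patent text.

```python
# Minimal sketch of a pattern-recognition node as a coincidence detector.
# The node "fires" when the weighted sum of its currently active input
# features crosses a threshold -- the same intuition attributed above to
# cortical assemblies. Names and values are illustrative only.

class PatternNode:
    def __init__(self, weights, threshold):
        self.weights = weights      # feature name -> connection strength
        self.threshold = threshold

    def activation(self, active_features):
        # Sum the strengths of the input features that are currently active.
        return sum(w for f, w in self.weights.items() if f in active_features)

    def fires(self, active_features):
        # Coincidence detection: enough correlated inputs must co-occur.
        return self.activation(active_features) >= self.threshold

# A node tuned to the conjunction of "edge" and "motion":
node = PatternNode({"edge": 0.6, "motion": 0.6, "noise": 0.1}, threshold=1.0)
print(node.fires({"edge"}))            # False: one feature alone is not enough
print(node.fires({"edge", "motion"}))  # True: the pattern is recognized
```

Arranging many such nodes in layers, with the outputs of one layer serving as the input features of the next, gives the hierarchical organization described above.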

To create a strong form of AI it is necessary to have an understanding of what is taking place that allows intelligence, thought, cognition, consciousness or working memory to move through space and time, or, in other words, to “propagate.” Such an understanding must be grounded in physics because it must explain how the physical substrate of intelligence operates through space and time (Chalmers, 2010). The human brain is just such an intelligent physical system that AI researchers have attempted to understand and replicate using a biomimetic approach (Gurney, 2009). Features of the biological brain have been key in the evolution of neural networks, but the brain holds other information processing principles that have not been harnessed by AI efforts. The present device will be constructed to mimic this biological system.

Description of the Device

The device is a modular, hierarchically organized, artificial intelligence (AI) system that features reciprocating transformations between a working memory updating function and an imagery generation system. This device features a recursive, algorithmic, imagery guidance process to be implemented by a multilayered neural network of pattern recognizing nodes. The software models a large set of programming constructs or nodes that work together to continually determine, in real time, which from their population should be newly activated, which should be deactivated and which should remain active to best inform imagery generation.

The device necessitates a highly interconnected neural network that features a hierarchically organized collection of pattern recognizers capable of both transient and sustained activity. These pattern recognition nodes mimic assemblies (minicolumns) of cells in the mammalian neocortex and are arranged with a similar connection geometry. Like neural assemblies, the nodes exhibit a continuous gradient from low-order nodes that code for sensory features to high-order nodes that code for temporally or spatially extended relationships between such features. The lower-order nodes are organized into modules by sensory modality. In each module, nodes work both competitively and cooperatively to create topographic maps. Nodes are grouped according to the feature they are being trained to recognize. These maps can be generated by external input, by internal input from higher-order nodes, or a mix of the two. The architecture will feature backpropagation, self-organizing maps, bidirectionality, Hebbian learning and a combination of principal-components learning and competitive learning. The program will have an embedded processing hierarchy composed of many content feature nodes between the input modalities and its output functions.
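Of the learning rules just listed, Hebbian learning is the simplest to sketch in code. The function below is a generic, textbook-style Hebbian update (assuming NumPy is available); the learning rate and activity vectors are illustrative, not values from the patent.

```python
# Hebbian learning sketch: connections between co-active nodes are
# strengthened ("cells that fire together wire together").
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    # Outer product: each weight grows in proportion to the joint
    # activity of its presynaptic and postsynaptic node.
    return weights + lr * np.outer(post, pre)

w = np.zeros((2, 3))              # 2 output nodes, 3 input features
pre = np.array([1.0, 0.0, 1.0])   # which input features were active
post = np.array([0.0, 1.0])       # which output node fired in response
w = hebbian_update(w, pre, post)
print(w)
# Only the connections between co-active pairs have been strengthened.
```

In the full architecture this rule would operate alongside the competitive and principal-components mechanisms mentioned above, so that frequently co-occurring features come to be recognized by dedicated nodes.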

Nodes lower in the hierarchy are trained to recognize and represent sensory features and are capable of combining individual features or patterns into metric, topographical maps or images. Lower-order nodes are unimodal, and organized by sensory modality (visual, auditory, etc.) into individual modules. Nodes high in the hierarchy are multimodal, module independent, and have a capacity for sustained activity allowing the conservation of pertinent, high-level features through elapsing time. The higher nodes are integrated into the architecture in a way that makes them capable of identifying a plurality of goal-relevant features from both internal imagery and environmental input, and temporarily maintaining these as a form of prioritized information. The system is structured to allow repetitive, reciprocal interactions between the lower, bottom-up, and higher, top-down nodes. The features that the higher nodes encode are utilized as inputs that are fed back into lower-order sensory nodes where they are continually used for the construction of successive topographic maps. The higher nodes select new features from each mapping to add to the store of temporarily maintained features. Thus the most salient or goal-relevant features from the last several mappings are maintained. The group of active, higher-order nodes is constantly updated, where some nodes are newly added, some are removed, yet a relatively large number are retained. This updated list is then used to construct the next sensory image, which will be necessarily similar, but not identical, to the previous image. The differential, sustained activity of a subset of high-order nodes allows thematic continuity to persist over sequential processing states.

The agent discussed here would be capable of integrating multiple specialized AI programs into a larger composite of coordinated systems. To do this, it would be necessary to interface these systems with the input side of the imagery generation system. Existing AI technology could be integrated with the system that is described here in order to more quickly expand its behavioral repertoire and knowledge base. For example, databases and encyclopedic content could be used as sensory input and the functions of other AI, adaptive control and robotics systems could be added to its repertoire of available motor outputs and premotor representations. The system should have open access to a memory bank of text including dictionaries, thesauri, newswire articles, literary works and encyclopedic entries. The system should be able to integrate with multiple applications such as rule-based systems, expert systems, fuzzy logic systems, genetic algorithms, and archived digital text. The present system would benefit from the integration of existing programs for input and output e.g. visual perception programs, and robotic movement programs. This patent does not laboriously describe these components because they already exist in well-developed forms.


1.      A modular, hierarchically organized, artificial intelligence (AI) system that features a working memory updating function and the capacity for imagery generation. The system comprises an algorithmic, imagery guidance process to be implemented by neural network software that will simulate the neurocognitive functioning of the mammalian prefrontal cortex.

2.      The network’s connectivity allows reciprocating cross-talk between fleeting bottom-up imagery in early sensory networks and lasting top-down priming in association and PFC networks. The features that are maintained over time by sustained neural firing are used to create and guide the construction of topographic maps (imagery). The PFC and other association area neural networks direct progressive sequences of mental imagery in the visual, auditory and somatosensory networks.

3.      Cognitive control stems from the active maintenance of features/patterns in the PFC module that allow the orchestration of processing and the generation of imagery in accordance with internally selected priorities.

4.      The network contains nodes that are capable of “sustained firing,” allowing them to bias network activity, transmit their weights, or otherwise contribute to network processing for several seconds at a time (generally 1-30 seconds).

5.      The network is an information processing system that has the ability to maintain a large list of representations that is constantly in flux: new representations are continually being added, some are being removed, and still others are being maintained. This distinct pattern of activity, where some individual nodes persist during processing, makes it so that particular features of the overall pattern will be uninterrupted or conserved over time.

6.      Because nodes in the PFC network are sustained, and do not fade away before the next instantiation of topographic imagery, there is a continuous and temporally overlapping pattern of features that mimics consciousness and the psychological juggling of information in working memory. This also allows consecutive topographic maps to have related and progressive content.

7.      If this sustained firing is programmed to happen at even longer intervals, in even larger numbers of nodes, the system will exhibit even more mental continuity over elapsing time. This would increase the ability of the network to make associations between temporally distant stimuli and allow its actions to be informed by more temporally distant features and occurrences.
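Claims 4 and 5 describe a capacity-limited store whose membership is constantly in flux. A minimal sketch of such an updating buffer might look like the following; the relevance scores, the capacity, and the evict-the-lowest-score policy are my own assumptions, since the claims do not specify them.

```python
# Sketch of the working-memory updating function from claims 4-5:
# a capacity-limited set of active features where new items are added,
# the least relevant are dropped, and the rest persist across updates.
# The scores and capacity below are illustrative assumptions.

class WorkingMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.features = {}  # feature -> relevance score

    def update(self, new_features):
        """Add new features, then evict the lowest-relevance items."""
        self.features.update(new_features)
        # Keep only the `capacity` most relevant features; the survivors
        # are the "sustained" nodes that carry continuity between states.
        kept = sorted(self.features.items(), key=lambda kv: kv[1], reverse=True)
        self.features = dict(kept[: self.capacity])
        return set(self.features)

wm = WorkingMemory(capacity=4)
wm.update({"B": 0.9, "C": 0.8, "D": 0.7, "E": 0.6})
state = wm.update({"F": 0.85})       # F enters; E (lowest score) is evicted
print(sorted(state))                  # ['B', 'C', 'D', 'F']
```

Note that what is evicted is the least relevant item, not the oldest, which matches the non-first-in-first-out behavior described for the network.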

FIG. 1 is a diagram depicting how high-level features are displaced, newly activated, and coactivated in the neural network to form a “stream” or “train” of thought. Each feature is represented by a letter. 1) Shows that feature A has already been deactivated and that now B, C, D and E are coactivated. When coactivated, these features spread their activation energy resulting in the convergence of activity onto a new feature, F. Once F is active it immediately becomes a coactivate, restarting the cycle. 2) Shows that feature B has been deactivated, that C, D, E and F are coactivated, and G is newly activated. 3) Shows that feature D, but not C has been deactivated. In other words, what is deactivated is not necessarily what entered first, but what has proven, within the network, to receive the most converging activity. C, E, F and G coactivate and converge on H causing it to become active.
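The convergence-and-displacement cycle in FIG. 1 can be sketched as follows. The association weights are invented for illustration, and the deactivation rule (drop the coactive feature contributing the least onward activity) is my own reading of the figure, not a mechanism spelled out in the text.

```python
# Sketch of the FIG. 1 "train of thought" cycle: coactive features spread
# activation through an association table; the candidate receiving the
# most converging activity becomes active next, while the coactive
# feature that contributed the least is deactivated. Weights are invented.

ASSOC = {
    "B": {"F": 0.2}, "C": {"F": 0.4}, "D": {"F": 0.1}, "E": {"F": 0.5},
}

def step(coactive, assoc):
    # Converge: sum the activation each candidate receives from the set.
    incoming = {}
    for f in coactive:
        for target, w in assoc.get(f, {}).items():
            incoming[target] = incoming.get(target, 0.0) + w
    winner = max(incoming, key=incoming.get)
    # Displace: deactivate the feature that sent the least activity onward,
    # regardless of how long it has been active (non-FIFO, as in FIG. 1).
    weakest = min(coactive, key=lambda f: sum(assoc.get(f, {}).values()))
    return (coactive - {weakest}) | {winner}

state = step({"B", "C", "D", "E"}, ASSOC)
print(sorted(state))   # D contributed least and F converged on: B, C, E, F
```

Iterating `step` with a richer association table would produce the continuing B-through-H sequence the figure describes.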

FIG. 2 is a diagram depicting the reciprocal transformations of information between lower-order sensory nodes and higher-order PFC nodes. Sensory areas can only create one sensory image at a time, whereas the PFC is capable of holding the salient or goal-relevant features of several sequential images at the same time.

FIG. 3 is a diagram depicting the behavior of features that are held active in the PFC. 1) Shows that features B, C, D and E which are held active in the PFC all spread their activation energy to lower-order sensory areas where a composite image is built that is based on prior experience with these features. 2) Shows that features involved in the retinotopic imagery from time sequence 1 converge on the PFC neurons responsible for feature F. Feature B drops out of activation, and C, D, E and F remain active and diverge back onto visual cortex. 3) Shows that this same process leads to G being activated and D being deactivated.

FIG. 4 is a list of processes involved in the central AI algorithm implemented by the present device.

1)      Either sensory information from the environment, or top-down internally held specifications, or both, are sent to low-order sensory neural network layers that contain feature-extracting cells. This includes either feedforward sensory information from sense receptors (experiential perception) or downstream retroactivation from higher-level nodes (internally guided imagery).

2)      A topographic sensory map is made by each low-order, sensory neural network. These topographic maps represent the network’s best attempt at integrating and reconciling the disparate stimulus and feature specifications into a single composite, topographic depiction. The map that is created is based on prior probability and training experience with these features.

3)      In order to integrate the disparate features into a meaningful image, the map making neurons will usually be forced to introduce new features. The salient or goal-relevant features that have been introduced are extracted through a perceptual process where active, lower-order nodes spread their activity to higher-order nodes. As the new features pass through the neural networks, some are given priority and are used to update the limited-capacity, working memory, storage buffer that is composed of active high-level nodes.

4)      Salient features that cohere with features that are already active in the higher-order nodes are added to the active features there. The least relevant, least fired-upon features in higher-order areas are dropped from activation. The sustained firing of a subset of higher-order nodes allows the important features of the last few maps to be maintained in an active state.

5)      At this point it is necessary for the system to implement a program that allows it to decide if it will continue operating on the previously held nodes or redirect its attention to the newly introduced nodes. Each time the new features garnered from the topographic maps are used to update the working memory store, the agent must decide what percentage of previously active higher-order nodes should be deactivated in order to reallocate processing resources to the newest set of salient features. Prior probability with saliency training will determine the extent to which previously active nodes will continue to remain active.

6)      The updated subset of higher-order nodes will then spread its activity backwards toward lower-order sensory nodes in order to activate a different set of low-order nodes, culminating in a different topographic sensory map.

7)      A. The process repeats.

B. Salient sensory information from the actual environment interrupts the process. The lower-order nodes and their imagery, as well as the higher-order nodes and their priorities, are refocused on the new incoming stimuli.
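Under my own naming assumptions, the seven steps above reduce to a loop roughly like the following toy sketch. Every function here is an illustrative stand-in for a trained neural-network stage, not the patented implementation; the salience table, retention limit and feature sets are invented.

```python
# Toy sketch of the central loop in FIG. 4: held features plus sensory
# input compose a "topographic map"; new salient features are extracted
# from the map; the working-memory store is partially retained and
# partially updated; the updated store then guides the next map.

def generate_map(held_features, sensory_input):
    # Steps 1-2: compose a map from top-down and bottom-up input.
    return held_features | sensory_input

def extract_salient(topographic_map, salience):
    # Step 3: a perceptual pass picks out the most salient features.
    return {f for f in topographic_map if salience.get(f, 0.0) > 0.5}

def update_store(store, salient, retain=3):
    # Steps 4-5: retain at most `retain` previously active features,
    # then add the newly salient ones (eviction policy is illustrative).
    kept = set(sorted(store)[:retain])
    return kept | salient

def run(stream, salience, steps):
    # Steps 6-7: the updated store guides the next map; the cycle repeats.
    store = set()
    history = []
    for t in range(steps):
        tmap = generate_map(store, stream[t])
        store = update_store(store, extract_salient(tmap, salience))
        history.append(sorted(store))
    return history

salience = {"A": 0.9, "B": 0.8, "C": 0.2, "D": 0.7}
stream = [{"A", "C"}, {"B"}, {"D"}]
print(run(stream, salience, 3))   # [['A'], ['A', 'B'], ['A', 'B', 'D']]
```

The overlap between successive states in the output is the point of the design: each map inherits most of the previously held features, giving the continuity described in step 7A, while a genuinely novel stimulus (step 7B) would simply dominate the salience pass and redirect the store.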

FIG. 5 demonstrates the architecture of the interfacing neural networks.

FIG. 6 illustrates how relevant features can be maintained through time using nodes with sustained firing. The figure compares the number of past nodes that remain active at the present time period (“now”) in a normal human, a human with PFC dysfunction, and the hypothetical AI agent. The AI agent is able to maintain a larger number of higher-order nodes through a longer time span, ensuring that its perceptions and actions, now, will be informed by a larger amount of recent information. Note how the lower-order sensory and motor features are the same in each graph with respect to their number and duration, yet those in association areas are highest in both number and duration for agent C.

FIG. 7 depicts an octopus within a brain in an attempt to communicate how continuity is made possible in the brain and in the present device. When an octopus exhibits seafloor walking, it places most of its arms on the sand and gradually repositions arms in the direction of its movement. Similarly, the mental continuity exhibited by the present device is made possible because even though some representations are constantly being newly activated and others deactivated, a large number of representations remain active together. This process allows the persistence of “cognitive” content over elapsing time, and thus over machine processing states.


Friday, April 5, 2013

Psychopaths are Rewarded but not Punished, and Autistics are Punished but not Rewarded by Social Interaction

I believe that most psychopaths are not innately evil, perverse or malevolent. For neurological reasons they are not easily punished during social situations but are readily rewarded by them. For instance, the fear and apprehension centers (including the amygdala and the anterior cingulate cortex) are not activated by social stressors in psychopaths to the extent that they are in nonpsychopaths. Similarly, I believe that most individuals with autism are not innately cognitively impaired, they are simply less attentive to social cues because they are less readily rewarded by them.

For this reason, I think that psychopathy (as well as antisocial personality disorder) and autism can be meaningfully compared.

An animal’s brain is wired up from experiences involving environmental reward and punishment, and a genetic disinclination from social punishment fundamentally changes the way working memory operates. I believe that psychopaths represent an evolutionary strategy that worked well when social ties were tenuous and it was best to work cooperatively with others but only so far as it reaped benefits. In other words, the psychopath is hardwired to benefit from others (potentially in a mutually beneficial relationship) but not to worry about or become concerned with the other person’s perspective. Environmental experience can turn this simple inclination into many things including charisma and assertiveness, but also antipathy and remorselessness. 

The opposite may be true of autism. People with autism are less likely to be rewarded by social situations and more likely to be punished by them. This makes it sound like people with autism should grow to be self-loathing and codependent, but of course this isn’t the case. The strong social punishment actually causes them to severely limit their social interactions from a very early age, and the lack of reward keeps them from attempting to gain positive experiences from social interaction. Interestingly, in autism the amygdala is overactive during social situations and the ventral striatum (the pleasure center) is missing receptors for oxytocin (the social pleasure hormone).