I just reread Dr. Kaspar Meyer’s recently published paper, “Primary Sensory Cortices, Top-down Projections and Conscious Experience.” It is a great article, published in Progress in Neurobiology, that I highly recommend. In it he covers a lot of ground, focusing on the role of primary visual cortex and the part it may play in consciousness. He concludes, insightfully in my estimation, that activity induced in primary visual cortex (and other primary sensory areas) by bottom-up signaling originating in the thalamus is not consciously accessible, whereas activity induced in these areas by cortico-cortical top-down signaling can become accessible to consciousness. After reviewing a large amount of pertinent experimental evidence he writes: “Thus, top-down projections, contrary to received opinion, do not merely modulate bottom-up signals but are able, all by themselves, to construct sensory activity patterns of considerable resolution in the earliest cortical sectors.” I agree with this notion wholeheartedly. I think that even my limited imagination is too intense and visually vivid not to be a product of top-down hegemony. Meyer goes on to say: “Interestingly, in a very general sense, the finding that conscious experience relies heavily on top-down signals supports the ideas of those who have argued that perception should be conceptualized as an active interpretive process, in which the element of prediction (sometimes formalized in terms of Bayesian inference) is key.” I like this last quote a lot, and it makes me think that neither late associative nor early visual cortex can identify an abstract concept on its own; instead, the two must work together in lockstep.
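The mention of Bayesian inference got me thinking that the basic math behind “perception as prediction” is simple enough to sketch. Here is a toy example of my own (not from Meyer’s paper, and with made-up numbers): a top-down prior over which object is in view is combined with the likelihood that each hypothesis assigns to the bottom-up sensory evidence, yielding a posterior interpretation.

```python
def posterior(priors, likelihoods):
    """Bayes' rule: P(hypothesis | evidence) proportional to prior * likelihood."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Top-down expectation about what object is in view (illustrative numbers).
priors = {"keyboard": 0.7, "book": 0.3}

# How well each hypothesis explains the bottom-up sensory evidence.
likelihoods = {"keyboard": 0.9, "book": 0.2}

print(posterior(priors, likelihoods))
# A strong prior plus consistent evidence pushes "keyboard" above 90%.
```

Nothing in this sketch is specific to the brain, of course; it is just the skeleton of the inference that predictive accounts of perception take the cortex to be approximating.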
You can find the article and read the abstract here:
While reading the article I was inspired to jot down the following in the margins:
I can see my computer keyboard in front of me, in its entirety, in early visual cortex. Interestingly though, it is not a keyboard there. In early visual cortex it is only an image of a keyboard. The letters on the keyboard are not true symbols there; they are only arrangements of lines and curves. The keyboard represented in early visual cortex cannot yet be identified as a large, black rectangle, and even though this early cortex holds information about the color of the board, it cannot name that color. The representation of the keyboard in early visual cortex has keys that are not yet identified as keys, and are certainly not meant to be pressed. Early visual cortex processes the visual imagery, but none of the identity of the board or the affordances it offers. These higher-order properties are processed by and appreciated in brain areas, visual and otherwise, higher in the processing hierarchy. Those higher-order areas, in the same vein, process more complex and abstracted elements of the keyboard without holding what is held by the earlier visual areas. Different areas of the brain are specialized to handle their own unique processes, and they do not need to represent what is already represented elsewhere. There is no need for redundant representation, because all of these areas of the cortex are interconnected by vast networks of projections that allow them to share their unique and eccentric knowledge. An important question then is: how does consciousness arise from communicating yet compartmentalized modules if no one module can see the big picture? This view makes consciousness seem not only distributed in time but also decentralized in space.