Monday, March 23, 2020

Artificial Intelligence Needs to Utilize the Process of Myelination




Modern AI is capable of some fantastic feats, yet it is still very limited compared to the human mind. The disciplines of machine learning and deep learning have shown us that, with a powerful computer and a bunch of neurons organized into a network, we can throw easy psychological problems at these networks and expect good answers. However, when researchers try to construct more complex networks to tackle more cognitively complex problems, the networks fail to deliver. This is because they have not yet used a simple trick that animals have been using for hundreds of millions of years: gradual and progressive myelination.

As we progress from infancy through childhood our brains undergo various biological changes. These changes cause our level of analysis to slowly progress from analyzing brief sensory experiences to analyzing complex, abstract scenarios. We begin our lives only being able to notice and attend to interactions occurring on short time scales. By adulthood, with the prefrontal cortex fully developed, we find ourselves able to follow interactions occurring on long time scales. In order to develop the ability to think about complex things we had to spend almost two decades gradually altering our brain’s processing strategy. It is a scaffolding process where we focus on the simplest things first, and use basic knowledge about them to advance incrementally to more complex things. The fact that all humans, and mammals in general, do this strongly suggests that it plays a role in the acquisition of advanced intelligence. In this entry I will argue that this developmental process will be instrumental in training superintelligent AI.

This gradual process of brain development is made possible by myelination. Myelin is a fatty substance that wraps around axons, the fibers that carry signals between neurons. Vertebrate animals use it to speed up information transmission between cells: myelin increases the rate at which electrical impulses travel. But vertebrates aren’t born with all the myelin that they will need as adults. Instead myelin develops slowly, in specific areas, one at a time. Once a brain area has developed valuable, reliable, and consistent knowledge, the connections formed by learning are solidified by the introduction of myelin.

The order of brain areas affected by myelin is consistent across all mammals. The early sensory areas are the first cortical areas to develop myelin. One of these, the primary visual area, starts to myelinate shortly after birth as the infant gains visual experiences. These early visual areas are responsible for basic visual perception and don’t rely on trial and error interactions with the environment. Rather they involve responding to visual stimuli that are presented simultaneously without any time delay between appearances. This happens when you see a picture of a house; you generally see the roof, windows, and door all at once without experiencing much of a time delay between these stimuli.

The last areas to myelinate are the association cortices and the prefrontal cortex (PFC). The PFC does not generally finish myelinating until one reaches the age of 18 or older. This means that the PFC does not “trust” that it has been wired up correctly until almost two decades into life, whereas the visual system “trusts” that it has been wired correctly within the first two years. This is because sensory stimuli are generally honest and all show up at the same time, whereas complex events are constructed from stimuli that are separated from each other by delays in time. Understanding the relationships between events that are not simultaneous requires careful, logical inferences about causality. For example, the sale of a house is an abstract concept that involves parties, contracts, and delays that can last for weeks or months. This is why children aren’t licensed to sell houses.

It takes time to learn to make complex inferences that involve delays in time. It is probably the case that the process of myelination during development involves the progressive accumulation of knowledge that supports and buttresses more complex knowledge. In other words, as simple things are mastered in early cortical areas they provide the basis for new learning in the late cortical areas. In the same way, many brief, simple experiences create the knowledge base needed to start to understand long, complex experiences with more advanced probabilistic structure. The layers at the bottom of the hierarchy must be trained before the higher layers can find regularities and statistical structure within them. But as you can see in the diagram below, the top of the hierarchy falls in the middle between sensory input and motor output. To properly train sensory input and motor output it is imperative that they be connected to each other, and can interact with each other to drive behavior, long before the association areas interposed between them are brought to the table.



Many AI researchers point out that the things that AI and neural network systems today can accomplish are things that can generally be accomplished by an adult human brain in under a second. This means that they can only do things that we do unconsciously, such as near-instantaneous pattern recognition. Today’s AIs can recognize houses but could not recognize, understand, or broker the sale of a house. What AI is able to do are the kinds of things that we are able to do with our primary sensory and motor areas. This is because these systems are designed like primary cortical areas. They do not feature reciprocal interactions between various structures organized into a brain-like hierarchy. Very few AI architectures exist today that connect primary areas with association areas and a PFC. Those that do don’t use anything like the process of myelination. Rather, in existing AI all of the areas, from the simple to the advanced, come online at the same time. I think these systems should use something analogous to the process of myelination because it would help them in their acquisition of knowledge. If they did, here’s how they should go about it:

First you would need a number of neural networks of pattern-recognizing nodes. These networks would take inputs from the environment, each corresponding to a different sensory modality. These early networks would then be linked together in a hierarchy where unimodal networks form inputs to multimodal networks, which in turn form inputs to even more densely multimodal networks above them. This “multimodal fusing” is depicted in the figure. The densely multimodal networks would be the association networks, and at the top of this hierarchy would be the PFC, which would also be connected directly to the early motor networks. The nodes of the association and PFC networks would exhibit sustained firing. Importantly, this sustained firing, the activity of the association networks, and their influence over ongoing processing elsewhere would start out extremely meager and increase over time. These capacities could be increased as the system exhibits proficiency at simple tasks, such as object recognition, scene classification, and simple motor movements. As the association areas are added to the system, a capacity to plan and to make higher-order inferences and classifications could be expected.
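The “start meager, then increase” idea above can be sketched in a few lines of code. Here is a toy Python illustration (not a working AI system): the association layer’s influence over ongoing processing is controlled by a “myelination gate” that stays near zero until the lower sensory-motor layers demonstrate proficiency at simple tasks. The function names, threshold, and steepness values are my own illustrative assumptions.

```python
import math

def myelination_gate(proficiency, threshold=0.9, steepness=20.0):
    """Sigmoid gate: association-layer influence stays near zero until
    the lower (sensory/motor) layers reach proficiency on simple tasks,
    then ramps up smoothly -- a stand-in for progressive myelination."""
    return 1.0 / (1.0 + math.exp(-steepness * (proficiency - threshold)))

def forward(sensory_out, association_out, gate):
    """Blend the fast sensory-motor pathway with the slow association
    pathway; early in 'development' the gate silences the latter."""
    return [(1 - gate) * s + gate * a
            for s, a in zip(sensory_out, association_out)]

# Early in development: low proficiency, association areas nearly silent.
early = myelination_gate(proficiency=0.5)
# Late in development: lower layers mastered, association areas dominate.
late = myelination_gate(proficiency=0.99)
```

Early on the system’s output is driven almost entirely by the sensory-motor pathway; the association pathway is only “brought to the table” once the basics are mastered.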

One important concept that I haven’t explained yet is that the first areas to myelinate in the brain, the sensory areas, have neurons of a single modality (e.g. either vision or hearing) that fire for short durations. The association areas and the PFC on the other hand have multimodal neurons (e.g. both vision and hearing) that fire for long durations. As in the mammalian brain (Huttenlocher & Dabholkar, 1997), sensory areas should mature (myelinate) early in development, and association areas should mature late. This will cause the capacity for sustained firing to start low, but increase over developmental time.

Postponing the initialization of association networks in this way would allow the formation of low-order associations between causally linked events that typically occur close together in time. This would focus the system on easy-to-predict aspects of its reality (e.g. correlations between occurrences in close temporal proximity). The consequent learning would erect a reliable scaffolding of highly probable associations that can be used to substantiate higher-order, time-delayed associations later in development (Reser, 2016). In other words, the rate of iterative updating from one state to the next (Fig. 9) would start very high. This would be reversed over the course of weeks to years as an increasing capacity for working memory was folded into the system.
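As a hedged sketch of this trade-off, the toy functions below (all names and constants are my own illustrative assumptions, not taken from Reser, 2016) show how raising the persistence of working-memory activity lets a stimulus trace survive a delay, while the rate of iterative updating falls in step:

```python
def developmental_schedule(t, t_max, max_persistence=0.99):
    """As developmental time t runs from 0 to t_max, the persistence of
    working-memory activity ramps up, so the rate of iterative updating
    (here simply 1 - persistence) ramps down."""
    persistence = max_persistence * (t / t_max)
    return persistence, 1.0 - persistence

def trace_after_delay(persistence, delay_steps):
    """Fraction of a stimulus trace still active after a delay, when the
    trace decays by a factor of `persistence` at every time step."""
    return persistence ** delay_steps

# An "infant" system (low persistence) loses the trace across a delay;
# a "mature" system (high persistence) can bridge the delay and so can
# associate two temporally separated events.
infant = trace_after_delay(0.5, delay_steps=10)
mature = trace_after_delay(0.99, delay_steps=10)
```

In this caricature the infant system retains less than a thousandth of the trace after ten steps, while the mature system retains about ninety percent of it, which is what would let it "fire and wire together" representations that never co-occur.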

Nature has found that it doesn’t pay to let the multimodal neurons capable of sustained firing come online until the basics are learned first. I strongly suspect that AI network engineers will find this too. For the sake of progress I just hope that this myelination/development feature is implemented and perfected sooner rather than later. Given the rapid processing in computers, and the sheer amount of data available to them, I don’t think that this process will take 18 years in an AI as it does in a human. But I strongly believe that it is necessary for any developing thinker to start with the elementary inferences first.


An article that I wrote, which can be found here, explains this in more detail.

https://www.sciencedirect.com/science/article/pii/S0031938416308289

Here is an excerpt from that article. 



"Due to their sustained activity, neurons in the PFC can span a wide delay time or input lag between associated occurrences [35][89] and thereby allow elements of prior events to become coactive with elements of subsequent events. Sustained activity allows neurons that would otherwise never fire together to both fire and wire together, and also allows features that never co-occur in the environment to be present together in topographic imagery. Thus, it may be reasonable to assume that SSC underlies the brain's ability to make internally derived associations between representations that never occur simultaneously in the environment. The longer sustained firing in association cortex lasts, the better the animal will be at capturing information about causally linked stimuli that present apart in time. The longer the sustained firing, the longer the delay can be. The same regularity may happen persistently in the environment, where a stimulus is followed several seconds later by another stimulus, concern, or opportunity; however, if the animal lacks sufficient sustained firing, this statistical regularity will not be captured by the neocortical system because the ensembles for them will never be exposed to each other.
Few if any mammals have evolved a human-like capacity for sustained firing in PFC neurons, and thus the mental lives of most mammals likely involve associations made between temporally proximate stimuli and concepts. This may suggest that in most ecological niches it is not helpful to create memories for relationships between stimuli that occur in delayed succession and instead it is better to focus on analyzing stimuli that present in quick succession [68][72]. There may therefore be two strategies, on opposite ends of a continuum, for holding recent information active: immediate and delayed succession strategies. The delayed succession strategy, involving high sustained firing and a low rate of working memory updating, is optimal for environmental scenarios that are prolonged over time, where temporally distant cues may retain contextual relevance. This strategy is likely associated with certain ecological or life-history conditions such as low extrinsic mortality, intergenerational resource flows, meme transference, and the K-selection strategy in general.
How can the brain trust that an association between two concepts that are removed in time and never co-occur simultaneously in the environment is valid? Each of the contents of working memory contribute to the selection of the next addition to working memory, and this may help to ensure that the contents held in working memory at any moment are veridically concordant rather than incongruous. This is because the system is narrowly constrained to only combining ensembles that have been highly associated in the past. If this is true, it suggests that at an early age the first associations are between stimuli that are nearly simultaneous, but that these can create foundational knowledge upon which to base reliable inferences about associations between stimuli that are removed from each other by a delay in time.
Because the frontal lobes of infants are underdeveloped, their brains probably exhibit far less continuity between brain states. Very young children can trust the connections that their early sensory areas have made concerning the spatiotemporal associations between near simultaneous features because these events show high order and regularity. This may be why sensory areas myelinate so early in life. Perhaps association areas are programmed genetically not to finish myelinating until early adulthood because it is a time-intensive process to form and test higher-order hypotheses about relationships between constructs that are more distributed through time."



Wednesday, March 4, 2020

How To Build Your Own AI-Ready Computer


My interest in artificial intelligence has driven me to learn more about computers. In 2019 it influenced me to build my own computer. I learned as much as I could for several months, got a few certifications in the process, and then dove right into a build. It was much easier than I thought, very fun, and very rewarding. Building a computer is a joy and this blog post recounts how you can do it yourself. The post is divided into four different sections: 1) the computer that I built, and how I built it, 2) the software I installed on it to help me learn more about AI, 3) the software I installed for recreation and productivity, and 4) the learning path that I took to prepare me to build it, that you can take too. If you would like to build your own computer, and are interested in tinkering with machine learning, neural networks, or AI, then I hope this short guide helps.

Building Your Own Computer

I strongly suggest watching a computer build before you purchase your parts. Put the words “how to build a PC” into YouTube and plan to spend an hour watching a YouTuber put a computer together. This was the video I watched first that got me inspired: https://youtu.be/IhX0fOUYd8Q You will learn a lot, and be really glad that you did it. You might want to take notes. After you do this you will be ready to tinker meaningfully with a used computer. I advise you to disassemble a used computer first because it is good, cheap preparation. Below is a map of some of the basic components on the motherboard that you are going to come across.



Before I built the computer that I am using now, I bought two cheap, used computers from a dingy computer store in a bad part of town. The guy was selling old Dells and HPs with pirated copies of Windows 10. He was selling them for around $25-100, but you should be able to find something cheaper. The first computer that I bought from him I simply dismantled completely. I took every nut and screw out of it, and took everything that came apart, apart without actually breaking anything. This was a fantastic learning experience and I strongly recommend it.


  

The second used computer I bought I took apart with some friends and their kids. Again, it was fun and it felt empowering to explain to them what the electronic components were and how they worked together. I took notes about what screws go where, and I took before and after pictures because I wanted to be able to put it back together.








I didn’t, however, put these components back in the original Dell case. Instead I used different parts from both computers and assembled them into a new computer case. I got this new case from Fry’s Electronics. It was an attractive white box, with a clear viewing window and fans. I picked it up for $40 on sale. It took some trial and error, but once everything was plugged in correctly it booted up fine. I still use this computer today, and I have connected it via an ethernet cable to the new build (which I am going to describe next) so that I can have both computers running and communicating in tandem.
 


Next it was time to buy all new parts and create my own build from scratch. It takes a lot of research to determine whether all of the parts you want are interoperable. Many parts that you find on Amazon or Newegg don’t play nicely together, but lucky for us there is www.pcpartpicker.com. This site helps you double-check your parts list and make sure that all of the components you think look cool, and have the specs you are willing to pay for, are going to work together flawlessly. Once you ensure this you can go to your local computer store, or to Amazon, and buy everything on your list. These are the items that you are going to need.

Parts You Will Need To Buy
·       Central processing unit (CPU) with heatsink and fan
·       Motherboard
·       Hard Drive (HDD or SSD)
·       RAM
·       Case
·       Fans
·       Video card
·       Power Supply Unit (PSU)
·       Network Interface Card (NIC)
·       Optical Drive, Bluray or DVD (optional)
·       Keyboard
·       Mouse
·       A copy of Windows
·       Antivirus software
·       Productivity Software
·       HDMI cord
·       KVM, TPM, colored cables, colored power cord  (all optional)


 





I recommend choosing your CPU first. The CPU really is the brain of your computer, and you are going to want to carefully select the processing speed, cache memory, and performance that meets your budget. You will likely choose an AMD or Intel processor. I chose the AMD 2700X (because I liked the flashy colored fan). After you choose a CPU you then select a compatible motherboard. You carefully seat the CPU into the motherboard, apply a little thermal paste to the lid, attach the included heatsink and fan, and then place the whole thing into your case. Then you tighten all the screws to firmly connect the two. Next you snap your RAM sticks into the motherboard. You must then connect the power supply to the case, the video card to the motherboard, and the hard drive to the motherboard. All of this is depicted in my rough, hand-drawn figure below. If you make all of these connections as shown in the drawing, it’s probably going to turn on.




After putting all of the parts together, you plug it in, press the power button, and hope that the lights come on, and the fans and the hard drive start spinning. If they do then you connect it to a monitor and see if the motherboard’s BIOS will start. If it does then you can introduce the computer to an operating system like Windows or Linux saved on a DVD, or a USB thumb drive. You follow the directions, and install the OS. Then you can start thinking about the next two sections of this blog: installing AI software, and installing software for entertainment, productivity, and ease of use.





If you want to run neural networks, or another form of robust machine learning algorithm, on your computer you will want high-quality components. CPU cache, RAM, and the GPU are the most important aspects. You might even want to invest in a new M.2 SSD. However, if you are just learning AI you can play around with most AI software using even very basic, low-budget computers. If, on the other hand, you are very serious about running computationally expensive models, you will probably eventually want to run them remotely on a cloud provider platform (like Amazon or Google).

Software You Can Use to Tinker with Artificial Intelligence

Lucky for us, most of the important AI related software is free. This is a quick guide to some of the applications that you might be interested in.

Neuronify: I recommend an app called Neuronify. It creates a simple workspace where you can build neurons, connect them, and watch them fire at, and respond to, each other. Playing with the options and completing the tutorials helps to build important intuitions about how neural networks work. You can visit the website here: https://ovilab.net/neuronify/ Before you get bored of it, definitely download some of the highest rated workspaces by other users and you will be treated to some complex and fascinating models.
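If you want to peek under the hood of what Neuronify animates, here is a minimal leaky integrate-and-fire neuron in plain Python. This is the generic textbook model, not Neuronify’s actual implementation, and the threshold and leak values are arbitrary choices for illustration:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, steps=100):
    """Minimal leaky integrate-and-fire neuron: the membrane voltage
    leaks a little each step, integrates the input current, and emits
    a spike (then resets) whenever it crosses threshold."""
    v, spikes = 0.0, []
    for t in range(steps):
        v = v * leak + input_current   # leak, then integrate
        if v >= threshold:
            spikes.append(t)
            v = 0.0                    # reset after spiking
    return spikes

weak = simulate_lif(input_current=0.05)   # leak wins: never spikes
strong = simulate_lif(input_current=0.3)  # spikes at a regular rate
```

A weak input leaks away before the voltage ever reaches threshold, while a stronger input drives regular spiking; wiring several of these units together is essentially what Neuronify lets you do visually.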

TensorFlow Playground: You can play with a neural network straight from the internet without having to download any software. To do this use Google’s excellent tutorial here: https://playground.tensorflow.org/ First watch a YouTube video explaining how to use this resource, and then play with it as much as you can to develop firsthand knowledge about how machines can use neural networks to learn, and how fundamental AI algorithms work. It is all about little entities that talk to each other, collectively produce an output, and then learn from their mistakes.
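That “learn from their mistakes” loop can be boiled down to the classic perceptron, shown below in plain Python. This is the textbook mistake-driven update rule, not the Playground’s actual code (the Playground trains multi-layer networks with backpropagation); the learning rate and epoch count here are arbitrary:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Tiny perceptron trained by the classic mistake-driven rule:
    nudge the weights only when the current prediction is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred          # 0 when correct; no update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

After a handful of epochs of correcting its own mistakes, the learned weights classify all four AND examples correctly.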

Nengo: A favorite researcher of mine named Chris Eliasmith has created a spiking neural network simulation application called Nengo. This is an excellent “brain making package” that lets you build, test, and deploy your own neural networks using Python. The tutorials are excellent, and make you feel like you have a foot in the door with artificial intelligence. Find out more at: https://www.nengo.ai/ You have to download Python before you can run it, which brings us to Python…

Python: If you are serious about machine learning, neural networks, or other popular forms of AI or data science software, I strongly recommend learning Python. The newest version of Python can be downloaded for free from the official website. Python is one of the hottest programming languages, and one of the easiest to learn. It is fantastic for automating things, and is necessary for anyone who wants a future in AI, especially neural network engineers. The download link is here: https://www.python.org/downloads/
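As a small taste of the kind of automation Python makes easy, here is a hypothetical little script (the function name and folder are my own inventions, using only the standard library) that tallies how much disk space each file type uses in a folder:

```python
from pathlib import Path

def space_by_extension(folder="."):
    """Walk a folder recursively and total up the bytes used by each
    file extension -- a five-minute Python automation project."""
    totals = {}
    for path in Path(folder).rglob("*"):
        if path.is_file():
            ext = path.suffix.lower() or "(no extension)"
            totals[ext] = totals.get(ext, 0) + path.stat().st_size
    return totals

if __name__ == "__main__":
    # Print the tally for the current directory, largest first.
    for ext, size in sorted(space_by_extension().items(),
                            key=lambda item: -item[1]):
        print(f"{ext:>16}  {size:,} bytes")
```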

There are many software packages that you can use along with Python to start building world-class AI projects. These include TensorFlow, Keras, pandas, scikit-learn, and many more. You will probably want to look into these for yourself.

PyCharm: A great development environment for Python. It makes writing and keeping track of your Python code much easier, and it looks spiffy.

Code::Blocks: A free IDE for coding in C++. http://www.codeblocks.org/downloads

GitHub: You can host all of the code that you will be writing on GitHub so other people can access it. I have posted a few beginner’s tutorial Python scripts on GitHub, and you can see them here: https://gist.github.com/jaredreser

Octave: A free alternative to Matlab, one of the most powerful mathematics software packages available. Most code you write in Octave can be run in Matlab, and vice versa. Again, there are many helpful online tutorials for Octave that can help you learn it in no time. https://www.gnu.org/software/octave/download.html

MySQL: If you are interested in data science or database management, you can download free “Structured Query Language” (SQL) database management software, such as the MySQL Community Edition.
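If you want to try SQL immediately with zero setup, Python’s standard library ships with SQLite, which speaks the same core SQL you would later use with MySQL. The table and values below are made up for illustration:

```python
import sqlite3

# A throwaway in-memory database: nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (name TEXT, price REAL)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [("CPU", 300.0), ("GPU", 450.0), ("RAM", 120.0)])

# The same SELECT syntax works in MySQL.
total, = conn.execute("SELECT SUM(price) FROM parts").fetchone()
conn.close()
```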

R Studio: Free top of the line statistical software, with many free tutorials online. https://rstudio.com/products/rstudio/download/

Arduino: I strongly recommend ordering an Arduino starters kit. They will send you a number of electronics parts, sensors, and motors. You use them to build you own microcomputer, perform a number of experiments, and build your own gadgets and robots. You upload the code from your computer to the Arduino microcontroller and get your Arduino to do all sorts of interesting things. The best part is that you can see all of the code, and can rewrite or alter the code if you wish. I ordered an Arduino and an Elegoo (cheaper, and Arduino compatible) kit on Amazon, and completed all of the lessons. I got my friends involved, I built a ton of things I never thought I would, it was a blast. Check it out here: https://www.arduino.cc/

You might want to download a cheap copy of Go. It is an ancient board game, played with black and white stones, that has vastly more possible positions than chess. It is not easy to become a grandmaster, but it is very easy to start playing against a friend or the computer (on easy). Playing the game of Go a few times against a computer will really help to give you an understanding of how Google’s AI “AlphaGo” defeated the best Go players in the world.

Linux: Did you know that you can download distributions of Linux from the Windows Store and run them straight from Windows? This is an excellent way to become familiar with command line programming and to familiarize yourself with Linux, the world’s most popular, open source operating system.

I am now learning everything I can about AI on Coursera. Try Andrew Ng’s “AI for Everyone,” then try out his “Machine Learning” course. They are free, although you have to pay a little if you want a printable certificate. I am currently working on IBM’s Data Science and AI specialist courses on Coursera. They are great! https://www.coursera.org/

Software You Will Want to Keep Your New Computer Running Smoothly

You should download the software intended for your CPU and GPU. That way you can keep track of temperature, overclocking, and you can come to understand more about their inner workings. For me this was “AMD Ryzen Master” software for my CPU, and “GeForce Experience” for my GPU.

Chrome and/or Firefox: download one or both for an excellent, free internet browser.

Docs, Sheets, and Slides: If you don’t feel like paying Microsoft for Word, Excel, and PowerPoint, then you can simply use Google Docs, Google Sheets, Google Slides, and Google Forms for free. This is a web-based office suite offered by Google and integrated with Google Drive. It allows users to create and edit documents, spreadsheets, and presentations online.

You should download Adobe Reader from the Adobe website so that you can read and manipulate PDFs. https://get.adobe.com/reader/

I recommend CPU-Z, which is a simple but interesting application that allows you to find out all of the specs for your CPU, and watch the performance metrics change in real time.

You will probably want to download something like Steam so that you can buy, download, and play computer games.

You should download DVD playing software to read discs from your optical drive if you have one.

Retroarch: This is free software that you can use to play emulations of old videogames.  https://www.retroarch.com/

FL Studio: For around $100 you can download a state of the art music producing software package that is really fun to work with. Watch a 15 minute tutorial video and you’ll be creating your own beats in a matter of seconds. You might want to download the free version of Virtual DJ while you are at it.

How to Prepare For Building a Computer and Get Certified in the Process

I knew, before I built myself a new computer, that I wanted to be more informed about computer science in general. I figured that I should get A+ certified through CompTIA. The A+ certification is basically a computer repair technician’s license and takes about 100-200 hours of studying to prepare for. To prepare for it you can first take the ITF+, which is a great introduction to the concepts and takes merely 20-40 hours of study. The information that you gain is very empowering and will teach you not only how to build a computer but how to take care of it, troubleshoot it, customize it, and set up your own home network. You will learn how to connect every computer in your house, the best ways to back up your data, and how to fix your friends’ computers too. If you are interested in AI but do not already have a computer science degree, then a firm understanding of computer hardware and OS usage is very helpful.

In studying for the A+ computer repair technician certification I also acquired the “information technology fundamentals” cert, the ITF+, and the Project+ (project management) cert as well. It was fun preparing for these exams, and I used YouTube and classic study guides by Mike Meyers and Quentin Docter. I strongly recommend the PBS Crash Course Computer Science series on YouTube: https://www.youtube.com/playlist?list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo After finishing the eight hours you will feel like you earned a BA in CS. My other favorite YouTube prep guide for the A+ was PowerCert, which was very helpful. https://youtu.be/2eLe7uz-7CM I watched tech videos from YouTube every night before bed and was introduced to countless fascinating new ideas and concepts.

Here is the playlist for some of my favorite AI youtube videos:


And here is the playlist for some of my favorite computer science videos:


I also used several different apps on my phone to learn about coding and computer science, such as Mimo, Codecademy, Grasshopper, Sololearn, Py, and Code Playground. These all have free content and were very informative. Start by downloading the trial version of Mimo, and see what you think.

Finally I want to leave you with a table that shows some of the computers that I have owned over the years. You can see how the specs have advanced due to accelerating returns in the technology sector. I find the exponential progress in computing to be fascinating and exciting. We are in the information age. Get caught up in it, you may be glad you did.

Year | Computer         | CPU Speed (Hz) | RAM Memory (Bytes) | Hard Drive Storage (Bytes)
-----|------------------|----------------|--------------------|---------------------------
1986 | Apple II GS      | 2,800,000      | 256,000            | 20,000,000
2004 | Dell Desktop     | 3,200,000,000  | 3,500,000,000      | 80,000,000,000
2006 | Sony Desktop     | 2,800,000,000  | 2,000,000,000      | 150,000,000,000
2009 | HP Desktop       | 3,200,000,000  | 8,000,000,000      | 400,000,000,000
2009 | HP HDX Laptop    | 2,130,000,000  | 8,000,000,000      | 500,000,000,000
2013 | Dell Laptop      | 2,000,000,000  | 8,000,000,000      | 200,000,000,000
2014 | Sony Laptop      | 1,800,000,000  | 8,000,000,000      | 900,000,000,000
2014 | Dell XPS Desktop | 3,600,000,000  | 16,000,000,000     | 1,000,000,000,000
2015 | Apple MacBook    | 1,100,000,000  | 8,000,000,000      | 128,000,000,000
2020 | Home Made PC     | 4,300,000,000  | 32,000,000,000     | 2,000,000,000,000
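As a quick sanity check on that exponential trend, the snippet below estimates the doubling time implied by the RAM column of the table above: from 256 KB in the 1986 Apple II GS to 32 GB in the 2020 build. It works out to roughly one doubling every two years, in the spirit of Moore’s law:

```python
import math

# RAM grew from 256 KB (1986 Apple II GS) to 32 GB (2020 home-made PC).
ram_1986, ram_2020, years = 256_000, 32_000_000_000, 2020 - 1986
growth_factor = ram_2020 / ram_1986            # 125,000x over 34 years
doubling_time = years * math.log(2) / math.log(growth_factor)
# doubling_time comes out to roughly two years per doubling.
```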