Tuesday, June 16, 2020

The Stream of Thought is a Looping Vector


In 2005 I was waiting for the bus, wondering about consciousness, and doodling on the back of a journal article I had printed out. In doing so, I convinced myself that the stream of thought can be described by a single line that loops every few centimeters. A rational line of reasoning that stays on topic would loop and then continue to run in the same direction (vector) that it was in originally. A series of loose associations, on the other hand, would exit the loop at an unpredictable tangent and start off in a new direction, irrespective of the direction of the line before it. In this model, the direction of the line indicated the direction of the stream of thought. Thus a line that continues in the same direction will progress toward a goal, whereas a line that is constantly changing directions might be making interesting new associations, but has no long-term intention or goal. The pictures below illustrate the difference between these two strategies or modes of operation.



This idea, and these doodles, strongly influenced my understanding of the thought process and went on to shape my model of working memory and consciousness (Reser, 2011; 2016). However, I never wrote anything about the doodles or elaborated on them. Let's do so here. Specifically, let's consider this model relative to the current cognitive neuroscience of working memory, and my 2016 model of the stream of thought.

A line that loops is changing course and taking a detour, perhaps fleshing out a related problem. This means that the items or representations coactive in the focus of attention have changed and are iterating through a different scenario. But if, after the loop, the line of thought regains its original course, then it will return to the items it was previously holding and continue to iterate. The detour (loop) may have introduced important new representations that will help it toward its goal. For example, you may be thinking about what will be involved in your bus trip to another city. Once you board the bus in your imagination you realize that you have to pay, and you model what it will be like to purchase a ticket to ride. Your interaction with the bus driver and your wallet pulls you away from your thoughts about the trip itself. Now you could forget about the trip and go on thinking about your wallet, how old and worn it is getting, and what kind of wallet you want to replace it with. This would involve a change of course for your vector, or line of thought. Or you could imagine yourself paying the fare and resuming the next step of your trip, such as finding out where you need to get off. This would involve your line of thought looping around another set of mental representations, but then returning to the original representations (bus, trip, destination, etc.).

Working memory is thought to have two major components: 1) the focus of attention, and 2) the short-term store. As you transitioned from thinking about your trip, to thinking about paying the fare, and then back to thinking about the trip again, the items related to the trip were transferred between these stores. They went from the focus of attention, to being saved temporarily in the short-term store, and then back into the focus of attention. In other words, the short-term store is a holding depot for lines of thought that are deemed important and that we may need to return to. If, instead, you had just kept thinking about your wallet, the short-term store would not have been needed, and the result would have been a loose association with little connection to the recent past. Schizophrenia, Alzheimer's disease, and many other brain disorders are characterized by reductions in the capacity and duration of the short-term store, which is why thought is so often derailed in the people who have them.
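To make the two-store idea concrete, here is a toy sketch in Python. It is my own illustration, not code from any of my papers; the item names and the idea of representing each store as a set are assumptions made purely for the example.

# Toy illustration: items detour out of the focus of attention into the
# short-term store, then return to the focus when the loop closes.
focus_of_attention = {"bus", "trip", "destination"}
short_term_store = set()

# The detour begins: park the current line of thought and load the new one.
short_term_store |= focus_of_attention
focus_of_attention = {"bus driver", "wallet", "fare"}

# The loop closes: retrieve the parked items and resume the original line of thought.
focus_of_attention = short_term_store
short_term_store = set()

print(focus_of_attention)  # {'bus', 'trip', 'destination'}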

In my 2016 model of working memory (Reser, 2016) I use uppercase letters to denote items of thought. When two successive thoughts share a large amount of content, they share a large proportion of letters (e.g. thought one = A, B, C, D and thought two = B, C, D, E). When two successive thoughts share less content, they share fewer letters and thus carry less continuity (e.g. thought one = A, B, C, D and thought two = C, D, E, F). In the first example the two states shared most of their active representations (B, C, and D). In the second example, though, the two states shared only two representations (C and D). The next figure applies this general model to the discussion of lines and loops above.



As you can see, in the first figure each loop introduces a single new representation but then returns to A, B, and C. This represents a prolonged process of thinking about the same concepts in different, connected contexts. In the second figure none of the concepts under consideration are maintained after the loop. There is still continuity between two successive states, but not across three states. This is clearly more chaotic. I think these two modes of operation represent two ends of a continuum, and that they correspond to Kahneman's type one (thinking fast) and type two (thinking slow), with intermediate rates of updating between the two extremes.
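A small code sketch may make the letter notation clearer. This is my own toy formalization (the paper does not prescribe a particular overlap measure; using the Jaccard overlap here is an assumption): continuity between two successive states is simply the proportion of coactive items they share.

# Toy sketch: continuity between successive thoughts as the overlap of their item sets.
def continuity(state_a, state_b):
    """Fraction of all active items that two successive states share (Jaccard overlap)."""
    return len(state_a & state_b) / len(state_a | state_b)

# Looping but goal-directed: each update swaps in one new item and keeps most of the rest.
print(continuity({"A", "B", "C", "D"}, {"B", "C", "D", "E"}))  # 0.6
# Loose association: little is carried over from one state to the next.
print(continuity({"A", "B", "C", "D"}, {"C", "D", "E", "F"}))  # ~0.33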

What do you think? Can you see how thought might be taking constant detours that temporarily interrupt continuity so that new content can be introduced to the line of persistent content? Do you ever notice that your train of thought breaks away for a few seconds only to come back more inspired and informed? How about the opposite: can these respites and detours be distracting?


Reser, J. 2016. Incremental change in the set of coactive cortical assemblies enables mental continuity. Physiology and Behavior, 167: 222-237.

Why Having Right and Left Cortical Hemispheres Might Be Important for Superintelligent AI


The cerebral cortex can be divided right down the middle (sagittally) into two, nearly identical hemispheres. It has become clear to neuroscientists in the last few decades why our brain has right and left halves. Far from being redundant, they each process much of the same information in slightly different ways, leading to two complementary and cooperating worldviews. I will explain here why this organization (called hemispheric lateralization) would be beneficial for AI.

Scientists that design AI systems are interested in how to implement important features of the brain inside a computer. Neural networks simulate many of these features today including neurons, axons, and their hierarchical structure. However, neural networks are missing many of the human brain’s key information processing principles. Hemispheric laterality may be one such engineering principle that could give current AI systems the boost they need to reach artificial general intelligence. The main benefit would be that over developmental time these two innately different networks would reprogram each other by being exposed to each other's outputs. In essence, two dissimilar heads are better than one. 



Getting a neural network architecture (e.g. Reser, 2016) to benefit from hemispheric laterality would actually be very easy. All you have to do is duplicate the network you already have and then connect the two copies via a large number of high-bandwidth weighted links that would act as the corpus callosum (the bundle of tracts that connects the right and left hemispheres in mammals). The connections between them should respect the brain's natural bilateral symmetry, connecting similar and dissimilar areas across both hemispheres. If you had a million-dollar supercomputer running your AI system and you wanted to create a whole other hemisphere, you would have to buy another million dollars' worth of hardware, but it may well be worth it. Next, let's talk about how the two hemispheres would differ.
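Here is a minimal sketch of that duplication step, written in Python with PyTorch. Everything in it (the layer sizes, the single "callosal" linear layer in each direction, the simple two-pass exchange of activity) is my own assumption about how one might wire it; it shows the shape of the idea, not a tested architecture.

# Sketch: duplicate one network into two "hemispheres" and couple them with
# learned weighted links standing in for the corpus callosum.
import torch
import torch.nn as nn

class Hemisphere(nn.Module):
    def __init__(self, in_dim=64, hidden=128, out_dim=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.readout = nn.Linear(hidden, out_dim)

    def forward(self, x, callosal_input=None):
        h = self.encode(x)
        if callosal_input is not None:
            h = h + callosal_input          # mix in what the other hemisphere "says"
        return h, self.readout(h)

class Bicameral(nn.Module):
    def __init__(self):
        super().__init__()
        self.left = Hemisphere()
        self.right = Hemisphere()
        # high-bandwidth weighted links acting as the corpus callosum
        self.callosum_lr = nn.Linear(128, 128, bias=False)
        self.callosum_rl = nn.Linear(128, 128, bias=False)

    def forward(self, x):
        # first pass: each hemisphere processes the same input independently
        h_l, _ = self.left(x)
        h_r, _ = self.right(x)
        # second pass: each hemisphere re-processes with the other's output mixed in
        _, out_l = self.left(x, callosal_input=self.callosum_rl(h_r))
        _, out_r = self.right(x, callosal_input=self.callosum_lr(h_l))
        return (out_l + out_r) / 2          # the combined "two heads" answer

model = Bicameral()
print(model(torch.randn(1, 64)).shape)      # torch.Size([1, 32])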

Each hemisphere of the brain processes information slightly differently. Although the macrostructure of the two hemispheres is almost identical, they differ microstructurally. Many researchers believe that this is because the average axon in the right hemisphere is slightly (microscopically) longer than in the left. This means that the right hemisphere has relatively more white matter (axons) and the left hemisphere has relatively more grey matter (cell bodies). It also means that, on average, the cells of the right hemisphere are farther away from one another. Many theories hold that this longer-range wiring is responsible for the right hemisphere's tendency toward broad generalization and a holistic perspective. The left hemisphere is more densely woven, and this might underlie its aptitude for detailed work and close, quick cooperation between neurons.

People with a left hemisphere injury may have impaired perception of the high-resolution, detailed aspects of an image, whereas those with a right hemisphere injury may have trouble seeing the low-resolution, big-picture aspects; in other words, they miss the forest for the trees. Moreover, attending to the Ds in the figure below activates the left hemisphere, whereas concentrating on the "L" that these Ds form activates the right.


D
D
D
D
D      D      D      D      D     


It takes messages just a tiny bit longer to travel from one neuron to another in the right hemisphere. Neuroscientists think that this causes the two hemispheres to process the same informational inputs slightly differently. Each side learns during its development while organizing its thoughts according to a different algorithm, and this leads each hemisphere to become a master of its unique way of perceiving and responding to the world. This discrepancy in temporal processing parameters makes the feedback and crosstalk between these two non-identical specialists meaningful. They act like two specialists that can check, balance, reconcile, compare, and contrast their approaches. If they processed information in exactly the same way it would be unnecessary to have two, but because they don't, they provide a kind of stereoscopic view of the world, similar to the view provided by our two offset eyes. This enables them to form their own perceptions and opinions and then reconcile them with one another. Right now, no AI systems have anything like this.


It would be easy for the AI architect to take two identical neural networks and then alter each one so that each is capable of generating a different, but equally valid, perspective on reality. There are countless parameters that could be fine-tuned to do this, but they should probably start with the average connectional distance between nodes in the network. If the hardware were neuromorphic they could literally increase the length of the axons; if the network were rendered in software, brief variable pauses could be introduced on the links. In a computer these variables could be changed at any time according to processing priorities. In other words, if the AI system anticipated that its left hemisphere should "lean" even more to the left to help it solve a particular problem, it could alter these "temporal weights" in real time. Finding the optimal parameters would be too complex for human trial and error; instead, genetic algorithms would have to be used.
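As a toy illustration of what a "temporal weight" could mean in software, here is a sketch of my own (interpreting the pause as a delay measured in update steps, which is an assumption): each link buffers its messages for a configurable number of steps, so one hemisphere's wiring is simply slower than the other's.

from collections import deque

# Toy sketch: a connection that delivers its messages after a fixed number of update steps.
class DelayedLink:
    def __init__(self, weight, delay_steps):
        self.weight = weight
        self.buffer = deque([0.0] * delay_steps)  # pre-filled so early reads return silence

    def send(self, signal):
        self.buffer.append(signal * self.weight)  # enqueue the new message
        return self.buffer.popleft()              # deliver the message sent delay_steps ago

left_link = DelayedLink(weight=0.5, delay_steps=1)   # short, densely woven wiring
right_link = DelayedLink(weight=0.5, delay_steps=3)  # longer axons, longer conduction time

for step in range(5):
    print(step, left_link.send(1.0), right_link.send(1.0))
# The right "hemisphere" receives the same messages, just two steps later.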

Our brain's two hemispheres also differ in their value systems. Our left hemisphere is dedicated to approach behaviors, and our right hemisphere is dedicated to withdrawal behaviors. Stimulating the left hemisphere of a rat will make it go toward a new object, whereas stimulating the right side will make it back away from that object. The fact that vertebrate brains have maintained this fundamental differentiation between approach and withdrawal for hundreds of millions of years suggests that it might represent an organizing principle that should be used in AI. One way to do this would be to wire up the AI's "subcortical" appetitive and motivational centers (like the ventral tegmental area and the nucleus accumbens) involved in reinforcement learning with the left hemisphere, and to wire up the threat-detection centers (like the amygdala) involved in punishment learning with the right hemisphere. Can you think of a more important distinction between two fundamentally important behavioral influencers? I can't. AI needs two functionally equivalent, dedicated processors, one for approach/liking/curiosity and one for withdrawal/disliking/fear. As I have explained elsewhere, both hemispheres should influence the dopaminergic system and sustained firing so that important rewards and threats can be maintained in mind; however, the left should be focused on approaching those rewards, and the right should be focused on withdrawing from the threats.

In the mammalian brain, one hemisphere is more active (dominant) than the other at any moment in time, and the animal's behavior at that moment is influenced most by the dominant hemisphere. Approach and withdrawal form a pivoting scale for how we act in the world. They also structure our conscious attention, even when we aren't moving or behaving, by allowing us to pivot between interest and disinterest. The AI agent would likewise have to continually select a dominant hemisphere to give priority to either approach or withdrawal at every point in time.
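A minimal sketch of that arbitration step (my own reading of the proposal, with made-up signal names) could look like this: at each time step the agent hands control to whichever value system is currently louder.

# Toy arbitration: left hemisphere handles approach (rewards), right handles withdrawal (threats).
def select_dominant(reward_signal, threat_signal):
    return "left (approach)" if reward_signal >= threat_signal else "right (withdraw)"

print(select_dominant(reward_signal=0.8, threat_signal=0.2))  # left (approach)
print(select_dominant(reward_signal=0.1, threat_signal=0.9))  # right (withdraw)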

As in vertebrate animals, the left hemisphere could be used to control the right half of the AI's robot body and the right hemisphere could be used to control the left half. This could be a good way to ensure that approach and withdrawal have equal potential behavioral outputs. It might seem that this would create a robot whose two sides pull it in different directions, but this is how it works in the brain, and you and I aren't pulled in two directions. The fact that the two networks are so densely connected through the corpus callosum, and that they developed together side by side, probably plays a big role in their cooperativity.

Certainly, widespread hemispheric lateralization in vertebrate animals indicates that laterality is associated with an evolutionary advantage. It is easy to see that its unique features could also contribute to consciousness. This dichotomous organization might be similarly advantageous in creating human-like intelligence, and ultimately superintelligence, in computers.


Reser, J. 2016. Incremental change in the set of coactive cortical assemblies enables mental continuity. Physiology and Behavior, 167: 222-237.

Monday, March 23, 2020

Artificial Intelligence Needs to Utilize the Process of Myelination




Modern AI is capable of some fantastic feats, yet is still very limited compared to the human mind. The disciplines of machine learning and deep learning have shown us that with a powerful computer and a bunch of neurons organized into a network we can throw easy psychological problems at these networks and expect good answers. However, when researchers try to construct more complex networks, to tackle more cognitively complex problems, the networks fail to deliver. This is because they have not yet used a simple trick that animals have been using for hundreds of millions of years: gradual and progressive myelination.   

As we progress from infancy through childhood our brains undergo various biological changes. These changes cause our level of analysis to slowly progress from analyzing brief sensory experiences to analyzing complex, abstract scenarios. We begin our lives only being able to notice and attend to interactions occurring on short time scales. By adulthood, with the prefrontal cortex fully developed, we find ourselves able to follow interactions occurring on long time scales. In order to develop the ability to think about complex things we had to spend almost two decades gradually altering our brain's processing strategy. It is a scaffolding process in which we focus on the simplest things first and use basic knowledge about them to advance incrementally to more complex things. The fact that all humans, and mammals in general, do this strongly suggests that it plays a role in the acquisition of advanced intelligence. In this entry I will argue that this developmental process will be instrumental in training superintelligent AI.

This gradual process of brain development is made possible by myelination. Myelin is a fatty substance that surrounds the connections between neurons (axons). Vertebrate animals use it to speed up information transmission between cells: the myelin increases the rate at which electrical impulses travel. But vertebrates aren't born with all the myelin that they will need as adults. Instead, myelin develops slowly, in specific areas one at a time. Once a brain area has developed valuable, reliable, and consistent knowledge, the connections formed by learning are solidified by the introduction of myelin.

The order in which brain areas myelinate is consistent across mammals. The early sensory areas are the first cortical areas to develop myelin. One of these, the primary visual area, starts to myelinate shortly after birth as the infant gains visual experience. These early visual areas are responsible for basic visual perception and don't rely on trial-and-error interactions with the environment. Rather, they respond to visual stimuli that are presented simultaneously, without any time delay between appearances. This happens when you see a picture of a house: you generally see the roof, windows, and door all at once, without experiencing much of a time delay between these stimuli.

The last areas to myelinate are the association cortices and the prefrontal cortex (PFC). The PFC does not generally finish myelinating until one reaches the age of 18 or older. This means that the PFC does not "trust" that it has been wired up correctly until almost two decades into life, whereas the visual system "trusts" that it has been wired correctly within the first two years. This is because sensory stimuli are generally honest and all show up at the same time, whereas complex events are constructed from stimuli that are separated from each other by delays in time. Understanding the relationships between events that are not simultaneous requires careful, logical inferences about causality. For example, the sale of a house is an abstract concept that involves parties, contracts, and delays that can last for weeks or months. This is why children aren't licensed to sell houses.

It takes time to learn to make complex inferences that involve delays in time. It is probably the case that the process of myelination during development involves the progressive accumulation of knowledge that supports and buttresses more complex knowledge. In other words, as simple things are mastered in early cortical areas, they provide the basis for new learning in the late cortical areas. In the same way, many brief, simple experiences create the knowledge base needed to start to understand long, complex experiences with more advanced probabilistic structure. The layers at the bottom of the hierarchy must be trained before the higher layers can find regularities and statistical structure within them. But as you can see in the diagram below, the top of the hierarchy falls in the middle, between sensory input and motor output. To properly train sensory input and motor output it is imperative that they be connected to each other, and can interact with each other to drive behavior, long before the association areas interposed between them are brought to the table.



Many AI researchers point out that the things AI and neural network systems can accomplish today are generally things that an adult human brain can accomplish in under a second. This means that they can only do things that we do unconsciously, such as near-instantaneous pattern recognition. Today's AIs can recognize houses, but they could not recognize, understand, or broker the sale of a house. What AI is able to do are the kinds of things that we are able to do with our primary sensory and motor areas. This is because these systems are designed like primary cortical areas. They do not feature reciprocal interactions between various structures organized into a brain-like hierarchy. Very few AI architectures exist today that connect primary areas with association areas and a PFC, and those that do don't use anything like the process of myelination. Rather, in existing AI all of the areas, from the simple to the advanced, come online at the same time. I think these systems should use something analogous to the process of myelination because it would help them in their acquisition of knowledge. If they did, here's how they should go about it:

First you would need a number of neural networks of pattern-recognizing nodes. These networks must take inputs from the environment, each corresponding to a different sensory modality. These early networks must be linked to one another and arranged in a hierarchy in which unimodal networks form inputs to multimodal networks, which in turn form inputs to even more densely multimodal networks above them. This "multimodal fusing" is depicted in the figure. The densely multimodal networks would be the association networks, and at the top of this hierarchy would be the PFC, which would also be connected directly to the early motor networks. The nodes of the association and PFC networks would exhibit sustained firing. Importantly, this sustained firing, the activity of the association networks, and their influence over ongoing processing elsewhere would start out extremely meager and increase over time. These capacities could be increased as the system exhibits proficiency at simple tasks, such as object recognition, scene classification, and simple motor movements. As the association areas are added to the system, a capacity to plan and to make higher-order inferences and classifications could be expected.
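Here is a bare-bones sketch of what "coming online gradually" might look like in code. The schedule, the single gain value per level, and the step counts are all my assumptions; the point is only that the association and "PFC" networks start with essentially no influence and ramp up after the lower levels have had time to learn.

# Toy maturation schedule: each level's influence (a 0-to-1 "myelination gain") ramps up
# only after its onset step, so sensory networks mature first and the PFC matures last.
maturation_onset = {"sensory": 0, "association": 300, "pfc": 700}  # training steps (arbitrary)
maturation_rate = 0.002                                            # how quickly a gain ramps toward 1.0

def myelination_gain(area, step):
    return max(0.0, min(1.0, (step - maturation_onset[area]) * maturation_rate))

for step in (0, 200, 400, 800, 1200):
    print(step, {area: round(myelination_gain(area, step), 2) for area in maturation_onset})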

One important concept that I haven’t explained yet is that the first areas to myelinate in the brain, the sensory areas, have neurons of a single modality (e.g. either vision or hearing) that fire for short durations. The association areas and the PFC on the other hand have multimodal neurons (e.g. both vision and hearing) that fire for long durations. As in the mammalian brain (Huttenlocher & Dabholkar, 1997), sensory areas should mature (myelinate) early in development, and association areas should mature late. This will cause the capacity for sustained firing to start low, but increase over developmental time.

Postponing the initialization of association networks in this way would allow the formation of low-order associations between causally linked events that typically occur close together in time. This would focus the system on easy-to-predict aspects of its reality (e.g. correlations between occurrences in close temporal proximity). The consequent learning would erect a reliable scaffolding of highly probable associations that can be used to substantiate higher-order, time-delayed associations later in development (Reser, 2016). In other words, the rate of iterative updating from one state to the next (Fig. 9) would start very high. This would be reversed over the course of weeks to years as an increasing capacity for working memory is folded into the system.

Nature has found that it doesn't pay to let the multimodal neurons capable of sustained firing come online until the basics are learned first. I strongly suspect that AI network engineers will find the same. For the sake of progress I just hope that this myelination/development feature is implemented and perfected sooner rather than later. Given the rapid processing in computers, and the sheer amount of data available to them, I don't think that this process will take 18 years in an AI as it does in a human. But I strongly believe that it is necessary for any developing thinker to start with the elementary inferences first.


An article that I wrote, which can be found here, explains this in more detail.

https://www.sciencedirect.com/science/article/pii/S0031938416308289

Here is an excerpt from that article. 



"Due to their sustained activity, neurons in the PFC can span a wide delay time or input lag between associated occurrences [35][89] and thereby allow elements of prior events to become coactive with elements of subsequent events. Sustained activity allows neurons that would otherwise never fire together to both fire and wire together, and also allows features that never co-occur in the environment to be present together in topographic imagery. Thus, it may be reasonable to assume that SSC underlies the brain's ability to make internally derived associations between representations that never occur simultaneously in the environment. The longer sustained firing in association cortex lasts, the better the animal will be at capturing information about causally linked stimuli that present apart in time. The longer the sustained firing, the longer the delay can be. The same regularity may happen persistently in the environment, where a stimulus is followed several seconds later by another stimulus, concern, or opportunity; however, if the animal lacks sufficient sustained firing, this statistical regularity will not be captured by the neocortical system because the ensembles for them will never be exposed to each other.
Few if any mammals have evolved a human-like capacity for sustained firing in PFC neurons, and thus the mental lives of most mammals likely involve associations made between temporally proximate stimuli and concepts. This may suggest that in most ecological niches it is not helpful to create memories for relationships between stimuli that occur in delayed succession and instead it is better to focus on analyzing stimuli that present in quick succession [68][72]. There may therefore be two strategies, on opposite ends of a continuum, for holding recent information active: immediate and delayed succession strategies. The delayed succession strategy, involving high sustained firing and a low rate of working memory updating, is optimal for environmental scenarios that are prolonged over time, where temporally distant cues may retain contextual relevance. This strategy is likely associated with certain ecological or life-history conditions such as low extrinsic mortality, intergenerational resource flows, meme transference, and the K-selection strategy in general.
How can the brain trust that an association between two concepts that are removed in time and never co-occur simultaneously in the environment is valid? Each of the contents of working memory contribute to the selection of the next addition to working memory, and this may help to ensure that the contents held in working memory at any moment are veridically concordant rather than incongruous. This is because the system is narrowly constrained to only combining ensembles that have been highly associated in the past. If this is true, it suggests that at an early age the first associations are between stimuli that are nearly simultaneous, but that these can create foundational knowledge upon which to base reliable inferences about associations between stimuli that are removed from each other by a delay in time.
Because the frontal lobes of infants are underdeveloped, their brains probably exhibit far less continuity between brain states. Very young children can trust the connections that their early sensory areas have made concerning the spatiotemporal associations between near simultaneous features because these events show high order and regularity. This may be why sensory areas myelinate so early in life. Perhaps association areas are programmed genetically not to finish myelinating until early adulthood because it is a time-intensive process to form and test higher-order hypotheses about relationships between constructs that are more distributed through time."



Wednesday, March 4, 2020

How To Build Your Own AI-Ready Computer


My interest in artificial intelligence has driven me to learn more about computers. In 2019 it influenced me to build my own computer. I learned as much as I could for several months, got a few certifications in the process, and then dove right into a build. It was much easier than I thought, very fun, and very rewarding. Building a computer is a joy and this blog post recounts how you can do it yourself. The post is divided into four different sections: 1) the computer that I built, and how I built it, 2) the software I installed on it to help me learn more about AI, 3) the software I installed for recreation and productivity, and 4) the learning path that I took to prepare me to build it, that you can take too. If you would like to build your own computer, and are interested in tinkering with machine learning, neural networks, or AI, then I hope this short guide helps.

Building Your Own Computer

I strongly suggest watching a computer build before you purchase your parts. Put the words "how to build a PC" into YouTube and plan to spend an hour watching a YouTuber put a computer together. This was the first video I watched, and it got me inspired: https://youtu.be/IhX0fOUYd8Q You will learn a lot, and be really glad that you did it. You might want to take notes. After you do this you will be ready to tinker meaningfully with a used computer. I advise you to disassemble a used computer first because it is good, cheap preparation. Below is a map of some of the basic components on the motherboard that you are going to come across.



Before I built the computer that I am using now, I bought two cheap, used computers from a dingy computer store in a bad part of town. The guy was selling old Dells and HPs with pirated copies of Windows 10. He was selling them for around $25-100, but you should be able to find something cheaper. The first computer that I bought from him I simply dismantled completely. I took every nut and screw out of it, and took apart everything that could come apart without actually breaking anything. This was a fantastic learning experience and I strongly recommend it.


  

The second used computer I bought I took apart with some friends and their kids. Again, it was fun and it felt empowering to explain to them what the electronic components were and how they worked together. I took notes about what screws go where, and I took before and after pictures because I wanted to be able to put it back together.








I didn't, however, put these components back in the original Dell case. Instead I used different parts from both computers and assembled them into a new computer case. I got this new case from Fry's Electronics. It was an attractive white box with a clear viewing window and fans, and I picked it up for $40 on sale. It took some trial and error, but once everything was plugged in correctly it booted up fine. I still use this computer today, and I have connected it via an ethernet cable to the new build (which I am going to describe next) so that I can have both computers running and communicating in tandem.
 


Next it was time to buy all new parts and create my own build from scratch. It takes a lot of research to determine whether all of the parts you want are interoperable. Many parts that you find on Amazon or Newegg don't play nicely together, but luckily for us there is www.pcpartpicker.com. This helps you double-check your parts list and make sure that all of the components you think look cool, and have the specs you are willing to pay for, are going to work together flawlessly. Once you ensure this you can go to your local computer store, or to Amazon, and buy what you need. These are the items that you are going to need.

Parts You Will Need To Buy
·       Central processing unit (CPU) with heatsink and fan
·       Motherboard
·       Hard Drive (HDD or SSD)
·       RAM
·       Case
·       Fans
·       Video card
·       Power Supply Unit (PSU)
·       Network Interface Card (NIC)
·       Optical Drive, Bluray or DVD (optional)
·       Keyboard
·       Mouse
·       A copy of Windows
·       Antivirus software
·       Productivity Software
·       HDMI cord
·       KVM, TPM, colored cables, colored power cord  (all optional)


 





I recommend choosing your CPU first. The CPU really is the brain of your computer, and you are going to want to carefully select the processing speed, cache memory, and performance that fit your budget. You will likely choose an AMD or Intel processor; I chose the AMD Ryzen 7 2700X (because I liked the flashy colored fan). After you choose a CPU you then select a compatible motherboard. You carefully seat the CPU into the motherboard, apply a little thermal paste to the lid, attach the included heatsink and fan, and then place the whole thing into your case and tighten all the screws to firmly connect the two. Next you snap your RAM sticks into the motherboard. You must then connect the power supply to the case, the video card to the motherboard, and the hard drive to the motherboard. All of this is depicted in my rough, hand-drawn figure below. If you make all of these connections as shown in the drawing, it's probably going to turn on.




After putting all of the parts together, you plug it in, press the power button, and hope that the lights come on, and the fans and the hard drive start spinning. If they do then you connect it to a monitor and see if the motherboard’s BIOS will start. If it does then you can introduce the computer to an operating system like Windows or Linux saved on a DVD, or a USB thumb drive. You follow the directions, and install the OS. Then you can start thinking about the next two sections of this blog: installing AI software, and installing software for entertainment, productivity, and ease of use.





If you want to run neural networks or other demanding machine learning algorithms on your computer, you want high-quality components. CPU cache, RAM, and the GPU are the most important aspects. You might even want to invest in a new M.2 SSD. However, if you are just learning AI you can play around with most AI software using even very basic, low-budget computers. If, on the other hand, you are very serious about running computationally expensive models, you will probably eventually want to run them remotely on a cloud platform (like Amazon's or Google's).

Software You Can Use to Tinker with Artificial Intelligence

Lucky for us, most of the important AI related software is free. This is a quick guide to some of the applications that you might be interested in.

Neuronify: I recommend an app called Neuronify that you can find on the Windows store. The software creates a simple workspace where you can build neurons, connect them, and watch them fire at, and respond to each other. Playing with the options, and completing the tutorials helps to build important intuitions about how neural networks work. You can visit the website here: https://ovilab.net/neuronify/ Before you get bored of it, definitely download some of the highest rated workspaces by other users and you will be treated to some complex and fascinating models.

TensorFlow Playground: You can play with a neural network straight from the internet without having to download any software. To do this, use Google's excellent tool here: https://playground.tensorflow.org/ First watch a YouTube video explaining how to use this resource, and then play with it as much as you can to develop firsthand knowledge about how machines can use neural networks to learn and how fundamental AI algorithms work. It is all about little entities that talk to each other, collectively produce an output, and then learn from their mistakes.

Nengo: A favorite researcher of mine named Chris Eliasmith has created a spiking neural network simulation application called Nengo. This is an excellent "brain making package" that lets you build, test, and deploy your own neural networks using Python. The tutorials are excellent and make you feel like you have a foot in the door with artificial intelligence. Find out more at: https://www.nengo.ai/ You have to download Python before you can run it, which brings us to Python…

Python: If you are serious about machine learning, neural networks, or other popular forms of AI or data science software, I strongly recommend learning Python. The newest version of Python can be downloaded for free from the official website. Python is one of the hottest programming languages and one of the easiest to learn. It is fantastic for automating things and is essential for anyone who wants a future in AI, especially neural network engineers. The download link is here: https://www.python.org/downloads/

There are many software packages that you can use along with Python to start building world-class AI projects. These include TensorFlow, Keras, pandas, scikit-learn, and many more. You will probably want to look into these for yourself.
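As a taste of what these packages feel like, here is a minimal example using scikit-learn (my own toy snippet, not something from a specific tutorial): it trains a small neural network to recognize handwritten digits in about ten lines.

# Minimal scikit-learn example: train a small neural network on the built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                          # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)                                     # learn from the training examples
print("test accuracy:", clf.score(X_test, y_test))            # typically well above 0.9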

PyCharm: A great development environment for Python. It makes writing and keeping track of your Python code much easier, and it looks spiffy.

Code::Blocks: A free IDE for coding in C++. http://www.codeblocks.org/downloads

GitHub: You can host all of the code that you will be writing on GitHub so other people can access it. I have posted a few beginner’s tutorial Python scripts on GitHub, and you can see them here: https://gist.github.com/jaredreser

Octave: A free alternative to MATLAB, one of the most powerful mathematics software packages available. Most code written in Octave can be run in MATLAB and vice versa. Again, there are many helpful online tutorials for Octave that can help you learn it in no time. https://www.gnu.org/software/octave/download.html

MySQL: If you are interested in data science or database management, you can download free SQL (Structured Query Language) database management software, such as the MySQL Community Server.

RStudio: Free, top-of-the-line statistical software, with many free tutorials online. https://rstudio.com/products/rstudio/download/

Arduino: I strongly recommend ordering an Arduino starter kit. They will send you a number of electronic parts, sensors, and motors. You use them to build your own microcontroller projects, perform a number of experiments, and build your own gadgets and robots. You upload code from your computer to the Arduino microcontroller and get your Arduino to do all sorts of interesting things. The best part is that you can see all of the code, and can rewrite or alter it if you wish. I ordered an Arduino kit and an Elegoo (cheaper, and Arduino-compatible) kit on Amazon and completed all of the lessons. I got my friends involved and built a ton of things I never thought I would; it was a blast. Check it out here: https://www.arduino.cc/

You might want to download a cheap copy of Go. It is a complex, checkers-like board game that has many times more possible moves than chess. It is not easy to become a master, but it is very easy to start playing against a friend or the computer (on easy). Playing the game of Go a few times against a computer will really help to give you an understanding of how Google's AI "AlphaGo" defeated the best Go players in the world.

Linux: Did you know that you can download distributions of Linux from the Windows Store and run them straight from Windows? This is an excellent way to become familiar with the command line and to familiarize yourself with Linux, the world's most popular open-source operating system.

These days I am learning everything I can about AI on Coursera. Try Andrew Ng's "AI for Everyone," then try out his "Machine Learning" course. They are free, although if you want a printable certificate you have to pay a little. I am currently working on IBM's Data Science and AI specialist courses on Coursera. They are great! https://www.coursera.org/

Software You Will Want to Keep Your New Computer Running Smoothly

You should download the utility software intended for your CPU and GPU. That way you can keep track of temperature and overclocking, and you can come to understand more about their inner workings. For me this was the "AMD Ryzen Master" software for my CPU and "GeForce Experience" for my GPU.

Chrome and/or Firefox should be downloaded; both are excellent, free internet browsers.

Docs, Sheets, and Slides: If you don't feel like paying Microsoft for Word, Excel, and PowerPoint, then you can simply use Google Docs, Google Sheets, Google Slides, and Google Forms for free. This is a web-based office suite offered by Google and integrated with Google Drive. It allows users to create and edit documents, spreadsheets, and presentations online.

You should download Adobe Reader from the Adobe website so that you can read and manipulate PDFs. https://get.adobe.com/reader/

I recommend CPU-Z, which is a simple but interesting application that allows you to find out all of the specs for your CPU, and watch the performance metrics change in real time.

You  will probably want to download something like Steam so that you can stream and play computer games.

You should download DVD playing software to read discs from your optical drive if you have one.

RetroArch: This is free software that you can use to play emulated versions of old video games. https://www.retroarch.com/

FL Studio: For around $100 you can download a state-of-the-art music production software package that is really fun to work with. Watch a 15-minute tutorial video and you'll be creating your own beats in no time. You might want to download the free version of Virtual DJ while you are at it.

How to Prepare For Building a Computer and Get Certified in the Process

I knew that before I built myself a new computer, I wanted to be more informed about computer science in general. I figured that I should get A+ certified through CompTIA. The A+ certification is basically a computer repair technician's license and takes about 100-200 hours of studying to prepare for. To prepare for it you can take the ITF+, which is a great introduction to the concepts and takes merely 20-40 hours of study. The information that you gain is very empowering and will teach you not only how to build a computer, but how to take care of it, troubleshoot it, customize it, and set up your own home network. You will learn how to connect every computer in your house, the best ways to back up your data, and how to fix your friends' computers too. If you are interested in AI but do not already have a computer science degree, then a firm understanding of computer hardware and OS usage is very helpful.

In studying for the A+ computer repair technician certification I also acquired the Information Technology Fundamentals (ITF+) cert and the Project+ (project management) cert. It was fun preparing for these exams, and I used YouTube and classic study guides by Mike Meyers and Quentin Docter. I strongly recommend taking the PBS Crash Course Computer Science series on YouTube: https://www.youtube.com/playlist?list=PL8dPuuaLjXtNlUrzyH5r6jN9ulIgZBpdo After finishing the eight hours you will feel like you earned a BA in CS. My other favorite YouTube prep guide for the A+ was PowerCert, which was very helpful: https://youtu.be/2eLe7uz-7CM I watched tech videos from YouTube every night before bed, and I was introduced to countless fascinating new ideas and concepts.

Here is the playlist for some of my favorite AI youtube videos:


And here is the playlist for some of my favorite computer science videos:


I also used several different apps on my phone to learn about coding and computer science, such as Mimo, Codeacademy, Grasshopper, Sololearn, Py, and Code Playground. These all have free content and were very informative. Start by downloading the trial version of Mimo, and see what you think.

Finally I want to leave you with a table that shows some of the computers that I have owned over the years. You can see how the specs have advanced due to accelerating returns in the technology sector. I find the exponential progress in computing to be fascinating and exciting. We are in the information age. Get caught up in it, you may be glad you did.

Year   Computer           CPU Speed (Hz)    RAM (Bytes)        Hard Drive Storage (Bytes)
1986   Apple II GS        2,800,000         256,000            20,000,000
2004   Dell Desktop       3,200,000,000     3,500,000,000      80,000,000,000
2006   Sony Desktop       2,800,000,000     2,000,000,000      150,000,000,000
2009   HP Desktop         3,200,000,000     8,000,000,000      400,000,000,000
2009   HP HDX Laptop      2,130,000,000     8,000,000,000      500,000,000,000
2013   Dell Laptop        2,000,000,000     8,000,000,000      200,000,000,000
2014   Sony Laptop        1,800,000,000     8,000,000,000      900,000,000,000
2014   Dell XPS Desktop   3,600,000,000     16,000,000,000     1,000,000,000,000
2015   Apple MacBook      1,100,000,000     8,000,000,000      128,000,000,000
2020   Home Made PC       4,300,000,000     32,000,000,000     2,000,000,000,000