Recall of learning and iteration


Memory question. So here is my question: if there is no true repetition in learning, need there be any true repetition in memory? There seems to be no good reason to believe that the repetition described in memory research is any more of an unextended, unaltered repetition than the repetition found in learning. The research into memory indicates that both repetition and elaboration are essential to the ability to recall information. It should be pointed out, however, that repetition alone, without elaboration, may not be conducive to enabling recall at all. Although all the books talk about the importance of repetition, this seems to ignore the fact that true repetition is essentially impossible to generate.

Repetition may have a part to play in consolidation of memory, but it may not be quite in the way people have thought. Repetition may aid in consolidating memory mostly because it provides the memory with further opportunity for elaboration. Elaboration may be what is critical rather than the repetition itself. Be that as it may, even if repetition is in fact important in itself in improving memory function, this may still not be sufficient to recommend the use of drills in creating easily recallable memories.

In his book "Why Do I Need a Teacher When I've Got Google?" Ian Gilbert sums it up like this:

"Does rote learning work? Yes absolutely. Repetition reinforces connections between brain cells leading to better myelination and the creation of what can be lasting long-term memories. There are two significant downsides though. One, it is as boring as hell and demands high degrees of motivation of learners, self control and the sort of boredom threshold you would associate with train spotting or reality TV. Two, despite being effective it is not efficient. You may be achieving the results you want to achieve with your classes, so you are working effectively, but are you working efficiently? Could you, by using different memory strategies and techniques, achieve the same result by working less? Could you even achieve better results by working less?" 

Although current neurological wisdom about memories is that they reside in specific places in the brain, this site holds that there is sufficient evidence to consider an alternative idea. This site wishes to propose that memories are a web of connections. We know that when more connections are added to a memory, the memory becomes more elaborated and thus has more meaning. This site considers that this process incidentally creates more entry points for reaching the memory. The more connections there are to a memory, the more directions your thinking can take and still arrive at the memory. The more connections there are, the more pathways there are to the memory, and thus the easier it is to find the memory and activate its recall.

Repetition seems to do two things. One, it causes myelin to wrap around the axons connecting the neurons, which in turn allows the signal to move faster, more strongly and more easily. In skill learning the main function of this is to obtain finer and finer control of the activity through the timing and strength of the signal. Two, it seems to activate the generation and growth of new synapses, dendrites and axons. In his book "Brain Rules" John Medina gives us a description of how the hippocampus and the cerebral cortex are connected and work together in creating memories:

"The first army [of nerves] is the cortex, that wafer-thin layer of nerves that blankets a brain... The second is a bit of a tongue twister, the medial temporal lobe. It houses another familiar old soldier, the oft mentioned hippocampus. Crown jewel of the limbic system, the hippocampus helps shape the long-term character of many types of memory...

How the cortex and the medial temporal lobe are cabled together tells the story of long-term memory formation. Neurons spring from the cortex and snake their way over to the lobe, allowing the hippocampus to listen in on what the cortex is receiving. Wires also erupt from the lobe and wriggle their way back to the cortex returning the eavesdropping favor. The loop allows the hippocampus to issue orders to previously stimulated cortical regions while simultaneously gleaning information from them. It also allows us to form memories..."

A conjecture about memory. As you may know, science as yet has not discovered how memories are formed. However, this site has extrapolated a conjecture as to how memories may possibly be brought into being. It should be noted that this idea has not been tested in any way, and so cannot even be designated a theory. It is plain and simple speculation. But it does seem to fit a lot of what is known so far about memory formation. As such it may have as much validity as the similar speculation that memories reside in a fixed single location.

Connection webs. This conjecture is based on the idea that memories may simply be complex webs of connections between neurons. This would mean that meaning is just how the bits of brain are connected together and how they tend to fire in unison as a circuit. It would account for the fact that when we add more connections through elaboration, the memory becomes both more meaningful and more easily recalled. More connections would mean more meaning and, incidentally, more entry points, which would mean the memory could be activated by entering the circuit in more ways, thus improving recall. The question that was put was simply to ask how brain structures might function to develop such webs of connections.
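To make this web-of-connections conjecture a little more concrete, here is a minimal illustrative sketch in Python. It is purely an analogy invented for this page, not a model from the research cited here: a memory is a node in a graph, each elaboration adds a link, and recall succeeds when thinking that starts from any cue can reach the memory along the links. The class and the example labels are made up.

    # A toy "web of connections": more links mean more entry points for recall.
    class MemoryWeb:
        def __init__(self):
            self.links = {}                      # node -> set of directly connected nodes

        def connect(self, a, b):
            # Elaboration: add a two-way association between two items.
            self.links.setdefault(a, set()).add(b)
            self.links.setdefault(b, set()).add(a)

        def can_recall(self, cue, memory):
            # Recall succeeds if the cue can reach the memory through any chain of links.
            seen, frontier = {cue}, [cue]
            while frontier:
                node = frontier.pop()
                if node == memory:
                    return True
                for nxt in self.links.get(node, ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return False

    web = MemoryWeb()
    web.connect("holiday at the lake", "smell of pine trees")   # each elaboration is another doorway
    web.connect("holiday at the lake", "wet dog")
    web.connect("wet dog", "Labrador")
    print(web.can_recall("Labrador", "holiday at the lake"))    # True: another path into the memory

In this toy picture, removing any single link rarely destroys the memory; it only removes one of many ways in, which is the intuition behind "more connections, easier recall".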

Perhaps the most important rule for brains is that 'neurons that fire together wire together'. Neuroscience research has produced a great deal of support for this idea, so our conjecture starts with it. The question, then, is "How does this happen?" The simplest solution would be that when neurons in the cortex fire at the same time, they tend to sprout new synapses, dendrites and axons that grow in the direction of the other neurons firing at that moment, and that this growth continues until the two or more neurons that fired together eventually connect up. The problem with this solution is that growing new synapses, dendrites and even axons long enough to reach far distant areas of the brain would take a long time, perhaps years. This does not seem to be a likely solution.
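The "fire together, wire together" rule itself can be sketched as a simple bookkeeping exercise. The following Python fragment is only an illustration of the rule as stated above; the learning rate and the labels are arbitrary choices made here.

    # Toy Hebbian rule: connections between co-active units are strengthened a little each time.
    weights = {}                                  # (unit_a, unit_b) -> connection strength

    def fire_together(active_units, rate=0.1):
        active = sorted(active_units)
        for i, a in enumerate(active):
            for b in active[i + 1:]:
                pair = (a, b)
                weights[pair] = weights.get(pair, 0.0) + rate   # wire together

    fire_together({"sight of ball", "word ball", "feel of ball"})
    fire_together({"sight of ball", "word ball"})
    print(weights)   # the pair that fired together twice is now the strongest connection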

However, new synapses do seem to be involved. In her book "The Creative Brain" Nancy C. Andreasen says:

"In this particular case, when the neuron is stimulated to a sufficient degree to create a memory that needs to be preserved, a variety of chemical messages are sent to the cell nucleus, where in turn genes are expressed and send messages back out to the synapse that say: 'build more synapses and create new synaptic connections so that you can keep this information for a long time."

The process, then, would have to be more complex. What we do know is that the formation of memories has something to do with neurogenesis, the creation of new neurons, especially in the brain structure called the hippocampus. There are many theories about how memories are stored in the brain. In his book "Connectome" Sebastian Seung suggests that memories may be stored in the part of the brain called the hippocampus. He says:

"The hippocampus belongs to the medial temporal lobe... Some researchers believe that the hippocampus serves as the "gateway" to memory; they theorize that it stores information first and later transfers it to other regions like the neocortex."

This site holds that memories are not likely to be stored in a particular area of the brain as suggested above, but rather reside in the connections themselves and how different areas of the brain are connected up. If this is the case the hippocampus may simply act as a device to facilitate the connecting of one part of the brain to another.

Let us suppose that the formation of each new memory depends on the existence and development of a single new neuron. We can further suppose that each new neuron coming into existence in the hippocampus connects, via its synapses, to the complex of white fibers that link the hippocampus to every part of the cortex known to encode declarative memories. The new neuron would in that case not contain a memory, but rather act as a device to connect the various parts of the cortex that are currently active. Because these fibers are already in place, the new neuron could connect up to all the active cortical areas fairly quickly. How might this come about? When cortical neurons become active, they would send signals to the hippocampus via their many connectors. The new neuron would be attracted to the charged axons and dendrites and would quickly connect to those active fibers, thus connecting all the incoming signals. It would then collect all those signals and send them back as a whole to all the cortical neurons involved.
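As a rough picture of this hub idea, the sketch below treats a new hippocampal neuron as nothing more than a record of which cortical regions happened to be active at the same moment, so that the whole pattern can later be re-fired from one place. This is a speculative illustration of the conjecture in the paragraph above, with all names invented.

    # A new hippocampal unit binds whatever cortical activity is present right now.
    class HippocampalBinder:
        def __init__(self, active_regions):
            self.bound = set(active_regions)      # no content stored, only which regions co-fired

        def reactivate(self):
            # Send the collected signal back out to every bound region at once.
            return sorted(self.bound)

    episode = HippocampalBinder({"visual cortex: wet Labrador",
                                 "auditory cortex: loud bark",
                                 "somatosensory cortex: cold spray"})
    print(episode.reactivate())                   # the whole pattern is available from a single hub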

Synapses are the connectors in the brain. In her book "The Creative Brain" Nancy C. Andreasen says:

"Each of these neurons is designed to make multiple connections to other neurons. The nerve cells multiply their connective capabilities by sending out dendrites, which in turn expand by adding spines. Along the spines are multiple synapses. At the axonal end of the neuron there are also axon terminals containing synapses. The synapses are the real 'action sites' within the brain. There are different types of neurons, as defined by their number of axons and the complexity of their dendrites, and we do not have an accurate way to estimate the total number of synapses in the entire human brain. A typical estimate is that each nerve cell possesses approximately 1,000 to 10,000 synapses.

...As our brains form during fetal life, nerve cells grow and establish connections to one another. Some of them are hard wired and genetically determined, but many are shaped by our experiences. Each neuron does its work by talking across synapses to multiple other neurons at more or less the same time, and each of those neurons is talking to many others. (The technical term for these interacting neurons is neural circuits.)

...The neural circuits of the brain are designed to monitor and modulate one another. Sometimes the connections send excitatory signals, and sometimes they send negative, or inhibitory, signals. Some connections create short feedback loops between neurons and some have long loops that spread across longer spans of the brain. It is estimated that a large feedback loop covering the entire brain takes only five or six synapses."

Whether one or more of the synapses fire and allow the signal to proceed depends on the strength of the incoming signal, which in turn depends on the amount of myelin wrapped around its axon. The more times a signal travels along an axon, the more myelin wraps around it, and the stronger and faster the signal will travel along it in the future. The wrapping of the myelin not only increases the strength, speed and efficiency of the signal but also determines the path the signal will take.

In the beginning, memories may be connected through a single neuron in the hippocampus. But this is not the most efficient way for neurons in the cortex to be connected. They could be connected more directly to one another. Alas, as was pointed out earlier, this would simply take too long for the instantaneous formation of a memory. However, we can suppose that even while the memory is being maintained by the neuron in the hippocampus, synapses bud, dendrites proliferate, and axons extend, all in an effort to connect up with other neurons that are firing at the same moment. There are no special fibers to connect the neurons in different parts of the cortex, but here is what might happen. Gradually, over perhaps years, these cortical neurons would become more directly connected up. So it would take a long time, but so what: the memory is intact as long as the neuron in the hippocampus remains undamaged and is fired off at regular intervals, at times just before the memory is about to be forgotten.

When these new pathways become strong enough, they would be used in preference to the long connections going through the neuron in the hippocampus. The need for that hippocampal neuron would diminish, and it would eventually die off, leaving the memory in the cortex fully connected up, without the hippocampus playing a part any longer. All this requires that the memory be activated often over a long period of time, possibly many years. This process would ensure that the number of neurons in the hippocampus never grows too large, for as new ones were forming old ones would be dying off. This leaves us with a situation quite different from how memory is usually thought of: in the cortex a memory would not be in just one place. Such a memory could perhaps be activated at any or all of the connected junctures that make it up. This could be thousands of places in the cortex. Not only that, but these same junctures could also be part of other memories.

The webs or matrices of memory. If our memories are, as suggested, complex webs of connections of neurons scattered across the cortex, then repetition would accomplish two things simultaneously. It would strengthen the connecting axons by wrapping extra myelin around them to protect and optimize them. At the same time it would add more connections (even if those connections were only the ones linking the memory to the time and place at which the recall took place). Details would be difficult to recall because they would be at the periphery of the memory and not always activated each time the memory as a whole was activated. On the other hand, the central core of the memory, which John Medina calls the gist of the memory, would be easy to recall because that is what would always be activated each time the memory was recalled. To find a memory a person would have to navigate through a maze of connections, but the central core of a memory, the gist, would be found because it would be surrounded by so many entry paths, while the details would have few entry paths.

It would then follow that a thousand repetitions of activating a memory would have little effect if they all occurred in rapid succession, because the importance of repetition would involve two essential functions. One function would be putting off the natural process by which neurons and their connections atrophy when they are not being activated. The other would be making sure that any two neurons involved in the memory, and thus fired when the memory is active, continue to extend neural connections toward one another. A memory strongly connected through repetition and elaboration in the early stages of consolidation would be a help, but spaced repetition and elaboration over time would make sure the process continued. What the brain needs is convincing that the memory is needed, which would stop or postpone the otherwise entropic process that starts the moment the memory is minted. Repetition at regular intervals over a long period of time could interrupt the dying off of connections at the very moment when it is most needed: when it is just about to happen and the memory is just about to disappear forever.
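The difference between massed and spaced repetition in this account can be caricatured with a few lines of Python: memory strength decays steadily, and each reactivation tops it up just before it would otherwise fade. The decay rate, the boost and the cap are invented numbers, chosen only to show the shape of the idea.

    import math

    def strength_after(recall_days, total_days, decay=0.15, boost=1.0, cap=2.0):
        # Strength decays a little every day; a recall on a listed day boosts it again.
        strength = 1.0
        for day in range(total_days):
            if day in recall_days:
                strength = min(strength + boost, cap)
            strength *= math.exp(-decay)
        return round(strength, 3)

    massed = strength_after({1, 2, 3}, 30)        # many repetitions in rapid succession
    spaced = strength_after({1, 7, 21}, 30)       # the same number of repetitions, spread out
    print(massed, spaced)                         # the spaced schedule leaves far more strength at day 30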

Neural Darwinism. One of the problems with the above conjecture about how memories are formed is that we are still not sure that axons and dendrites continue to grow much after the first sixteen or so years of life. However, even if this growth does not occur much in adults, this does not necessarily invalidate the conjecture. There is another theory mentioned in Sebastian Seung's book "Connectome" where he suggests that synapses may not be created on demand but rather created randomly. He says:

"Perhaps synapse creation is a random process. Recall that neurons are connected to only a subset of the neurons that they contact. Perhaps every now and then a neuron randomly chooses a new partner from its neighbors and creates a synapse. ...Synapse creation alone, however, would eventually lead to a network that is wasteful. In order to economize , our brains would need to eliminate the new synapses that aren't used for learning. ...You could think of this as a kind of survival of the 'fittest' for synapses. Those involved in memory are the 'fittest' and get stronger. Those not involved get weaker, and are finally eliminated."

This could mean that although synapses may not be created in response to the need for a new memory, many new synapses may be created randomly in response to increases in the formation of memories.

If we take out the possibility of growth of neuron pathways and plug in "Neural Darwinism", the conjecture about declarative memory formation is still viable. In this case we would have to consider that there may be many possible pathways between two neurons in the cortex, and although the initial one created by a new neuron in the hippocampus would be strongest at first, it would eventually be replaced by a more direct path through the neocortex. Let us suppose that initially, when a memory is first laid down, two different pathways are created: one that goes through the hippocampus as has been explained already, and one that goes through the neocortex in a long, laborious, twisting and backtracking journey through the tangle of neuronal connections. Sometimes, with luck, this path may be shorter than the one through the hippocampus, but usually it would be much, much longer. Let us further suppose that every time we recall the memory that connects these two neurons in the cortex, both these pathways are activated. Not only that, but because of the random appearance of new synapses connecting new neurons, another pathway may open up that is shorter, or maybe several shorter pathways open up, and maybe all of these now activate and form a circuit. The next time the memory is recalled the shorter pathway may be activated while the longer one may be inhibited and not activate. After considerable time and recall, and the finding of shorter and shorter pathways and the deactivation of the longer ones, the path through the neocortex could become quite short indeed. So much so that the path through the hippocampus may no longer be needed, and may itself be inhibited and deactivated.
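A rough simulation of this "Neural Darwinism" variant is sketched below: trial synapses appear at random, the ones that happen to carry a recalled memory are strengthened, and the rest decay and are pruned. Every number and name here is an assumption made for illustration, not a claim about real parameters.

    import random

    random.seed(0)
    neurons = list(range(40))
    synapses = {}                                  # (a, b) -> strength

    def grow_random_synapses(n_new=15):
        # Every round, a few random pairs gain a weak trial connection.
        for _ in range(n_new):
            a, b = random.sample(neurons, 2)
            synapses.setdefault((a, b), 0.1)

    def recall(path_edges):
        # Recalling a memory strengthens the synapses along the path it happens to use.
        for edge in path_edges:
            if edge in synapses:
                synapses[edge] += 0.2

    def prune(decay=0.9, threshold=0.05):
        # Unused synapses weaken and are eliminated: survival of the fittest.
        for edge in list(synapses):
            synapses[edge] *= decay
            if synapses[edge] < threshold:
                del synapses[edge]

    memory_path = None
    for _ in range(100):
        grow_random_synapses()
        if memory_path is None:
            memory_path = list(synapses)[:3]       # stand-in for the edges one memory happens to use
        recall(memory_path)
        prune()
    survivors = [e for e in memory_path if e in synapses]
    print(len(synapses), "synapses remain;", len(survivors), "of them carry the recalled memory")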

We know that memories tend to change over time. They seem to be unstable. This would be consistent with our theory above, as elaboration would be essential to consolidating any concept, action or memory. Sometimes memories change for the better, and sometimes they change for the worse, but they change. This seems to be true of even the longest existing, most stable, long-term memories. In his book "Brain Rules" John Medina puts it like this: "There is increasing evidence that when previously consolidated memories are recalled from long-term storage into consciousness, they revert to their previously labile, unstable natures. Acting as if newly minted into working memory, these memories may need to become reprocessed if they are to remain in a durable form. ...If consolidation is not a sequential one-time event but one that occurs repeatedly every time a memory trace is reactivated, it means permanent storage exists in our brains only for those memories we choose not to recall! Oh, good grief."

Memory as change. Change, as is expressed often on this site, means learning. Changing memory is something that does not make a lot of sense in terms of how we understand computers. In a computer if something is saved it is stored in a particular area of the computer (the hard drive) in a perfect form to be recalled. This is not how the brain works. There is no decision to save in the brain. Short-term memory becomes long-term memory if it is activated often, that is if it is relearned or recalled. Saving, if we could still call it that, takes place over a long period of time. But, as suggested above, learning and thus memory is not mostly about repetition but rather iteration where what is learned is constantly extended and thus changing. Recall, it is being suggested here, may work in the same way, adding associations every time a memory is recalled.

What associations? Well, every time you recall something there are thousands of new associations just waiting to be attached. First there are the associations with whatever the reason was that you made the effort to recall, or with the intrusive external event that triggered the memory to pop into consciousness automatically. These are particularly strong associations. Then there are the associations that comprise the external environment at the time you were in the process of recalling, and probably the thoughts you had during and just after the recall. These are weaker, often unconscious associations, but they are there nevertheless.

Anyway, we can be pretty sure of three things about memory. The elaboration of a memory makes it more memorable, expansion of a memory through iteration makes the memory more memorable, and using the memory makes it more memorable. Of course these three things are actually only one thing looked at from three different perspectives.

Consolidation of memories. John Medina in his book "Brain Rules" explains consolidation as follows:

"At first a memory trace is flexible, labile, subject to amendment, and at great risk for extinction. Most of the inputs we encounter in a given day fall into this category. But some memories stick with us. Initially fragile, these memories strengthen with time and become remarkably persistent. They eventually reach a state where they appear to be infinitely retrievable and resistant to amendment. As we shall see, however, they may not be as stable as we think. Nonetheless, we call these forms long-term memories."

Types of memories. Neuroscientists tend to talk about many different types of memory, which can be divided and further subdivided. The major division is between explicit or declarative memory (which includes long-term memory, short-term memory and working memory) and implicit or non-declarative memory (such as procedural memory).

Explicit or declarative memory.

Long-term memory. Long-term memory is usually divided into semantic memory and episodic memory.

Semantic memory. Semantic memory is the type of memory that deals with meaning and with structures made up of meanings. That is, it is the memory of concepts and of statements that are constructed from concepts.

Semantic associations. Semantic memories are structures of hundreds of semantic associations that go to make up each concept in a thought, and the stringing together of these concepts into further meaningful structures that could be declared as statements. This is the type of memory discussed in this site's section on meaningfulness. The associations in this type of memory are what provide the meaning of a word, a concept, a sentence, a text. By linking together, these associations produce an abbreviated or symbolic form of the memory. Words, for instance, are symbols that stand for concepts. Words are therefore associated with all the elements that make up their meaning, but when we recall a concept from memory we will in all likelihood recall only the word into consciousness. In a similar way, when we recall some text we will recall only the gist (the meaning) and not the word-for-word text. The brain abbreviates information so it can be processed efficiently. Meaning is a web of associations that we hold in memory, although we only access its central core.

John Medina points out that a word on a list is best remembered if we make an effort to associate it with as much meaning as possible. The word apple on its own is much less elaborately encoded than, say, his Aunt Mabel's apple pie. The concept apple does, however, have very elaborate encoding, including all the associations needed to give meaning to that word. If, when we try to remember the word, we concentrate on the number of diagonal lines in it, we are ignoring all the elaboration at our disposal. If instead we think about Aunt Mabel's apple pie, the meaning is very elaborate. Aunt Mabel's apple pie deals with not one but three strong concepts: pies, apples and Aunt Mabel. On top of this there is the fantastic smell of the pie, its delicious taste, its texture, its usual visual appearance, how it made us feel, and so on. Aunt Mabel's pie can be very intrusive. Sudden exposure to pies, apples, Aunt Mabel, pie smells or pie tastes may all invoke Aunt Mabel's apple pie into our stream of consciousness.

More about memory webs. It is in this type of semantic memory that it is easiest to see how memories could be webs of connections. The way to get a glimpse of how webs of connections might coalesce into memories is to start with the basic units of semantic memory, the concepts themselves, and more specifically concepts of objects. An object concept is the concept of a concrete form existing in the world. We know what these object concepts are because we know their meaning. Object concepts come in many sorts. One sort is the concept of a specific object: Betsy the cow, Fido the dog, Bradley the man, Australia the country, or Mabel the yacht. Such object concepts are not a class or a category, or if they are, they are a category with only one member. They do not have to have specific names; they can be something like my blue pen or your red scarf. Most object concepts, however, are a class or a category and thus have many members. The most useful object concepts sit at the next level of abstraction: a category that has specific objects as its members.

A ball. Let us consider the object concept "ball". We all know what a ball is, but how do we know it? It is suggested here that we know what a ball is because of its connections to memories of specific balls. The concept ball has probably thousands, no millions, no billions of connections. Every time you saw a ball, felt a ball, played ball, bounced a ball, heard about a ball, thought about a ball, the connections would be made and activated. This site holds that it is these connections when activated that give the concept "ball" its meaning, that they are in fact that meaning.

A sphere. Consider the concept of a sphere. A sphere is an aspect of a ball. While most balls are fairly spherical, some footballs are more egg shaped. Although a sphere is not really an object at all, we often use the words that stand for these two concepts interchangeably. You might describe a sphere as being ball shaped, but this is not really correct. In fact the opposite is true: most balls can properly be described as being spherical. When the concept ball is activated, the concept sphere is also activated as part of its meaning. Following from our theory, a sphere, unlike a ball, would not have such a large number of connections. It would have only a few. However, it is still a strong concept because it has a very strong connection to the concept ball, and when the concept sphere is activated the concept ball would, for the most part, be activated also as part of its meaning, even though the reverse is more correct. In this way every concept would be a fantastic web of connections. Remember, in just five or six synapses you can probably connect to any neuron in the entire brain.
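The asymmetry between "ball" and "sphere" can be pictured as weighted associations: activation spreads from a concept to whatever it is strongly linked to, and a concept with few but strong links still behaves robustly. The weights below are invented for illustration and carry no empirical meaning.

    # Spreading activation over weighted associations (illustrative numbers only).
    associations = {
        "ball":   {"sphere": 0.9, "bounce": 0.8, "game": 0.7, "Fido's ball": 0.6},
        "sphere": {"ball": 0.9, "geometry": 0.5},
    }

    def activate(concept, threshold=0.6):
        # Return the concepts pulled into play when this concept fires.
        spread = associations.get(concept, {})
        return sorted(other for other, weight in spread.items() if weight >= threshold)

    print(activate("sphere"))   # the single strong link drags "ball" in as part of the meaning
    print(activate("ball"))     # "ball" activates a much richer web, hence a richer meaning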

Concept formation. So how might these concepts be built up as we learn and grow? What we know about building memories is that the connecting axons that are activated get more myelin wrapped around them, making them stronger and quicker, while the connecting axons that are inhibited from becoming active tend to wither and die. Let us suppose, then, that when forming connections children group together members that seem, for whatever reason, similar to them. These groupings amount to the child's own theories about what the concepts are, and they may not match the concepts as they are understood by a particular culture. They would be potential concepts, or incorrect concepts. Such incorrect concepts would be useful for building an internal model of reality, but fairly useless for communicating with others.

Modification. Thus infants would have to modify the connections as they gained information about what others in their culture accepted as being connected, or as being members of that concept category. For this to happen some connections would continue to be activated while others would be inhibited from being activated. The axons that continued to be activated would continue to be part of the meaning of the concept and the axons that were inhibited from being activated, would die off and no longer be a part of the meaning of the concept. On the other hand a child might miss some members of a concept category and have to modify the concept by adding members. This would simply be a matter of firing the various connections and at the same time adding the new connections. They would be more weakly connected at first but would get stronger, the more they were activated as part of the whole concept activation.

Semantic memories. Semantic memories, then, are concepts, or complex interrelations of concepts (stories), and they are meant to be changed each time they are remembered. While they may seem like static, unchanging things, they are in fact constantly in the process of changing. Every experience of an object, every recall of it, provides us with more information about it, and even when we think we have an immutable understanding of what it is, we are still deleting some connections that are not quite right, adjusting other connections, and adding new connections. Not only does our understanding of concepts constantly change, but often the objects themselves change. A concept like a ball may not change, but living things like animals, insects and humans get older, lose body parts and change in appearance. Concepts of place also change. Trees grow, die and change their leaves. Man-made structures like buildings also change as they are built and knocked down. For this reason semantic memories are meant to be infinitely flexible, constantly expanding and contracting to fit the current state of things.

A final note about this conjecture. The problem with this conjecture is that it still does not explain how the brain finds a memory in order to activate it. And how would the brain know when it has found it? Answers, even speculative ones, simply beget more questions.

Episodic memory. Episodic memory is the type of memory that deals with an event or episode in one's own life, where a great deal of information was attended to and was thus processed into associations that are all welded together into a whole unit of memory.

Episodic associations. Episodic memories are structures of hundreds of episodic associations that go to make up these episodes or events. In his book "Brain Rules" John Medina tells a story about an episodic memory of playing fetch with a huge Labrador that surprised him by coming out of a lake and shaking water all over him. He continues:

"What was occurring in my brain in those moments? As you know the cortex quickly is consulted when a piece of external information invades our brains - in this case, a slobbery, soaking wet Labrador. The instant those photons hit the back of my eyes, my brain converts them into patterns of electrical activity and routes the signals to the back of my head (the visual cortex in the occipital lobe). Now my brain can see the dog. In the initial moments of this learning I have transformed the energy of light into an electrical language my brain fully understands. Beholding this action required the coordination of thousands of cortical regions dedicated to visual processing.

The same is also true of other energy sources. My ears pick up the sound waves of the dog's loud bark, and I convert them into the same brain-friendly electrical language to which the photon patterns were converted. These electrical signals will also be routed to the cortex, but to the auditory cortex instead of the visual cortex. ...This conversion and this individual routing is true of all energy sources coming into my brain, from the feel of the sun on my skin to the instant I unexpectedly and unhappily got soaked by the dog shaking off lake water. Encoding involves all of our senses, and their processing centers are scattered throughout the brain.

...In one 10-second encounter with an overly friendly dog, my brain recruited hundreds of different brain regions and coordinated the electrical activity of millions of neurons. My brain was recording a single episode, and doing so over vast neural distances, all in about the time it takes to blink your eye."

Episodic associations are often only peripherally encoded in a memory. In this case one focuses attention on a specific item of interest, and much of the other information is ignored and unprocessed. However, although this peripheral information is not part of what is recalled in the memory trace, it does provide some pathways for activating the memory. This, it turns out, is very important for enabling recall of any sort. It has been found that the most significant way we can help people remember something is to put them in an environment as close as possible to the one where they first encoded the information.

Memory episodes. The episodes of episodic memory are usually understood to be constructed or built up in exactly the same way as semantic memories. This makes memory episodes unreliable. While each episode only occurs once, it must be recalled many times in order to become eligible to go into long-term storage. However, every recall is an opportunity to contaminate the memory. It is a catch-22: the more the episode is recalled, the easier it is to remember, but the more it is recalled, the more it becomes contaminated. Every recall adds more associations, and if those associations are often the same ones they can become strongly associated and thus distort or change the original episode.

Short-term memory. The relationship between short-term memory and working memory is interpreted in various ways by different theories, but it is usually understood that the two concepts are distinct. Working memory is a theoretical framework that refers to structures and processes used in temporarily storing and manipulating information. Working memory could also be understood as being working attention. Short-term memory generally refers to the short-term storage of information only, and it does not entail the manipulation or organization of information held in the memory. Thus while there are short-term memory elements in working memory models, the concept of short-term memory is usually conceived as being distinct from information manipulating components.

Short-term memory is labile, unstable and of limited duration. It tends to decay spontaneously from the moment it comes into existence. In order to overcome this limitation of short-term memory, and retain information for longer, the information has to be periodically repeated, or rehearsed. Rehearsal can be performed either overtly, by articulating the information out loud, or covertly, by mentally simulating such articulation. In this way, information can re-enter the short-term store and be retained for a further period.

Chunking. Chunking is a process by which the amount of information a human can hold in short-term memory can be expanded. Chunking is performed by organizing material into meaningful groups. Although the average person may only retain about four different units in short-term memory, chunking can greatly increase a person's recall capacity. For instance, in recalling a phone number, a person could chunk the digits into three groups: first the area code (such as 215), then a three-digit chunk (123) and lastly a four-digit chunk (4567). This method of remembering phone numbers is far more effective than trying to remember a string of 10 digits. Practice and the use of existing information in long-term memory can lead to further improvements in one's ability to chunk. In one testing session, an American cross-country runner was able to recall a string of 79 digits after hearing them only once, by chunking them into different running times.
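The phone-number example above can be written out directly. This little function is only a restatement of that example in Python; the 3-3-4 grouping is the pattern described in the paragraph.

    def chunk_phone_number(digits):
        # Split a 10-digit string such as "2151234567" into three meaningful chunks.
        assert len(digits) == 10 and digits.isdigit()
        return [digits[:3], digits[3:6], digits[6:]]

    print(chunk_phone_number("2151234567"))   # ['215', '123', '4567'] - three units instead of ten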

Working memory. Working memory is a busy temporary workspace, rather like a desktop, that the brain uses to process newly acquired information. Working memory is the processor part of consciousness. The man whose legacy best characterizes this process is Alan Baddeley, who described working memory as a three-component model: auditory, visual and executive.

  1. Auditory working memory. The auditory part of working memory is the part that deals with sound. It is the part that retains linguistic information and processes it.
  2. Visual working memory. The visual part of working memory is the part that allows some visual information to be retained in memory and processed. Baddeley saw it acting as a sort of visuo-spatial sketch pad.
  3. Executive working memory. The executive part of working memory is the part that keeps track of individual threads of thought and which keeps them separate and keeps each together as a chunk of information. Thus professional chess players can play several opponents at once and keep each game separate in their minds.

Implicit or non declarative memory. Implicit memory is a type of memory in which previous experiences aid in the performance of a task without conscious awareness of these previous experiences. Evidence for implicit memory arises in priming, where subjects show improved performance on tasks for which they have been subconsciously prepared. Implicit memory also leads to the illusion-of-truth effect, which suggests that subjects are more likely to rate as true statements those that they have already heard, regardless of whether they are true or not.

Research into implicit memory indicates that it operates through a different mental process from explicit memory. Instead of connecting to the hippocampus, implicit memories connect to the cerebellum ("little brain"). This is a structure located at the rear of the brain, near the spinal cord. It looks like a miniature version of the cerebral cortex, in that it has a similar wavy, or convoluted, surface. The cerebellum probably plays a similar role in implicit memory to the one the hippocampus plays in explicit memory. It is essential to both procedural memory and motor learning.

These memories are often called non-declarative memories because, although we do not recall them into consciousness as we perform them, we could not declare them even if we did. They are activated as an activity or a skill and accomplished without our having to consciously think about doing it. Such a memory is activated automatically when we wish to use it.

Procedural memory. In daily life, people rely on implicit memory every day in the form of procedural memory. This type of memory allows people to remember how to drive a car or ride a bicycle without consciously thinking about these activities. It allows us to build up skills requiring co-ordination and fine motor control, such as playing a musical instrument, playing a sport or reacting to defend ourselves.

Once we have learned some skill to a sufficient level, there is a process which helps to make those actions or reactions automatic, thereby allowing them to sink to a merely physiological level and to be performed without attention. When riding a bike we may, however, consciously modify our performance, going this way or that, or faster or slower. But the schema of bike riding is unconscious. Of course, when you are learning a skill you have to think about it often, and break it down into manageable units or schemas that can then be manipulated more easily. This is another type of chunking. As you continue to learn, these schemas gradually sink out of consciousness into automatic activity. There is much about this in the book "The Art of Learning" by Josh Waitzkin. This is covered more extensively on this site in the section on thin slicing, which deals with the creativity of the unconscious.

Memory facilitation. Memories can be facilitated by circumstances at the time of imprinting and circumstances at the time of recall.

Facilitation at the time of recall. Memories can be facilitated by circumstances at the time of recall, as explained above, by placing people in an environment as similar as possible to the environment where the memory was first imprinted. This is clearly because these peripheral associations of the environment are recorded with the actual memory, although they may not be recalled with that memory. They instead provide more pathways to the memory, enabling easier access to it.

Facilitation at the time of imprinting. Memories can be facilitated by circumstances at the time of imprinting in many different ways, but all of them depend on the amount of attention being paid to the information. This may be interpreted as follows: if we pay attention, various associations are formed between this incoming information and information already residing within our heads. These associations give the new information meaning. But this is not all that happens. Other, usually weaker associations are also formed with other information present, although not paid specific attention to. These peripheral associations form a context for the information to be memorized, which seems to take place on an unconscious level. These are the peripheral associations, mentioned above, that provide extra door handles for opening up memories.

Memory duration. Memories can last minutes, days, months, years or a lifetime. How long memories last depends on how often they are used and how elaborately they are connected or linked to other memories. Memory experts tend to think of these as two different processes, but is this necessarily the case? We know, for instance, that if connections are not used they disconnect and the cells involved tend to die off. Memories with lots of connections would also be more likely to be accessed in a search. So it could be said that the amount of elaboration increases the possibility of use. In any case it is clear that these two processes are inextricably bound together. The amount of elaboration increases the possibility of use, and use determines whether the associations remain elaborate or die off.

Elaborate encoding. Researchers have called this elaboration of associations elaborate encoding. Elaborate encoding is all about meaning. That is to say, the more associations or connections there are to other information, the more meaning, and thus the more easily the information is memorized. Most of this elaborate encoding takes place in the first few moments of processing information into memory, and this site holds that it is the most important consideration in memory. Elaborate encoding is accomplished in two quite different ways.

Effortful attention. Elaborate encoding can be accomplished through effortful attention, where the person endeavors to pay attention and thus remember something. If one tries to accomplish memory imprinting by making an effort to pay attention, one will tend to fail after about ten minutes, depending on how boring the information is. On top of this, effortful attention is easily lured away by sufficient distraction.

 

Effortless attention, automatic processing. On the other hand, interest can focus attention automatically and effortlessly. This in turn causes elaborate encoding to be automatically imprinted as memory. In his book "Brain Rules" John Medina gives an example of automatic processing as follows:

"One type of encoding is automatic, which can be illustrated by talking about what you had for dinner last night, or The Beatles. The two came together for me on the evening of an amazing Paul McCartney concert I attended a few years ago. If you were to ask me what I had for dinner before the concert and what happened on stage, I could tell you about both events in great detail. Though the actual memory is very complex (composed of spatial locations, sequences of events, sights, smells, tastes, etc.) I did not have to write down some exhaustive list of its varied experiences, then try to remember the list in detail just in case you asked me about my evening. This is because my brain employed a certain type of encoding scientists call automatic processing. It is the kind occurring with glorious unintentionality, requiring minimal attentional effort. It is is very easy to recall data that has been encoded by this process. The memories seem bound all together into a cohesive, readily retrievable form. 

Types of interest. This interest that supports automatic processing comes in a number of different flavors.

  1. Intellectual Interest. Intellectual interest occurs where similar information has brought pleasure previously, so we anticipate that this information will also bring pleasure, which focuses our attention on it.

  2. Emotional Interest. Emotional interest occurs where some strong emotion rivets our attention on some event or episode. This can also be used as a way of re-enabling effortful attention.

  3. Surprise Interest. Surprise interest occurs where something unusual or unexpected rivets our attention on some event or episode. This can also be used as a way of re-enabling effortful attention.

  4. Story Interest. Story interest occurs where the information comes in the form of a story. The interconnectedness of a story provides its own way of automatically focusing attention. Stories have been used to imprint memories long before recorded history.

  5. Simplicity Interest. Simplicity interest occurs where information has been presented in a compressed form, where the gist of some idea or concept has been teased out and conveyed in an understandable way. The brain seems to recognize this gist as having already performed much of its work and favors it with strong focus of attention. It may also be that the gist has by its very nature many handles on it that connect strongly with many associations to information already residing within our heads.  

In memory more means easier to find. John Medina says "The more elaborately we encode information at the moment of learning, the stronger the memory. ...The trick for business professionals, and for educators, is to present bodies of information so compelling that the audience does this on their own, spontaneously engaging in deep elaborate encoding." While associations that enhance the meaning of some memory obviously make it more memorable, other associations that are only peripheral or contextual will also enhance memory retrieval, because they provide more links or handles for opening the memory. The more associations of any sort added to a memory, the easier it is to remember. It follows that the more interesting something is, the more associations are added to it effortlessly and automatically. Makes you wonder why things are so boring at school, doesn't it?

Real-world examples. The more a person focuses on the meaning of the presented information, the more elaborately the encoding is processed. When you focus on information in this way, you are linking it up with all the information already residing in your brain that provides meaning for it. Now, as explained previously, we usually tend not to remember examples so much as the gist of the idea, theory or concept. Despite that, concrete examples are immensely important in forming those ideas, theories and concepts. While we are still trying to understand an idea, theory or concept, a more abstract explanation can be almost meaningless. What we need are some concrete examples to ground the information in the real world. How can this be done? John Medina puts it like this:

"How does one communicate meaning in such a fashion that learning is improved? A simple trick involves the liberal use of relevant real-world examples embedded in the information, constantly peppering main learning points with meaningful experiences. This can be done by the leaner studying after class or, better, by the teacher during the actual learning experience. This has been shown to work in numerous studies. ...Providing examples is the cognitive equivalent of adding more handles to the door. Providing examples makes the information more elaborative, more complex, better coded, and therefore better learned."

It may well be that once a central core concept (the gist) has been formed, concrete examples do not need to be constantly referred to in working memory and can sink into unconsciousness, only to be retrieved from long-term memory when needed.

Introductions. In the movie business they say that if they haven't hooked you into the story in the first three minutes of the opening credits, the movie will be a financial failure. In any kind of presentation of information, the first few minutes are where you have to grab the audience's attention. In his book "Brain Rules" John Medina says:

"Introductions are everything. As an undergraduate, I had a professor who can thoughtfully be described as a lunatic. He taught a class on the history of the cinema, and one day he decided to illustrate for us how art films traditionally depict emotional vulnerability. As he went through the lecture, he literally began taking off his clothes. He first took of his sweater and then, one button at a time, began taking of his shirt down to his T-shirt. He unzipped his trousers, and the fell around his feet, revealing thank goodness, gym clothes. His eyes were shining as he exclaimed, 'You will probably never forget now that some films use physical nudity to express emotional vulnerability. What could be more vulnerable than being naked?' We were thankful he gave us no further details of his example. ...If you are a student, whether in business or education, the events that happen the first time you are exposed to a given information stream play a disproportionately greater role in in your ability to to accurately retrieve it at a later date."

Retrieval of memories. John Medina in his book "Brain Rules" points out that retrieval of memories is also conceived of as happening in two different ways: the library model and the crime scene model.

  1. Reproductive retrieval. John explains the library method of retrieval as follows: "In the library model, memories are stored in our heads the same way books are stored in a library. Retrieval begins with a command to browse through the stacks and select a specific volume. Once selected, the contents are brought into conscious awareness, and the memory is retrieved. This tame process is sometimes called reproductive retrieval."

  2. Reconstructive retrieval. John explains the crime scene method of retrieval as follows: "The other model imagines our memories to be more like a large collection of crime scenes. Retrieval begins by summoning the detective to a particular crime scene, which invariably consists of a fragmentary memory. Upon arrival, Mr. Holmes examines the partial evidence available. Based on inference and guesswork, the detective then invents a reconstruction of what was actually stored. In this model, retrieval is not the passive examination of a fully reproduced, vividly detailed book. Rather, retrieval is an active investigative effort to recreate the facts based on fragments of data."

Decay and muddling of memories. Although it is believed we use both of the above methods in retrieving memories, it is fairly clear that long-term memory is retrieved mostly by reconstructive retrieval. In fact, really accurate, detailed, reproductive retrieval is usually only good for a few days. This site holds that we should not be surprised by this state of affairs. We should expect memories to become damaged over time. We should expect bits of information to become lost. We might expect the input of a particular sense to disappear from a particular memory. We should expect bits of information to be deleted by the brain because they are not used or seem unimportant. We should expect memories to become mixed with other similar memories. We should expect the brain to insert made-up information into old, damaged memories in order to make them make sense. Of course, as explained earlier on this page, the more a memory is used, the better the chance of it not being damaged over time (short of massive damage to whole areas of the brain). On the other hand, mere recall does not prevent memories becoming mixed up. Everything appears to break down over time, so why not memories? It has been found, however, that memories reactivated at fixed, spaced intervals tend to resist this deterioration.

Sensory memories. All memories are made up of elements of information coming from all of our body's different senses. If our brain is using the reconstructive method to retrieve a memory, then obviously, if the memory includes information from as many senses as possible, there is a greater chance of the memory being reconstructed in a more reliable manner. There are simply more clues to what the memory was originally. Not only does more sensory involvement mean more accurate memories, it also creates more pathways to the memory and thus a more durable memory. This is true for both reproductive and reconstructive retrieval.

  1. Taste/Gustatory. Compared to most other animals, taste in human beings is a very poor sense indeed. While taste can contribute pathways for finding memories and contribute to episodic memory, it provides us with very little useful information unless we train our palates.

  2. Smell/Olfactory. Smell is the oldest, most primitive sense in the brain. Of all the senses it is the only one that is processed directly, without first being mixed with all the other senses. Like taste, it is very poor in humans, providing us with little in the way of useful information. Nevertheless, smell provides extremely strong cues for evoking memories.

  3. Audio/Echoic. Because of language, and the fact that most of what is meaningful to us is necessarily understood and communicated in linguistic form, hearing must be essential in encoding any memory, but especially so in semantic memory.

  4. Visual/Iconic. Vision is the king of the senses. In humans visual processing takes up half of the brain's resources. A visual image is better remembered than a sentence about the same thing. Memories that include images have a much better chance of being remembered than memories that do not include images. We remember best through pictures not through written or spoken words. Animated images are better remembered than still images. 

  5. Touch/Enactive. Touch is used in two different ways in remembering. We use it in declarative memory, where we can speak about how things felt. Like taste and smell, its main use in memory is in awakening memories, although it can convey a fair amount of information if we pay attention to it. Touch is also used in the hidden non-declarative memories, which involve the feeling of body movement, balance, and the feelings in our muscles as they work and perform actions.

Repetition or Iteration in memory. In his book "Brain Rules" John Medina makes special mention of repetition as being important in making memories more enduring and stable. This site takes the position that this could in fact be very misleading. It seems as if this encouragement of repetition could lend itself to activities that are not conducive to memory improvement at all. We are talking about two activities that, though related, are in fact quite different. Both recall and learning can be repeated.

Repetition in recall. Repetition in recall could simply be recalling the information for no purpose, or for the purpose of rehearsing it so it will be easily activated for an exam. How effective this might be in consolidating memory is difficult to estimate. This site is unaware of any studies showing that recalling a memory while making an effort not to add new information is still effective in making that memory more enduring and stable.

Iteration in recall. However, there is no doubt that making use of a memory does in fact cause it to be recalled, and in the process makes the memory more enduring and stable. When we use the information we have memorized to solve some problem or complete some task, the information has to be recalled, but using it also links the memorized information to new information, in the form of a concrete example of how the information works and how it is useful. When this happens the information becomes hugely more elaborated and yet more clearly understood as to how it functions in reality. In this process the memory is changed, and for the better. It becomes an improved, clearer, more understandable iteration of its previous self.

Repetition in relearning. Repetition in learning could be rereading the same information, rehearing the same information or rewatching the same information. It can be shown that attempting to do this without taking in any new information can be quite detrimental to remembering. For a start, it is boring. And anything that is boring is very difficult to pay attention to. The brain has already memorized this material. Why would our brains help us pay attention to information already memorized? Well, they wouldn't, would they? So here you are, forcing yourself to reread information you already know, listening to a lecture you have already heard, or rewatching a presentation you have already seen. Is your memory going to become more durable and stable? Well, maybe, if your brain lets any of that information be attended to. But maybe not. Maybe it just makes the learning task longer, more difficult and more unpleasant. This kind of effort can be shown to be very similar to taking in information without meaning to tie it together. There is a great deal of research showing that this type of rote learning is only effective for very short periods of time, after which the material is summarily forgotten.

Iteration in learning. The key to repetition is to be found in elaboration. If, when we reread a text, we learn something new that we missed or misunderstood when we read it previously, this surely makes all the difference. If, when we reread a text, we make new connections and new associations, our interest and attention are easily maintained. The old information is somewhat activated as the new information is connected to it, so it is sort of repeated and sort of not repeated. This is what can be called a memory iteration. Every time the memory becomes active it changes, because each time new information is added to it.

The same is true if we listen to a tape of a lecture we recorded. The second time through we are actually hearing a different lecture. We are hearing the parts of the lecture that we missed when we first heard it. Our attention is not on the old information we have already memorized but on the information we missed and how it fits together with that information already memorized. The lecture comes together better, it connects better with what we already know, and the information we have now memorized is an extended iteration of what we had remembered previously.

When watching a presentation a second time, the same process of iteration occurs. When you watch a movie for a second time you should see a different movie. You should pick up on bits you missed. In the same way, watching a presentation for a second time you should experience connections to what you have already memorized that you missed the first time through, or experience internal connections within the presentation that you did not connect together before. The memory of the presentation is both more complex and simpler than it had been previously. It is an improved iteration of its former self.

The memory paradox. More information means better memory. It seems, at first sight, that more information should be more difficult to remember than less information, but such is not the case. At first it seems counter-intuitive. However, once we understand how memory works, it does make sense. First of all, as explained above, more information means more pathways linking to the memory and thus more ways of reaching the memory when we are trying to find it. It is also true that the more information we have memorized about a subject, the more that information can be compressed into an abbreviated or symbolic form that stands for that concept. More information also allows larger ideas or theories to be compressed in the same way into core simplifications, or gist as John Medina likes to call it. Basically, the more information attended to at the initial exposure, the better and more memorable the memory. Likewise, the more new information added to that memory in subsequent interactions, the more stable and enduring the memory.

What else helps improve memories?

  1. Interest. Anything that helps create interest greatly improves memory because it enables effortless attention.

  2. Use all senses. Paying attention to information coming from all the senses enables encoding to be much more elaborate and thus greatly improves the likelihood of a memory enduring. It also creates many more avenues of reaching the memory when attempting to recall it.

  3. Examples. Concrete real world examples are best for anchoring memorized information in reality, but even abstract examples are useful in elaborating information and thus making it more memorable.

  4. The hook. Any informational sequence must coax you into being interested in it in the first three minutes if an enduring memory is to be encoded. Such initial interest is essential to how elaborately any memory will be encoded and thus how enduring that memory will be.

  5. The ten minute limit. When making an effort to pay attention to something one is only vaguely interested in, one is likely to fail after about ten minutes. John Medina found it very effective to break up his lectures into 10-minute modules of compressed, essential or core concepts. After each module he would woo back the interest of his audience with a story or an example that would arouse strong emotion or surprise. This, he found, would keep them going for the next ten minutes.

  6. Discussion and reflection. A great deal of research shows that thinking about or talking about an event immediately after it has occurred greatly enhances the memory of that event. It enhances the durability of the memory, its accuracy and how detailed it is. John Medina says: "This tendency is of enormous importance to law enforcement professionals. It is one of the reasons why it is so critical to have a witness recall information as soon as is humanly possible after a crime."

  7. Fixed, spaced intervals. Repetition works best in fixed, spaced intervals in conjunction with elaboration to form expanding iterative memories. John Medina says: "Deliberately re-expose yourself to information more elaborately, and in fixed, spaced intervals, if you want the retrieval to be the most vivid it can be."
