9-1a Memory and the Continuum of Information Processing
Take a minute and write down the five most important things you need to remember today. How would your life be affected if you couldn’t remember these things?
Cognitive psychologists see memory as part of a continuum of information processing that begins with attention, sensation, perception, and learning, which we explored in previous chapters, and progresses to the use of stored information in thinking, problem solving, language, and intelligent behavior, which we discuss in Chapter 10 (see Figure 9.1). Information flows in both directions along this continuum, leading to the bottom-up and top-down processing we observed in Chapter 5. Memories of the characteristics of Dalmatian dogs, for example, are gained by learning about Dalmatians through experience with them. These memories helped you identify the photograph of one in Chapter 5 and should help you interact with one appropriately (thinking and problem solving).
Figure 9.1 The Information Processing Continuum.
Memory is located on a continuum of information processing that flows both from the bottom up and from the top down. We use our memories of Dalmatian dogs to recognize one in an ambiguous photograph and interact with a new one we happen to meet.
Illustration: top to bottom: From Richard L. Gregory, “The Medawar Lecture 2001 Knowledge for vision: Vision for knowledge,” Phil. Trans. R. Soc. B 2005 360, 1231–1251, by permission of the Royal Society; anetapics/ Shutterstock.com; Steve Smith/Getty Images
Memory can be divided into three steps: encoding, storage, and retrieval. Encoding refers to the process of acquiring information and transferring it into memory. In Chapter 5, we described how the sensory systems translate or transduce electromagnetic energy, sound waves, pressure, and chemical stimulation into action potentials that can be processed by the nervous system. These transduced signals can then be encoded in different forms in memory, such as visual codes, acoustic or sound codes, or semantic or meaningful codes. For example, when you meet your friend on the way to class, you encode her appearance visually, the sound of her voice acoustically, and how much you value her friendship semantically.
Encoded information needs to be retained, or stored. As we will see later in this chapter, storage of memories in the brain can last anywhere from fractions of a second (sensory memory) to several seconds (short-term and working memory) to indefinitely (long-term memory). Storage in the mind differs from storage of information in your computer in one important respect. Computers store encoded information in reliable and unvarying ways, such as putting socks in a drawer or papers in a file. What you retrieve is identical to what was stored. In contrast, human memory does not generate exact records. Instead, bits of information are stored that are later reconstructed into usable memories. Although this process typically results in a useful memory, errors and distortions can occur.
The culmination of the memory process is the retrieval of stored information. As you have no doubt experienced, storing information is no guarantee that you can find the information again when you need it. Later in this chapter, we will discuss in detail the ways that memory retrieval can fail. Two of the most common causes of retrieval failure are interference and stress. For example, we seem to know all the answers when watching Who Wants to Be a Millionaire? in the safety of our homes, but when put on the spot, we might be lucky to remember our names. Fortunately, understanding the strengths and weaknesses of the brain’s memory functions may be one of the more practical topics for students that we cover in psychology. Once you understand how memory works, you will have an easier time ensuring that your study habits maximize your performance in school.
9-1b Memory Provides an Adaptive Advantage
Evolutionary psychologists view memory as “a component of a neural machine designed to use information acquired in the past to coordinate an organism’s behavior in the present” (Klein, Cosmides, & Tooby, 2002, p. 308). The evolution of memory allowed animals to use information from the past to respond quickly to immediate challenges, a monumental advance in the ability to survive. Instead of reacting to each predator or source of food as a new experience, an animal with the ability to remember past encounters with similar situations would save precious reaction time.
As we discussed in Chapter 3, useful adaptations often come with a price, such as the unwieldy antlers of the male deer that require energy to build yet help the deer fight successfully for mates. The development of a memory system is no exception to this rule. Forming memories requires energy. For memory systems to flourish within the animal kingdom, the survival advantages needed to outweigh the energy costs. Given the 81 years or so of human life expectancy, it would be difficult to demonstrate the energy costs of memory in people, but we can observe the costs in a simpler organism, the fruit fly (Drosophila), which has a life expectancy of only 10 to 18 days. Fruit flies are capable of learning classically conditioned associations between odors and electric shock (see Chapter 8). After experiencing pairings of odor and shock, the flies fly away from the odor 24 hours later (Mery & Kawecki, 2005). However, to form memories about odor and shock, the flies must use more energy than they use for activities that do not require memory. The flies that remembered how to avoid shock died about 4 hours earlier than flies that did not form memories. Nearly all animals have the capability of forming memories despite the high-energy costs, which is a testament to memory’s benefits to survival.
How Are Memories Processed?
Atkinson and Shiffrin (1968, 1971) proposed one of the most influential models of memory. According to this classic information processing model, data flow through a series of separate stages of memory (see Figure 9.2). Contemporary cognitive psychologists have continued to modify this original model while retaining the basic ideas that memories can be stored for different lengths of time and that control processes influence the system.
To illustrate the flow of information in this model, let’s consider what happens when you use your memory to complete a specific task—remembering a phone number provided to you by a new acquaintance.
While settling into your seat before class, you converse with a classmate about studying together for an upcoming exam. Your classmate gives you her cell phone number so that you can arrange a good time to meet. This incoming information, the auditory signals of your classmate’s voice in this case, is processed in the first stage of the information processing model, the sensory memory. This stage holds enormous amounts of sensory data, possibly all information that affects the sensory receptors at one time. However, the data remain only briefly, usually for a second or less, lasting just as long as the neural activity produced by the sensation continues. The information held in sensory memory has been compared to a rapidly fading “echo” of the real input. You can demonstrate the duration and “fade” of sensory memory information by rapidly flapping your hand back and forth in front of your eyes. When you do this, you can “see” where your fingers were at a previous point in time.
Encoding failure is one of the most common memory problems faced by students. If we don’t encode information because we’re daydreaming during a lecture, there will be no memory of the information to retrieve later.
Sensory input is translated or transduced into several types of codes or representations. A representation of a memory refers to a mental model of a bit of information that exists even when the information is no longer available. Visual codes are used for the temporary storage of information about visual images (Baddeley, Eysenck, & Anderson, 2009). Haptic codes are used to process touch and other body senses. Acoustic codes represent sounds and words. Input from different sensory systems remains separate in sensory memory, and although these different sensory streams are processed similarly, there are also some differences. Acoustic codes, or echoic memories, last longer than visual codes, or iconic memories, possibly to meet our needs to hear entire words and phrases before we can understand spoken language.
Both brains and computers feature the ability to store memories with one critical difference: The computer stores exact copies of data, but the brain does not. Instead, the brain stores bits of data that are reconstructed later for use. This photo shows the high-security computer memory storage at the Swedish Bahnhof, a facility located 100 feet underground in a concrete bunker. The facility manages servers for many secretive organizations, including WikiLeaks.
George Sperling demonstrated the duration of iconic memories by testing recall for matrices of 12 to 16 letters that were presented for as few as 15 milliseconds. Participants were usually able to identify four or five letters. However, the process of verbally instructing participants to do this task takes time, during which the sensory memory for the matrix fades rapidly. When different tones were used to signal which row of the matrix to recall instead of verbally asking for a response, participants demonstrated recall for as many as 12 of the original 16 items (Averbach & Sperling, 1961; Sperling, 1960). If the tone sounded less than a quarter of a second after the presentation of the matrix, participants could usually recall all four letters in the signaled row. After a delay of a quarter of a second or more, recall fell to about one letter (see Figure 9.3).
Why do we have a sensory memory? Only a small subset of this incoming data is processed by the next stage. It is likely that we need to collect incoming data until they make enough sense to process further. The first number in your classmate’s phone number might be simple (2), but it still contains two speech sounds (“t” and “oo”) that must be combined to make sense.
Encoding and storing memories do not guarantee that they can be retrieved when you need them. Stress can make retrieving even the simplest of memories surprisingly difficult.
A tiny amount of information in the sensory memory moves to the next stage of the information processing model, short-term memory (STM), for further processing. If you focus on your new friend’s phone number, the information moves from sensory memory to short-term memory. Consider all the other information that might be processed by your sensory memory at the same time. Perhaps your professor’s first Microsoft PowerPoint slide has appeared on the screen, the heater in the classroom cycled on, and your stomach growled because you didn’t have time for breakfast. None of these bits of information are processed in short-term memory unless you pay attention to them. If you are distracted by one of these, it is likely that you will need to ask your friend to repeat her number.
Even though processing memory requires energy, the benefits to survival far outweigh the costs. Without memory, this squirrel would be unable to retrieve the acorns stored weeks ago.
Short-term memory, like the sensory memory that precedes it, appears to have remarkable limitations in duration. Without additional processing, information in short-term memory lasts 30 seconds at most (Ellis & Hunt, 1983). In a classic experiment, participants were shown stimuli consisting of three consonants, such as RBP (Peterson & Peterson, 1959). After seeing one of these triplets, participants counted backward by 3s for periods of 0 to 18 seconds to prevent further processing of the consonant triplet. As shown in Figure 9.4, accuracy dropped rapidly. It is likely that the Petersons’ task overestimates the length of time that material is stored in short-term memory. The study’s participants were aware in advance that they would be tested on the items, and despite the distraction of counting backward, they may have made deliberate efforts to retain the triplets in memory.
The classic description of short-term memory viewed this stage as a place to store information for immediate use. As investigations into memory advanced, researchers proposed an adaptation of this model called working memory, shown in Figure 9.5 (Baddeley & Hitch, 1974). Short-term memory and working memory differ in two ways. First, short-term memory involves the passive storage of information, while working memory involves the active manipulation of information. Second, short-term memory was viewed as managing a single process at a time, whereas working memory is more complex, allowing multiple processes to occur simultaneously.
You are probably thinking right now that you know what to do to prevent this loss of information. If you repeat the information over and over, a process known as rehearsal, information stays in short-term memory indefinitely, so long as you don’t think about anything else. While you are entering your classmate’s number into your phone, you can rehearse the number in your short-term memory. However, if your attention is diverted from rehearsing the number when the professor calls on you, the phone number will be gone. The incoming information of the professor’s question pushes the previous data out of the system. Data in short-term memory are easily displaced by new, incoming bits of data. If rehearsing the information has been insufficient for moving it into the next stage, long-term memory, the data will be lost.
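The displacement just described behaves like a fixed-capacity buffer: once the buffer is full, each new item pushes out the oldest one. Here is a toy Python sketch of that idea; the seven-item capacity, the phone number, and the interrupting question are illustrative choices, not parameters taken from the model itself.

```python
from collections import deque

# Toy model: short-term memory as a fixed-capacity buffer in which
# new items displace the oldest ones once the buffer is full.
short_term_memory = deque(maxlen=7)  # Miller's "magical number 7"

# Rehearsing a phone number fills the buffer with its digits.
for digit in "5551579":
    short_term_memory.append(digit)

# An interruption, such as the professor's question, pushes new
# items into the buffer and displaces the earliest digits.
for word in ["what", "is", "encoding", "?"]:
    short_term_memory.append(word)

print(list(short_term_memory))
# → ['5', '7', '9', 'what', 'is', 'encoding', '?']
# The first digits of the phone number have been displaced.
```

Unlike this deterministic sketch, real short-term memory also loses unrehearsed items to decay over time, not only to displacement by new input.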
In addition to limitations of duration, short-term memory is characterized by severe limitations in capacity. George Miller (1956) argued that we can process between five and nine items, or “bits” (such as digits, letters, or words), in short-term memory simultaneously, or in his words, “the magical number 7 plus or minus 2.” More recently, psychologists have set the limit as about four items (Cowan, 2000). Memory capacities vary from individual to individual, and memory tasks appear prominently in standardized tests of intelligence, discussed in Chapter 10. People who enjoy larger than average short-term memory capacity excel at a number of cognitive tasks, including reading (Baddeley, Logie, Nimmo-Smith, & Brereton, 1985).
You might be wondering how short-term memory could be useful, given these limitations. These limitations actually make short-term memory an ideal solution for the tasks that we ask it to complete. The brief duration of short-term memories ensures that room is freed regularly for incoming information. Most tasks for which we use short-term memory require us to search its contents to find the right information. If short-term memory were capable of holding dozens of pieces of information instead of nine or fewer, this search process would be lengthy and difficult. It is also convenient to have a mechanism that allows you to use information and then discard it. You may not wish to devote precious room in your memory to the telephone number of a plumber you need only once or twice. Short-term memory allows us to use information without overburdening our storage capacities.
Nonetheless, it is often desirable to expand our capacity for information in short-term memory. The best way to accomplish this is to redefine what a “bit” of data is by chunking, or grouping, similar or meaningful information together (Miller, 1956). If the last four digits of your friend’s phone number are “one,” “five,” “seven,” and “nine,” she could reduce these four bits to two by saying “fifteen seventy-nine.” Trying to remember the following sequence of letters—FBIIRSCIAEPA—appears to be an insurmountable task. After all, remembering 12 letters lies outside the capacity of short-term memory. The task is greatly simplified by chunking the letters into meaningful batches of common abbreviations—FBI IRS CIA EPA. Now you have only four meaningful bits to remember rather than 12, which is safely within the capacities of short-term memory. Failure to use chunking as a strategy occurs frequently in people with verbal learning disabilities (Koeda, Seki, Uchiyama, & Sadato, 2011). In the absence of chunking, each item is processed as a single, unrelated bit of information, which rapidly overwhelms the capacity of short-term memory.
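The arithmetic of chunking can be made concrete in a short Python sketch. Here the list of known abbreviations stands in for knowledge already stored in long-term memory; the greedy matching loop is purely illustrative and is not a claim about how the brain actually segments input.

```python
# Without chunking, each letter is a separate item in short-term memory.
letters = "FBIIRSCIAEPA"
print(len(letters))  # → 12 items: well beyond short-term capacity

# Chunking depends on knowledge already in long-term memory:
# familiar abbreviations let us group letters into meaningful units.
known_abbreviations = ["FBI", "IRS", "CIA", "EPA"]

chunks = []
remaining = letters
while remaining:
    for abbr in known_abbreviations:
        if remaining.startswith(abbr):
            chunks.append(abbr)
            remaining = remaining[len(abbr):]
            break
    else:
        # No known abbreviation matches: fall back to a single letter.
        chunks.append(remaining[0])
        remaining = remaining[1:]

print(chunks)       # → ['FBI', 'IRS', 'CIA', 'EPA']
print(len(chunks))  # → 4 items: safely within short-term capacity
```

Counting the items before and after grouping shows why the same 12 letters fit comfortably in short-term memory once they are recoded as four familiar chunks.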
Observations that people could manage two short-term memory tasks at the same time led to modifications of this stage of memory (Baddeley & Hitch, 1974). For example, study participants could read a list of numbers and then read a paragraph. This task should quickly overwhelm the limited capacity of short-term memory because reading the paragraph should displace the earlier list of numbers. However, this outcome was not what the researchers observed. Participants had no difficulty remembering the numbers, suggesting that the numbers were stored separately from the words in the paragraph. After further exploration of the types of information that could be maintained separately in short-term memory, four components were proposed (Baddeley et al., 2009): a phonological loop, a visuospatial sketch pad, a central executive, and an episodic buffer.
The phonological loop is the working memory component responsible for verbal and auditory information. As you repeat your friend’s phone number while reaching for your phone, you are using your phonological loop. The visuospatial sketch pad holds visual and spatial information. When you describe the route from your friend’s dorm to your favorite coffee shop where you plan to hold your study session, you use your visuospatial sketch pad to describe the way. The central executive manages the work of the other components by directing attention to particular tasks (Baddeley, 1996). Divided attention, which we discussed in Chapter 5, requires the skills of the central executive. While discussing the route to the coffee shop with your friend (phonological loop), you visualize the route (visuospatial sketch pad), and your central executive parcels out the right amount of attention to allow you to do both tasks well.
The episodic buffer provides a mechanism for combining information stored in long-term memory, which we discuss in the section on long-term memory, with the active processing taking place in working memory. This component helps explain why chunking the string of letters earlier (FBI IRS CIA EPA) is easier than remembering the letters as individual bits of information—FBIIRSCIAEPA. Without information from long-term memory about what FBI and the other abbreviations mean, making these chunks would not provide any advantage.
The final stage of memory in the information processing model is long-term memory. Unlike sensory, short-term, and working memory, long-term memory has few limitations in capacity or duration. We do not run out of room in long-term memory for new data, and information can last a lifetime. The oldest person alive can still recall significant childhood memories and learn new things. Although old memories may become more difficult to retrieve, this process is different from losing them simply because of the passage of time.
Chase and Simon (1973) presented images of chess pieces on chessboards for only 5 seconds to chess masters and people who didn’t play chess. When the images were from real games, the chess masters recalled the placement of the pieces better than the nonplayers because they were able to use their knowledge of chess to chunk the images of the pieces’ locations in short-term memory. When the pieces were placed randomly on the boards, however, the chess masters were unable to use chunking and performed no better than the nonplayers.
Moving Information into Long-Term Memory
In most cases, information moves from short-term or working memory to long-term memory through rehearsal. After seeing your new friend’s number several times as you text her, you memorize it without trying to do so. Rehearsal can be divided into maintenance rehearsal, which means simple repetition of the material, and elaborative rehearsal, which involves linking the new material to things you already know.
Of the two types of rehearsal, elaborative rehearsal is a more effective way to move information into more permanent storage. The benefits of elaborative rehearsal can be explained using the levels of processing theory (Craik & Lockhart, 1972). When we look at written words we want to remember, we can attend to many levels of detail: the visual appearance of the word (font, all caps, and number of letters), the sound of the word, the meaning of the word, and the personal relevance of the word. These characteristics can be placed along a continuum of depth of processing from shallow to deep, with the encoding of the appearance of a word requiring less processing and effort than the encoding of the sound of a word, which in turn requires less processing and effort than the encoding of the meaning or personal relevance of a word. According to the levels of processing theory, words encoded according to meaning would be easier to remember than words encoded according to their visual appearance because encoding meaningfulness produces a deeper level of attention and processing (Craik & Tulving, 1975).
In one study designed to test the levels of processing theory, participants recalled more words when their instructions elicited the encoding of word meanings than when they were instructed to determine more surface features of each word, such as whether it appeared in capital letters (see Figure 9.6) (Craik & Tulving, 1975). In another study, deeper levels of processing were accompanied by more subvocal speech (reminiscent of “talking to yourself”), indicated by measurements of tiny movements in the muscles of speech (Cacioppo & Petty, 1981). However, the levels of processing theory is not very specific about the meaning of deep or shallow processing. How would we apply this approach to evaluate participants’ recall for music, touch, or visual images? Further work on this theory needs to identify what determines depth of processing during encoding.
The accumulated knowledge of a long life, such as this Aboriginal Australian elder’s familiarity with his harsh surroundings, probably meant the difference between life and death for many of our ancestors. There is no evidence that older adults are unable to add new information to their long-term memories or necessarily lose information they have known a long time.
Differences between Working and Long-Term Memory
In addition to not sharing the limitations of duration and capacity found with working memory, long-term memory appears to be unique in other ways.
Differences between working and long-term memories can be seen in classic experiments demonstrating the serial position effect. This phenomenon can be observed when people are asked to learn a list of words and recall them in any order they choose. As shown in Figure 9.7, recall of items takes on a U-shaped appearance when retrieval is plotted as a function of an item’s position in a list during presentation (Murdock, 1962).
Figure 9.7 The Serial Position Effect.
When people are given a list of words to remember and told they can recall the items in any order, the likelihood that a word on the list will be remembered depends on its position in the list. The primacy effect refers to the superior recall for the first words on the list, and the recency effect refers to the superior recall for the last words on the list. The primacy effect probably occurs because people have had more time to place these items in long-term memory. The recency effect probably occurs because these last words still remain in working memory at the time of retrieval. A delay in retrieval erases the recency effect but not the primacy effect.
Superior recall for the first words on the list is known as the primacy effect, which occurs because people have had more time to move these early items into long-term memory. The superior recall for the last words on the list is known as the recency effect, which occurs because these items remain in working memory at the time of recall. The recency effect, but not the primacy effect, disappears if recall is delayed by 30 seconds (Glanzer & Cunitz, 1966). After 30 seconds, items in long-term memory are still available for recall, but items in working memory are long gone.
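The two-store account of the serial position effect can be expressed as a toy simulation. In this sketch, the rehearsal probabilities, list length, and working memory capacity are arbitrary illustrative values, not fitted parameters: early positions get more rehearsal and so a higher chance of reaching long-term memory (primacy), the last few items still sit in working memory at an immediate test (recency), and a delayed test empties working memory, erasing only the recency effect.

```python
import random

def recall_curve(list_length=15, wm_capacity=4, delay=False, trials=10000):
    """Toy two-store model: an item is recalled if it reached long-term
    memory (more likely for early items, which receive more rehearsal)
    or if it still occupies working memory at the time of the test
    (only the last few items, and only when recall is immediate)."""
    counts = [0] * list_length
    for _ in range(trials):
        for position in range(list_length):
            # Primacy: earlier items get more rehearsal, so a higher
            # chance of transfer into long-term memory.
            p_ltm = 0.6 / (1 + position * 0.5)
            in_ltm = random.random() < p_ltm
            # Recency: the final wm_capacity items remain in working
            # memory, unless recall is delayed.
            in_wm = (not delay) and position >= list_length - wm_capacity
            if in_ltm or in_wm:
                counts[position] += 1
    return [c / trials for c in counts]

immediate = recall_curve(delay=False)
delayed = recall_curve(delay=True)
# Immediate recall shows both primacy and recency (a U shape);
# delayed recall keeps the primacy effect but loses the recency effect.
```

Comparing the immediate and delayed curves reproduces the qualitative pattern in Figure 9.7 and the Glanzer and Cunitz finding: the U shape flattens at the recency end when recall is delayed, while the primacy end is unchanged.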
One of the strongest arguments in favor of the separation of working and long-term memory is the occurrence of clinical cases in which one capacity is damaged while the other remains intact. Henry Molaison (the amnesic patient H.M.), whom we discussed at the beginning of this chapter, was able to remember a small amount of information for a few seconds but experienced enormous difficulties when trying to store new information in his long-term memory. In another case study, a patient with another type of brain damage appeared to have the opposite problem. Patient K. F. had normal long-term memory, as indicated by his ability to form new memories. However, his working memory was seriously impaired (Shallice & Warrington, 1970). When asked to recall a list of digits (a typical working memory task), he could remember only one or two digits, a substantial deviation from the typical ability to recall five to nine digits.
What Are the Different Types of Long-Term Memory?
Long-term memory can be divided into several categories (see Figure 9.8). These categories not only help us describe memory more precisely, but also represent activity in different parts of the brain.
Figure 9.8 Types of Long-Term Memory.
Long-term memory can be divided into several categories, beginning with a distinction between declarative (also known as explicit or conscious memories) and nondeclarative (also known as implicit or unconscious memories). Declarative memories are further divided into semantic and episodic memories, which are combined when we use autobiographical memories. Examples of nondeclarative memories are procedural memories, classical conditioning, and priming.
Long-term memory can be divided into declarative, or conscious, memories and nondeclarative, or unconscious, memories. Declarative memories are easy to “declare,” or discuss verbally. Declarative memories are also called explicit memories because they are accessed in a conscious, direct, and effortful manner. In contrast to declarative memories, nondeclarative memories are difficult to discuss. For example, classical conditioning, which we examined in Chapter 8, produces nondeclarative memories. We might find it difficult to explain to another person why we get nervous right before an exam or dislike a food we ate once before becoming ill. Nondeclarative memories are also called implicit memories because they affect our behavior in subconscious, indirect, and effortless ways. We are aware of their outcomes (“I don’t want to eat that food”), but we are usually unaware of the information processing that led to that outcome.
Declarative memories are further divided into semantic and episodic memories (Tulving, 1972, 1985, 1995). Semantic memory contains your store of general knowledge in the form of word meanings and facts. Using your semantic memory, you can answer questions such as “Which NFL team won last year’s Super Bowl?” or “What is a churro?” Episodic memory is a more personal account of past experiences.
We can distinguish between semantic and episodic memories along four dimensions: the type of information processed, the organization of the information in memory, the source of the information, and the focus of the memory (Williams, Conway, & Cohen, 2008). Semantic memories contain general knowledge about the world, whereas episodic memories include more specific information about events, objects, and people. Semantic memory, as we will see later in this chapter, is organized according to categories. For example, we have a category for birds that contains our semantic knowledge of birds. Episodic memory, in contrast, is organized as a timeline. To answer a question from episodic memory, we often use time as a cue: “When I was in the eighth grade, my family took a vacation at the beach.” Semantic knowledge originates from others, such as your professors, or from repeated experience: “The ocean is colder in California than in Florida because every time I’ve gone to the beach in either location, this is what I have observed.” An episodic memory can result from a single, personal experience. Finally, the two types of memories serve different purposes. Semantic memory provides us with an objective understanding of our world, whereas episodic memory provides a reference point for our subjective experience of the self.
You might have semantic memories that tell you about the characteristics of Labrador retrievers and episodic memories about the day you chose your first puppy. Your autobiographical memories combine these two elements to give you an account of your life. A semantic element of your autobiographical memory might be that your dog’s parents were champions. The episodic elements of your autobiographical memory for the event might include memories of your puppy’s warmth and the happy way you felt that day.
Despite the differences just outlined, semantic and episodic memories often overlap. You could form an episodic memory of where you were when you stored a specific semantic memory. A colleague was introduced to a student’s parents as follows: “Mom, Dad, this is Professor Jones. He’s the one I told you about who taught us that rats can’t barf.” Not only did the student retain a semantic memory about rat behavior (which incidentally is true and is relevant to understanding the classical conditioning of taste aversion in rats), but the student correctly retained an episodic memory of when and where the fact was learned.
We can see that semantic and episodic memories interact dynamically to provide a complete picture of the past. Our semantic knowledge of the relative temperatures of the Pacific and Atlantic Oceans depends on the personal experiences of either hearing the fact in a geology classroom or vacationing on both coasts of the United States. At the same time, we use our semantic knowledge to interpret our episodic memories. Without semantic knowledge of the meanings of the words ocean, temperature, Atlantic, and Pacific, we would be unable to organize our experience into a coherent conclusion—the ocean is colder in California than in Florida.
This blending of semantic and episodic memories characterizes autobiographical memories (Williams et al., 2008). Autobiographical memories can contain factual, semantic aspects of personal experience without episodic aspects. You might know you were born in Pasadena, California, but you would not have any memory of being born. However, your autobiographical memories of Pasadena might also include episodic memories of attending the Rose Parade on New Year’s Day as a child, complete with images of the sights, sounds, and emotions of that experience.
A small number of people have been identified as having a rare condition known as highly superior autobiographical memory (HSAM) (Ally, Hussey, & Donahue, 2013; LePort et al., 2012; Parker, Cahill, & McGaugh, 2006). One individual with HSAM showed nearly perfect recollection of dates chosen at random from the time he was 11 years old. His recall was corroborated by entries in his grandmother’s diary, interviews with family members, his medical records, and historical facts about his hometown. Individuals with HSAM show superior recall for public events, as well as personal experience, but otherwise perform about the same as typical controls on other tests of memory. HSAM is associated with physical differences in networks in the brain that are associated with autobiographical memory (LePort et al., 2012).
Aurelien Hayman can recall detail from random dates in his past. When asked in 2012, Aurelien accurately recalled that October 1, 2006, was a cloudy day, he listened to “When You Were Young” by the Killers, asked a girl out and was turned down, wore a blue T-shirt, and experienced a power outage at his home. This rare type of memory, known as highly superior autobiographical memory (HSAM), has been the subject of only 20 published case studies.
Earlier, we defined nondeclarative memories as unconscious or implicit memories that are difficult to verbalize (Smith & Grossman, 2008). In other words, nondeclarative memories influence our behavior without our conscious awareness of having used a memory. Using nondeclarative memories, you are able to do something, such as using roller blades for the first time in years, without really knowing how you are doing it.
Three types of nondeclarative memories have been studied in detail: classical conditioning (discussed in Chapter 8), procedural memories, and priming. Procedural memories are also called skill memories because they contain information about how to carry out a skilled movement, such as driving a car. Priming occurs when exposure to a stimulus changes a response to a subsequent stimulus.
Procedural memories tell us how to carry out motor skills and procedures and are especially difficult to describe in words. Consider the differences between showing somebody how to use scissors and writing an essay about how to use scissors. Which would be easier? Explaining in words how to use scissors, particularly for a person who had never seen a pair of scissors, would be quite a challenge. In contrast, few of us experience difficulties demonstrating procedures (Squire, 1987).
One great advantage of procedural memories is their ability to automate our performance. When a novice driver first learns to operate a car with a manual transmission, significant conscious effort is required to remember the correct sequence—clutch, gas, shift. Once the skill is well learned, the driver is far less aware of this sequence; the person “just drives.” When procedures become automatic, we are free to direct our limited capacities for divided attention to other aspects of the task. A musician who has mastered the notes in a difficult piece can direct attention to the finer points of expression and phrasing. Unfortunately, if a procedure is learned incorrectly, such as a bad golf swing, considerable effort must be expended to fix the swing, which slows performance. The golfer must put in sufficient practice time to make the new, correct swing automatic.
It might have been years since this grandfather last put on a pair of ice skates, but to help his granddaughter learn to skate, he’s willing to get back out on the ice. He might be a little wobbly at first, but procedural memories for skilled movement are persistent. He’ll quickly be skating as if he’d done it every day.
age fotostock/age fotostock/Superstock
Priming, or the change in our response to a stimulus because of pre-exposure to related stimuli, explains many everyday effects of familiarity. People rate advertisements they have seen previously more positively than advertisements they have not seen, even when they can’t consciously remember seeing them (Perfect & Askew, 1994). The ease with which our attitudes can be manipulated without our awareness is unsettling.
The distinction between nondeclarative procedural memories and declarative memories is one reason it is so challenging to be a computer help desk technician who must talk people through a repair procedure over the telephone. It would be easier to demonstrate how to fix the computer, which is why some software companies prefer to have the technician take over the computer remotely and apply the needed fixes as opposed to verbalizing procedures for the caller to undertake. It also explains why few star athletes go on to be good coaches. Performing a task is not the same as talking about it.
Priming is often studied using a lexical decision task (see Figure 9.9). In this task, a participant views two rapidly presented stimuli and must decide whether the stimuli are both real words (such as fork-roof) or not (such as mork-loof). Reaction time, in the form of hitting one key for real words and another for nonwords, is the dependent variable. When the two stimuli are related words (e.g., doctor–nurse), reaction time for deciding that nurse is a real word is faster than when the stimuli are unrelated words (e.g., butter–nurse) (Meyer & Schvaneveldt, 1971).
Figure 9.9The Lexical Decision Task.
Priming can be investigated within the lexical decision task, in which participants are asked to judge whether two words appearing together are both real words or not (a). Nonreal words are made by switching one letter from a real word, like plame from flame or lork from fork. Pairs of real words are either related to each other by meaning or not. The participants’ reaction time in this task (b) demonstrates that participants respond faster to related word pairs (bread–butter) than to unrelated word pairs (nurse–butter). These results support the idea that we organize items in long-term memory based on their meaning.
How Is Long-Term Memory Organized?
Librarians use coding systems to group books on the shelf. Can we identify the systems that we use to organize our memories? Memories that share characteristics are more closely linked in long-term memory than memories that show little overlap among their various features.
Connectionism views the mind as a network made up of simpler units or concepts. Connectionist models of memory suggest that thinking about one concept automatically leads to thinking about related concepts and their properties.
A spreading activation model (Collins & Loftus, 1975) recognizes that people form their own organizations in memory based on their personal experiences (see Figure 9.10). For example, if you ask people to report the first words that come to mind when they see the word red, you will get many different answers.
Figure 9.10Spreading Activation.
According to the spreading activation theory, thinking about red will activate nearby concepts (orange, green, and fire) faster than more distant concepts (sunsets and roses). This network suggests that a person would answer the question “Is a bus a vehicle?” faster than the question “Is an ambulance a vehicle?”
The spreading activation model also suggests that concepts differ in the strength of their connections. For example, even though avocados and oranges are both examples of the concept “fruit,” most people have a closer link in their memories between “orange” and “fruit” than between “avocado” and “fruit.” If asked to verify the statements “An avocado is a fruit” and “An orange is a fruit,” most people would respond faster to the second.
The spreading activation model does an excellent job of accounting for the results of the lexical decision experiments described earlier. Using the spreading activation model, the first word activates a concept. This activation spreads to connected concepts and properties. For closely related concepts such as “doctor” and “nurse,” activating the “doctor” concept would activate the “nurse” concept even before the person sees the word nurse, allowing a quick decision to be made as soon as the word appears. In contrast, with unrelated words like butter and nurse, the “nurse” concept would not be activated until the word actually appears, resulting in a relatively slower decision.
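The retrieval-time prediction of spreading activation can be sketched as a toy weighted network. This is our own illustration, not a model from the studies cited: the concepts, association weights, and millisecond values are all hypothetical placeholders chosen to show the logic.

```python
# Toy spreading-activation network. Nodes are concepts; edge weights
# (0-1) stand for association strength. All values are illustrative.
NETWORK = {
    "doctor": {"nurse": 0.8, "hospital": 0.7},
    "butter": {"bread": 0.8, "yellow": 0.5},
    "nurse":  {"doctor": 0.8, "hospital": 0.7},
}

def activation_from(prime, target):
    """Activation the prime passes to a direct neighbor (0 if unrelated)."""
    return NETWORK.get(prime, {}).get(target, 0.0)

def decision_time_ms(prime, target, base_ms=600, speedup_ms=150):
    """Pre-activation from the prime shortens the lexical decision."""
    return base_ms - speedup_ms * activation_from(prime, target)

print(decision_time_ms("doctor", "nurse"))   # related pair: faster
print(decision_time_ms("butter", "nurse"))   # unrelated pair: slower
```

Because “doctor” pre-activates “nurse” before the word even appears, the related pair yields the shorter decision time, mirroring the lexical decision results described above.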
9-4bInferences: Using Schemas
Frederic Bartlett (1932/1967) observed that memory does not work like a video recording of events. He read long, involved stories to study participants and then asked them to recall the stories 20 hours later. Not too surprisingly, the recalled stories were shorter and had less detail than the original story. Somewhat more surprisingly, participants added features to the recalled stories that had not appeared in the original. These additions were not random. In most cases, the details added by participants fit the theme or meaning of the story.
We are more likely to remember details that are consistent with our schemas than those that are not. We will remember books in the professor’s office and brushes and canvases in the artist’s studio.
keith morris/Alamy Stock Photo
Bartlett concluded that memory storage does not occur in a vacuum. When we encounter new information, we attempt to fit the new information into an existing schema, or set of expectations about objects and situations. Details that are consistent with our schemas are more likely to be retained, whereas inconsistent details are more likely to be left out. Details may be added in memory if they make a story more consistent and coherent. For example, you are more likely to recall having seen books in a photograph of a professor’s office than in a photograph of a farmer working in the fields. Even if no books appeared in the professor’s office, you might recall seeing some, because most professors have offices filled with books. In Chapter 11, we will explore the development of schemas and the formation of concepts during childhood.
Schemas and False Memories
Frederic Bartlett observed that using schemas to frame our memories can lead us to add details that improve a memory’s consistency and coherence. In other words, we can “remember” things that did not occur because they fit our schemas. We can demonstrate this “fill in the blank” tendency in memory by asking you to memorize some word lists.
1. Read through both lists of words in order, and try to remember as many words as you can.
|sheets, pillow, mattress, blanket, comfortable, room, dream, lay, chair, rest, tired, night, dark, time
|door, tree, eye, song, pillow, juice, orange, radio, rain, car, sleep, cat, dream, eat
2. Without looking back at the list, write down as many words as possible from List 1 in any order.
Check your list of recalled words for any that did not appear in List 1. Pillow and dream appear in both lists, but sleep appears in List 2 only. Many people insert sleep into their List 1 responses (a false memory) because so many of the words on List 1 fit the sleep schema. It is unlikely that you will insert words into your recalled list that are not related to the schema of sleep. See whether you can construct some lists on your own that produce other false memories. We return to the issue of false memories and retrieval later in this chapter.
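If you want to score your recall sheet systematically, the short sketch below (our own illustration, not part of the original demonstration) separates correct recalls from intrusions, such as the schema-consistent lure sleep:

```python
# Score a recall sheet from the false-memory demonstration above.
# The studied words are List 1 from the text.
LIST_1 = {"sheets", "pillow", "mattress", "blanket", "comfortable",
          "room", "dream", "lay", "chair", "rest", "tired", "night",
          "dark", "time"}

def score_recall(recalled):
    """Split recalled words into correct recalls and intrusions."""
    recalled = {w.lower() for w in recalled}
    correct = recalled & LIST_1
    intrusions = recalled - LIST_1  # words never studied, e.g. the lure "sleep"
    return correct, intrusions

correct, intrusions = score_recall(["pillow", "dream", "sleep", "night"])
print(sorted(intrusions))  # the schema-consistent lure shows up: ['sleep']
```

Any word in the intrusion set was never presented; when it fits the sleep schema, it is exactly the kind of false memory Bartlett’s account predicts.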
Now try reading the following passage and see how much of it you can remember:
The procedure is actually quite simple. First you arrange things into different groups depending on their makeup. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities that is the next step, otherwise you are pretty well set. It is important not to overdo any particular endeavor. That is, it is better to do too few things at once than too many. In the short run this may not seem important, but complications from doing too many can easily arise. A mistake can be expensive as well. The manipulation of the appropriate mechanisms should be self-explanatory, and we need not dwell on it here. At first the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then one never can tell. (Bransford & Johnson, 1972, p. 722)
The self is one of the most important schemas we have for organizing our thinking. If you can think about how the material you study is reflected in your experience, it will be easier to remember.
At this point, you are probably scratching your head in confusion. Reading this passage is bad enough, and remembering much of it later seems impossible. However, what if we tell you that the passage is about doing your laundry? With the laundry schema in mind, try rereading the passage. It is likely to make a lot more sense than when you read it the first time, and you will remember more of what you read.
9-5How Do We Retrieve Memories?
Storing information does us little good unless we can locate the information when we need it. Without a system of retrieval, our stored memories would be no more useful to us than a library in which books were placed on shelves at random. You might get lucky and find what you are looking for, but in most cases, the search would take so long that the information would no longer be needed.
9-5aRetrieval from Short-Term Memory
Imagine that you were told to remember the following letters for a subsequent test:
|c a f h k
During the test, you are shown a series of letters one at a time. If the letter you are shown matches one on your list, you pull the “yes” lever as quickly as possible. If the letter is not on your list, you pull the “no” lever. This procedure is used to investigate recall from short-term memory (Sternberg, 1966, 1967, 1969).
Because of the small number of items held in short-term memory, it is tempting to assume that we can retrieve them all simultaneously. However, this is not the way memory works. When the number of letters on the list was increased, a consistent 38 milliseconds of reaction time was added for each additional item. In other words, if you were asked to say whether “h” was on your list, you would first consider “c,” then “a,” then “f,” and so on until you reached the target letter. These results suggest that we search through short-term memory one item at a time, rather than retrieving its contents all at once.
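The arithmetic implied by these results can be sketched in a few lines. This is a hedged illustration: the 38-millisecond-per-item slope comes from the findings described above, while the 400-millisecond intercept (time for encoding the probe and making the response) is a placeholder we chose for the example.

```python
# Toy model of Sternberg's memory-scanning results: reaction time grows
# linearly with the number of items held in short-term memory.
# slope_ms (38 ms/item) is from the text; intercept_ms is illustrative.

def predicted_rt_ms(set_size, slope_ms=38, intercept_ms=400):
    """Predicted reaction time for scanning a list of `set_size` items."""
    return intercept_ms + slope_ms * set_size

# Each added letter should add a constant 38 ms:
for n in (1, 3, 5):
    print(n, predicted_rt_ms(n))
# The 5-item list ("c a f h k") predicts 400 + 38 * 5 = 590 ms.
```

The key signature of a serial search is this straight line: doubling the list roughly doubles the scanning portion of the reaction time, which is what Sternberg observed.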
Retrieval systems help us find the information we need, whether we are searching online, looking for a book, or trying to remember something important. Organized information is always easier to find than disorganized information.
9-5bRetrieval from Long-Term Memory
The popularity of games like Trivial Pursuit, crossword puzzles, and television game shows highlights an interesting aspect of memory retrieval. It feels good when you can remember something. At the same time, most students are all too familiar with the intense feelings of frustration that accompany the inability to retrieve information. You know the answer, but it isn’t coming to mind.
The Role of Cues
A cue is any stimulus that helps you access target information. Most students find recognition tasks such as true–false or matching exam items relatively easy. These tasks provide complete cues (the correct information is on the page in front of you). All you need to do is make a judgment about how well the presented information matches what is stored in memory. Compared to recognition tasks, recall tasks, such as essay exams, require an additional step. Information must be retrieved from memory and then recognized as correct, a process known as generate–recognize (Guynn et al., 2014). Recall tasks provide far fewer cues than recognition tasks and are typically more difficult as a result.
In addition to the amount of information provided, what makes a stimulus an effective cue? The most effective cues are those we generate ourselves, a finding that students might find particularly useful. In one experiment, one group of students wrote down 3 words of their choosing for every 1 of 600 words they were expected to learn, while another group studied the same 600 words accompanied by 3 words for each term selected by another person (Mäntylä & Nilsson, 1988). Although recall in the second group was an excellent 55%, the students who selected their own retrieval cues remembered a remarkable 90% of the words. In a later section on improving your memory, we will emphasize the benefits of incorporating your experience when forming new memories. If you are able to put concepts to be learned in your own words and associate them with personal experiences, they will be easier to remember.
The popularity of memory games, such as Trivial Pursuit, probably results from the rewarding feeling we get when we retrieve a sought-after memory.
What Is the “Own-Race Bias” in Memory for Faces?
Following the misidentification of five innocent African-American men by a white eyewitness in 1971, William Haythorn was inspired to ask whether cross-racial identifications were as accurate as same-race identifications (Meissner & Brigham, 2001). This led to decades of investigations into the own-race bias (ORB) in memory for human faces, also known as the cross-race effect or other-race effect.
Meta-analyses have supported significant ORB effects that are consistent across a number of racial and ethnic groups (Meissner & Brigham, 2001). Researchers still differ, however, in how they explain the effect. One school of thought suggests that the ORB results from perception (Rossion & Michel, 2011). Because people are more familiar with their own race than others, they might initiate different types of attention and perceptual processes when encoding information about faces representing different races (see Chapter 5). Other researchers focus on more social and cognitive factors, such as perceived in-group versus out-group membership (Hugenberg, Young, Bernstein, & Sacco, 2010). According to this approach, a member of your in-group might seem more important to encode, leading to the use of different perceptual strategies.
Most of the ORB studies make use of an either-or approach to racial identity. What happens when the stimuli are more ambiguous? Given the large number of people with multiracial identities today, this seems like a logical and important question. When ambiguous racial group faces are used as stimuli, participants’ memory for them depends on whether the ambiguous faces are viewed as in-group members or not (Pauker et al., 2009). When perceivers associated ambiguous faces with their in-group, memory was better, but when the ambiguous faces were not included in the in-group, memory was once again poor.
What happens if the perceiver is bicultural? Which faces would seem to fit the “own-race” category for these individuals? Latino-Americans were primed to focus on either their Latino or American cultural selves (Marsh, Pezdek, & Ozery, 2016). When primed to focus on their Latino cultural selves, the participants demonstrated better memory for Latino faces than for white faces. However, when primed to focus on their American cultural selves, memory for white faces was better than for Latino faces.
These results emphasize the importance of considering social and cognitive factors in addition to perceptual and learned aspects in efforts to understand the basis of the own-race bias.
Cues might work because of a process known as encoding specificity (Flexser, 1978; Tulving, 1983; Tulving & Thomson, 1973). Each time you form a long-term memory, target information is encoded along with other important bits present at the same time. As a result, each memory is processed in a unique and specific way because this exact combination of bits is unlikely to occur again. Any stimulus that was present and noticed during this encoding process could serve as a cue for retrieving the target memory.
Participants typically show an improved memory for faces of people from their own race, a phenomenon known as the own-race bias (ORB). Researchers do not agree on the causes of the ORB, but some argue in favor of the importance of an in-group versus out-group hypothesis. When ambiguous faces are presented, participants’ memories for the faces depend on whether they are primed to think of the ambiguous faces as members of their own in-group or not.
Pauker, Kristin; Weisbuch, Max; Ambady, Nalini; Sommers, Samuel R.; Adams Jr., Reginald B.; Ivcevic, Zorana; Not so black and white: Memory for ambiguous group members; Apr 1, 2009; Journal of Personality and Social Psychology; American Psychological Association; Reprinted with permission.
It may be tempting to blame context-dependent memory for the experience of alcohol-related blackouts, but these are more likely to result from alcohol’s active interference with the formation of long-term memories (Lisman, 1974). There is no evidence that getting drunk again will make it easier to recall what happened the last time a person got drunk.
Among the bits of information that get encoded along with target memories are features of the surrounding environment, leading to context-dependent memory. You have probably been advised to study in a well-lit, quiet, professional environment to perform your best on exams. Duplicating your testing situation when you study should provide the greatest number of retrieval cues. When study participants were asked to learn lists of words in one of two distinctive rooms while either standing or sitting, recall was best when participants were tested in the same room and position as when they learned the information (Greenspoon & Ranyard, 1957). As shown in Figure 9.11, scuba divers who learned words either on land or underwater retrieved the most words when their encoding and testing circumstances were the same (Godden & Baddeley, 1975). Although these effects are small, it is still a good idea to study in a quiet, classroom-like environment, which might explain why studying in the library has remained popular.
Mood and other internal states can serve as encoding cues. In one creative study, people with bipolar disorder (described in Chapter 14), whose moods can swing from mania to depression, learned words in one state (mania or depression) and tried to retrieve them in either the same state, mania–mania or depression–depression, or the opposite one, mania–depression or depression–mania (Weingartner, Miller, & Murphy, 1977). The participants were most successful when learning and retrieving occurred in the same state, whether that was mania or depression.
Tip of the Tongue
Retrieval is not an all-or-none phenomenon. Instead, retrieval proceeds step by step, with each new step bringing you closer to the target. This gradual retrieval is best illustrated by the tip-of-the-tongue (TOT) phenomenon. TOT is probably a familiar experience for you. While trying to remember a word or name, you might retrieve the first letter of the item, but the complete item remains elusive.
Figure 9.11Our Surroundings Are Encoded in Context-Dependent Memories.
Features of our environment get encoded along with target memories. Study participants learning lists of words either on land or while underwater recalled more words when tested in the same context compared to when they were tested in the opposite context. This diver might find it more difficult to retrieve information about types of fish if tested on dry land instead of underwater.
Ernest Manewal/Getty Images Source: Adapted from D. R. Godden, & A. D. Baddeley (1975). “Context-Dependent Memory in Two Natural Environments: On Land and Under Water,” British Journal of Psychology, 66, 325–331.
In a classic series of experiments, more than 200 TOT experiences were induced in research participants by presenting definitions of relatively rare English words (Brown & McNeill, 1966). For example, participants were asked to supply a word for “a navigational instrument used in measuring angular distances, especially the altitude of the sun, moon and the stars at sea” (Brown & McNeill, 1966, p. 333). You may be picturing the object right now, or thinking about a movie of an old salt using this instrument—it starts with an s—but most of you will have difficulty retrieving the word sextant.
Participants showed considerable evidence of partial recall during their TOT experiences. They were able to identify words that they recognized instantly, unlike words they did not know. Many were able to identify the first letter and the number of syllables in the target word. Incorrect words that were retrieved frequently sounded like the target, although their meanings were usually quite different. In some cases, retrieving the incorrect word blocked the retrieval of the correct item, but in other cases, the incorrect word was an additional cue.
Reconstruction during Retrieval
When retrieved, information to be used flows from long-term memory back into working memory. The mind engages in reconstruction, or the building of a memory out of the stored bits by blending retrieved information with new content present in working memory (Bartlett, 1932/1967). When you retrieve the target information, you are reconstructing something sensible to fit the occasion, as opposed to simply reproducing some memory trace. If the memories are rather fresh, such as when you are describing an automobile accident to a police officer, this process might produce updates in the original memory that will then be reconsolidated into long-term memory. Changes you make in the original memory will be stored as the new “truth.” However, if you are recalling a memory from a more remote time, less updating is likely to occur (Gräff et al., 2014).
Drugs, including the caffeine in coffee, can produce strong context-dependent effects on memory. If you study while drinking coffee, taking a test without coffee is likely to make retrieval more difficult.
Storytellers often discover that certain aspects of their stories provoke more of a reaction from the audience. These aspects are emphasized and perhaps exaggerated for even greater effect, and the new, more exciting story replaces the original version in long-term memory. Fish become larger, vacations become more exciting, and heroes become more heroic. When participants in a study were asked to repeat a complicated story on several occasions, they tended to simplify the story, highlight some aspects more than others, and adjust the story to fit their worldviews (Bartlett, 1932/1967). Such alterations probably form the basis of mythology. In the retelling of the adventures of Odysseus or King Arthur, accounts of the original true events are lost or hopelessly distorted.
Most of us believe that our memories, especially for important life events, are relatively accurate. Elizabeth Loftus set out to evaluate the reliability of eyewitness testimony in courtroom settings and discovered that memories are rather flexible. In one experiment, participants watched a video of an automobile accident and answered a number of questions about what they had seen (Loftus & Palmer, 1974). One group heard the question “How fast was the white sports car going while traveling along the country road?” while the other group heard the same question with a slight addition—”How fast was the white sports car going when it passed the barn while traveling along the country road?” There was no barn in the video, but when participants were asked 1 week later whether they had seen a barn, 20% of those who had heard the barn question answered “yes,” while fewer than 5% of the other participants did so. One must assume that skilled attorneys are quite aware of this feature of memory and could use such leading questions to the advantage of their clients (see Figure 9.12).
Tip-of-the-tongue experiences were elicited in volunteers by describing rare words in English. One of the items was the name of the unusual-looking instrument this naval cadet is learning to use. Study participants were often able to retrieve the first letter of the word (s) and the number of syllables (two) without necessarily retrieving the whole word (sextant), demonstrating that retrieval is not all-or-none.
Figure 9.12Memory Reconstruction.
After study participants viewed a short video of an automobile accident, Loftus and Palmer (1974) asked one group, “About how fast were the cars going when they hit each other?” while a second group was asked, “About how fast were the cars going when they smashed each other?” One week later, both groups were asked if they recalled seeing glass on the road after the accident. There was no glass on the road in the video, so the correct answer was “no.” Hearing the word smashed instead of hit increased the likelihood that a participant would “remember” glass on the road and answer “yes.”
Dmitry Kalinovsky/Shutterstock.com Source: Adapted from “Reconstruction of Automobile Destruction: An Example of the Interaction Between Language and Memory,” by E. F. Loftus & J. C. Palmer (1974). Journal of Verbal Learning and Verbal Behavior, 13, 585–589.
If you began this psychology course believing, like many people do, that memory works like a video of life, it might surprise you to learn that this is not the case. It may be quite unsettling to realize that memories are open to change and revision and that those distinct and confident childhood memories we cherish may be somewhat inaccurate or even flat out wrong. However, it also doesn’t make sense to think that we would evolve a system of memory that was usually wrong. Instead, fuzzy trace theory suggests that we use precious resources to form different types of memories based on our needs, ranging from verbatim, or exact accounts, to gist, which means we retain the general idea of events (Reyna, 2008). We use gist when a relatively vague level of information is sufficient because this is a more efficient use of resources. We use the more energy-intensive verbatim memories for situations that require detailed, accurate recall, such as remembering the periodic table of the elements in your chemistry class. This system works well for us most of the time, but later in this chapter, we will see how relying on gist instead of retrieving verbatim information can lead to false memories (see Figure 9.13).
Figure 9.13The Use of Gist Increases with Age.
Older children, with their improved language skills, use gist more effectively than younger children (Odegard, Cooper, Lampinen, Reyna, & Brainerd, 2009). When children attended birthday parties with a theme (e.g., Harry Potter or SpongeBob SquarePants), older children appeared to be able to use the theme of the party to provide gist, leading to their successful recall of more theme-related events, such as having magic potions at Hermione’s party. Younger children, however, could not remember theme-related events better than generic birthday party events such as blowing out candles on a cake, suggesting that forming a theme gist was not helpful.
Lawrence Lucier/Getty Images Source: Adapted from T. N. Odegard, C. M. Cooper, J. M. Lampinen, V. F. Reyna, & C. J. Brainerd (2009). “Children’s Eyewitness Memory for Multiple Real-Life Events,” Child Development, 80(6), 1877–1890, doi:10.1111/j.1467-8624.2009.01373.x
People often report especially vivid episodic memories about where they were and what they were doing when they first heard news that evoked a strong emotional response, for example, sadness at the death of a beloved celebrity like Carrie Fisher of Star Wars fame. However, research evidence suggests that these memories are not always as accurate as we think they are.
A checkpoint for the accuracy of our memories results from source monitoring (Johnson, Hashtroudi, & Lindsay, 1993). Under normal circumstances, we do a good job of distinguishing between external and internal sources of information, as in “I said that” or “I thought about saying that.” Again, this system works well for us most of the time, but it can produce false memories when we attribute a memory to the wrong source. For example, you might think you told your roommate you would be coming in late, but you may have only mentally reminded yourself to do so. You have mistaken an internal source of information (“I thought that”) with an external source (“That happened”).
Retrieval of Emotional Events
Take a moment and write down the five most important events in your life last year. Do these events have anything in common with one another? We’re willing to guess that each of the events on your list is associated with strong emotions. From an evolutionary perspective, this makes good sense. In Chapter 7, we argued that emotions provide quick guidance for approach-or-avoidance decisions. Many of our emotional experiences, while not life threatening, have significance for us, and forming strong memories of these events will help us respond effectively to similar situations in the future.
Emotions, particularly negative emotions, do not have a simple relationship with memory retrieval. In some cases, we have difficulty remembering negative events, which we discuss in a later section on motivated forgetting. For example, some individuals report that they can recall few or no details of having been sexually molested as children. In other cases, memories for negative events seem even more vivid and intrusive than other types of memories. Many adults in the United States formed a flashbulb memory of the terrorist attacks of September 11, 2001, or an especially vivid memory including details of where they were and what they were doing when they first heard the news. Individuals diagnosed with posttraumatic stress disorder (PTSD) often experience intrusive flashbacks of the events that originally traumatized them. How do we reconcile these differences in retrieval for emotional events?
As information moves through the stages described by the information processing model, it remains relatively fragile and subject to modification. Stress and strong negative emotions are accompanied by the release of hormones and by patterns of brain activity that can either enhance or impair memory processing, depending on the timing of the emotional response relative to target learning (Joëls, 2006). If stress and learning happen at the same time, an enhanced memory such as a flashbulb memory might be formed (Diamond, Campbell, Park, Halonen, & Zoladz, 2007). Stress occurring either before or after learning impairs memory formation (Joëls, 2006). Impairment of memory following an important life event might protect the fragile memories of that event from interference until they are fully consolidated.
So far, we have been considering isolated events that elicit strong negative emotions and stress. As we will see in Chapter 16, chronic stress produces its own set of challenges for memory by producing a number of important changes in the parts of the brain associated with memory formation. For example, chronic stress is associated with a loss of volume in the hippocampus that is likely to have profound influences on the formation of new memories (Roozendaal, McEwen, & Chattarji, 2009).
Should We Erase Traumatic Memories?
In Chapter 14, we explore a condition known as posttraumatic stress disorder (PTSD) that results when some people experience a traumatic event. Although trauma of many kinds can induce PTSD, one of the most reliable sources of PTSD is combat exposure. Although about 7 to 8% of Americans will experience PTSD at some point in their lives, between 10 and 13% of U.S. soldiers who served in Iraq or Afghanistan and approximately 30% of soldiers who served in Vietnam meet the diagnostic criteria for the disorder (Gradus, 2016).
Among the symptoms of PTSD is the experience of vivid, intrusive flashbacks of the traumatic event. These flashbacks can be triggered by environmental stimuli, such as the odor of diesel fumes, according to the principles of classical conditioning we discussed in Chapter 8. Individuals with PTSD would like to eliminate these distressing responses. The current mode of therapy is exposure therapy, which we also discussed in Chapter 8, but many people with PTSD are resistant to extinction learning (Giustino, Fitzgerald, & Maren, 2016). What if we could interfere with memories for traumatic events?
If provided along with behavioral therapy soon after a traumatic event, a drug called propranolol can prevent or reduce the later development of PTSD (Giustino et al., 2016). As shown in Figure 9.14, propranolol, which is often prescribed for cardiovascular conditions, affects emotional memories by blocking norepinephrine in the amygdala (see Chapter 4). This in turn prevents people from forming strong memories of the emotions associated with an event, although the facts of the event are processed normally. Propranolol has little effect on older, established fear memories that have already been consolidated. Researchers are making progress on methods for intervening with these older memories.
Figure 9.14Propranolol Reduces Emotional Learning.
After a single conditioning trial of tone followed by shock (left), rodents learn to freeze in response to the tone. The following day, rodents are given one tone followed by an injection of either a placebo or propranolol (center). Subsequently, animals injected with propranolol show less freezing in response to the tone (right). Propranolol probably achieves this outcome by interfering with the action of norepinephrine, a neurochemical associated with vigilance, in the amygdala.
Source: Dębiec, J., & LeDoux, J. E. (2004). Disruption of reconsolidation but not consolidation of auditory fear conditioning by noradrenergic blockade in the amygdala. Neuroscience, 129(2), 267-272. doi:http://dx.doi.org/10.1016/j.neuroscience.2004.08.018.
In our discussion of the information processing model, we noted that memories that are retrieved mingle with ongoing material in short-term or working memory. Subsequently, these memories undergo a process of reconsolidation. If you are thinking about something you just read in the chapter, those memories are more “open” to modification, but memories for the more distant past are less changeable. What if we could get the brain to treat memories from the distant past more like recent memories?
Long-term memories involve structural changes in neurons and their synapses that result from changes in gene expression, which we discussed in Chapter 3. In mice, the activity of an enzyme called histone deacetylase 2 (HDAC2) correlates with the "open" period of reconsolidation in which memories are modifiable as opposed to the "closed" period characteristic of more remote memories (Gräff et al., 2014). Application of HDAC2 inhibitors essentially returns remote memories to the same modifiable state that characterizes fresh memories. Extinction of fear responses during this artificial reconsolidation state should produce more effective reductions in traumatic fear.
We can’t imagine that anyone would wish continued suffering on people with PTSD, but not all therapists are comfortable with the concept of “erasing” traumatic memories. As one psychiatrist wrote, “There is pain in life, and it needs to be dealt with in a human way …. Our suffering, trauma included, is not a brain problem, but a human problem” (Berezin, 2014, para. 3). Reducing emotional distress might make us less disturbed than we should be about events in our environments (Lavazza, 2015). In spite of the cost to individuals, we might ask if we want a community that cannot remember the emotional trauma of war and assault.
How Reliable Are Eyewitnesses?
Our legal system relies heavily on the testimony of eyewitnesses, especially those who have nothing to gain by telling a lie. Given the flexible nature of human memory as discussed in this chapter, is the trust we place in eyewitness accounts reasonable?
Carefully controlled research by Elizabeth Loftus into the use of eyewitness testimony (Loftus, 1979; Loftus & Palmer, 1974), along with the development of forensic deoxyribonucleic acid (DNA) testing in the 1990s, seriously compromised trust in eyewitness testimony. Out of all cases in which an innocent person has been cleared of a crime because of DNA evidence, about 75% involved mistaken identification of the perpetrator by an eyewitness (Wells, Memon, & Penrod, 2006).
Psychologists have used research on eyewitness behavior to make scientifically based recommendations to law enforcement officials. For example, the manner in which photograph lineups of possible suspects are shown to witnesses affects the likelihood of mistaken identification. In the typical procedure, witnesses view lineup photographs simultaneously, which allows them to compare all the people and choose the person who looks most similar to their memories of the perpetrator. Unfortunately, this procedure makes mistaken identification more likely if the real suspect does not appear in the lineup. The witness simply chooses the person who looks most like the remembered perpetrator. If a sequential procedure is used, in which the witness must respond “yes” or “no” to a picture before moving to the next one, mistaken identifications occur less frequently (Steblay, Dysart, Fulero, & Lindsay, 2001).
Perhaps juries could evaluate eyewitness testimony more accurately if they took the witness’s apparent confidence into account. In other words, a person who seems confident about identifying a suspect might be expected to be more accurate than a witness with less confidence. Unfortunately, witnesses testifying in court who express a 95% confidence in their judgment (expecting to be wrong only 5% of the time) are correct only 70% to 75% of the time (Brewer, Keast, & Rishworth, 2002). However, witness confidence at the time of identification appears to be strongly related to accuracy (Wixted, Mickes, Clark, Gronlund, & Roediger, 2015). If courts paid more attention to this early confidence rather than confidence in the courtroom months or even years later, fewer innocent people should be convicted.
The traditional lineup used in the criminal justice system is likely to produce a mistaken identification when the real perpetrator is not included. The witness simply picks the most similar person. Psychologists have shown that giving “yes” or “no” answers to one photo at a time reduces the risk of a mistaken identification.
Joel Gordon Photography
Special consideration must be given to cases in which the eyewitness is a child. An understanding of children’s memory development is critical for evaluating the child’s ability to serve as a witness to a crime. Some data indicate that children’s memories for significant events, such as a trip to an emergency room, are quite reliable as long as 4 to 5 years later (Peterson & Whalen, 2001). However, young children are accustomed to pleasing adults with their answers and are more suggestible than adolescents and adults. Fortunately, understanding the strengths and limitations of children’s memory systems has allowed experts to develop methods for obtaining the most accurate reports possible from child witnesses (Bruck & Ceci, 2009).
Further improvements should accompany the development of new, more reliable measures of recognition, such as brain imaging, reaction time, rapid presentation of faces, and analyses of witness eye movements (Wells et al., 2006).
9-6Why Do We Forget?
Now that we understand the processes involved with the formation, storage, and retrieval of memories, we can turn our attention to the troublesome topic of forgetting. For students, whose job description involves committing large amounts of information to memory, an understanding of forgetting is the source of practical advice for improving memory and avoiding memory failure.
We define forgetting as a decrease in the ability to remember a previously formed memory. The key here is that to forget a memory, it has to have been formed in the first place. This definition excludes a number of instances that we have discussed previously. For example, many students maintain that they “forgot” information needed for an exam but instead were daydreaming during the lecture covering the material and never learned it. This example is better understood in terms of lack of attention and encoding failure than as an example of forgetting. When forgetting is the result of brain injury or disease, we usually refer to the loss of information as amnesia.
Andy Reynolds/Getty Images
Understanding forgetting is complicated because we measure memory indirectly by looking at performance. As most students are all too aware, actual memory for a topic can be quite different from performance on an exam. It would be handy for both students and instructors if some sort of modern imaging technology would allow us to “see” whether introductory psychology had been adequately stored in the brain, but alas, this is not currently possible. Stress, illness, time pressure, and distractions can temporarily reduce our ability to recall information. When we discuss true forgetting, we are not considering the effects of these temporary difficulties.
Although forgetting can be frustrating, it also has its adaptive benefits. Forgetting provides a way to prioritize the things we should remember. For example, we are often asked to change our computer passwords to maintain security. At first, this can lead to annoying competition in memory between the old and the new passwords. Over time, however, the strength of an old password weakens. Functional magnetic resonance imaging (fMRI) studies have shown that prefrontal areas of the brain actively suppress memories that are used less frequently (Kuhl, Dudukovic, Kahn, & Wagner, 2007). By suppressing these lower priority memories, we can avoid confusion and reduce the amount of work we have to do to recall higher priority memories.
Decay occurs when our ability to retrieve information that we do not use fades over time. Imagine taking last term’s final exams today. How would you do? You might think that the material you learned last term is gone forever, but just because you can’t retrieve something doesn’t mean that the memories are lost.
A classic method of measuring the retention of material in long-term memory over time is the method of savings. This method compares the rate of learning material the first time to the rate of learning the same material a second time. It might take you 50 practice trials to learn the periodic table of elements for your first chemistry class. In a subsequent course, you again need to memorize the table. This time, it only takes you 20 trials. The greater speed of learning the second time indicates that you retained or saved some prior memories of the table. Using this technique, we can demonstrate that people who studied high school Spanish but never used it later in life retained most of their memories for Spanish vocabulary words 50 years later (Bahrick, 1984). Instead of a large amount of forgetting because of the passage of time, most of the material we learn is retained nearly indefinitely.
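One common way to quantify the method of savings described above is a simple percentage: the proportion of the original learning effort that you are spared when relearning. The text does not give a formula, so the calculation and the function name below are an illustrative sketch based on Ebbinghaus's standard savings score, using the chemistry-class trial counts from the example:

```python
def savings_score(original_trials, relearning_trials):
    """Percentage of learning effort saved on relearning
    (a standard formulation of Ebbinghaus's method of savings).
    Note: function name and formulation are illustrative, not from the text."""
    return 100 * (original_trials - relearning_trials) / original_trials

# Periodic table example from the text: 50 trials to learn the first time,
# only 20 trials to relearn it in a later course.
print(savings_score(50, 20))  # → 60.0 (60% of the original effort was saved)
```

A savings score above zero indicates that some memory of the material survived, even when the person could not recall it directly, which is why this method can reveal retention that ordinary recall tests miss.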
People attending their 70th high school reunions might have forgotten the names of some of their classmates whom they hadn’t seen in decades.
Although the idea of decay fits our everyday experience of forgetting quite well, most contemporary psychologists believe that the simple passage of time does not do a good job of predicting memories that are easy or difficult to retrieve (Berman, 2009). It is likely that forgetting occurs because of a combination of factors, which may or may not include decay.
Interference is the competition between newer and older information in the memory system. The brain requires a measurable amount of time to produce a physical representation of a memory. In the window of time in which memories are being processed but are not yet fully consolidated, they may be subject to distortion, loss, or replacement by interference from other bits of information.
How long is this window? The physical changes related to memory that occur at the level of the synapse might take minutes or hours. Memory loss usually occurs when this consolidation is interrupted. Individuals who experience unconsciousness as a result of a head injury rarely remember much about the immediate circumstances leading to the injury. Procedures such as general anesthesia or electroconvulsive therapy (ECT), described in Chapter 15, often produce slight memory deficits spanning a period of hours or possibly a day or two before and after treatment. In contrast, storage of memories in the cerebral cortex might take years, during which time information can be lost or distorted (Dudai, 2004).
Interference can be demonstrated by comparing performance in a list-learning task. The more lists someone must learn, the more difficult it becomes to remember words on the first list (Tulving & Psotka, 1971). In other words, learning new lists of words interfered with memory for the first list.
Does this mean that the first list is erased from memory by the incoming information? To test this hypothesis, researchers gave participants in one experiment a little help in the form of memory cues. The lists all contained categories of items, such as types of buildings (e.g., house, barn, garage, and hut). If the experimenters provided their participants with a cue in the form of the category (types of buildings), the effects of having learned additional lists were quite small. It appears that the words on the first list were maintained in memory, but learning additional lists made them hard to retrieve.
To make matters worse, interference can work in two directions (Underwood, 1957) (see Figure 9.15). Let’s assume that your foreign language class is assigned one list of vocabulary words to study each night. You have procrastinated on your homework, and to catch up for a quiz the next day, you now have three lists of vocabulary words to study instead of the usual single list. Our interest will be in how well you can remember the second of the three lists. If we compare your memories for the second list to those of students who studied the first list when it was assigned, we find that your performance is relatively poor. In other words, learning the first list on the same night as the second list produces proactive interference for the second list. Proactive interference refers to reduced memory for target information as a result of earlier learning.
Figure 9.15Proactive and Retroactive Interference.
If we measure recall of a target list of words, we find that it is worse both when preceded by learning another list (proactive interference) and when followed by learning another list (retroactive interference).
At the same time, we can compare your memories for the second list to the performance of your classmates who studied the third list the night after they studied the second list. Again, your performance is likely to be worse. Reduced memory for target information because of subsequent learning is known as retroactive interference. This type of interference was demonstrated in the multiple lists study we discussed previously, which you may recall unless too much retroactive interference has occurred.
Elizabeth Loftus (2003) demonstrated that it was relatively easy to implant a false memory in study participants of having taken a hot air balloon ride during childhood.
The Internal Revenue Service reports that far more people who owe money fail to sign their tax returns than do those who are due a refund. Assuming that the failure to sign the return is not a conscious act of defiance, how can we account for this lapse in memory? Theories of motivated forgetting, or the failure to remember or retrieve unpleasant or threatening information, suggest that the nonsigners are protecting themselves from further unpleasantness by "forgetting" to sign their tax forms.
Memory is a servant to our overarching goals. Retrieval, for better or worse, is often influenced by our motivations, and our motivations can distort the memories we retrieve. While not exactly forgetting in the sense of our earlier definition, motivated distortions of memory can be so extreme that the original information is essentially lost during the process.
In one example of the influence of motivation on recall, study participants were presented with a list of choices, such as between two internships, roommates, or cars for sale, with equal numbers of corresponding positive and negative features (high resale value or some rust in the case of the cars). Subsequently, they remembered the positive features associated with their ultimate choices better than the negative features (Henkel & Mather, 2007). When they were deceived into thinking they had chosen the other option instead (because of a friendly “reminder” from the experimenter), they continued to remember the false choice more positively. In related research, participants conveniently demonstrated less recall for ethical rules after they had been given an opportunity to cheat (Shu, Gino, & Bazerman, 2011). In Chapter 13, we will explore how these types of discrepancies between behavior and attitudes can change the attitudes, not just the memories for them.
Identifying the presence of motivated forgetting can have serious practical implications. Beginning in the 1970s, largely because of greater public recognition that incest was more common than previously believed, many adults began to report having been a victim of sexual abuse during childhood. These cases represented a range of possible motivated forgetting from suppression, in which the individual consciously remembered the incidents but had not reported them to parents or other authorities, to repression, in which the individual reported no conscious memory of the incidents until the memories were suddenly recovered during therapy or while reading a news report of a child molestation case.
A number of psychologists studying memory suspected that not all reports of recovered memories of child abuse were true, and some might represent confabulation or confusion between imagined and true memories. As we mentioned earlier in this chapter, our source monitoring abilities usually prevent us from mistaking false for true memories, but the system does not perform perfectly. Under the right set of circumstances, it is relatively easy for people to believe strongly in a memory that is simply not true.
We demonstrated in an earlier section on schemas that false recall for verbal stimuli can be produced by presenting words that are associated by meaningfulness (e.g., bed, rest, and awake). In this case, most study participants formed a false memory for the presentation of the word sleep (Deese, 1959). Perhaps you are thinking that memorizing strings of words in a laboratory has little relevance to the experience of traumatized victims of child abuse. Loftus, whom we met earlier in our discussion of memory reconstruction, addressed that concern by demonstrating that more complex false memories were rather easy to implant in study participants. Loftus (2003) described how imagining an event had happened or even reading the testimonials of witnesses could increase a person's confidence that a false event had occurred. Most persuasive is the use of photographs. When a real family photo was superimposed on an image of a hot air balloon, 50% of participants "remembered" taking a ride, including details about how old they had been at the time and that the photo was taken by a particular person.
Until we understand more about the nature of confabulation, a cautious approach to repressed memories is probably the best course of action. We can neither prove nor disprove these memories without additional evidence, so any therapy should be aimed at relieving distressing symptoms without reference to their source (American Psychological Association, 2014).
9-7What Is the Biology of Memory?
Cognitive neuroscientists have made considerable progress in discovering the biological correlates of memory processing. In this section, we first zoom in for a look at how memory is managed at the cellular and biochemical levels. Next, we zoom out again to explore patterns of brain activation that are associated with certain types of memory processing.
9-7aMemory at the Level of the Synapse
Forming new memories requires changes in the connections neurons make with one another at the synapse, or synaptic consolidation. You might find it strange to think that such a process is going on in your brain as you read this chapter.
Eric Kandel and his colleagues have demonstrated persistent changes in the strength of synapses responsible for several types of learning in the sea slug, including classical conditioning (Antonov, Antonova, Kandel, & Hawkins, 2003; Brunelli, Castellucci, & Kandel, 1976; Carew & Kandel, 1973). In addition to changes in synaptic strength, it appears that learning stimulates a cascade of gene expression, which in turn produces the long-term structural changes in neurons that represent memories. The number of axon terminals increases following sensitization and decreases following habituation (Bailey & Chen, 1983; see Chapter 8). These observations are consistent with the behavior observed in each case—lower levels of responses to stimuli in habituation and higher levels of responses to stimuli in sensitization (see Figure 9.16).
Figure 9.16Learning Changes Neural Structure.
Neurons have smaller numbers of axon terminals following habituation but larger numbers following sensitization.
One of the major processes responsible for change at the synaptic level during learning is long-term potentiation (LTP), which enhances communication between two neurons. This phenomenon can be demonstrated experimentally by applying a rapid series of electric pulses to one area of the nervous system and observing the increased reactions of cells receiving input from that area (Bliss & Lømo, 1973; see Figure 9.17). Results from demonstrations of LTP suggest that the relatively simultaneous activation of a neuron sending information and the neuron receiving this information produces changes that make the synapse between them more efficient. LTP shares many features with memory, which makes it an attractive candidate for being one of the processes underlying memory phenomena. First, LTP lasts a long time, possibly indefinitely, which is similar to our thinking about long-term memories. Second, both memories and LTP can be formed after only brief exposure to stimuli.
Figure 9.17Long-Term Potentiation (LTP).
LTP can be demonstrated by applying a series of electrical pulses (center) and observing the increased reactions of cells receiving input (right) compared to their previous baseline (left). LTP shares many features with memory, such as being long-lasting and formed after a brief exposure to stimuli.
9-7bWorking Memory and the Brain
Scientists have also made progress in their search for brain activity that correlates with working memory, although they continue to debate how the executive processes of working memory are organized (Nee et al., 2013). Studies of people with brain damage suggest that several executive functions are managed by different parts of the frontal lobes but that a single central executive probably does not exist (Stuss, 2011). Working memory does not occur in a separate, isolated part of the brain. Instead, the phonological loop and visuospatial sketch pad use the same posterior parts of the brain that are used in verbal and visual perception (Bledowski, Kaiser, & Rahm, 2010). Top-down influences originating in the prefrontal and parietal cortex provide the attention necessary to maintain stimulus information in working memory (Nee et al., 2013).
9-7cLong-Term Memories and the Brain
Through the careful observation of people with brain damage, along with brain imaging studies in healthy participants, scientists have discovered correlations between activity in parts of the brain and specific components of long-term memory. These discoveries support the distinctions made by cognitive psychologists between declarative and nondeclarative memories based on observations of behavior.
Declarative Memories and the Hippocampus
In Chapter 4, we described the important role played by the hippocampus in memory. The hippocampus clearly participates in the consolidation of semantic and location information into long-term memory. The hippocampus might also be involved with the re-experiencing of episodic memories throughout the lifespan (Moscovitch, Nadel, Winocur, Gilboa, & Rosenbaum, 2006).
Now that we are familiar with some distinctions between declarative and nondeclarative memories, we can examine the case study of Henry Molaison (the amnesic patient H.M.) in more detail. In follow-up observations of Molaison, Brenda Milner discovered that not all of his memories were equally affected by the surgery that damaged his hippocampus (Milner, 1966, 2005). Molaison retained most of his memory for events leading up to his surgery, but his ability to form new memories was profoundly reduced. The inability to form new memories is known as anterograde amnesia. Much to Milner's surprise, Molaison learned a new procedural task, mirror tracing, as well as typical control participants did. In this task, Molaison was asked to draw the shape of a star while looking at a sample star and his hand in a mirror. After 3 days, Molaison mastered the task. However, if asked, he would deny ever having performed the task. His procedural memories were intact, but his declarative memories for the details of the task were nonexistent (see Figure 9.18).
Figure 9.18Separating Declarative and Nondeclarative Memories.
The mirror-tracing task requires a participant to trace a five-pointed star, which is mounted on a wooden board that blocks the participant’s view of the star and his or her hand. The participant must view the star and his or her hand in a mirror. This task is especially challenging because the mirror reverses the image, so if you want the pencil to trace around the star away from your body, you have to move your pencil toward your body instead. Brenda Milner was surprised to observe that Henry Molaison learned the mirror-tracing task at a normal rate, even though he didn’t remember the details of the task. This outcome suggested to Milner that nondeclarative, procedural memories such as the mirror-tracing task were not managed by the brain the same way as declarative memories.
Source: Milner, B. (1965). Memory disturbance after bilateral hippocampal lesions. In P. M. Milner & S. E. Glickman (Eds), Cognitive processes and the brain (97–111). Princeton, NJ: Van Nostrand.
Declarative Memories and the Cerebral Cortex
Semantic memories appear to be widely distributed across the cerebral cortex (see Figure 9.19). Using brain imaging, researchers can observe which parts of the cerebral cortex are active when a person is thinking about particular types of memories (Binder, Desai, Graves, & Conant, 2009). Different areas are activated when a person is accessing knowledge of actions, items that can be manipulated, concrete concepts, and abstract concepts. For example, naming animals is associated with activity in the occipital lobes, suggesting that visualizing an animal’s appearance might be helpful in this task (Martin, Wiggs, Ungerleider, & Haxby, 1996). Naming tools activates areas of the frontal and parietal lobes normally associated with movements and action words. To name a hammer, for example, we might consider the hand movements associated with using hammers and words such as pound or hit.
Figure 9.19Semantic Memories Are Widely Distributed in the Brain.
Different patterns of activity in the cerebral cortex are correlated with various types of semantic memories. Naming animals (a) is associated with activity in the visual cortex of the occipital lobe, suggesting that we think about what an animal looks like to name it. Naming tools (b) activates areas associated with hand movements, suggesting that we think about how we would use a hammer or saw to name one.
© Argosy Publishing, Inc.
In spite of the overlapping characteristics of semantic and episodic memory, they involve distinctive processing in the brain. Patients in the early stages of Alzheimer's disease showed much more dramatic episodic memory deficits than semantic memory deficits (Perry, Watson, & Hodges, 2000). The default mode network (DMN; see Chapter 4) is associated with thinking about the self, so it is not surprising to note that structures in this network are also implicated in episodic memory processing (Greicius, Srivastava, Reiss, & Menon, 2004). Areas of the temporal lobe and insula seem particularly important for remembering emotional personal experiences (Fink et al., 1996; Sheldon, Farb, Palombo, & Levine, 2016).
Episodic memories are also affected by damage to the prefrontal cortex. Damage in this area can produce a condition known as source amnesia. People with source amnesia maintain their semantic knowledge but do not recall how they acquired it. A man who experienced damage to his prefrontal cortex as the result of a traffic accident retained his semantic and procedural knowledge of the game of chess, but he could not remember how old he was when he learned or who taught him to play the game (Tulving, 1989).
Procedural memories are correlated with activation of the basal ganglia, forebrain structures that are part of the brain’s motor systems (see Chapter 4). People with Huntington’s disease and Parkinson’s disease, both of which produce degeneration in the basal ganglia, typically have trouble learning new procedures (Knowlton et al., 1996; Krebs, Hogan, Hening, Adamovich, & Poizner, 2001). In contrast, their declarative memories remain relatively intact. Recall that Henry Molaison experienced the opposite outcome. His procedural memory abilities were intact, but his declarative memory abilities were severely impaired.
Procedural memories quickly become automatic. When you learn to drive, you must attend to each step, but after learning the process, you “just drive.”
9-7dBiochemistry and Memory
Throughout this chapter, we have emphasized that memory is not a single thing but rather a flexible, multistage process. It should not be surprising, therefore, that we do not have a simple biochemical account for memory.
Acetylcholine (ACh), discussed in Chapter 4, affects the encoding of new information (see Figure 9.20). Drugs that inhibit systems using ACh as a major neurotransmitter interfere with memory formation (Atri et al., 2004). People with Alzheimer’s disease, which is characterized by severe memory deficits, show degeneration of neural circuits that use ACh. Medications prescribed to reduce the symptoms of Alzheimer’s disease boost ACh activity (Holzgrabe, Kapkova, Alptuzun, Scheiber, & Kugelmann, 2007). At the same time, high ACh levels might impair memory consolidation and retrieval (Micheau & Marighetto, 2011). Relatively low levels of ACh, characteristic of sleep, improve the transfer of information from temporary to more permanent storage (Diekelmann & Born, 2010).
Figure 9.20Acetylcholine (ACh), Caffeine, and Memory.
Not only do drugs promoting ACh initiate changes in neural structure in honeybees (Weinberger, 2006), but so does caffeine. Bees rewarded with caffeine were 3 times as likely to remember a floral scent as bees rewarded with sucrose (Wright et al., 2013).
Researchers are also interested in the role of the neurotransmitter glutamate in memory formation. One type of glutamate receptor, known as the N-methyl-d-aspartate (NMDA) receptor, is a prime candidate for learning-related changes such as those observed in LTP (Qiu & Knopfel, 2007). Not too surprisingly, chemicals that enhance the activity of glutamate receptors have been shown to boost memory formation in rats (Balschun, Zuschratter, & Wetzel, 2006). Similar compounds are being tested for possible use in treating Alzheimer’s disease.
Individual differences in working memory capacity are correlated with activity in systems using GABA, the major inhibitory neurochemical in the brain, especially in a part of the frontal lobes known as the dorsolateral prefrontal cortex (Yoon, Grandelis, & Maddock, 2016). Individual differences in working memory play important roles in overall intelligence (see Chapter 10). At the same time, impairments in working memory are characteristic of conditions from schizophrenia to dementia.
9-8How Can We Improve Memory?
Most college students have good memory skills, which are an essential component of academic success; students who lack these skills rarely make it to higher education. Even so, we can always improve, and the observations made by psychologists studying memory provide many practical suggestions.
We have already discussed several lines of research that have practical implications for improved memory. The structure of long-term memory implies that organized material is easier to remember than disorganized material. Elaborative rehearsal, especially when you connect material to personal experience, anchors new material in your existing memory stores and makes it easier to retrieve. The effects of state, mood, and context on retrieval suggest that studying in circumstances that are most similar to those in which you will retrieve your memories will give you the best outcome. In addition to these basic suggestions, we would like to offer a few additional tips.
9-8aDistribute Practice over Time
Psychology professors will never give up trying to convince students that cramming is a terrible memory strategy. Persistent faith in cramming is surprising, given that most of us realize the best way to improve musical or athletic skills is to practice every day (distributed practice). We would think it odd if a concert pianist or basketball player crammed 6 hours of practice into the night before a performance (massed practice). The mind works in similar ways whether it is learning to shoot free throws or learning the periodic table of elements, so the same advantage of distributed over massed practice holds for academic work. Unfortunately, that fact does not deter some students from cramming for exams.
Nearly all forms of learning show evidence of an advantage of distributed practice (practice spread out over time) as opposed to massed practice (practice condensed into a short period; Russo & Mammarella, 2002). In other words, spacing the input of information to the brain over time produces better memory than cramming. Whether we are discussing the learning of classically conditioned responses by sea slugs or the learning of complex semantic information by college students, the advantage of distributing learning over time is a constant. Spacing gives the brain more time to consolidate each memory, so less is likely to be lost to interference.
We usually think about tests as measuring a student’s ability to retrieve memories, but test taking is a powerful tool for forming memories, too (Roediger & Butler, 2011). Research demonstrates that taking a test produces superior long-term memory when compared with repeated studying of material. In addition, taking tests improves learners’ ability to think about material with greater flexibility and to apply it to new situations. We are not advocating that you abandon reviewing your textbook and lecture notes, but we hope you will take advantage of the online testing opportunities that accompany this textbook.
How Can We Protect Memory Retrieval from Stress?
It comes as no surprise to students to learn that stress impairs memory retrieval. All of us have had the experience of leaving a classroom after a test only to remember all the things we forgot as we make our way home. How can we avoid such frustrating experiences?
The way we study might affect how well our memories hold up under stress. Many of us study by going over material multiple times, or “restudying.” Research suggests that a more efficient method of studying includes “retrieval practice,” or the taking of practice tests. These two methods were compared for their ability to protect memory from the effects of stress (Smith, Floerke, & Thomas, 2016).
The Question: Which study method (restudying versus retrieval practice) results in better retrieval during a stressful situation?
Two groups of 60 participants were given the task of learning either 30 concrete nouns or 30 images of nouns, presented one at a time. Following this first presentation, the restudying group restudied the items, while the retrieval practice group recalled as many of the items as they could remember. No feedback was given to the retrieval practice group.
On the next day, half of each group was stressed by being asked to give an unprepared speech and solve math problems in front of judges and peers. Meanwhile, the other half completed a comparable but nonstressful task. Five minutes into their respective tasks (stressful or not), and again 20 minutes later, the participants were asked to recall the items they had studied the previous day.
The method of stressing participants used in this study has appeared in many other experiments without report of adverse effects. However, the possibility of being assigned to the stressful condition should be noted in the informed consent form, along with referrals to appropriate professionals who can help manage stress.
The groups showed no differences in recall during the first test, which occurred five minutes into their stress or no-stress control experience, but their performance was significantly different at the second, delayed test. For the restudying group, stress produced a significant decline in the items recalled. However, for the retrieval practice group, stress made no difference in the number of items recalled. Regardless of stress levels and time of testing, the retrieval practice groups outperformed the restudying groups (see Figure 9.21).
Figure 9.21Retrieval Practice Protects Memory from Stress.
A restudying group went over the material to be learned four times, while a retrieval group was asked to recall the items. The following day, half of both groups were stressed while the other half was given a nonstressful task to complete. When asked again to recall the items, stress had a significant, negative effect on the restudying participants. The retrieval group not only remembered more items than even the nonstressed restudying participants, but they were also unaffected by stress. Students prone to test anxiety might wish to incorporate the retrieval method of preparation into their study regimen.
If you have been in the habit of relying on restudying as your major or only strategy for succeeding in your classes, you might want to rethink your methods in light of this experiment. Retrieval practice appears to buffer performance from stress effects much better than restudying. It is likely that retrieval practice accomplishes this feat by producing multiple routes for accessing a memory. Each time you attempt to retrieve information, you also think about associations and the context of the information in slightly different ways. This provides you with multiple pathways back to the information you seek. When you are stressed, physiological correlates of stress (see Chapter 16) might interfere with your use of some but not all of these pathways. The more pathways you have, the more likely you are to find the information in memory.
Physical exercise, especially vigorous exercise such as running, increases adult neurogenesis, or the birth of new neurons, in the hippocampus, at least in mice (Bolz, Heigele, & Bischofberger, 2015; Moon et al., 2016). Not only did exercising mice experience more neurogenesis, but their memory performance also improved compared to that of mice that did not exercise. We will not guarantee that taking up jogging will improve your grades, but it will certainly benefit your health and mood (Chen et al., 2016).
Initially, many psychologists believed that the positive role of sleep in memory formation resulted from a lack of interference. If you learned something right before going to sleep, no further information would enter the system to cause interference. More sophisticated research, however, has demonstrated that sleep plays an active role in the consolidation of memories. Learning during waking might strengthen new connections, but sleep-related processing might reorganize existing memories to accommodate new information (Stickgold & Walker, 2007).
Most types of memories appear stronger after a period of sleep (Boyce, Glasgow, Williams, & Adamantidis, 2016; Genzel, Kroes, Dresler, & Battaglia, 2014). We can also say with confidence that students who pull all-nighters are not doing their memory systems a favor (Havekes et al., 2016). In one experiment, staying up all night produced poor memory for a previous task, and two additional nights of adequate sleep did not compensate for the original deprivation (Stickgold, James, & Hobson, 2000).
The early Greeks devised a number of methods, known as mnemonics, for improving memory. Mnemonic devices expand memory capacity by linking the material to be remembered to information that is relatively effortless to retrieve. The first-letter approach takes advantage of chunking. You condense a large amount of information into an acronym. For example, in Chapter 12, we’ll use the acronym OCEAN to help remember the five major personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism.
One of the classic Greek techniques was the method of loci, or places. This technique is particularly handy when you are trying to memorize a list of items in order, such as the planets in our solar system or the cranial nerves. The method takes advantage of the fact that we form excellent representations of visual images in memory. You begin by imagining a familiar place, perhaps your childhood home. As you imagine yourself walking through your home, you visualize each item in a particular location. If you wish to remember your grocery list (although writing the items down is probably easier), you might imagine a carton of eggs on the little table in your entry, a loaf of bread on the sofa, a box of cereal on the television, and so on. To recall your list, all you need to do is to take another imaginary walk through your house, recalling the items you placed as you go. If all goes well, you should remember all your items in the correct order.
This technique may sound like a lot of work, but it can be effective. One of the authors of this textbook had a colleague in graduate school who performed so perfectly on her neuroanatomy exams that her professors accused her of cheating. She related to them how she had been taught the method of loci as a childhood game and had practiced the technique throughout her academic career. After they posed several difficult lists to her, all of which she recalled perfectly, they were convinced of her honesty.
The ancient Greek mnemonic device, the method of loci, takes advantage of our superior memory for visual images of familiar places. Although the method involves consciously imagining things in a particular place, we often use location as a memory aid less consciously. You are probably familiar with the layout of your favorite grocery store and use that mental image to guide your memories for the food you need to purchase. If the store reorganizes its layout between trips, you might forget something.
The Memory Perspective
What is Transactive Memory?
Shared memories are a characteristic of close relationships. You may know people in close relationships who seem to know intuitively what the other is thinking, perhaps even finishing the partner’s sentences. According to a theory of transactive memory, couples in long-term relationships also develop a division of labor in regard to memory, in which each partner knows certain things but also knows what information can be retrieved from the partner if needed (Wegner, 1986; Wegner, Giuliano, & Hertel, 1985). For example, one partner might not keep track of where candles are stored in the house but knows that the other partner knows where the candles are and can be called upon to provide that knowledge in the event of an emergency.
How do couples develop systems like this? Three major strategies have been identified (Wegner, Erber, & Raymond, 1991). First, one partner can explicitly agree to take on an area of expertise, such as managing the household finances. Second, as people get to know each other better through self-disclosure, they also learn about each other’s relative areas of knowledge and expertise. One partner might have an interest in computer science, while the other thinks that computers work by magic. If something goes wrong with a household computer, the second person will turn to the first. Finally, couples learn about their partner’s access to information. If you know that your partner discussed holiday plans with your families, you are likely to assume that your partner knows more about your holiday options than you do.
People in close relationships form transactive memories, or a division of labor for remembering certain things. This woman might remember how to do certain home repair tasks, and her partner might remember others. Together, they have access to far more information than either individual could manage separately.
Bellurget Jean Louis/Getty Images
From an evolutionary standpoint, what are the advantages of working out this division of memory labor? One major advantage of this type of transactive memory is that a couple working well together has access to far more knowledge than either individual could manage separately. The convenience of these systems, in contrast to managing knowledge individually, would contribute to further bonding. Transactive memory is negotiated over long periods within each couple, in ways that are unique to that couple and not interchangeable with others.
The concept of transactive memory has been extended from intimate couples to larger groups (Peltokorpi, 2008). In larger groups, transactive memory contributes to group cognition, or information processing that differs from individual cognition. Transactive memory is critical to understanding the behavior of teams in organizations (Argote & Guo, 2016; Lee, Bachrach, & Lewis, 2014). As with intimate couples, transactive memory allows a group to manage more information efficiently than any individual member could, by drawing on the relevant specialties of the individuals making up the group. Transactive memory also contributes to the establishment, maintenance, and adaptation of organizational routines, or ways to solve familiar problems (Miller, Choi, & Pentland, 2014).
Whether transactive memory takes place at the couple or the organizational level, it takes time to develop. People beginning a new relationship can expect some miscommunications and misunderstandings (and overdue bills and lost candles) until their transactive memory system begins to take shape.
False Memories and Cyberbullying
Earlier in this chapter, we gave you the opportunity to explore the formation of false memories. After reading two lists, one of which contained many words associated with sleep, most participants mistakenly believe that the word “sleep” occurred in the sleep-related list (it did not). This type of experiment is known as the Deese–Roediger–McDermott or DRM paradigm, after Roediger and McDermott (1995), who updated the work of Deese (1959). Using the DRM paradigm, researchers reliably elicit false memories of words that do not actually appear on lists.
How might this relate to cyberbullying? Not too surprisingly, aggressive people often form different schemas about how the world works. This leads them to interpret ambiguous information in hostile ways (Dodge, 1980). What happens when we use the DRM paradigm to assess the effects of aggressive schemas on memory? A person’s tendency toward aggressiveness was associated with more false memories of aggressive words in an otherwise ambiguous list (Takarangi, Polaschek, Hignett, & Garry, 2008).
Can we link these aggressive tendencies more closely to actual risk of cyberbullying? When adolescents were exposed to a modified DRM paradigm containing a list of ambiguously hostile words, a list of insults, and three lists of neutral, control words, participants who reported having engaged in cyberbullying responded differently than their less aggressive peers (Vannucci, Nocentini, Mazzoni, & Menesini, 2012; see Figure 9.22). Cyberbullies showed more aggressive false memories for the ambiguously hostile words and more verbal/aggressive false memories for insults.
Figure 9.22Cyberbullying Is Linked to Hostile False Memories.
For both boys and girls, cyberbullying scores were significantly and positively correlated with false memories for violent and insulting terms but not for neutral terms. These results suggest that cyberbullying is associated with the likelihood of making hostile distortions in memory. As with all correlations, we cannot presume causality. It is possible that thinking in hostile ways promotes cyberbullying, that engaging in cyberbullying promotes hostile thinking, that cognitions and bullying mutually affect each other, or that some third variable promotes both hostile thinking and cyberbullying.
Source: Vannucci, M., Nocentini, A., Mazzoni, G., & Menesini, E. (2012). Recalling unpresented hostile words: False memories predictors of traditional and cyberbullying. European Journal of Developmental Psychology, 9(2), 182–194. doi:10.1080/17405629.2011.646459.
These results can be understood within a general aggression model (GAM; Anderson & Bushman, 2002). The GAM sees aggression as a possible outcome of three stages: person and situation inputs, present internal states (including cognitions and emotions), and appraisals and decision-making processes. The experience of hostile false memories could reinforce hostile schemas, which in turn affect the appraisals of other people’s behaviors as hostile (Takarangi et al., 2008). Seeing innocent or ambiguous behaviors as intentionally hostile might lead an aggressive person to “retaliate.” As we observed in Chapter 7, the main motive of cyberbullying appears to be reactive, or responding to a perceived threat or insult. A tendency to remember another person’s behavior as more threatening than it really was could lead to an escalation of aggression.