Introduction to Psychology from a Christian Worldview

CH. 4 LEARNING, MEMORY, AND INTELLIGENCE

Learning is defined as relatively permanent changes in behavior that result from experience but are not caused by fatigue, maturation, drugs, injury, or disease.

Memory is simply a process of encoding, storing, and retrieving pieces of information.

Everything we are, in our conscious experience, is dependent upon memory. Without memory we would live in a constant state of rediscovery, in which every moment would have to be learned anew. Learning and memory are also intricately connected to intelligence.

Intelligence is the overall capacity to think and act logically and rationally within one’s environment.

What is Learning: Approaches to Learning

Learning, psychology tells us, consists of changes in behavior. But not all changes in behavior are examples of learning.

 

Put most briefly, learning is a change in behavior (or in the potential for behavior) as a result of experience.

Learning:

A process, based on experience, that results in a relatively consistent change in behavior or behavioral potential.

Learning is difficult to assess because it cannot be observed directly; instead, inferences are made about learning based on changes in performance.

 

Learning is not easily separated from other major topics in psychology. Changes in behavior are centrally involved in many aspects of psychology, including motivation, personality, development, and even mental disorders.

Cognitive Theories:

Theories that look at intellectual processes such as those involved in thinking, problem solving, imagining, and anticipating.

Behavioristic Theories:

Theories concerned with objective evidence of behavior rather than with consciousness and mind. Sometimes these are referred to as S-R or associationistic theories because they deal mainly with associations between stimuli and responses (muscular, glandular, or mental reaction to a stimulus).

Stimulus:

Any change in the physical environment capable of exciting a sense organ. Stimuli can also be internal events such as glandular secretions or even thoughts.

Behavioristic Approaches:

Classical Conditioning and Pavlov’s Experiments

An American named Edwin Twitmyer was actually the first person known to have reported the principle of classical conditioning. About a year later, a Russian by the name of Ivan Pavlov presented essentially the same findings—only he had used dogs as subjects whereas Twitmyer had used humans.

Classical conditioning, sometimes called learning through stimulus substitution, is learning that results from repeated pairings of an unconditioned stimulus with a conditioned stimulus.

To clarify the laws of classical conditioning, Pavlov devised a series of experiments (Pavlov, 1927). In the best known of these, a dog is placed in a harness-like contraption. The apparatus allows food powder to be inserted directly into the dog’s mouth or to be dropped into a dish in front of the dog.

 

The salivation that occurs when food powder is placed in the dog’s mouth is an unlearned response and is therefore an unconditioned response (UR). The stimulus of food powder that gives rise to the UR is an unconditioned stimulus (US).

Unconditioned Response (UR):

The automatic, unlearned response an organism gives when the US is presented.

Unconditioned Stimulus (US):

A stimulus that elicits an automatic, unlearned response from an organism

Behavioristic Approaches:

Classical Conditioning and Pavlov’s Experiments Cont.

In Pavlov’s conditioning demonstration, the trainer arranged for a buzzer to sound as food powder was inserted into the dog’s mouth. This procedure was repeated a number of times.

 

After a while, the trainer simply sounded the buzzer without providing any food powder, and the dog still salivated. The animal had been conditioned to respond to the buzzer, termed a conditioned stimulus (CS), by salivating, a conditioned response (CR).

Instincts:

Complex, unlearned behaviors.

While the concept of classical conditioning might seem abstract or not applicable to important human behavior, it is one of the more important factors in maintaining dependence on abused drugs.

Most animals, including humans, are born with a number of these simple, prewired (meaning they don’t have to be learned) stimulus–response associations called reflexes. More complex behaviors that are also unlearned are instincts.

Conditioned Stimulus:

A once neutral stimulus that becomes conditioned after repeated pairings with the US.

Reflexes:

Simple, unlearned stimulus–response associations.

Conditioned Response:

The response, previously the UR, that is now given in response to the CS.

Behavioristic Approaches:

Pavlov’s Experiments and Acquisition

Several factors are directly related to the ease with which a classically conditioned response can be acquired:

 

The distinctiveness of the CS: a stimulus that is easily discriminated from other stimulation will more easily become associated with a response.

 

The temporal relationship between the conditioned and the unconditioned stimuli:

 

Delayed (or forward-order) conditioning, the ideal situation, presents the conditioned stimulus before the unconditioned stimulus, with the CS continuing during the presentation of the US.

Trace conditioning is to have the CS begin and end before the US.

Simultaneous conditioning is to present the US and the CS simultaneously.

Backward conditioning is to present the US prior to the CS.

These unconditioned stimulus (US)–conditioned stimulus (CS) pairing sequences are listed above in order of effectiveness. Conditioning takes place most quickly in the delayed sequence, where the CS (buzzer) precedes the US (food powder) and continues throughout the time the US is presented.

 

Behavioristic Approaches:

Other Classical Conditioning Concepts

Generalization and Discrimination: A dog trained to salivate in response to a buzzer may also salivate in response to a bell, a gong, or a human imitation of a buzzer. This phenomenon, stimulus generalization, involves making the same responses to different but related stimuli. The opposite phenomenon, stimulus discrimination, involves making different responses to highly similar stimuli.

Extinction and Recovery: Many classically conditioned responses are remarkably durable, but they can be eliminated—a process called extinction. An extinguished response may also re-emerge after a rest period without further training, a phenomenon called spontaneous recovery, which illustrates that behaviors that are apparently extinguished are not necessarily completely forgotten. In addition, once a classically conditioned response has been extinguished, it can be reacquired much more easily than was initially the case.

In general, classical conditioning theorists were not especially concerned with consequences; they studied relationships among stimuli and responses.

Contiguity: Pavlov’s explanation for why an emotional (or other) response such as fear becomes conditioned to a particular stimulus (or class of stimuli) is that the simultaneous or near-simultaneous presentation of a stimulus and a response leads to the formation of a neural link between the two. According to Pavlov, what is most important in the conditioning situation is the contiguity (closeness in time) of the stimulus and the response.

Blocking: A phenomenon in classical conditioning in which conditioning to a specific stimulus becomes difficult or impossible because of prior conditioning to another stimulus. One explanation for blocking is this: whenever something new happens to an animal, it immediately searches its memory to see what events could have been used to predict it.

Consequences: Learning is a fundamentally adaptive process: changes in behavior are what allow organisms to survive. One explanation for classical conditioning says, in effect, that what is learned is not a simple pairing of stimulus and response as a function of contiguity, but the establishment of relationships between stimuli. On this view, what is important in a conditioning situation is the information a stimulus provides about the probability of other events.
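This predictive, information-based view of conditioning is commonly formalized with the Rescorla–Wagner model, which is not named in this chapter; the Python sketch below uses that model with illustrative parameter values (alpha, lambda) to show how learning driven by prediction error captures acquisition, extinction, and blocking.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of (cues_present, us_present); returns per-trial associative strengths."""
    strengths = {}           # associative strength V for each cue, starting at 0
    history = []
    for cues, us_present in trials:
        prediction = sum(strengths.get(cue, 0.0) for cue in cues)
        error = (lam if us_present else 0.0) - prediction   # prediction error drives learning
        for cue in cues:
            strengths[cue] = strengths.get(cue, 0.0) + alpha * error
        history.append(dict(strengths))
    return history

# Acquisition (buzzer paired with food), then extinction (buzzer alone):
acq_ext = [({"buzzer"}, True)] * 10 + [({"buzzer"}, False)] * 10
history = rescorla_wagner(acq_ext)
print(round(history[9]["buzzer"], 2))    # ~0.97: strong CR after acquisition
print(round(history[19]["buzzer"], 2))   # ~0.03: CR extinguished

# Blocking: prior conditioning to a light leaves almost no prediction error
# for the buzzer to absorb when light and buzzer are later paired together.
blocking = [({"light"}, True)] * 10 + [({"light", "buzzer"}, True)] * 10
print(round(rescorla_wagner(blocking)[19]["buzzer"], 2))   # stays near 0: blocked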

Behavioristic Approaches:

Other Classical Conditioning Concepts Cont.

Operant Conditioning: Skinner

Operant conditioning is built around the importance of behavior’s consequences. Operant conditioning is closely associated with B. F. Skinner (1953, 1969, 1971, 1989), one of the most influential psychologists of this age.

Operant Conditioning:

(Skinner) describes changes in the probability of a response as a function of its consequences

Skinner noted that although classical conditioning explains some simple forms of learning where responses are associated with observable stimuli (respondent behavior), most of our behaviors are of a different kind.

 

Behaviors such as walking, jumping, listening to music, writing a letter, and so on are more deliberate; they are seldom associated with a specific stimulus the way salivation might be. These behaviors appear more voluntary.

 

Skinner calls them operants because they are operations that are performed on the environment rather than in response to it.

Respondent:

A response elicited by a known, specific stimulus. An unconditioned response.

Operant:

An apparently voluntary response emitted by an organism.

Operant Conditioning: The Skinner Box

In his investigations, Skinner used a highly innovative piece of equipment now known as a Skinner box.

When a naive rat is placed in this box, it does not respond as predictably as a dog in Pavlov’s harness. Its behaviors are more deliberate, perhaps more accidental. It does not know about Skinner boxes and food trays.

 

It needs to be magazine trained. In a typical magazine training session, these steps are followed:

The experimenter depresses a button that releases a food pellet into the tray.

At the same time, there is an audible clicking sound.

Eventually the rat is drawn to the tray, perhaps by the smell of the pellet, perhaps only out of curiosity.

The experimenter releases another food pellet.

The rat hears the click, eats the pellet, hears another click, runs over to eat another pellet (repeat).

 

In a very short period of time, the rat has been magazine trained.

Skinner Box:

An experimental chamber used in operant conditioning experiments

All of the basic elements of Skinner’s theory of operant conditioning are found in the rat-in-Skinner-box demonstration:

The bar pressing is an operant—an emitted behavior.

The food is a reinforcer

Reinforcement is its effect.

Any stimulus (condition or consequence) that increases the probability of a response is said to be reinforcing.

The Basic Operant Conditioning Model

Reinforcement:

The effect of a reinforcer.

Reinforcer:

Any stimulus condition or consequence that increases the probability of a response.

After a rat has been magazine trained in a Skinner box, it may begin to emit the operant immediately when it is placed in the same situation on another occasion. The rat has learned associations not only between the operant and reinforcement, but also between the operant and specific aspects of the situation—called discriminative stimuli (SD).

In brief, Skinner’s explanation of learning is based on associations that are established between a behavior and its consequences. Any other distinctive stimulus that happens to be present at the time of those consequences may also come to be associated with the operant.

Discriminative Stimulus (SD):

Skinner’s term for the features of a situation that an organism can discriminate to distinguish between occasions that might be reinforced or not reinforced.

The Law of Effect is the basic law of operant conditioning:

 

Behaviors followed by reinforcement are more likely to be repeated and behaviors not followed by reinforcement are less likely to recur.

 

Operant conditioning does suggest a way of teaching animals or humans very complex behaviors by reinforcing small sequential steps in a chain of behaviors that will ultimately lead to the desired final behavior.

 

 

 

This process is called shaping.

 

 

 

In shaping, the animal (or person) does not learn a complete final response at once but is reinforced instead for behaviors that come progressively closer to that response.

 

Shaping can be a very helpful concept for the process of potty training small children.

Operant Conditioning: Shaping

Shaping:

Reinforcing small sequential steps in a chain of behaviors, leading to the desired final behavior.
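As a rough illustration of these successive approximations, here is a toy Python sketch. The numbers and the learning rule are illustrative assumptions, not taken from the text: a simulated learner emits variable responses around its current typical level, only responses that meet the current criterion are reinforced, and the criterion is moved slightly closer to the final target each time it is met.

import random

def shape(target=100.0, variability=5.0, seed=1):
    """Toy shaping: reinforce responses that meet a criterion that keeps moving toward the target."""
    rng = random.Random(seed)
    level = 0.0        # the learner's current typical response
    criterion = 5.0    # start with an easily reached approximation
    trials = 0
    while level < target and trials < 10_000:
        trials += 1
        response = rng.gauss(level, variability)   # behavior varies around 'level'
        if response >= criterion:                  # close enough to the current approximation?
            level = max(level, response)           # reinforced behavior becomes the new typical response
            criterion = min(target, level + 2.0)   # raise the bar slightly for the next step
    return trials

print("trials needed to shape the final behavior:", shape())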

Operant Conditioning: Schedules of Reinforcement

Types of Reinforcement Schedules
Fixed Ratio (FR): This schedule provides reinforcement after a specific/defined number of responses are made. For example, if you were to tell your child that they could have a piece of candy after they pick up 10 toys, that child would be on a FR 10 schedule.
Variable Ratio (VR): This schedule provides reinforcement after a certain yet changing number of responses are emitted. For example, if you were to tell your child that they could have a piece of candy after they pick up about 20 toys, they would be on a VR 20 schedule. The difference between this schedule and a FR schedule is that they could be rewarded after picking up 15, 17, 18, 21, 24 or 25 toys.
Fixed Interval (FI): This schedule provides reinforcement for the first response made after a certain time period has elapsed since the last reward, regardless of how many responses have been made during the interval. For example, you tell your child to eat his or her dinner and, to ensure that your child is eating, you tell him or her that you will check on him or her every 3 minutes. If they are eating when you check on them, you tell them that they can have a piece of candy.
Variable Interval (VI): This schedule provides a reinforcement after the first response is made after some period of time has elapsed, but the time changes or varies from reinforcer to reinforcer. For example, if you were to tell your child to do his homework and to ensure that it was actually being done you check on him at 2, 4, 8, 10, 12, 16, and 18 minutes, that child would be on a VI 8 minute schedule.
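The four schedules above can also be read as simple rules for deciding whether a given response earns reinforcement. The Python sketch below is illustrative only; the class names, parameter values, and the assumption of one response every half time unit are ours, not the text’s.

import random

class FixedRatio:
    """Reinforce every n-th response (FR n)."""
    def __init__(self, n):
        self.n = n
        self.count = 0
    def response(self, t):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """Reinforce after a varying number of responses averaging mean_n (VR mean_n)."""
    def __init__(self, mean_n, rng):
        self.mean_n = mean_n
        self.rng = rng
        self._reset()
    def _reset(self):
        self.needed = max(1, round(self.rng.gauss(self.mean_n, self.mean_n / 4)))
        self.count = 0
    def response(self, t):
        self.count += 1
        if self.count >= self.needed:
            self._reset()
            return True
        return False

class FixedInterval:
    """Reinforce the first response after a fixed interval has elapsed (FI interval)."""
    def __init__(self, interval):
        self.interval = interval
        self.available_at = interval
    def response(self, t):
        if t >= self.available_at:
            self.available_at = t + self.interval
            return True
        return False

class VariableInterval:
    """Reinforce the first response after a varying interval averaging mean_interval (VI)."""
    def __init__(self, mean_interval, rng):
        self.mean = mean_interval
        self.rng = rng
        self.available_at = self.rng.expovariate(1 / self.mean)
    def response(self, t):
        if t >= self.available_at:
            self.available_at = t + self.rng.expovariate(1 / self.mean)
            return True
        return False

rng = random.Random(0)
schedules = {
    "FR 10": FixedRatio(10),
    "VR 10": VariableRatio(10, rng),
    "FI 3": FixedInterval(3.0),
    "VI 3": VariableInterval(3.0, rng),
}
for name, schedule in schedules.items():
    # the "organism" responds once every 0.5 time units for 60 time units
    rewards = sum(schedule.response(i * 0.5) for i in range(120))
    print(f"{name}: {rewards} reinforcements earned")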

Operant Conditioning: Effects of Different Schedules of Reinforcement

The effects of different schedules of reinforcement are evident in three different dependent variables:

Rate of learning (acquisition rate):

Initial learning is usually more rapid when every correct response is reinforced (a continuous schedule).

If only some responses are reinforced (intermittent schedule), learning tends to be slower and more haphazard.

Rate of responding:

With intermittent schedules, this is closely tied to expectations the animal might develop about how and when it will receive reinforcement.

With a fixed-interval schedule, this tends to drop off dramatically immediately after reinforcement and picks up again just before the end of the time interval.

Rate of forgetting (extinction rate):

Extinction is typically more rapid with a continuous schedule than with intermittent schedules.

Of the intermittent schedules, variable ratio schedules typically result in the longest extinction times.

 

The independent variable in studies of operant conditioning is the experimenter’s control of reinforcement (the schedule of reinforcement).

 

Operant Conditioning: Types of Reinforcement

Extrinsic Reinforcement

is reinforcement that comes from an external source and increases a behavior in the future (e.g., reading to earn a reward). It includes the variety of external stimuli that might increase the probability of a behavior.

Intrinsic Reinforcement

may be loosely defined as satisfaction, pleasure, or reward that is inherent in a behavior and is therefore independent of external rewards. It is reinforcement to increase a behavior in the future that comes from an internal source (e.g., reading because one loves to read).


Primary Reinforcers

are stimuli that are naturally rewarding for an organism. They are stimuli that are rewarding for most people, most of the time, without anybody having had to learn that they are rewarding, such as food, drink, sleep, comfort, and sex. These are not learned.

Secondary Reinforcers

are stimuli that may not be reinforcing initially but that eventually become reinforcing as a function of having been associated with other reinforcers. Thus, secondary reinforcers are learned.

 

 

Negative Reinforcers

are effective not when they are added to a situation, but rather when they are removed.

With these reinforcers, an unwanted or painful stimulus is removed and, consequently, the probability that the behavior will be repeated increases.

 

Positive Reinforcers

are pleasing or positive stimuli; when such a stimulus is given, the probability that the behavior will be repeated increases.

 

If the stimulus increases the probability of a behavior it follows, it is a positive reinforcer.

vs.

 

Negative Reinforcement

occurs whenever a behavior gets rid of something undesirable, and the person becomes more likely to engage in that behavior in the future. Meaning, the experience is positive because a bad thing is taken away.

Positive Reinforcement

occurs when a behavior allows a person to experience something that is pleasurable—like getting high or onset of euphoria/reward. In turn, this increases the probability that the person will engage in that behavior in the future.

vs.

 

Operant Conditioning: Types of Reinforcement Cont.

Punishment vs. Reinforcement

Negative reinforcement increases the probability of a response; the intended effect of punishment is precisely the opposite.

 

The consequences of behavior can involve the removal or presentation of stimuli that are pleasant or unpleasant (noxious). This presents the four distinct possibilities that are relevant to operant learning.

Positive Reinforcement: Increases the probability of a behavior’s occurrence; involves giving the person a desired stimulus.
Positive Punishment: Decreases the probability of a behavior’s occurrence; involves giving the person an undesired stimulus.
Negative Reinforcement: Increases the probability of a behavior’s occurrence; involves the removal of an undesired stimulus.
Negative Punishment: Decreases the probability of a behavior’s occurrence; involves the removal of a desired stimulus.
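The two-by-two logic above can be captured in a few lines of Python: whether a stimulus is added or removed, and whether it is desired or undesired, determines which of the four names applies and whether the behavior becomes more or less likely. The function name and examples below are our own, added for illustration.

def classify_consequence(stimulus_is_desired, stimulus_is_added):
    """Name the consequence and its effect, per the four cases above."""
    if stimulus_is_added:
        if stimulus_is_desired:
            return "positive reinforcement (behavior becomes more likely)"
        return "positive punishment (behavior becomes less likely)"
    if stimulus_is_desired:
        return "negative punishment (behavior becomes less likely)"
    return "negative reinforcement (behavior becomes more likely)"

# A child is given candy after cleaning up (desired stimulus, added):
print(classify_consequence(stimulus_is_desired=True, stimulus_is_added=True))
# A headache goes away after taking medicine (undesired stimulus, removed):
print(classify_consequence(stimulus_is_desired=False, stimulus_is_added=False))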

 

The Ethics of Punishment

Objections to Punishment:

Punishment is not always effective in eliminating undesirable behavior. Certainly, it is not nearly as effective as reinforcement in bringing about more desirable behavior.

Punishment often leads to undesirable emotional side effects sometimes associated with the punisher rather than with the punished behavior.

Punishment does not present a guide for desirable behavior; instead, it emphasizes undesirable behavior.

Some research indicates that punishment sometimes has effects opposite to those intended.

Most of these objections apply mainly to physical punishment and not to other forms of punishment.

These other forms of punishment (verbal reprimands, loss of privileges) have long been considered legitimate and effective means of controlling behavior; however, taking these forms of punishment to the extreme is also harmful to development.

 

Perhaps it would be better to re-conceptualize punishment, moving from strict discipline to discipleship. Discipline simply seeks to decrease an unwanted behavior, whereas discipleship seeks to develop the person, primarily through appropriate mentorship and modeling.

If punishment is to be used, here are six evidence-based guidelines:

The punishment or aversive stimulus must be swift and brief.

It should be administered right after the inappropriate response occurs.

The punishment should be of limited intensity.

The punishment should be aimed at reducing unwanted behavior not at humiliating the person or attacking their character.

The punishment should be limited to the situation in which the response occurs.

It should consist of penalties instead of physical pain.

Operant Conditioning and Human Behavior

Our lives illustrate what are called concurrent schedules of reinforcement—a variety of options, each linked with different kinds and schedules of possible reinforcement.

Concurrent Schedule of Reinforcement:

A situation in which two or more different reinforcement schedules, each typically related to a different behavior, are presented at the same time.

In experimental situations where participants can choose between different behaviors with different probabilities of reward, they try to maximize the payout.

There are six important variables in determining which models will be most likely to influence behavior. A model’s observed behavior is most influential when:

It is seen as having reinforcing consequences.

The model is perceived positively, liked, and respected.

There are perceived similarities between features and traits of the model and the observer.

The observer is rewarded for paying attention to the model’s behavior.

The model’s behavior is visible and stands out in comparison to other models.

The observer is capable of reproducing the behavior being observed.

A Transition to Cognitivism:

Problems for Traditional Behaviorism & Insight

Instinctive drift presents a problem for traditional operant theory. It is now apparent that not all behaviors can be conditioned and maintained by schedules of reinforcement, that there is some degree of competition between unlearned, biologically based tendencies and the conditioning of related behaviors.

 

Additionally, behaviors that are highly probable and relatively easy are typically those that have high adaptive value. Among humans, these might include behaviors such as learning a language so we can communicate.

Wolfgang Köhler spent 4 years trying to frustrate apes with a pair of problems: the “stick” problem and the “box” problem.

 

Both problems are essentially the same; only the solutions differ. In both, an ape finds itself unable to reach a tantalizing piece of fruit, either because it is too high or because it is outside the cage beyond reach. In the “stick” problem, the solution involves inserting a small stick inside a larger one to reach the fruit. In the “box” problem, the ape has to place boxes one on top of the other.

When the ape realizes that none of its customary behaviors is likely to obtain the bananas, it may sit for a while, apparently pondering the problem. Then, bingo, it leaps up, quickly joins the sticks or piles the boxes, and reaches for the prize. This solution was insight.

According to Köhler, insight is a complex, largely unconscious process and is not easily amenable to scientific examination.

Insight:

The sudden recognition of relationships among elements of a problem.

The Main Beliefs of Cognitive Psychology

Cognitivism is an approach concerned mainly with intellectual events such as problem solving, information processing, thinking, and imagining.

 

The dominant metaphor in cognitive psychology, notes Garnham (2009), is a computer-based, information processing (IP) metaphor. The emphasis is on the processes that allow the perceiver to perceive, that determine how the actor acts, and that underlie thinking, remembering, solving problems, and so on.

 

Not surprisingly, experimental participants in cognitive research tend to be human rather than nonhuman.

The Main Beliefs of Cognitive Psychology
Learning Involves Mental Representation: The cognitive view describes an organism that is more thoughtful, that can mentally imagine and anticipate the consequences of behavior. In this view, the learner actively participates in the learning process, discovering, organizing, and using strategies to maximize learning and reward.
Learners Are Not Identical: Individuals come with different background information, different inclinations and motives, different genetic characteristics, and different cultural origins. As a result, even in the same situation, individuals often learn very different things.
New Learning Builds on Previous Learning: The importance of individual differences among learners rests partly on the fact that new learning is often highly dependent upon previously acquired knowledge and skills.

Bandura’s Social Cognitive Theory

We learn many complex behaviors, explains Bandura, through observational learning—that is, by observing and imitating models. In a sense, learning through imitation is a form of operant learning, in that the imitative behavior is like an operant that is learned as a result of being reinforced.

 

A large number of studies indicate that social imitation is a powerful teacher among humans: Even children as young as 2 or 3 imitate and learn from each other.

In Bandura’s social cognitive theory, models are not limited to people who might be imitated by others; they include symbolic models as well. Symbolic models are any representation or pattern that can be copied, such as oral or written instructions, pictures, book characters, mental images, cartoon or film characters, and television actors.

Models provide the imitator with two kinds of information:

How to perform an act

What the likely consequences of doing so are.

 

If the observer now imitates the behavior, there is a possibility of either direct reinforcement or vicarious (secondhand) reinforcement.

Vicarious Reinforcement:

When you see someone doing something repeatedly, you unconsciously assume that the behavior must be reinforcing for that person.

Direct Reinforcement:

Results from the consequences of the act itself.

Symbolic Model:

A model other than a real-life person. For example, books and TV provide important symbolic models.

Social Cognitive Theory:

An explanation of learning and behavior that emphasizes the role of social reinforcement and imitation as well as the importance of the cognitive processes that allow people to imagine and to anticipate.

 

Bandura’s Social Cognitive Theory: Reciprocal Determinism

According to Bandura, reinforcement does not control us blindly; its effects depend largely on our awareness of the relationship between our behavior and its outcomes. What is fundamentally important is our ability to figure out cause-and-effect relationships and to anticipate the outcomes of our behaviors.

 

That we are both products and producers of our environment is the basis of Bandura’s concept of triadic reciprocal determinism.

There are three principal features of our social cognitive realities:

 

Our personal factors – our personalities, our intentions, what we know and feel

Our actions – our actual behaviors

Our environments – both the social and physical aspects of our world

 

These three factors affect each other reciprocally.

Triadic Reciprocal Determinism:

Describes the three principal features of our social cognitive realities: our personal factors (our personalities, our intentions, what we know and feel); our actions (our actual behaviors); and our environments (both the social and physical aspects of our world).

Bandura’s Social Cognitive Theory: Effects of Imitation

According to Bandura, through observational learning, we learn three different classes of behaviors:

Modeling Effect:

The type of imitative behavior that involves learning a novel response.

Inhibitory/

Disinhibitory Effect:

The type of imitative behavior that results either in the suppression (inhibition) or appearance (disinhibition) of previously acquired deviant behavior.

Eliciting Effect:

Imitative behavior in which the observer does not copy the model’s responses but simply behaves in a related manner.

Modeling Effect: Acquiring a new behavior as a result of observing a model. Illustration: After watching a mixed martial arts program, Jenna tries out a few novel moves on her young brother, Liam.

Inhibitory-Disinhibitory Effect: Stopping or starting some deviant behavior after seeing a model punished or rewarded for similar behavior. Illustration: After watching Jenna, Nora, who already knew all of Jenna’s moves but hadn’t used them in a long time, now tries a few of them on her sister (disinhibitory effect). Nora abandons her pummeling of her sister when Liam’s mother responds to his wailing and takes Jenna’s smartphone away (inhibitory effect).

Eliciting Effect: Engaging in behavior related to that of a model. Illustration: Robin tries to learn to play the guitar after her cousin is applauded for singing at the family reunion.

We learn brand new behaviors, modeling effect.

 

We learn to suppress or stop suppressing deviant behaviors, inhibitory/disinhibitory effect.

 

We learn to engage in behaviors similar but not identical to the model’s behavior, eliciting effect.

Bandura’s Social Cognitive Theory: Humans as Agents of Their Own Behaviors

Bandura’s notion of triadic reciprocal determinism holds that behavior, the person, and the environment all mutually influence and change each other.

Being agents of our own actions requires three things:

 

Intentionality.

Intentionality implies forethought, the ability to symbolize that allows you to foresee the consequences of the actions you intend.

Being able to reflect on our actions and on ourselves, and especially on our own effectiveness, or self-efficacy.

Self-efficacy:

Judgments we make about how effective we are in given situations.

Practical Applications of Learning Principles

Cognitivism
Discovery Learning: A learner-centered approach to teaching in which content is not organized by the teacher and presented in a relatively final form. The acquisition of new knowledge comes about largely through the learner’s own efforts; learners are expected to investigate and discover for themselves, and to construct their own mental representations.
Constructivism: A general term for student-centered approaches to teaching, such as discovery-oriented approaches, reciprocal learning, or cooperative instruction—so called because of their assumption that learners should build (construct) knowledge for themselves.
Reciprocal Teaching: A teaching method designed to improve reading comprehension.
Cognitive Apprenticeship: An arrangement in which novice learners are paired with older learners, teachers, or parents who serve as mentors and guides.
Behaviorism
Behavior Modification: The systematic application of learning principles to change behavior. It is widely used in schools and institutions for children with behavioral and emotional problems, as well as in the treatment of mental disorders. Essentially, it involves the deliberate and systematic use of reinforcement, and sometimes punishment, to modify behavior.

Memory and Intelligence: Attention

Generally, attention has been conceptualized as a state of concentrating on something (focalization of consciousness) or as a finite processing capacity that can be allocated in a variety of ways.

 

Along with dividing attention based on the task performed, contemporary models of attention tend to fall into two general categories:

Theories that view attention as a causal mechanism, which distinguish between automatic and controlled processes.

Theories that see attention as a consequence of other processes, like priming activities for some memory.

Attention:

A concentrated mental effort that functions as a filter to ignore unimportant events and focus on important events.

Cognitive overload occurs when excessive demands are placed on particular cognitive processes, especially attention and memory.

Cognitive Overload:

The amount of working memory resources dedicated to a specific task with the idea that there is a limit to the amount of processing load the brain can manage.

Factors Influencing Attention:

The number of sources

The similarity of sources

The complexity of the tasks

Automaticity

As people try to multitask, their cognitive resources get “stretched” to their maximum ability, which interferes with performance and results in more rapid fatigue due to cognitive shifts from one source or activity to another.

The Basics of Memory

Memory is simply a process of encoding, storing, and retrieving pieces of information. Each stage in the memory process is important for accuracy and for the ability to retrieve the information later.

 

We refer to the information itself as a “memory.” The ability to bring that particular memory into our cognitive awareness depends on encoding, storage, and retrieval.

Memory:

The process of encoding, storage, and retrieval of any piece of information obtained through conscious experience;

a memory can also be an individual instance of encoded, stored, and retrieved information.

Encoding:

The process of transforming experienced information into a form that can be later stored and used by the brain.

Storage:

The process of retaining encoded information over time. This occurs after encoding.

Retrieval:

The process of recognizing and then correctly recalling a piece of information from storage in long-term memory.

Step 1: Encoding

Step 2: Storage

Step 3: Retrieval

Memory structures

The stage model of memory suggests the process of encoding → storage → retrieval is integrated into these three different levels of memory. These levels are sensory memory, short-term memory, and long-term memory.

Sensory Memory:

A form of memory that holds large amounts of sensory information such as sights and sounds for a very brief amount of time, normally only a few seconds.

Sensory memory cannot be retained for a longer duration through the process of rehearsal (e.g., studying).

It seems to happen automatically, without awareness, and is very difficult to manipulate through psychological techniques.

It captures a very large amount of information.

The conscious awareness we have been discussing to this point is more formally termed short-term memory. Another term for short-term memory is “working memory.”

Short-term Memory:

A type of temporary memory used to hold information long enough for an individual to process it, and make sense of it; also called working memory.

Working Memory:

The Baddeley model describing how information is processed in short-term memory by means of a control system (central executive system) and systems that maintain verbal material (phonological loop) and visual material (visual-spatial sketch pad).

Miller (1956) discovered humans seem to be capable of holding five to nine different items for a short amount of time.

 

On average, the amount of time a piece of information will remain in short-term memory—without rehearsal—is 20 seconds.

 

According to Baddeley, short-term memory is divided into three components:

The visuospatial sketchpad is specialized to process visual and spatial information.

 

The phonological loop, which is specialized for auditory and verbal information, is the part of short-term memory assumed to be used during most traditional memory tasks, such as remembering word lists.

 

The central executive acts as a type of CEO, organizing and integrating the specialized processing of the visuospatial sketchpad and phonological loop prior to encoding the information into long-term memory. It also plays a key role in dictating when retrieval from long-term memory will occur, and which information will be retrieved.

 

Rehearsal:

The process of repeatedly going over new information in order to retain it in short-term memory or to move it into long-term memory.

Maintenance Rehearsal:

A relatively shallow level of rehearsal typically characterized by repeating something many times (e.g., repeating a phone number in your head).

Elaborative Rehearsal:

A type of rehearsal in which a person actively tries to tie new information to pre-existing information already in long-term memory. The net effect is to increase the likelihood that the new information is retained in long-term memory.

Memory Structures

Encoding helps move information from its temporary state in short-term memory to a much longer lasting state in long-term memory through rehearsal.

 

The two main types of rehearsal used to get information into long-term memory are maintenance rehearsal and elaborative rehearsal.
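As a rough sketch of the stage model described above, the toy Python class below holds about seven items in a short-term store, drops unrehearsed items after roughly 20 seconds, and moves an item into a long-term store once it has been rehearsed enough times. The class name, the three-rehearsal threshold, and the displacement rule are illustrative assumptions; only the capacity and duration figures echo the text.

class StageMemory:
    CAPACITY = 7          # "magical number seven, plus or minus two"
    DURATION = 20.0       # seconds an unrehearsed item survives in short-term memory

    def __init__(self):
        self.short_term = {}   # item -> (time last refreshed, rehearsal count)
        self.long_term = set()

    def encode(self, item, now):
        self._decay(now)
        if len(self.short_term) >= self.CAPACITY:
            # displace the stalest item when the store is full
            oldest = min(self.short_term, key=lambda i: self.short_term[i][0])
            del self.short_term[oldest]
        self.short_term[item] = (now, 0)

    def rehearse(self, item, now):
        if item in self.short_term:
            _, count = self.short_term[item]
            self.short_term[item] = (now, count + 1)   # refresh the trace
            if count + 1 >= 3:                         # enough rehearsal -> long-term memory
                self.long_term.add(item)

    def contents(self, now):
        """Items still available in short-term memory at time 'now'."""
        self._decay(now)
        return set(self.short_term)

    def _decay(self, now):
        self.short_term = {i: v for i, v in self.short_term.items()
                           if now - v[0] <= self.DURATION}

m = StageMemory()
m.encode("phone number", now=0.0)
m.encode("stray thought", now=1.0)
for t in (5.0, 10.0, 15.0):
    m.rehearse("phone number", now=t)
print("in long-term memory:", m.long_term)                     # {'phone number'}
print("in short-term memory a minute later:", m.contents(60))  # set(): both traces decayed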

Memory Structures Summarized

Source: Brian Kelley

Memory Structures: Long-Term Memory

Long-Term Memory

is one of the human memory systems; it can store information for a long period of time and is divided into two types:

explicit memory and implicit memory.

Explicit Memory: Conscious recollection of facts or experiences. It requires you to consciously think about general knowledge about the world and specific concepts (semantic memory), or it may involve conscious access to personal experiences that took place at a specific time in a certain place (episodic memory). Semantic Memory: one’s general knowledge about the world and specific concepts. Episodic Memory: conscious recollection of one’s personal experiences that took place at a specific time in a certain place.

Implicit Memory: Memory for how to perform a task, which is usually unconsciously accessible; it is generally easier to demonstrate or perform this memory than to explain it. Procedural Memory: the learning of motor and cognitive skills.

Source: Brian Kelley

Forgetting: Interference Theory

Interference Theory is currently the dominant theory used to explain forgetting.

 

 

Interference can happen in one of two ways:

 

When new material competes with and interferes with something you learned earlier, the result is retroactive interference.

When old material competes with new things we are trying to learn, the result is proactive interference.

Interference Theory:

A theory of forgetting which suggests most forgetting is the result of an interaction between new and previously learned information, leading either to a failure to learn new material, or a forgetting of past material.

Retroactive Interference:

A form of interference in which more recent information gets in the way of trying to recall older information.

Proactive Interference:

This is when old information prevents the formation or recall of newer information.

 

Schacter (1999) discussed the forgetful nature of our long-term memory in terms of the following seven characteristics.

 

Absent-mindedness: This refers to a breakdown at the interface between attention and memory.

 

Transience: Despite our effort, memory tends to diminish quickly. It describes the temporal nature of our memory.

 

Blocking: Blocking is also known as the tip-of-the-tongue phenomenon––when you feel you know something and try to remember it but cannot articulate it. It happens because stored information is temporarily inaccessible.

 

Misattribution: This refers to the confusion of the original source of information.

 

Suggestibility: Due to the constructive nature of our memory system, leading words or suggestions from others can alter and bias our memory.

 

Bias: Current personal experiences or events can cloud how we remember similar events that happened in the past.

 

Persistence: According to Schacter, negative events tend to linger longer than neutral or positive memories.

Seven Deadly Sins of Memory

Forgetting:

Memory Reconstruction & Long-lasting Memories

Long-term memory is malleable; it can be surprisingly easy to implant a new memory or alter one’s existing memory.

 

Loftus’ (1975) results show the long-term memory system may not be as accurate as we wish and can be changed by a simple leading question.

 

An important note is that these results imply that eyewitness testimony in legal procedures is subject to the reconstructive nature of memory.

 

Although long-term memories are not perfect, there seem to be special events that are relatively immune from forgetting. Flashbulb memories are typically long-lasting, precise, and accurate.

 

Flashbulb Memories:

A type of long-term memory. Memories formed by dramatic and surprising public or personal events; typically known to be immune from forgetting.

There are a few useful strategies for successful study habits:

 

Repeat the materials: Proper encoding and repetition help maintain the information for a longer period of time.

 

When repeating the materials, leave some space between your study sessions: New materials are more vulnerable to forgetting, so repeat the process to consolidate that information.

 

Make the materials meaningful: Information that goes through elaborative rehearsal is processed deeply and more deeply processed information is typically remembered better.

 

Minimize interference if possible: If you are studying two similar contents, leave some time between the two rather than working on them back to back. Devising your own memory aid (mnemonic device) can also help.

Forgetting: Improving Memory

Intelligence

Intelligence and intelligence testing have been among the most studied and controversial constructs in the history of psychology.

 

It has been extremely useful in identifying individual strengths and weaknesses and in providing additional information on how best to teach struggling students. However, it has also been used to segregate and label individuals.

 

To make matters even more difficult, psychologists have not been able to agree on an operational definition of intelligence.

Intelligence:

The overall capacity to think and act logically and rationally within one’s environment.

The Binet–Simon Scale was designed so that test items increased in level of difficulty related to age. As you can expect, a 10-year-old should be capable of completing more complex tasks than a 7-year-old. This is the genesis of the concept of mental age.

 

A child’s overall intelligence was calculated by dividing the child’s mental age by chronological age and multiplying the result by 100. Here is the calculation for a 10-year-old child performing at a mental age of 7:

 

7 (mental age)/10 (chronological age) × 100 = 70 (IQ)
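A quick way to check the ratio-IQ arithmetic is shown below (a minimal Python sketch; the function name ratio_iq is ours, and modern tests report deviation IQs rather than this ratio).

def ratio_iq(mental_age, chronological_age):
    """Ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(7, 10))   # 70.0, matching the worked example above
print(ratio_iq(10, 10))  # 100.0, average performance for one's age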

 

Currently, an IQ of 100 is considered average.

 

One of the major changes made to the Binet–Simon Scale was that Terman (1916) introduced the term intelligence quotient (IQ) rather than using mental age.

Intelligence and Intelligence Testing: History and Development

During World War I, Yerkes and colleagues created two group-administered intelligence tests: Army Alpha and Army Beta. Both intelligence tests were used to assess incoming soldiers on their ability to serve the military and move into leadership positions.

 

The Wechsler–Bellevue Intelligence Scale extended intelligence testing to adult populations that, at the time, could not be adequately assessed using the Stanford–Binet. Currently, there are four different Wechsler intelligence tests.

Intelligence Quotient:

The global score derived from standardized intelligence tests.

Mental Age:

The age level at which a child is currently performing intellectually.

Army Alpha:

The test given to literate military personnel to determine rank.

Army Beta:

The nonverbal test given to illiterate military personnel to determine rank.

Review TABLE 4.4 in your textbook for the full review of Contemporary Theories on Intelligence

Intelligence and Intelligence Testing: Contemporary Theories on Intelligence

Charles Spearman: Psychometric g.
Howard Gardner: He proposed a theory of multiple intelligences (at least 8 different types) because he believed intelligence could be best described and understood through multiple abilities.
Lewis Thurstone: Word fluency, verbal comprehension, spatial visualization, number facility, associative memory, reasoning, and perceptual speed.
Robert Sternberg: Analytical intelligence, practical intelligence, and creative intelligence.
Cattell–Horn–Carroll (CHC): One of the most researched and widely accepted theories of intelligence; the CHC Theory of Intelligence is an integration of Cattell and Horn’s and Carroll’s models of intelligence.

Psychometric g:

A term coined by Spearman regarding a person’s general or overall intelligence.

Multiple Intelligences:

A theory suggesting that intelligence is a product of a number of abilities rather than one ability.

CHC Theory of Intelligence:

The most researched and widely supported theory of intelligence.

 

Intelligence: Genetic & Environmental Influences

It is important to know you are not born with a level of intelligence. Intelligence is not innately given to each individual and can change throughout one’s lifetime.

 

People are born with a genetic predisposition, or genotype, which is expressed through interaction with the environment as a phenotype.

One way in which psychological researchers have examined the relationship between intelligence and genetic/environmental influences is through twin studies.

 

Nobody truly knows the exact percentage genetics or environment play in overall IQ scores; however, it is widely agreed that both do influence IQ test scores.

 

Intelligence is not a fixed number. It is a manifestation of your genotype as it interacts with your phenotype throughout development.

Intelligence: Cultural Influences

Dating back to the second half of the 20th century, multiple landmark court cases have attempted to answer the question, “Are intelligence tests culturally biased?”

Culturally Biased:

A term used when an intelligence test gives an unfair advantage to White, affluent, male test takers.

Culturally Loaded:

A term used when many of the items on the intelligence test are derived from the mainstream culture.

Results of statistical analyses should be able to identify whether intelligence tests are culturally biased. Meaning, if intelligence tests are consistent in their measurement across ethnic groups, they are not assumed to be culturally biased.

The consensus is that intelligence tests are psychometrically fair

 

It is important to differentiate between culturally biased and culturally loaded.

In Larry P. v. Riles, the judge concluded intelligence tests are racially and culturally biased.

Conversely, in Parents in Action on Special Education v. Joseph P. Hannon, the federal court ruled intelligence tests are not culturally biased.

In fact, intelligence tests are culturally loaded; all intelligence tests have some degree of relationship with the culture in which they were designed.

 

It is entirely possible that different levels of cultural exposure could result in group differences in IQ scores. Intelligence test scores must be interpreted within the context that they are achieved.

Created by Bailee Robinson