Exploring Popular Assumptions


Give one example of a ‘folk-psychology’ claim that was not discussed in section this week. This can be something you have heard in your life, or one that you find on the Internet or in popular media. Explain why this claim is pseudoscientific rather than scientific.

Identifying Bias in Scientific Reporting

After discussing bias in class and lab this week, find a news article about a scientific topic that includes an example of one type of bias.

Answer the following:

a) Give the title and a link to the article you read.

b) Define the type of bias you identified.

c) Describe the example of that bias you found in the article.

d) Explain why this kind of bias is problematic.

e) Give a few ways this bias could have been mitigated in your example.

  • Beth Morling – Research Methods in Psychology_ Evaluating a World of Information.pdf

    THIRD EDITION

    Research Methods in Psychology EVALUATING A WORLD OF INFORMATION

    Beth Morling UNIVERSITY OF DELAWARE

    W. W. NORTON & COMPANY, INC. NEW YORK • LONDON

    W. W. Norton & Company has been independent since its founding in 1923, when William Warder Norton and Mary D. Herter Norton first published lectures delivered at the People’s Institute, the adult education division of New York City’s Cooper Union. The firm soon expanded its program beyond the Institute, publishing books by celebrated academics from America and abroad. By midcentury, the two major pillars of Norton’s publishing program—trade books and college texts—were firmly established. In the 1950s, the Norton family transferred control of the company to its employees, and today—with a staff of four hundred and a comparable number of trade, college, and professional titles published each year—W. W. Norton & Company stands as the largest and oldest publishing house owned wholly by its employees.

    Copyright © 2018, 2015, 2012 by W. W. Norton & Company, Inc.

    All rights reserved. Printed in Canada.

    Editor: Sheri L. Snavely
    Project Editor: David Bradley
    Editorial Assistant: Eve Sanoussi
    Manuscript/Development Editor: Betsy Dilernia
    Managing Editor, College: Marian Johnson
    Managing Editor, College Digital Media: Kim Yi
    Production Manager: Jane Searle
    Media Editor: Scott Sugarman
    Associate Media Editor: Victoria Reuter
    Media Assistant: Alex Trivilino
    Marketing Manager, Psychology: Ashley Sherwood
    Design Director and Text Design: Rubina Yeh
    Photo Editor: Travis Carr
    Photo Researcher: Dena Digilio Betz
    Permissions Manager: Megan Schindel
    Composition: CodeMantra
    Illustrations: Electragraphics
    Manufacturing: Transcontinental Printing

    Permission to use copyrighted material is included in the Credits section beginning on page 603.

    Library of Congress Cataloging-in-Publication Data

    Names: Morling, Beth, author.
    Title: Research methods in psychology : evaluating a world of information / Beth Morling, University of Delaware.
    Description: Third Edition. | New York : W. W. Norton & Company, [2017] | Revised edition of the author’s Research methods in psychology, [2015] | Includes bibliographical references and index.
    Identifiers: LCCN 2017030401 | ISBN 9780393617542 (pbk.)
    Subjects: LCSH: Psychology—Research—Methodology—Textbooks. | Psychology, Experimental—Textbooks.
    Classification: LCC BF76.5 .M667 2017 | DDC 150.72—dc23
    LC record available at https://lccn.loc.gov/2017030401

    Text-Only ISBN 978-0-393-63017-6

    W. W. Norton & Company, Inc., 500 Fifth Avenue, New York, NY 10110 | wwnorton.com
    W. W. Norton & Company Ltd., 15 Carlisle Street, London W1D 3BS


    For my parents

    Brief Contents

    PART I Introduction to Scientific Reasoning
    CHAPTER 1 Psychology Is a Way of Thinking 5
    CHAPTER 2 Sources of Information: Why Research Is Best and How to Find It 25
    CHAPTER 3 Three Claims, Four Validities: Interrogation Tools for Consumers of Research 57

    PART II Research Foundations for Any Claim
    CHAPTER 4 Ethical Guidelines for Psychology Research 89
    CHAPTER 5 Identifying Good Measurement 117

    PART III Tools for Evaluating Frequency Claims
    CHAPTER 6 Surveys and Observations: Describing What People Do 153
    CHAPTER 7 Sampling: Estimating the Frequency of Behaviors and Beliefs 179

    PART IV Tools for Evaluating Association Claims
    CHAPTER 8 Bivariate Correlational Research 203
    CHAPTER 9 Multivariate Correlational Research 237

    PART V Tools for Evaluating Causal Claims
    CHAPTER 10 Introduction to Simple Experiments 273
    CHAPTER 11 More on Experiments: Confounding and Obscuring Variables 311
    CHAPTER 12 Experiments with More Than One Independent Variable 351

    PART VI Balancing Research Priorities
    CHAPTER 13 Quasi-Experiments and Small-N Designs 389
    CHAPTER 14 Replication, Generalization, and the Real World 425

    Statistics Review Descriptive Statistics 457
    Statistics Review Inferential Statistics 479
    Presenting Results APA-Style Reports and Conference Posters 505
    Appendix A Random Numbers and How to Use Them 545
    Appendix B Statistical Tables 551

    About the Author

    BETH MORLING is Professor of Psychology at the University of Delaware. She attended Carleton College in Northfield, Minnesota, and received her Ph.D. from the University of Massachusetts at Amherst. Before coming to Delaware, she held positions at Union College (New York) and Muhlenberg College (Pennsylvania). In addition to teaching research methods at Delaware almost every semester, she also teaches undergraduate cultural psychology, a seminar on the self-concept, and a graduate course in the teaching of psychology. Her research in the area of cultural psychology explores how cultural practices shape people’s motivations. Dr. Morling has been a Fulbright scholar in Kyoto, Japan, and was the Delaware State Professor of the Year (2014), an award from the Council for Advancement and Support of Education (CASE) and the Carnegie Foundation for the Advancement of Teaching.

    Preface

    Students in the psychology major plan to pursue a tremendous variety of careers—not just becoming psychology researchers. So they sometimes ask: Why do we need to study research methods when we want to be therapists, social workers, teachers, lawyers, or physicians? Indeed, many students anticipate that research methods will be “dry,” “boring,” and irrelevant to their future goals. This book was written with these very students in mind—students who are taking their first course in research methods (usually sophomores) and who plan to pursue a wide variety of careers. Most of the students who take the course will never become researchers themselves, but they can learn to systematically navigate the research information they will encounter in empirical journal articles as well as in online magazines, print sources, blogs, and tweets.

    I used to tell students that by conducting their own research, they would be able to read and apply research later, in their chosen careers. But the literature on learning transfer leads me to believe that the skills involved in designing one’s own studies will not easily transfer to understanding and critically assessing studies done by others. If we want students to assess how well a study supports its claims, we have to teach them to assess research. That is the approach this book takes.

    Students Can Develop Research Consumer Skills

    To be a systematic consumer of research, students need to know what to prioritize when assessing a study. Sometimes random samples matter, and sometimes they do not. Sometimes we ask about random assignment and confounds, and sometimes we do not. Students benefit from having a set of systematic steps to help them prioritize their questioning when they interrogate quantitative information. To provide that, this book presents a framework of three claims and four validities, introduced in Chapter 3. One axis of the framework is the three kinds of claims researchers (as well as journalists, bloggers, and commentators) might make: frequency claims (some percentage of people do X), association claims (X is associated with Y), and causal claims (X changes Y). The second axis of the framework is the four validities that are generally agreed upon by methodologists: internal, external, construct, and statistical.

    The three claims, four validities framework provides a scaffold that is reinforced throughout. The book shows how almost every term, technique, and piece of information fits into the basic framework.

    The framework also helps students set priorities when evaluating a study. Good quantitative reasoners prioritize different validity questions depending on the claim. For example, for a frequency claim, we should ask about measurement (construct validity) and sampling techniques (external validity), but not about random assignment or confounds, because the claim is not a causal one. For a causal claim, we prioritize internal validity and construct validity, but external validity is generally less important.

    Through engagement with a consumer-focused research methods course, students become systematic interrogators. They start to ask more appropriate and refined questions about a study. By the end of the course, students can clearly explain why a causal claim needs an experiment to support it. They know how to evaluate whether a variable has been measured well. They know when it’s appropriate to call for more participants in a study. And they can explain when a study must have a representative sample and when such a sample is not needed.

    What About Future Researchers?

    This book can also be used to teach the flip side of the question: How can producers of research design better studies? The producer angle is presented so that students will be prepared to design studies, collect data, and write papers in courses that prioritize these skills. Producer skills are crucial for students headed for Ph.D. study, and they are sometimes required by advanced coursework in the undergraduate major.

    Such future researchers will find sophisticated content, presented in an accessible, consistent manner. They will learn the difference between mediation (Chapter 9) and moderation (Chapters 8 and 9), an important skill in theory building and theory testing. They will learn how to design and interpret factorial designs, even up to three-way interactions (Chapter 12). And in the common event that a student-run study fails to work, one chapter helps them explore the possible reasons for a null effect (Chapter 11). This book provides the basic statistical background, ethics coverage, and APA-style notes for guiding students through study design and execution.

    Organization

    The fourteen chapters are arranged in six parts. Part I (Chapters 1–3) includes introductory chapters on the scientific method and the three claims, four validities framework. Part II (Chapters 4–5) covers issues that matter for any study: research ethics and good measurement. Parts III–V (Chapters 6–12) correspond to each of the three claims (frequency, association, and causal). Part VI (Chapters 13–14) focuses on balancing research priorities.

    Most of the chapters will be familiar to veteran instructors, including chapters on measurement, experimentation, and factorial designs. However, unlike some methods books, this one devotes two full chapters to correlational research (one on bivariate and one on multivariate studies), which help students learn how to interpret, apply, and interrogate different types of association claims, one of the most common types of claims they will encounter.

    There are three supplementary chapters, on Descriptive Statistics, Inferential Statistics, and APA-Style Reports and Conference Posters. These chapters provide a review for students who have already had statistics and provide the tools they need to create research reports and conference posters.

    Two appendices—Random Numbers and How to Use Them, and Statistical Tables—provide reference tools for students who are conducting their own research.

    Support for Students and Instructors

    The book’s pedagogical features emphasize active learning and repetition of the most important points. Each chapter begins with high-level learning objectives—major skills students should expect to remember even “a year from now.” Important terms in a chapter are introduced in boldface. The Check Your Understanding questions at the end of each major section provide basic questions that let students revisit key concepts as they read. Each chapter ends with multiple-choice Review Questions for retrieval practice, and a set of Learning Actively exercises that encourage students to apply what they learned. (Answers are provided at the end of the book.) A master table of the three claims and four validities appears inside the book’s front cover to remind students of the scaffold for the course.

    I believe the book works pedagogically because it spirals through the three claims, four validities framework, building in repetition and depth. Although each chapter addresses the usual core content of research methods, students are always reminded of how a particular topic helps them interrogate the key validities. The interleaving of content should help students remember and apply this questioning strategy in the future.

    I have worked with W. W. Norton to design a support package for fellow instructors and students. The online Interactive Instructor’s Guide offers in-class activities, models of course design, homework and final assignments, and chapter-by-chapter teaching notes, all based on my experience with the course. The book is accompanied by other ancillaries to assist both new and experienced research methods instructors, including a new InQuizitive online assessment tool, a robust test bank with over 750 questions, updated lecture and active learning slides, and more; for a complete list, see p. xix.

    Teachable Examples on the Everyday Research Methods Blog

    Students and instructors can find additional examples of psychological science in the news on my regularly updated blog, Everyday Research Methods (www.everydayresearchmethods.com; no password or registration required). Instructors can use the blog for fresh examples to use in class, homework, or exams. Students can use the entries as extra practice in reading about research studies in psychology in the popular media. Follow me on Twitter to get the latest blog updates (@bmorling).

    Changes in the Third Edition

    Users of the first and second editions will be happy to learn that the basic organization, material, and descriptions in the text remain the same. The third edition provides several new studies and recent headlines. Inclusion of these new examples means that instructors who assign the third edition can also use their favorite illustrations from past editions as extra examples while teaching.

    In my own experience teaching the course, I found that students could often master concepts in isolation, but they struggled to bring them all together when reading a real study. Therefore, the third edition adds new Working It Through sections in several chapters (Chapters 3, 4, 5, 8, and 11). Each one works through a single study in depth, so students can observe how the chapter’s central concepts are integrated and applied. For instance, in Chapter 4, they can see how ethics concepts can be applied to a recent study that manipulated Facebook newsfeeds. The Working It Through material models the process students will probably use on longer class assignments.

    Also new in the third edition, every figure has been redrawn to make it more visually appealing and readable. In addition, selected figures are annotated to help students learn how to interpret graphs and tables.

    Finally, W. W. Norton’s InQuizitive online assessment tool is available with the third edition. InQuizitive helps students apply concepts from the textbook to practice examples, providing specific feedback on incorrect responses. Some questions require students to interpret tables and figures; others require them to apply what they’re learning to popular media articles.

    Here is a detailed list of the changes made to each chapter.

    1. Psychology Is a Way of Thinking

    The heading structure is the same as in the second edition, with some updated examples. I replaced the facilitated communication example (still an excellent teaching example) with one on the Scared Straight program meant to keep adolescents out of the criminal justice system, based on a reviewer’s recommendation.

    2. Sources of Information: Why Research Is Best and How to Find It

    I simplified the coverage of biases of intuition. Whereas the second edition separated cognitive biases from motivated reasoning, the biases are now presented more simply. In addition, this edition aims to be clearer on the difference between the availability heuristic and the present/present bias. I also developed the coverage of Google Scholar.

    3. Three Claims, Four Validities: Interrogation Tools for Consumers of Research

    The three claims, four validities framework is the same, keeping the best teachable examples from the second edition and adding new examples from recent media. In response to my own students’ confusion, I attempted to clarify the difference between the type of study conducted (correlational or experimental) and the claims made about it. To this end, I introduced the metaphor of a gift, in which a journalist might “wrap” a correlational study in a fancy, but inappropriate, causal claim.

    When introducing the three criteria for causation, I now emphasize that covariance is about the study’s results, while temporal precedence and internal validity are determined from the study’s method.

    Chapter 3 includes the first new Working It Through section.

    4. Ethical Guidelines for Psychology Research

    I updated the section on animal research and removed the full text of APA Standard 8. There’s a new figure on the difference between plagiarism and paraphrasing, and a new example of research fabrication (the notorious, retracted Lancet article on vaccines and autism). A new Working It Through section helps students assess the ethics of a recent Facebook study that manipulated people’s newsfeeds.

    5. Identifying Good Measurement

    This chapter retains many of the teaching examples from the second edition. For clarity, I changed the discriminant validity example so the correlation is only weak (not both weak and negative). A new Working It Through section helps students apply the measurement concepts to a self-report measure of gratitude in relationships.

    6. Surveys and Observations: Describing What People Do

    Core examples are the same, with a new study illustrating the effect of leading questions (a poll on attitudes toward voter ID laws). Look for the new “babycam” example in the Learning Actively exercises.

    7. Sampling: Estimating the Frequency of Behaviors and Beliefs

    Look for new content on MTurk and other Internet-based survey panels. I updated the statistics on cell-phone-only populations, which change yearly. Finally, I added clarity on the difference between cluster and stratified samples and explained sample weighting.

    I added the new keyword nonprobability sample to work in parallel with the term probability sample. A new table (Table 7.3) helps students group related terms.

    8. Bivariate Correlational Research

    This chapter keeps most of the second edition examples. It was revised to better show that association claims are separate from correlational methods. Look for improved moderator examples in this chapter. These new examples, I hope, will communicate to students that moderators change the relationship between variables; they do not necessarily reflect the level of one of the variables.

    9. Multivariate Correlational Research

    I replaced both of the main examples in this chapter. The new example of cross-lag panel design, on parental overpraise and child narcissism, has four time periods (rather than two), better representing contemporary longitudinal studies. In the multiple regression section, the recess example is replaced with one on adolescents in which watching sexual TV content predicts teen pregnancy. The present regression example is student-friendly and also has stronger effect sizes.

    Look for an important change in Figure 9.13, intended to convey that a moderator can be thought of as a vulnerability. My own students tend to think something is a moderator when the subgroup is simply higher on one of the variables. For example, boys might watch more violent TV content and be higher on aggression, but that’s not the same as a moderator. Therefore, I have updated the moderator column with the moderator “parental discussion.” I hope this will help students come up with their own moderators more easily.

    10. Introduction to Simple Experiments

    The red/green ink example was replaced with a popular study on notetaking, comparing the effects of taking notes in longhand or on laptops. There is also a new example of pretest/posttest designs (a study on mindfulness training). Students sometimes are surprised when a real-world study has multiple dependent variables, so I’ve highlighted that more in the third edition. Both of the chapter’s opening examples have multiple dependent variables.

    I kept the example on pasta bowl serving size. However, after Chapter 10 was typeset, some researchers noticed multiple statistical inconsistencies in several publications from Wansink’s lab (for one summary of the issues, see the Chronicle of Higher Education article, “Spoiled Science”). At the time of writing, the pasta study featured in Chapter 10 has not been identified as problematic. Nevertheless, instructors might wish to engage students in a discussion of these issues.

    11. More on Experiments: Confounding and Obscuring Variables

    The content is virtually the same, with the addition of two Working It Through sections. The first one shows students how to work through Table 11.1 using the mindfulness study from Chapter 10. This is important because after seeing Table 11.1, students sometimes think their job is to find the flaw in any study. In fact, most published studies do not have major internal validity flaws. The second Working It Through shows students how to analyze a null result.

    12. Experiments with More Than One Independent Variable

    Recent work has suggested that context-specific memory effects are not robust, so I replaced the Godden and Baddeley factorial example on context-specific learning with one comparing the memory of child chess experts to adults.

    13. Quasi-Experiments and Small-N Designs

    I replaced the Head Start study for two reasons. First, I realized it’s not a good example of a nonequivalent control group posttest-only design, because it actually included a pretest! Second, the regression to the mean effect it was meant to illustrate is rare and difficult to understand. In exchange, there is a new study on the effects of walking by a church.

    In the small-N design section, I provided fresh examples of multiple baseline design and alternating treatment designs. I also replaced the former case study example (split-brain studies) with the story of H.M. Not only is H.M.’s story compelling (especially as told through the eyes of his friend and researcher Suzanne Corkin), the brain anatomy required to understand this example is also simpler than that of split-brain studies, making it more teachable.

    14. Replication, Generalization, and the Real World

    A significant new section and table present the so-called “replication crisis” in psychology. In my experience, students are extremely engaged in learning about these issues. There’s a new example of a field experiment, a study on the effect of radio programs on reconciliation in Rwanda.

    Supplementary Chapters

    In the supplementary chapter on inferential statistics, I replaced the section on randomization tests with a new section on confidence intervals. The next edition of the book may transition away from null hypothesis significance testing to emphasize the “New Statistics” of estimation and confidence intervals. I welcome feedback from instructors on this potential change.


    Acknowledgments

    Working on this textbook has been rewarding and enriching, thanks to the many people who have smoothed the way. To start, I feel fortunate to have collaborated with an author-focused company and an all-around great editor, Sheri Snavely. Through all three editions, she has been both optimistic and realistic, as well as savvy and smart. She also made sure I got the most thoughtful reviews possible and that I was supported by an excellent staff at Norton: David Bradley, Jane Searle, Rubina Yeh, Eve Sanoussi, Victoria Reuter, Alex Trivilino, Travis Carr, and Dena Diglio Betz. My developmental editor, Betsy Dilernia, found even more to refine in the third edition, making the language, as well as each term, figure, and reference, clear and accurate.

    I am also thankful for the support and continued enthusiasm I have received from the Norton sales management team: Michael Wright, Allen Clawson, Ashley Sherwood, Annie Stewart, Dennis Fernandes, Dennis Adams, Katie Incorvia, Jordan Mendez, Amber Watkins, Shane Brisson, and Dan Horton. I also wish to thank the science and media specialists for their creativity and drive to ensure my book reaches a wide audience, and that all the media work for instructors and students.

    I deeply appreciate the support of many colleagues. My former student Patrick Ewell, now at Kenyon College, served as a sounding board for new examples and authored the content for InQuizitive. Eddie Brummelman and Stefanie Nelemans provided additional correlations for the cross-lag panel design in Chapter 9. My friend Carrie Smith authored the Test Bank for the past two editions and has made it an authentic measure of quantitative reasoning (as well as sending me things to blog about). Catherine Burrows carefully checked and revised the Test Bank for the third edition. Many thanks to Sarah Ainsworth, Reid Griggs, Aubrey McCarthy, Emma McGorray, and Michele M. Miller for carefully and patiently fact-checking every word in this edition. My student Xiaxin Zhong added DOIs to all the references and provided page numbers for the Check Your Understanding answers. Thanks, as well, to Emily Stanley and Jeong Min Lee, for writing and revising the questions that appear in the Coursepack created for the course management systems. I’m grateful to Amy Corbett and Kacy Pula for reviewing the questions in InQuizitive. Thanks to my students Matt Davila-Johnson and Jeong Min Lee for posing for photographs in Chapters 5 and 10.

    The book’s content was reviewed by a cadre of talented research methods professors, and I am grateful to each of them. Some were asked to review; others cared enough to send me comments or examples by e-mail. Their students are lucky to have them in the classroom, and my readers will benefit from the time they spent in improving this book:

    Eileen Josiah Achorn, University of Texas, San Antonio
    Sarah Ainsworth, University of North Florida
    Kristen Weede Alexander, California State University, Sacramento
    Leola Alfonso-Reese, San Diego State University
    Cheryl Armstrong, Fitchburg State University
    Jennifer Asmuth, Susquehanna University
    Kristin August, Rutgers University, Camden
    Jessica L. Barnack-Tavlaris, The College of New Jersey
    Gordon Bear, Ramapo College
    Margaret Elizabeth Beier, Rice University
    Jeffrey Berman, University of Memphis
    Brett Beston, McMaster University
    Alisa Beyer, Northern Arizona University
    Julie Boland, University of Michigan
    Marina A. Bornovalova, University of South Florida
    Caitlin Brez, Indiana State University
    Shira Brill, California State University, Northridge
    J. Corey Butler, Southwest Minnesota State University
    Ricardo R. Castillo, Santa Ana College
    Alexandra F. Corning, University of Notre Dame
    Kelly A. Cotter, California State University, Stanislaus
    Lisa Cravens-Brown, The Ohio State University
    Victoria Cross, University of California, Davis
    Matthew Deegan, University of Delaware
    Kenneth DeMarree, University at Buffalo
    Jessica Dennis, California State University, Los Angeles
    Nicole DeRosa, SUNY Upstate Golisano Children’s Hospital
    Rachel Dinero, Cazenovia College
    Dana S. Dunn, Moravian College
    C. Emily Durbin, Michigan State University
    Russell K. Espinoza, California State University, Fullerton
    Patrick Ewell, Kenyon College
    Iris Firstenberg, University of California, Los Angeles
    Christina Frederick, Sierra Nevada College
    Alyson Froehlich, University of Utah
    Christopher J. Gade, University of California, Berkeley
    Timothy E. Goldsmith, University of New Mexico
    Jennifer Gosselin, Sacred Heart University
    AnaMarie Connolly Guichard, California State University, Stanislaus
    Andreana Haley, University of Texas, Austin
    Edward Hansen, Florida State University
    Cheryl Harasymchuk, Carleton University
    Richard A. Hullinger, Indiana State University
    Deborah L. Hume, University of Missouri
    Kurt R. Illig, University of St. Thomas
    Jonathan W. Ivy, Pennsylvania State University, Harrisburg
    W. Jake Jacobs, University of Arizona
    Matthew D. Johnson, Binghamton University
    Christian Jordan, Wilfrid Laurier University
    Linda Juang, San Francisco State University
    Victoria A. Kazmerski, Penn State Erie, The Behrend College
    Heejung Kim, University of California, Santa Barbara
    Greg M. Kim-Ju, California State University, Sacramento
    Ari Kirshenbaum, Ph.D., St. Michael’s College
    Kerry S. Kleyman, Metropolitan State University
    Penny L. Koontz, Marshall University
    Christina M. Leclerc, Ph.D., State University of New York at Oswego
    Ellen W. Leen-Feldner, University of Arkansas
    Carl Lejuez, University of Maryland
    Marianne Lloyd, Seton Hall University
    Stella G. Lopez, University of Texas, San Antonio
    Greg Edward Loviscky, Pennsylvania State University
    Sara J. Margolin, Ph.D., The College at Brockport, State University of New York
    Azucena Mayberry, Texas State University
    Christopher Mazurek, Columbia College
    Peter Mende-Siedlecki, University of Delaware
    Molly A. Metz, Miami University
    Dr. Michele M. Miller, University of Illinois Springfield
    Daniel C. Molden, Northwestern University
    J. Toby Mordkoff, University of Iowa
    Elizabeth Morgan, Springfield College
    Katie Mosack, University of Wisconsin, Milwaukee
    Erin Quinlivan Murdoch, George Mason University
    Stephanie C. Payne, Texas A&M University
    Anita Pedersen, California State University, Stanislaus
    Elizabeth D. Peloso, University of Pennsylvania
    M. Christine Porter, College of William and Mary
    Joshua Rabinowitz, University of Michigan
    Elizabeth Riina, Queens College, City University of New York
    James R. Roney, University of California, Santa Barbara
    Richard S. Rosenberg, Ph.D., California State University, Long Beach
    Carin Rubenstein, Pima Community College
    Silvia J. Santos, California State University, Dominguez Hills
    Pamela Schuetze, Ph.D., The College at Buffalo, State University of New York
    John N. Schwoebel, Ph.D., Utica College
    Mark J. Sciutto, Muhlenberg College
    Elizabeth A. Sheehan, Georgia State University
    Victoria A. Shivy, Virginia Commonwealth University
    Leo Standing, Bishop’s University
    Harold W. K. Stanislaw, California State University, Stanislaus
    Kenneth M. Steele, Appalachian State University
    Mark A. Stellmack, University of Minnesota, Twin Cities
    Eva Szeli, Arizona State University
    Lauren A. Taglialatela, Kennesaw State University
    Alison Thomas-Cottingham, Rider University
    Chantal Poister Tusher, Georgia State University
    Allison A. Vaughn, San Diego State University
    Simine Vazire, University of California, Davis
    Jan Visser, University of Groningen
    John L. Wallace, Ph.D., Ball State University
    Shawn L. Ward, Le Moyne College
    Christopher Warren, California State University, Long Beach
    Shannon N. Whitten, University of Central Florida
    Jelte M. Wicherts, Tilburg University
    Antoinette R. Wilson, University of California, Santa Cruz
    James Worthley, University of Massachusetts, Lowell
    Charles E. (Ted) Wright, University of California, Irvine
    Guangying Wu, The George Washington University
    David Zehr, Plymouth State University
    Peggy Mycek Zoccola, Ohio University

    I have tried to make the best possible improvements based on the feedback from all of these capable reviewers.

    My life as a teaching professor has been enriched during the last few years because of the friendship and support of my students and colleagues at the University of Delaware, colleagues I see each year at the SPSP conference, and all the faculty I see regularly at the National Institute for the Teaching of Psychology, affectionately known as NITOP.

    Three teenage boys will keep a person both entertained and humbled; thanks to Max, Alek, and Hugo for providing their services. I remain grateful to my mother-in-law, Janet Pochan, for cheerfully helping on the home front. Finally, I want to thank my husband Darrin for encouraging me and for always having the right wine to celebrate (even if it’s only Tuesday).

    Beth Morling

    Media Resources for Instructors and Students


    INTERACTIVE INSTRUCTOR’S GUIDE
    Beth Morling, University of Delaware
    The Interactive Instructor’s Guide contains hundreds of downloadable resources and teaching ideas, such as a discussion of how to design a course that best utilizes the textbook, sample syllabus and assignments, and chapter-by-chapter teaching notes and suggested activities.

    POWERPOINTS
    The third edition features three types of PowerPoints. The Lecture PowerPoints provide an overview of the major headings and definitions for each chapter. The Art Slides contain a complete set of images. And the Active Learning Slides provide the author’s favorite in-class activities, as well as reading quizzes and clicker questions. Instructors can browse the Active Learning Slides to select activities that supplement their classes.

    TEST BANK
    C. Veronica Smith, University of Mississippi, and Catherine Burrows, University of Miami
    The Test Bank provides over 750 questions using an evidence-centered approach designed in collaboration with Valerie Shute of Florida State University and Diego Zapata-Rivera of the Educational Testing Service. The Test Bank contains multiple-choice and short-answer questions classified by section, Bloom’s taxonomy, and difficulty, making it easy for instructors to construct tests and quizzes that are meaningful and diagnostic. The Test Bank is available in Word RTF, PDF, and ExamView® Assessment Suite formats.

    INQUIZITIVE
    Patrick Ewell, Kenyon College
    InQuizitive allows students to practice applying terminology in the textbook to numerous examples. It can guide the students with specific feedback for incorrect answers to help clarify common mistakes. This online assessment tool gives students the repetition they need to fully understand the material without cutting into valuable class time. InQuizitive provides practice in reading tables and figures, as well as identifying the research methods used in studies from popular media articles, for an integrated learning experience.

    EVERYDAY RESEARCH METHODS BLOG: www.everydayresearchmethods.com
    The Research Methods in Psychology blog offers more than 150 teachable moments from the web, curated by Beth Morling and occasional guest contributors. Twice a month, the author highlights examples of psychological science in the news. Students can connect these recent stories with textbook concepts. Instructors can use blog posts as examples in lecture or assign them as homework. All entries are searchable by chapter.

    COURSEPACK
    Emily Stanley, University of Mary Washington, and Jeong Min Lee, University of Delaware
    The Coursepack presents students with review opportunities that employ the text’s analytical framework. Each chapter includes quizzes based on the Norton Assessment Guidelines, Chapter Outlines created by the textbook author and based on the Learning Objectives in the text, and review flashcards. The APA-style guidelines from the textbook are also available in the Coursepack for easy access.

    Contents

    Preface ix
    Media Resources for Instructors and Students xix

    PART I Introduction to Scientific Reasoning

    CHAPTER 1

    Psychology Is a Way of Thinking 5

    Research Producers, Research Consumers 6
    Why the Producer Role Is Important 6

    Why the Consumer Role Is Important 7

    The Benefits of Being a Good Consumer 8

    How Scientists Approach Their Work 10
    Scientists Are Empiricists 10

    Scientists Test Theories: The Theory-Data Cycle 11

    Scientists Tackle Applied and Basic Problems 16

    Scientists Dig Deeper 16

    Scientists Make It Public: The Publication Process 17

    Scientists Talk to the World: From Journal to Journalism 17

    Chapter Review 22


    CHAPTER 2

    Sources of Information: Why Research Is Best and How to Find It 25

    The Research vs. Your Experience 26
    Experience Has No Comparison Group 26

    Experience Is Confounded 29

    Research Is Better Than Experience 29

    Research Is Probabilistic 31

    The Research vs. Your Intuition 32
    Ways That Intuition Is Biased 32

    The Intuitive Thinker vs. the Scientific Reasoner 38

    Trusting Authorities on the Subject 39
    Finding and Reading the Research 42
    Consulting Scientific Sources 42

    Finding Scientific Sources 44

    Reading the Research 46

    Finding Research in Less Scholarly Places 48

    Chapter Review 53

    CHAPTER 3

    Three Claims, Four Validities: Interrogation Tools for Consumers of Research 57

    Variables 58
    Measured and Manipulated Variables 58

    From Conceptual Variable to Operational Definition 59

    Three Claims 61
    Frequency Claims 62

    Association Claims 63

    Causal Claims 66

    Not All Claims Are Based on Research 68

    Interrogating the Three Claims Using the Four Big Validities 68
    Interrogating Frequency Claims 69

    Interrogating Association Claims 71

    Interrogating Causal Claims 74

    Prioritizing Validities 79

    Review: Four Validities, Four Aspects of Quality 80
    WORKING IT THROUGH Does Hearing About Scientists’ Struggles Inspire Young Students? 81

    Chapter Review 83

    PART II Research Foundations for Any Claim

    CHAPTER 4

    Ethical Guidelines for Psychology Research 89

    Historical Examples 89
    The Tuskegee Syphilis Study Illustrates Three Major Ethics Violations 89

    The Milgram Obedience Studies Illustrate a Difficult Ethical Balance 92

    Core Ethical Principles 94
    The Belmont Report: Principles and Applications 94

    Guidelines for Psychologists: The APA Ethical Principles 98
    Belmont Plus Two: APA’s Five General Principles 98

    Ethical Standards for Research 99

    Ethical Decision Making: A Thoughtful Balance 110
    WORKING IT THROUGH Did a Study Conducted on Facebook Violate Ethical Principles? 111

    Chapter Review 113

    CHAPTER 5

    Identifying Good Measurement 117

    Ways to Measure Variables 118
    More About Conceptual and Operational Variables 118

    Three Common Types of Measures 120

    Scales of Measurement 122

    Reliability of Measurement: Are the Scores Consistent? 124
    Introducing Three Types of Reliability 125

    Using a Scatterplot to Quantify Reliability 126

    Using the Correlation Coefficient r to Quantify Reliability 128

    Reading About Reliability in Journal Articles 131

    Validity of Measurement: Does It Measure What It’s Supposed to Measure? 132

    Measurement Validity of Abstract Constructs 133

    Face Validity and Content Validity: Does It Look Like a Good Measure? 134

    Criterion Validity: Does It Correlate with Key Behaviors? 135

    Convergent Validity and Discriminant Validity: Does the Pattern Make Sense? 139

    The Relationship Between Reliability and Validity 142

    Review: Interpreting Construct Validity Evidence 143

    WORKING IT THROUGH How Well Can We Measure the Amount of Gratitude Couples Express to Each Other? 145

    Chapter Review 147

    PART III Tools for Evaluating Frequency Claims

    CHAPTER 6

    Surveys and Observations: Describing What People Do 153

    Construct Validity of Surveys and Polls 153
    Choosing Question Formats 154

    Writing Well-Worded Questions 155

    Encouraging Accurate Responses 159

    Construct Validity of Behavioral Observations 165
    Some Claims Based on Observational Data 165

    Making Reliable and Valid Observations 169

    Chapter Review 175

    CHAPTER 7

    Sampling: Estimating the Frequency of Behaviors and Beliefs 179

    Generalizability: Does the Sample Represent the Population? 179
    Populations and Samples 180

    When Is a Sample Biased? 182

    Obtaining a Representative Sample: Probability Sampling Techniques 186

    Settling for an Unrepresentative Sample: Nonprobability Sampling Techniques 191

    Interrogating External Validity: What Matters Most? 193
    In a Frequency Claim, External Validity Is a Priority 193

    When External Validity Is a Lower Priority 194

    Larger Samples Are Not More Representative 196

    Chapter Review 198

    PART IV Tools for Evaluating Association Claims

    CHAPTER 8

    Bivariate Correlational Research 203

    Introducing Bivariate Correlations 204
    Review: Describing Associations Between Two Quantitative Variables 205

    Describing Associations with Categorical Data 207

    A Study with All Measured Variables Is Correlational 209

    Interrogating Association Claims 210
    Construct Validity: How Well Was Each Variable Measured? 210

    Statistical Validity: How Well Do the Data Support the Conclusion? 211

    Internal Validity: Can We Make a Causal Inference from an Association? 221

    External Validity: To Whom Can the Association Be Generalized? 226

    WORKING IT THROUGH Are Parents Happier Than People with No Children? 231

    Chapter Review 233

    CHAPTER 9

    Multivariate Correlational Research 237

    Reviewing the Three Causal Criteria 238
    Establishing Temporal Precedence with Longitudinal Designs 239
    Interpreting Results from Longitudinal Designs 239

    Longitudinal Studies and the Three Criteria for Causation 242

    Why Not Just Do an Experiment? 242

    Ruling Out Third Variables with Multiple-Regression Analyses 244
    Measuring More Than Two Variables 244

    Regression Results Indicate If a Third Variable Affects the Relationship 247

    Adding More Predictors to a Regression 251

    Regression in Popular Media Articles 252

    Regression Does Not Establish Causation 254

    Getting at Causality with Pattern and Parsimony 256
    The Power of Pattern and Parsimony 256

    Pattern, Parsimony, and the Popular Media 258

    Mediation 259
    Mediators vs. Third Variables 261

    Mediators vs. Moderators 262

    Multivariate Designs and the Four Validities 264
    Chapter Review 266

    PART V Tools for Evaluating Causal Claims

    CHAPTER 10

    Introduction to Simple Experiments 273

    Two Examples of Simple Experiments 273
    Example 1: Taking Notes 274

    Example 2: Eating Pasta 275

    Experimental Variables 276
    Independent and Dependent Variables 277

    Control Variables 278

    Why Experiments Support Causal Claims 278
    Experiments Establish Covariance 279

    Experiments Establish Temporal Precedence 280

    Well-Designed Experiments Establish Internal Validity 281

    Independent-Groups Designs 287
    Independent-Groups vs. Within-Groups Designs 287

    Posttest-Only Design 287

    Pretest/Posttest Design 288

    Which Design Is Better? 289

    Within-Groups Designs 290
    Repeated-Measures Design 290

    Concurrent-Measures Design 291

    Advantages of Within-Groups Designs 292

    Covariance, Temporal Precedence, and Internal Validity in Within-Groups Designs 294

    Disadvantages of Within-Groups Designs 296

    Is Pretest/Posttest a Repeated-Measures Design? 297

    Interrogating Causal Claims with the Four Validities 298
    Construct Validity: How Well Were the Variables Measured and Manipulated? 298

    External Validity: To Whom or What Can the Causal Claim Generalize? 301

    Statistical Validity: How Well Do the Data Support the Causal Claim? 304

    Internal Validity: Are There Alternative Explanations for the Results? 306

    Chapter Review 307

    CHAPTER 11

    More on Experiments: Confounding and Obscuring Variables 311

    Threats to Internal Validity: Did the Independent Variable Really Cause the Difference? 312

    The Really Bad Experiment (A Cautionary Tale) 312

    Six Potential Internal Validity Threats in One-Group, Pretest/Posttest Designs 314

    Three Potential Internal Validity Threats in Any Study 322

    With So Many Threats, Are Experiments Still Useful? 325

    WORKING IT THROUGH Did Mindfulness Training Really Cause GRE Scores to Improve? 328

    Interrogating Null Effects: What If the Independent Variable Does Not Make a Difference? 330

    Perhaps There Is Not Enough Between-Groups Difference 332

    Perhaps Within-Groups Variability Obscured the Group Differences 335

    Sometimes There Really Is No Effect to Find 342

    WORKING IT THROUGH Will People Get More Involved in Local Government If They Know They’ll Be Publicly Honored? 344

    Null Effects May Be Published Less Often 345

    Chapter Review 346

    CHAPTER 12

    Experiments with More Than One Independent Variable 351

    Review: Experiments with One Independent Variable 351
    Experiments with Two Independent Variables Can Show Interactions 353

    Intuitive Interactions 353

    Factorial Designs Study Two Independent Variables 355

    Factorial Designs Can Test Limits 356

    Factorial Designs Can Test Theories 358

    Interpreting Factorial Results: Main Effects and Interactions 360

    Factorial Variations 370
    Independent-Groups Factorial Designs 370

    Within-Groups Factorial Designs 370

    Mixed Factorial Designs 371

    Increasing the Number of Levels of an Independent Variable 371

    Increasing the Number of Independent Variables 373

    Identifying Factorial Designs in Your Reading 378
    Identifying Factorial Designs in Empirical Journal Articles 379

    Identifying Factorial Designs in Popular Media Articles 379

    Chapter Review 383

    PART VI Balancing Research Priorities

    CHAPTER 13

    Quasi-Experiments and Small-N Designs 389

    Quasi-Experiments 389
    Two Examples of Independent-Groups Quasi-Experiments 390

    Two Examples of Repeated-Measures Quasi-Experiments 392

    Internal Validity in Quasi-Experiments 396

    Balancing Priorities in Quasi-Experiments 404

    Are Quasi-Experiments the Same as Correlational Studies? 405

    Small-N Designs: Studying Only a Few Individuals 406
    Research on Human Memory 407

    Disadvantages of Small-N Studies 410

    Behavior-Change Studies in Applied Settings: Three Small-N Designs 411

    Other Examples of Small-N Studies 417

    Evaluating the Four Validities in Small-N Designs 418

    Chapter Review 420

    CHAPTER 14

    Replication, Generalization, and the Real World 425

    To Be Important, a Study Must Be Replicated 425
    Replication Studies 426

    The Replication Debate in Psychology 430

    Meta-Analysis: What Does the Literature Say? 433

    Replicability, Importance, and Popular Media 436

    To Be Important, Must a Study Have External Validity? 438
    Generalizing to Other Participants 438

    Generalizing to Other Settings 439

    Does a Study Have to Be Generalizable to Many People? 440

    Does a Study Have to Take Place in a Real-World Setting? 447

    Chapter Review 453

    Statistics Review Descriptive Statistics 457
    Statistics Review Inferential Statistics 479
    Presenting Results APA-Style Reports and Conference Posters 505
    Appendix A Random Numbers and How to Use Them 545
    Appendix B Statistical Tables 551
    Areas Under the Normal Curve (Distribution of z) 551

    Critical Values of t 557

    Critical Values of F 559

    r to z’ Conversion 564

    Critical Values of r 565
    Glossary 567
    Answers to End-of-Chapter Questions 577
    Review Questions 577

    Guidelines for Selected Learning Actively Exercises 578
    References 589
    Credits 603
    Name Index 607
    Subject Index 611

    PART I

    Introduction to Scientific Reasoning

    “Your Dog Hates Hugs” NYMag.com, 2016

    “Mindfulness May Improve Test Scores” Scientific American, 2013

    Psychology Is a Way of Thinking

    THINKING BACK TO YOUR introductory psychology course, what do you remember learning? You might remember that dogs can be trained to salivate at the sound of a bell or that people in a group fail to call for help when the room fills up with smoke. Or perhaps you recall studies in which people administered increasingly stronger electric shocks to an innocent man although he seemed to be in distress. You may have learned what your brain does while you sleep or that you can’t always trust your memories. But how come you didn’t learn that “we use only 10% of our brain” or that “hitting a punching bag can make your anger go away”?

    The reason you learned some principles, and not others, is because psychological science is based on studies—on research—by psychologists. Like other scientists, psychologists are empiricists. Being an empiricist means basing one’s conclusions on systematic observations. Psychologists do not simply think intuitively about behavior, cognition, and emotion; they know what they know because they have conducted studies on people and animals acting in their natural environments or in specially designed situations. Research is what tells us that most people will administer electric shock to an innocent man in certain situations, and it also tells us that people’s brains are usually fully engaged—not just 10%. If you are to think like a psychologist, then you must think like a researcher, and taking a course in research methods is crucial to your understanding of psychology.

    This book explains the types of studies psychologists conduct, as well as the potential strengths and limitations of each type of study.

    LEARNING OBJECTIVES

    A year from now, you should still be able to:

    1. Explain what it means to reason empirically.

    2. Appreciate how psychological research methods help you become a better producer of information as well as a better consumer of information.

    3. Describe five practices that psychological scientists engage in.

    You will learn not only how to plan your own studies but also how to find research, read about it, and ask questions about it. While gaining a greater appreciation for the rigorous standards psychologists maintain in their research, you’ll find out how to be a systematic and critical consumer of psychological science.

    RESEARCH PRODUCERS, RESEARCH CONSUMERS

    Some psychology students are fascinated by the research process and intend to become producers of research. Perhaps they hope to get a job studying brain anatomy, documenting the behavior of dolphins or monkeys, administering personality questionnaires, observing children in a school setting, or analyzing data. They may want to write up their results and present them at research meetings. These students may dream about working as research scientists or professors.

    Other psychology students may not want to work in a lab, but they do enjoy reading about the structure of the brain, the behavior of dolphins or monkeys, the personalities of their fellow students, or the behavior of children in a school setting. They are interested in being consumers of research information—reading about research so they can later apply it to their work, hobbies, relationships, or personal growth. These students might pursue careers as family therapists, teachers, entrepreneurs, guidance counselors, or police officers, and they expect psychology courses to help them in these roles.

    In practice, many psychologists engage in both roles. When they are planning their research and creating new knowledge, they study the work of others who have gone before them. Furthermore, psychologists in both roles require a curiosity about behavior, emotion, and cognition. Research producers and consumers also share a commitment to the practice of empiricism—to answer psychological questions with direct, formal observations, and to communicate with others about what they have learned.

    Why the Producer Role Is Important

    For your future coursework in psychology, it is important to know how to be a producer of research. Of course, students who decide to go to graduate school for psychology will need to know all about research methods. But even if you do not plan to do graduate work in psychology, you will probably have to write a paper following the style guidelines of the American Psychological Association (APA) before you graduate, and you may be required to do research as part of a course lab section. To succeed, you will need to know how to randomly assign people to groups, how to measure attitudes accurately, or how to interpret results from a graph. The skills you acquire by conducting research can teach you how psychological scientists ask questions and how they think about their discipline.

    As part of your psychology studies, you might even work in a research lab as an undergraduate (Figure 1.1). Many psychology professors are active researchers, and if you are offered the opportunity to get involved in their laboratories, take it! Your faculty supervisor may ask you to code behaviors, assign participants to different groups, graph an outcome, or write a report. Doing so will give you your first taste of being a research producer. Although you will be supervised closely, you will be expected to know the basics of conducting research. This book will help you understand why you have to protect the anonymity of your participants, use a coding book, or flip a coin to decide who goes in which group. By participating as a research producer, you can expect to deepen your understanding of psychological inquiry.

    Why the Consumer Role Is Important

    Although it is important to understand the psychologist’s role as a producer of research, most psychology majors do not eventually become researchers. Regardless of the career you choose, however, becoming a savvy consumer of information is essential. In your psychology courses, you will read studies published by psychologists in scientific journals. You will need to develop the ability to read about research with curiosity—to understand it, learn from it, and ask appropriate questions about it.

    Think about how often you encounter news stories or look up information on the Internet. Much of the time, the stories you read and the websites you visit will present information based on research. For example, during an election year, Americans may come across polling information in the media almost every day. Many online newspapers have science sections that include stories on the latest research. Entire websites are dedicated to psychology-related topics, such as treatments for autism, subliminal learning tapes, or advice for married couples. Magazines such as Scientific American, Men’s Health, and Parents summarize research for their readers. While some of the research—whether online or printed—is accurate and useful, some of it is dubious, and some is just plain wrong. How can you tell the good research information from the bad? Understanding research methods enables you to ask the appropriate questions so you can evaluate information correctly. Research methods skills apply not only to research studies but also to much of the other types of information you are likely to encounter in daily life.

    FIGURE 1.1 Producers of research. As undergraduates, some psychology majors work alongside faculty members as producers of information.


    Finally, being a smart consumer of research could be crucial to your future career. Even if you do not plan to be a researcher—if your goal is to be a social worker, a teacher, a sales representative, a human resources professional, an entrepreneur, or a parent—you will need to know how to interpret published research with a critical eye. Clinical psychologists, social workers, and family therapists must read research to know which therapies are the most effective. In fact, licensure in these helping professions requires knowing the research behind evidence-based treatments—that is, therapies that are supported by research. Teachers also use research to find out which teaching methods work best. And the business world runs on quantitative information: Research is used to predict what sales will be like in the future, what consumers will buy, and whether investors will take risks or lie low. Once you learn how to be a consumer of information—psychological or otherwise—you will use these skills constantly, no matter what job you have.

In this book, you will often see the phrase "interrogating information." A consumer of research needs to know how to ask the right questions, determine the answers, and evaluate a study on the basis of those answers. This book will teach you systematic rules for interrogating research information.

The Benefits of Being a Good Consumer

What do you gain by being a critical consumer of information? Imagine, for example, that you are a correctional officer at a juvenile detention center, and you watch a TV documentary about a crime-prevention program called Scared Straight. The program arranges for teenagers involved in the criminal justice system to visit prisons, where selected prisoners describe the stark, violent realities of prison life (Figure 1.2). The idea is that when teens hear about how tough it is in prison, they will be scared into the "straight," law-abiding life. The program makes a lot of sense to you. You are considering starting a partnership between the residents of your detention center and the state prison system.

    FIGURE 1.2 Scared straight. Although it makes intuitive sense that young people would be scared into good behavior by hearing from current prisoners, such intervention programs have actually been shown to cause an increase in criminal offenses.



    However, before starting the partnership, you decide to investigate the efficacy of the program by reviewing some research that has been conducted about it. You learn that despite the intuitive appeal of the Scared Straight approach, the program doesn’t work—in fact, it might even cause criminal activity to get worse! Several published articles have reported the results of randomized, controlled studies in which young adults were assigned to either a Scared Straight program or a control program. The researchers then collected criminal records for 6–12 months. None of the studies showed that Scared Straight attendees committed fewer crimes, and most studies found an increase in crime among participants in the Scared Straight programs, compared to the controls (Petrosino, Turpin-Petrosino, & Finckenauer, 2000). In one case, Scared Straight attendees had committed 20% more crimes than the control group.

At first, people considering such a program might think: If this program helps even one person, it's worth it. However, we always need empirical evidence to test the efficacy of our interventions. A well-intentioned program that seems to make sense might actually be doing harm. In fact, if you investigate further, you'll find that the U.S. Department of Justice officially warns that such programs are ineffective and can harm youth, and the Juvenile Justice and Delinquency Prevention Act of 1974 was amended to prohibit youth in the criminal justice system from interactions with adult inmates in jails and prisons.

Being a skilled consumer of information can inform you about other programs that might work. For example, in your quest to become a better student, suppose you see this headline: "Mindfulness may improve test scores." The practice of mindfulness involves attending to the present moment, on purpose, with a nonjudgmental frame of mind (Kabat-Zinn, 2013). In a mindful state, people simply observe and let go of thoughts rather than elaborating on them. Could the practice of mindfulness really improve test scores? A study conducted by Michael Mrazek and his colleagues assigned people to take either a 2-week mindfulness training course or a 2-week nutrition course (Mrazek, Franklin, Phillips, Baird, & Schooler, 2013). At the end of the training, only the people who had practiced mindfulness showed improved GRE scores (compared to their scores beforehand). Mrazek's group hypothesized that mindfulness training helps people attend to an academic task without being distracted. They were better, it seemed, at keeping their minds from wandering. The research evidence you read about here appears to support the use of mindfulness for improving test scores.

By understanding the research methods and results of this study, you might be convinced to take a mindfulness-training course similar to the one used by Mrazek and his colleagues. And if you were a teacher or tutor, you might consider advising your students to practice some of the focusing techniques. (Chapter 10 returns to this example and explains why the Mrazek study stands up to interrogation.) Your skills in research methods will help you become a better consumer of studies like this one, so you can decide when the research supports some programs (such as mindfulness for study skills) but not others (such as Scared Straight for criminal behavior).



    CHECK YOUR UNDERSTANDING

    1. Explain what the consumer of research and producer of research roles have in common, and describe how they differ.

2. What kinds of jobs would use consumer-of-research skills? What kinds of jobs would use producer-of-research skills?

(Answers: 1. See pp. 6–7. 2. See pp. 7–8.)

HOW SCIENTISTS APPROACH THEIR WORK

Psychological scientists are identified not by advanced degrees or white lab coats; they are defined by what they do and how they think. The rest of this chapter will explain the fundamental ways psychologists approach their work. First, they act as empiricists in their investigations, meaning that they systematically observe the world. Second, they test theories through research and, in turn, revise their theories based on the resulting data. Third, they take an empirical approach to both applied research, which directly targets real-world problems, and basic research, which is intended to contribute to the general body of knowledge. Fourth, they go further: Once they have discovered an effect, scientists plan further research to test why, when, or for whom an effect works. Fifth, psychologists make their work public: They submit their results to journals for review and respond to the opinions of other scientists. Another aspect of making work public involves sharing findings of psychological research with the popular media, who may or may not get the story right.

Scientists Are Empiricists

Empiricists do not base conclusions on intuition, on casual observations of their own experience, or on what other people say. Empiricism, also referred to as the empirical method or empirical research, involves using evidence from the senses (sight, hearing, touch) or from instruments that assist the senses (such as thermometers, timers, photographs, weight scales, and questionnaires) as the basis for conclusions. Empiricists aim to be systematic, rigorous, and to make their work independently verifiable by other observers or scientists.


❯❯ For more on the contrast between empiricism and intuition, experience, and authority, see Chapter 2, pp. 26–31.


In Chapter 2, you will learn more about why empiricism is considered the most reliable basis for conclusions when compared with other forms of reasoning, such as experience or intuition. For now, we'll focus on some of the practices in which empiricists engage.

Scientists Test Theories: The Theory-Data Cycle

In the theory-data cycle, scientists collect data to test, change, or update their theories. Even if you have never been in a formal research situation, you have probably tested ideas and hunches of your own by asking specific questions that are grounded in theory, making predictions, and reflecting on data.

    For example, let’s say you need to take your bike to work later, so you check the weather forecast on your tablet (Figure 1.3). The application opens, but you see a blank screen. What could be wrong? Maybe your entire device is on the blink: Do the other apps work? When you test them, you find your calculator is working, but not your e-mail. In fact, it looks as if only the apps that need wireless are not working. Your wireless indicator looks low, so you ask your roommate, sitting nearby, “Are you having wifi problems?” If she says no, you might try resetting your device’s wireless connection.

Notice the series of steps in this process. First, you asked a particular set of questions, all of which were guided by your theory about how such devices work. The questions (Is it the tablet as a whole? Is it only the wifi?) reflected your theory that the weather app requires a working electronic device as well as a wireless connection. Because you were operating under this theory, you chose not to ask other kinds of questions (Has a warlock cursed my tablet? Does my device have a bacterial infection?). Your theory set you up to ask certain questions and not others. Next, your questions led you to specific predictions, which you tested by collecting data. You tested your first idea about the problem (My device can't run any apps) by making a specific prediction (If I test any application, it won't work). Then you set up a situation to test your prediction (Does the calculator work?). The data (The calculator does work) told you your initial prediction was wrong. You used that outcome to change your idea about the problem (It's only the wireless-based apps that aren't working). And so on. When you take systematic steps to solve a problem, you are participating in something similar to what scientists do in the theory-data cycle.
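To make that sequence concrete, here is a minimal sketch of the same question-prediction-data logic in Python. The check function and its observed values are hypothetical stand-ins for the observations you would actually make:

    def app_works(app):
        # Hypothetical observations; in real life you would tap each app and look.
        observed = {"calculator": True, "email": False, "weather": False}
        return observed[app]

    # Theory 1: the whole device is broken. Prediction: no app will work.
    if app_works("calculator"):
        # Data contradict Theory 1, so revise it.
        # Theory 2: only the wireless-based apps are failing.
        # Prediction: e-mail, which needs wifi, will not work.
        if not app_works("email"):
            print("Data support Theory 2: try resetting the wireless connection.")
    else:
        print("Data support Theory 1: the whole device is broken.")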

    THE CUPBOARD THEORY VS. THE CONTACT COMFORT THEORY

    A classic example from the psychological study of attachment can illustrate the way researchers similarly use data to test their theories. You’ve probably observed that animals form strong attachments to their caregivers. If you have a dog, you know he’s extremely happy to see you when you come home, wagging his tail and jumping all over you. Human babies, once they are able to crawl, may follow their parents or caregivers around, keeping close to them. Baby monkeys exhibit similar behavior, spending hours clinging tightly to the mother’s fur. Why do animals form such strong attachments to their caregivers?

    FIGURE 1.3 Troubleshooting a tablet. Troubleshooting an electronic device is a form of engaging in the theory-data cycle.


One theory, referred to as the cupboard theory of mother-infant attachment, is that a mother is valuable to a baby mammal because she is a source of food. The baby animal gets hungry, gets food from the mother by nursing, and experiences a pleasant feeling (reduced hunger). Over time, the sight of the mother is associated with pleasure. In other words, the mother acquires positive value for the baby because she is the "cupboard" from which food comes. If you've ever assumed your dog loves you only because you feed it, your beliefs are consistent with the cupboard theory.

An alternative theory, proposed by psychologist Harry Harlow (1958), is that hunger has little to do with why a baby monkey likes to cling to the warm, fuzzy fur of its mother. Instead, babies are attached to their mothers because of the comfort of cozy touch. This is the contact comfort theory. (In addition, it provides a less cynical view of why your dog is so happy to see you!)

In the natural world, a mother provides both food and contact comfort at once, so when the baby clings to her, it is impossible to tell why. To test the alternative theories, Harlow had to separate the two influences—food and contact comfort. The only way he could do so was to create "mothers" of his own. He built two monkey foster "mothers"—the only mothers his lab-reared baby monkeys ever had. One of the mothers was made of bare wire mesh with a bottle of milk built in. This wire mother offered food, but not comfort. The other mother was covered with fuzzy terrycloth and was warmed by a lightbulb suspended inside, but she had no milk. This cloth mother offered comfort, but not food.

Note that this experiment sets up three possible outcomes. The contact comfort theory would be supported if the babies spent most of their time clinging to the cloth mother. The cupboard theory would be supported if the babies spent most of their time clinging to the wire mother. Neither theory would be supported if monkeys divided their time equally between the two mothers.

    When Harlow put the baby monkeys in the cages with the two mothers, the evidence in favor of the contact comfort theory was overwhelming. Harlow’s data showed that the little monkeys would cling to the cloth mother for 12–18 hours a day (Figure 1.4). When they were hungry, they would climb down, nurse from the wire mother, and then at once go back to the warm, cozy cloth mother. In short, Harlow used the two theories to make two specific predictions about how the monkeys would interact with each mother. Then he used the data he recorded (how much time the monkeys spent on each mother) to support only one of the theories. The theory-data cycle in action!

    FIGURE 1.4 The contact comfort theory. As the theory hypothesized, Harlow’s baby monkeys spent most of their time on the warm, cozy cloth mother, even though she did not provide any food.


    THEORY, HYPOTHESIS, AND DATA

    A theory is a set of statements that describes general principles about how variables relate to one another. For example, Harlow’s theory, which he developed in light of extensive observations of primate babies and mothers, was about the overwhelming importance of bodily contact (as opposed to simple nourishment) in forming attachments. Contact comfort, not food, was the primary basis for a baby’s attachment to its mother. This theory led Harlow to investigate particular kinds of questions—he chose to pit contact comfort against food in his research. The theory meant that Harlow also chose not to study unrelated questions, such as the babies’ food preferences or sleeping habits.

The theory not only led to the questions; it also led to specific hypotheses about the answers. A hypothesis, or prediction, is the specific outcome the researcher expects to observe in a study if the theory is accurate. Harlow's hypothesis related to the way the baby monkeys would interact with two kinds of mothers he created for the study. He predicted that the babies would spend more time on the cozy mother than the wire mother. Notably, a single theory can lead to a large number of hypotheses because a single study is not sufficient to test the entire theory—it is intended to test only part of it. Most researchers test their theories with a series of empirical studies, each designed to test an individual hypothesis.

Data are a set of observations. (Harlow's data were the amount of time the baby monkeys stayed on each mother.) Depending on whether the data are consistent with hypotheses based on a theory, the data may either support or challenge the theory. Data that match the theory's hypotheses strengthen the researcher's confidence in the theory. When the data do not match the theory's hypotheses, however, those results indicate that the theory needs to be revised or the research design needs to be improved. Figure 1.5 shows how these steps work as a cycle.
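As a toy illustration of how data can favor one hypothesis over another, here is a minimal sketch that scores Harlow-style observations. The hours are invented for illustration, not Harlow's actual measurements:

    # Invented hours per day each baby monkey spent on each artificial mother.
    hours_on_cloth = [15, 17, 12, 18]
    hours_on_wire = [1, 0.5, 1.5, 1]

    mean_cloth = sum(hours_on_cloth) / len(hours_on_cloth)
    mean_wire = sum(hours_on_wire) / len(hours_on_wire)

    # Contact comfort theory predicts more time on cloth; cupboard theory, on wire.
    if mean_cloth > mean_wire:
        print("Data are consistent with the contact comfort theory.")
    elif mean_wire > mean_cloth:
        print("Data are consistent with the cupboard theory.")
    else:
        print("Neither theory is supported: time was divided equally.")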

FIGURE 1.5 The theory-data cycle. Theory leads researchers to pose particular research questions, which lead to an appropriate research design. In the context of the design, researchers formulate hypotheses. Researchers then collect and analyze data, which feed back into the cycle: supporting data strengthen the theory; nonsupporting data lead to revised theories or improved research design.


    FEATURES OF GOOD SCIENTIFIC THEORIES

    In scientific practice, some theories are better than others. The best theories are supported by data from studies, are falsifiable, and are parsimonious.

Good Theories Are Supported by Data. The most important feature of a scientific theory is that it is supported by data from research studies. In this respect, the contact comfort theory of infant attachment turned out to be better than the cupboard theory because it was supported by the data. Clearly, primate babies need food, but food is not the source of their emotional attachments to their mothers. In this way, good theories, like Harlow's, are consistent with our observations of the world. More importantly, scientists need to conduct multiple studies, using a variety of methods, to address different aspects of their theories. A theory that is supported by a large quantity and variety of evidence is a good theory.

Good Theories Are Falsifiable. A second important feature of a good scientific theory is falsifiability. A theory must lead to hypotheses that, when tested, could actually fail to support the theory. Harlow's theory was falsifiable: If the monkeys had spent more time on the wire mother than the cloth mother, the contact comfort theory would have been shown to be incorrect. Similarly, Mrazek's mindfulness study could have falsified the researchers' theory: If students in the mindfulness training group had shown lower GRE scores than those in the nutrition group, their theory of mindfulness and attention would not have been supported.

In contrast, some dubious therapeutic techniques have been based on theories that are not falsifiable. Here's an example. Some therapists practice facilitated communication (FC), believing they can help people with developmental disorders communicate by gently guiding their clients' hands over a special keyboard. In simple but rigorous empirical tests, the facilitated messages have been shown to come from the therapist, not the client (Twachtman-Cullen, 1997). Such studies demonstrated FC to be ineffective. However, FC's supporters don't accept these results. The empirical method introduces skepticism, which, the supporters say, breaks down trust between the therapist and client and shows a lack of faith in people with disabilities. Therefore, these supporters hold a belief about FC that is not falsifiable. To be truly scientific, researchers must take risks, including being prepared to accept data indicating their theory is not supported. Even practitioners must be open to such risk, so they can use techniques that actually work. For another example of an unfalsifiable claim, see Figure 1.6.

    FIGURE 1.6 An example of a theory that is not falsifiable. Certain people might wear a tinfoil hat, operating under the idea that the hat wards off government mental surveillance. But like most conspiracy theories, this notion of remote government mindreading is not falsifiable. If the government has been shown to read people’s minds, the theory is supported. But if there is no physical evidence, that also supports the theory because if the government does engage in such surveillance, it wouldn’t leave a detectable trace of its secret operations.


    Good Theories Have Parsimony. A third important feature of a good scientific theory is that it exhibits parsimony. Theories are supposed to be simple. If two theories explain the data equally well, most scientists will opt for the simpler, more parsimonious theory.

Parsimony sets a standard for the theory-data cycle. As long as a simple theory predicts the data well, there should be no need to make the theory more complex. Harlow's theory was parsimonious because it posed a simple explanation for infant attachment: Contact comfort drives attachment more than food does. As long as the data continue to support the simple theory, the simple theory stands. However, when the data contradict the theory, the theory has to change in order to accommodate the data. For example, over the years, psychologists have collected data showing that baby monkeys do not always form an attachment to a soft, cozy mother. If monkeys are reared in complete social isolation during their first, critical months, they seem to have problems forming attachments to anyone or anything. Thus, the contact comfort theory had to change a bit to emphasize the importance of contact comfort for attachment, especially in the early months of life. The theory is slightly less parsimonious now, but it does a better job of accommodating the data.

    THEORIES DON’T PROVE ANYTHING

The word prove is not used in science. Researchers never say they have proved their theories. At most, they will say that some data support or are consistent with a theory, or they might say that some data are inconsistent with or complicate a theory. But no single confirming finding can prove a theory (Figure 1.7). New information might require researchers, tomorrow or the next day, to change and improve current ideas. Similarly, a single, disconfirming finding does not lead researchers to scrap a theory entirely. The disconfirming study may itself have been designed poorly. Or perhaps the theory needs to be modified, not discarded. Rather than thinking of a theory as proved or disproved by a single study, scientists evaluate their theories based on the weight of the evidence, for and against. Harlow's theory of attachment could not be "proved" by the single study involving wire and cloth mothers. His laboratory conducted dozens of individual studies to rule out alternative explanations and test the theory's limits.

    ❮❮ For more on weight of the evidence, see Chapter 14, p. 436.

    FIGURE 1.7 Scientists don’t say “prove.” When you see the word prove in a headline, be skeptical. No single study can prove a theory once and for all. A more scientifically accurate headline would be: “Study Supports the Hypothesis that Hiking Improves Mental Health.” (Source: Netburn, LAtimes.com, 2015.)


Scientists Tackle Applied and Basic Problems

The empirical method can be used for both applied and basic research questions. Applied research is done with a practical problem in mind; the researchers conduct their work in a particular real-world context. An applied research study might ask, for example, if a school district's new method of teaching language arts is working better than the former one. It might test the efficacy of a treatment for depression in a sample of trauma survivors. Applied researchers might be looking for better ways to identify those who are likely to do well at a particular job, and so on.

    Basic research, in contrast, is not intended to address a specific, practical problem; the goal is to enhance the general body of knowledge. Basic researchers might want to understand the structure of the visual system, the capacity of human memory, the motivations of a depressed person, or the limitations of the infant attachment system. Basic researchers do not just gather facts at random; in fact, the knowledge they generate may be applied to real-world issues later on.

Translational research is the use of lessons from basic research to develop and test applications to health care, psychotherapy, or other forms of treatment and intervention. Translational research represents a dynamic bridge from basic to applied research. For example, basic research on the biochemistry of cell membranes might be translated into a new drug for schizophrenia. Or basic research on how mindfulness changes people's patterns of attention might be translated into a study skills intervention. Figure 1.8 shows the interrelationship of the three types of research.

Scientists Dig Deeper

Psychological scientists rarely conduct a single investigation and then stop. Instead, each study leads them to ask a new question. Scientists might start with a simple effect, such as the effect of comfort on attachment, and then ask, "Why does this occur?" "When does this happen the most?" "For whom does this apply?" "What are the limits?"

FIGURE 1.8 Basic, applied, and translational research. Example questions: Basic research asks, "What parts of the brain are active when experienced meditators are meditating?" Translational research asks, "In a laboratory study, can meditation lessons improve college students' GRE scores?" Applied research asks, "Has our school's new meditation program helped students focus longer on their math lessons?" Basic researchers may not have an applied context in mind, and applied researchers may be less familiar with basic theories and principles. Translational researchers attempt to translate the findings of basic research into applied areas.



    Mrazek and his team did not stop after only one study of mindfulness training and GRE performance. They dug deeper. They also asked whether mindfulness training was especially helpful for people whose minds wander the most. In other studies, they investigated if mindfulness training influenced skills such as people’s insight about their own memory (Baird, Mrazek, Phillips, & Schooler, 2014). And they have contrasted mindfulness with mind-wandering, attempting to find both the benefits and the costs of mind-wandering (Baird et al., 2012). This research team has conducted many related studies of how people can and cannot control their own attention.

Scientists Make It Public: The Publication Process

When scientists want to tell the scientific world about the results of their research, they write a paper and submit it to a scientific journal. Like magazines, journals usually come out every month and contain articles written by various qualified contributors. But unlike popular newsstand magazines, the articles in a scientific journal are peer-reviewed. The journal editor sends the submission to three or four experts on the subject. The experts tell the editor about the work's virtues and flaws, and the editor, considering these reviews, decides whether the paper deserves to be published in the journal.

The peer-review process in the field of psychology is rigorous. Peer reviewers are kept anonymous, so even if they know the author of the article professionally or personally, they can feel free to give an honest assessment of the research. They comment on how interesting the work is, how novel it is, how well the research was done, and how clear the results are. Ultimately, peer reviewers are supposed to ensure that the articles published in scientific journals contain innovative, well-done studies. When the peer-review process works, research with major flaws does not get published. However, the process continues even after a study is published. Other scientists can cite an article and do further work on the same subject. Moreover, scientists who find flaws in the research (perhaps overlooked by the peer reviewers) can publish letters, commentaries, or competing studies. Through publishing their work, scientists make the process of their research transparent, and the scientific community evaluates it.

Scientists Talk to the World: From Journal to Journalism

One goal of this textbook is to teach you how to interrogate information about psychological science that you find not only in scientific journals, but also in more mainstream sources that you encounter in daily life.


Psychology's scientific journals are read primarily by other scientists and by psychology students; the general public almost never reads them. Journalism, in contrast, includes the kinds of news and commentary that most of us read or hear on television, in magazines and newspapers, and on Internet sites—articles in Psychology Today and Men's Health, topical blogs, relationship advice columns, and so on. These sources are usually written by journalists or laypeople, not scientists, and they are meant to reach the general public; they are easy to access, and understanding their content does not require specialized education.

How does the news media find out about the latest scientific findings? A journalist might become interested in a particular study by reading the current issue of a scientific journal or by hearing scientists talk about their work at a conference. The journalist turns the research into a news story by summarizing it for a popular audience, giving it an interesting headline, and writing about it using nontechnical terms. For example, the journal article by Mrazek and his colleagues on the effect of mindfulness on GRE scores was summarized by a journalist in the magazine Scientific American (Nicholson, 2013).

    BENEFITS AND RISKS OF JOURNALISM COVERAGE

Psychologists can benefit when journalists publicize their research. By reading about psychological research in the newspaper, the general public can learn what psychologists really do. Those who read or hear the story might also pick up important tips for living: They might understand their children or themselves better; they might set different goals or change their habits. These important benefits of science writing depend on two things, however. First, journalists need to report on the most important scientific stories, and second, they must describe the research accurately.

Is the Story Important? When journalists report on a study, have they chosen research that has been conducted rigorously, that tests an important question, and that has been peer-reviewed? Or have they chosen a study simply because it is cute or eye-catching? Sometimes journalists do follow important stories, especially when covering research that has already been published in a selective, peer-reviewed journal. But sometimes journalists choose the sensational story over the important one.

    For example, one spring, headlines such as “Your dog hates hugs” and “You need to stop hugging your dog, study finds” began popping up in newsfeeds. Of course, this topic is clickbait, and dozens of news outlets shocked readers and listeners with these claims. However, the original claim had been made by a psychology professor who had merely reported some data in a blog post. The study he conducted had not been peer-reviewed or published in an empirical journal. The author had simply coded some Internet photographs of people hugging their dogs; according to the author, 82% of the dogs in the sample were showing signs of stress (Coren, 2016). Journalists should not have run with this story before it had been peer-reviewed. Scientific peer reviewers might have criticized the study because it didn’t include a comparison group of photos of dogs that weren’t being hugged.


    The author also left out important details, such as how the photographs were selected and whether the dogs’ behavior actually meant they were stressed. In this case, journalists were quick to publish a headline that was sensational, but not necessarily important.

Is the Story Accurate? Even when journalists report on reliable, important research, they don't always get the story right. Some science writers do an excellent, accurate job of summarizing the research, but not all of them do (Figure 1.9). Perhaps the journalist does not have the scientific training, the motivation, or the time before deadline to understand the original science very well. Maybe the journalist dumbs down the details of a study to make it more accessible to a general audience. And sometimes a journalist wraps up the details of a study with a more dramatic headline than the research can support.

    FIGURE 1.9 Getting it right. Cartoonist Jorge Cham parodies what can happen when journalists report on scientific research. Here, an original study reported a relationship between two variables. Although the University Public Relations Office relates the story accurately, the strength of the relationship and its implications become distorted with subsequent retellings, much like a game of “telephone.”


Media coverage of a phenomenon called the "Mozart effect" provides an example of how journalists might misrepresent science when they write for a popular audience (Spiegel, 2010). In 1993, researcher Frances Rauscher found that when students heard Mozart music played for 10 minutes, they performed better on a subsequent spatial intelligence test when compared with students who had listened to silence or to a monotone speaking voice (Rauscher, Shaw, & Ky, 1993). Rauscher said in a radio interview, "What we found was that the students who had listened to the Mozart sonata scored significantly higher on the spatial temporal task." However, Rauscher added, "It's very important to note that we did not find effects for general intelligence . . . just for this one aspect of intelligence. It's a small gain and it doesn't last very long" (Spiegel, 2010). But despite the careful way the scientists described their results, the media that reported on the story exaggerated its importance:

The headlines in the papers were less subtle than her findings: "Mozart makes you smart" was the general idea. . . . But worse, says Rauscher, was that her very modest finding started to be wildly distorted. "Generalizing these results to children is one of the first things that went wrong. Somehow or another the myth started exploding that children that listen to classical music from a young age will do better on the SAT, they'll score better on intelligence tests in general, and so forth." (Spiegel, 2010)

    Perhaps because the media distorted the effects of that first study, a small industry sprang up, recording child-friendly sonatas for parents and teachers (Figure 1.10). However, according to research conducted since the first study was published, the effect of listening to Mozart on people’s intelligence test scores is not very strong, and it applies to most music, not just Mozart (Pietschnig, Voracek, & Formann, 2010).

The journalist Ben Goldacre (2011) catalogs examples of how journalists and the general public misinterpret scientific data when they write about it for a popular audience. Some journalists create dramatic stories about employment statistics that show, for example, a 0.9% increase in unemployment claims. Journalists may conclude that these small increases show an upward trend—when in fact, they may simply reflect sampling error. Another example comes from a happiness survey of 5,000 people in the United Kingdom. Local journalists picked up on tiny city-to-city differences, creating headlines about, for instance, how the city of Edinburgh is the "most miserable place in the country." But the differences the survey found between the various places were not statistically significant (Goldacre, 2008).

    FIGURE 1.10 The Mozart effect. Journalists sometimes misrepresent research findings. Exaggerated reports of the Mozart effect even inspired a line of consumer products for children.


Even though there were slight differences in happiness from Edinburgh to London, the differences were small enough to be caused by random variation. The researcher who conducted the study said, "I tried to explain issues of [statistical] significance to the journalists who interviewed me. Most did not want to know" (Goldacre, 2008).

❮❮ To learn about sampling error, see Chapter 7, pp. 196–197.
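You can see how easily such "differences" arise from random variation alone with a short simulation. In this minimal sketch (the cities, sample sizes, and rating scale are invented, not Goldacre's data), every city samples from the same population, so any gaps between the means are pure sampling error:

    import random

    random.seed(1)  # fix the seed so the illustration is reproducible

    cities = ["Edinburgh", "London", "Cardiff", "Leeds"]
    for city in cities:
        # 100 respondents per city, all drawn from the SAME population
        # (mean happiness 5 on a 1-7 scale), so true happiness is identical.
        ratings = [random.gauss(5, 1.5) for _ in range(100)]
        mean = sum(ratings) / len(ratings)
        print(f"{city}: mean happiness = {mean:.2f}")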

How can you avoid being misled by a journalist's coverage of science? One idea is to find the original source, which you'll learn to do in Chapter 2. Reading the original scientific journal article is the best way to get the full story. Another approach is to maintain a skeptical mindset when it comes to popular sources. Chapter 3 explains how to ask the right questions before you allow yourself to accept the journalist's claim.

    1. See the discussion of Harlow’s monkey experiment on p. 13. 2. See p. 16. 3. See p. 15. 4. See p. 17. 5. See pp. 18–21.

    CHECK YOUR UNDERSTANDING

1. What happens to a theory when the data do not support the theory's hypotheses? What happens to a theory when the data do support the theory's hypotheses?

    2. Explain the difference between basic research and applied research, and describe how the two interact.

    3. Why can’t theories be proved in science?

    4. When scientists publish their data, what are the benefits?

5. Describe two ways journalists might distort the science they attempt to publicize.

(Answers: 1. See the discussion of Harlow's monkey experiment on p. 13. 2. See p. 16. 3. See p. 15. 4. See p. 17. 5. See pp. 18–21.)



CHAPTER REVIEW

Summary

Thinking like a psychologist means thinking like a scientist, and thinking like a scientist involves thinking about the empirical basis for what we believe.

Research Producers, Research Consumers
• Some students need skills as producers of research; they develop the ability to work in research laboratories and make new discoveries.

    • Some students need skills as consumers of research; they need to be able to find, read, and evaluate the research behind important policies, therapies, and workplace decisions.

    • Having good consumer-of-research skills means being able to evaluate the evidence behind the claims of a salesperson, journalist, or researcher, and making better, more informed decisions by asking the right questions.

How Scientists Approach Their Work
• As scientists, psychologists are empiricists; they base their conclusions on systematic, unbiased observations of the world.

• Using the theory-data cycle, researchers propose theories, make hypotheses (predictions), and collect data. A good scientific theory is supported by data, is falsifiable, and is parsimonious. A researcher might say that a theory is well supported or well established, rather than proved, meaning that most of the data have confirmed the theory and very little data have disconfirmed it.

    • Applied researchers address real-world problems, and basic researchers work for general understanding. Translational researchers attempt to translate the findings of basic research into applied areas.

    • Scientists usually follow up an initial study with more questions about why, when, and for whom a phenomenon occurs.

    • The publication process is part of worldwide scientific communication. Scientists publish their research in journals, following a peer-review process that leads to sharper thinking and improved communication. Even after publication, published work can be approved or criticized by the scientific community.

    • Journalists are writers for the popular media who are skilled at transforming scientific studies for the general public, but they don’t always get it right. Think critically about what you read online, and when in doubt, go directly to the original source—peer-reviewed research.


    Key Terms

evidence-based treatment, p. 8; empiricism, p. 10; theory, p. 13; hypothesis, p. 13; data, p. 13; falsifiability, p. 14; parsimony, p. 15; weight of the evidence, p. 15; applied research, p. 16; basic research, p. 16; translational research, p. 16; journal, p. 17; journalism, p. 18


To see samples of chapter concepts in the popular media, visit www.everydayresearchmethods.com and click the box for Chapter 1.

    Review Questions

    1. Which of the following jobs most likely involves producer-of-research skills rather than consumer-of-research skills?

    a. Police officer

    b. University professor

    c. Physician

    d. Journalist

    2. To be an empiricist, one should:

    a. Base one’s conclusions on direct observations.

    b. Strive for parsimony.

    c. Be sure that one’s research can be applied in a real-world setting.

    d. Discuss one’s ideas in a public setting, such as on social media.

3. A statement, or set of statements, that describes general principles about how variables relate to one another is a(n) ________.

    a. prediction

    b. hypothesis

    c. empirical observation

    d. theory

    4. Why is publication an important part of the empirical method?

    a. Because publication enables practitioners to read the research and use it in applied settings.

    b. Because publication contributes to making empirical observations independently verifiable.

    c. Because journalists can make the knowledge available to the general public.

    d. Because publication is the first step of the theory-data cycle.

    5. Which of the following research questions best illustrates an example of basic research?

    a. Has our company’s new marketing campaign led to an increase in sales?

    b. How satisfied are our patients with the sensitivity of the nursing staff?

    c. Does wearing kinesio-tape reduce joint pain?

    d. Can 2-month-old human infants tell the difference between four objects and six objects?

    Learning Actively

1. To learn more about the theory-data cycle, look in the textbooks from your other psychology courses for examples of theories. In your introductory psychology book, you might look up the James-Lange theory or the Cannon-Bard theory of emotion. You could look up Piaget's theory of cognitive development, the Young-Helmholtz theory of color vision, or the stage theory of memory. How do the data presented in your textbook show support for the theory? Does the textbook present any data that do not support the theory?

2. Go to an online news website and find a headline that is reporting the results of a recently published study. Read the story, and ask: Has the research in the story been published yet? Does the journalist mention the name of a journal in which the results appeared? Or has the study only been presented at a research conference? Then, use the Internet to find examples of how other journalists have covered the same story. What variation do you notice in their stories?

    3. See what you can find online that has been written about the Mozart effect, about whether people should hug their dogs, or whether people should begin a mindfulness practice in their lives. Does the source you found discuss research evidence? Does the source provide the names of scientists and the journals in which data have been published? On the downside, does the coverage suggest that you purchase a product or that science has “proved” the effectiveness of a certain behavior or technique?


    Houston’s “Rage Room” a Smash as Economy Struggles The Guardian, 2016

    Six Great Ways to Vent Your Frustrations Lifehack.org, n.d.


Sources of Information: Why Research Is Best and How to Find It

HAVE YOU EVER LOOKED online for a stress-relief technique? You might have found aggressive games such as Kick the Buddy or downloaded an app such as Vent. Maybe you've considered a for-profit "rage room" that lets you destroy plates, computers, or teddy bears. Perhaps a friend has suggested posting your complaints publicly and anonymously on Yik Yak. But does venting anger really make people feel better? Does expressing aggression make aggression go away?

    Many sources of information promote the idea that venting your frustrations works. You might try one of the venting apps yourself and feel good while you’re using it. Or you may hear from guidance counselors, friends, or online sources that venting negative feelings is a healthy way to manage anger. But is it accurate to base your conclusions on what authorities—even well-meaning ones—say? Should you believe what everyone else believes? Does it make sense to base your convictions on your own personal experience?

This chapter discusses three sources of evidence for people's beliefs—experience, intuition, and authority—and compares them to a superior source of evidence: empirical research. We will focus on evaluating a particular type of response to the question about handling anger: the idea of cathartically releasing bottled-up tension by hitting a punching bag, screaming, or expressing your emotions (Figure 2.1).

LEARNING OBJECTIVES

    A year from now, you should still be able to:

    1. Explain why all scientists, including psychologists, value research-based conclusions over beliefs based on experience, intuition, or authority.

    2. Locate research-based information, and read it with a purpose.


Is catharsis a healthy way to deal with feelings of anger and frustration? How could you find credible research on this subject if you wanted to read about it? And why should you trust the conclusions of researchers instead of those based on your own experience or intuition?

THE RESEARCH VS. YOUR EXPERIENCE

When we need to decide what to believe, our own experiences are powerful sources of information. "I've used tanning beds for 10 years. No skin cancer yet!" "My knee doesn't give out as much when I use kinesio-tape." "When I'm mad, I feel so much better after I vent my feelings online." Often, too, we base our opinions on the experiences of friends and family. For instance, suppose you're considering buying a new car. You want the most reliable one, so after consulting Consumer Reports, you decide on a Honda Fit, a top-rated car based on its objective road testing and a survey of 1,000 Fit owners. But then you hear about your cousin's Honda Fit, which is always in the shop. Why shouldn't you trust your own experience—or that of someone you know and trust—as a source of information?

Experience Has No Comparison Group

There are many reasons not to base beliefs solely on personal experience, but perhaps the most important is that when we do so, we usually don't take a comparison group into account. Research, by contrast, asks the critical question: Compared to what? A comparison group enables us to compare what would happen both with and without the thing we are interested in—both with and without tanning beds, online games, or kinesio-tape (Figure 2.2).

    Here’s a troubling example of why a comparison group is so important. Centuries ago, Dr. Benjamin Rush drained blood from people’s wrists or ankles as part of a “bleeding,” or bloodletting, cure for illness (Eisenberg, 1977). The practice emerged from the belief that too much blood was the cause of illness. To restore an “appropriate” balance, a doctor might remove up to 100 ounces of blood from a patient over the course of a week. Of course, we now know that draining blood is one of the last things a doctor would want to do to a sick patient. Why did Dr. Rush, one of the most respected physicians of his time, keep on using such a practice? Why did he believe bloodletting was a cure?

    FIGURE 2.2 Your own experience. You may think you feel better when you wear kinesio-tape. But does placing stretchy tape on your body really reduce pain, prevent injury, or improve performance?

FIGURE 2.1 Anger management. Some people believe that venting physically or emotionally is the best way to work through anger. But what does the research suggest?


In those days, a doctor who used the bleeding cure would have noticed that some of his patients recovered and some died; it was the doctor's personal experience. Every patient's recovery from yellow fever after bloodletting seemed to support Rush's theory that the treatment worked. But Dr. Rush never set up a systematic comparison because doctors in the 1700s were not collecting data on their treatments. To test the bleeding cure, doctors would have had to systematically count death rates among patients who were bled versus those who received some comparison treatment (or no treatment). How many people were bled and how many were not? Of each group, how many died and how many recovered? Putting all the records together, the doctors could have come to an empirically derived conclusion about the effectiveness of bloodletting.

Suppose, for example, Dr. Rush had kept records and found that 20 patients who were bled recovered, and 10 patients who refused the bleeding treatment recovered. At first, it might look like the bleeding cure worked; after all, twice as many bled patients as untreated patients improved. But you need to know all the numbers—the number of bled patients who died and the number of untreated patients who died, in addition to the number of patients in each group who recovered. Tables 2.1, 2.2, and 2.3 illustrate how we need all the data to draw the correct conclusion. In the first example (Table 2.1), there is no relationship at all between treatment and improvement. Although twice as many bled patients as untreated patients recovered, twice as many bled patients as untreated patients died, too. If you calculate the percentages, the recovery rate among people who were bled was 20%, and the recovery rate among people who were not treated was also 20%: The proportions are identical. (Remember, these data were invented for purposes of illustration.)

TABLE 2.1
Baseline Comparisons. At first, it looks like more patients who were bled survived (20 vs. 10), but when we divide by the total numbers of patients, survival rates were the same.

                                                   BLED      NOT BLED
    Number of patients who recovered                20            10
    Number of patients who died                     80            40
    (Number recovered / total patients)           20/100         10/50
    Percentage recovered                            20%           20%

TABLE 2.2
One Value Decreased. If we change the value in one cell (the number of untreated patients who died), survival rates change, and the bleeding cure is very ineffective.

                                                   BLED      NOT BLED
    Number of patients who recovered                20            10
    Number of patients who died                     80             1
    (Number recovered / total patients)           20/100         10/11
    Percentage recovered                            20%           91%

TABLE 2.3
One Value Increased. If we change the value in the same cell, now the bleeding cure looks effective.

                                                   BLED      NOT BLED
    Number of patients who recovered                20            10
    Number of patients who died                     80           490
    (Number recovered / total patients)           20/100        10/500
    Percentage recovered                            20%            2%


    To reach the correct conclusion, we need to know all the values, including the number of untreated patients who died. Table 2.2 shows an example of what might happen if the value in only that cell changes. In this case, the number of untreated patients who died is much lower, so the treatment is shown to have a negative effect. Only 20% of the treated patients recovered, compared with 91% of the untreated patients. In contrast, if the number in the fourth cell were increased drastically, as in Table 2.3, the treatment would be shown to have a positive effect. The recovery rate among bled patients is still 20%, but the recovery rate among untreated patients is a mere 2%.

Notice that in all three tables, changing only one value leads to dramatically different results. Drawing conclusions about a treatment—bloodletting, ways of venting anger, or using stretchy tape—requires comparing data systematically from all four cells: the treated/improved cell, the treated/unimproved cell, the untreated/improved cell, and the untreated/unimproved cell. These comparison cells show the relative rate of improvement when using the treatment, compared with no treatment.
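To underline the arithmetic, here is a minimal sketch that computes the recovery rates for the three scenarios in Tables 2.1-2.3. Changing the single "not bled, died" count is enough to flip the apparent conclusion:

    def recovery_rate(recovered, died):
        # Percentage who recovered, out of all patients in the group.
        return 100 * recovered / (recovered + died)

    # (recovered, died) counts, as in Tables 2.1-2.3.
    scenarios = {
        "Table 2.1": {"bled": (20, 80), "not bled": (10, 40)},
        "Table 2.2": {"bled": (20, 80), "not bled": (10, 1)},
        "Table 2.3": {"bled": (20, 80), "not bled": (10, 490)},
    }

    for table, groups in scenarios.items():
        bled = recovery_rate(*groups["bled"])
        not_bled = recovery_rate(*groups["not bled"])
        print(f"{table}: bled {bled:.0f}% recovered, not bled {not_bled:.0f}% recovered")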

Because Dr. Rush bled every patient, he never had the chance to see how many would recover without the bleeding treatment (Figure 2.3). Similarly, when you rely on personal experience to decide what is true, you usually don't have a systematic comparison group because you're observing only one "patient": yourself. The tape you've been using may seem to be working, but what would have happened to your knee pain without it? Maybe it would have felt fine anyway. Or perhaps you try an online brain-training course and get higher grades later that semester. But what kind of grades would you have gotten if you hadn't taken the course? Or you might think using the Kick the Buddy game makes you feel better when you're angry, but would you have felt better anyway, even if you had played a nonviolent game? What if you had done nothing and just let a little time pass?

Basing conclusions on personal experience is problematic because daily life usually doesn't include comparison experiences. In contrast, basing conclusions on systematic data collection has the simple but tremendous advantage of providing a comparison group. Only a systematic comparison can show you whether your knee improves when you use a special tape (compared with when you do not), or whether your anger goes away when you play a violent online game (compared with doing nothing).

    FIGURE 2.3 Bloodletting in the eighteenth century. Describe how Dr. Rush’s faulty attention to information led him to believe the bleeding treatment was effective.


Experience Is Confounded

Another problem with basing conclusions on personal experience is that in everyday life, too much is going on at once. Even if a change has occurred, we often can't be sure what caused it. When a patient treated by Dr. Rush got better, that patient might also have been stronger to begin with, or may have been eating special foods or drinking more fluids. Which one caused the improvement? When you notice a difference in your knee pain after using kinesio-tape, maybe you also took it easy that day or used a pain reliever. Which one caused your knee pain to improve? If you play Kick the Buddy, it provides violent content, but you might also be distracting yourself or increasing your heart rate. Is it these factors, or the game's violence, that causes you to feel better after playing it?

In real-world situations, there are several possible explanations for an outcome. In research, these alternative explanations are called confounds. Confounded can also mean confused. Essentially, a confound occurs when you think one thing caused an outcome but in fact other things changed, too, so you are confused about what the cause really was. You might think online brain-training exercises are making your grades better than last year, but because you were also taking different classes and have gained experience as a student, you can't determine which of these factors (or combination of factors) caused the improvement.

What can we do about confounds like these? For a personal experience, it is hard to isolate variables. Think about the last time you had an upset stomach. Which of the many things you ate that day made you sick? Or your allergies—which of the blossoming spring plants are you allergic to? In a research setting, though, scientists can use careful controls to be sure they are changing only one factor at a time.

Research Is Better Than Experience

What happens when scientists set up a systematic comparison that controls for potential confounds? For example, by using controlled, systematic comparisons, several groups of researchers have tested the hypothesis that venting anger is beneficial (e.g., Berkowitz, 1973; Bushman, Baumeister, & Phillips, 2001; Feshbach, 1956; Lohr, Olatunji, Baumeister, & Bushman, 2007). One such study was conducted by researcher Brad Bushman (2002). To examine the effect of venting, or catharsis, Bushman systematically compared the responses of angry people who were allowed to vent their anger with the responses of those who did not vent their anger.

First, Bushman needed to make people angry. He invited 600 undergraduates to come, one by one, to a laboratory setting, where each student wrote a political essay. Next, each essay was shown to another person, called Steve, who was actually a confederate, an actor playing a specific role for the experimenter. Steve insulted the writer by criticizing the essay, calling it "the worst essay I've ever read," among other unflattering comments.

    ❮❮ For more on confounds and how to avoid them in research designs, see Chapter 10, pp. 281–286.


(Bushman knew this technique made students angry because he had used it in previous studies, in which students whose essays were criticized reported feeling angrier than those whose essays were not criticized.)

Bushman then randomly divided the angry students into three groups, to systematically compare the effects of venting and not venting anger. Group 1 was instructed to sit quietly in the room for 2 minutes. Group 2 was instructed to punch a punching bag for 2 minutes, having been told it was a form of exercise. Group 3 was instructed to punch a punching bag for 2 minutes while imagining Steve's face on it. (This was the important catharsis group.) Finally, all three groups of students were given a chance to get back at Steve. In the course of playing a quiz game with him, students had the chance to blast Steve's ears with a loud noise. (Because Steve was a confederate, he didn't actually hear the noises, but the students thought he did.)

Which group gave Steve the loudest, longest blasts of noise? The catharsis hypothesis predicts that Group 3 should have calmed down the most, and as a result, this group should not have blasted Steve with very much noise. This group, however, gave Steve the loudest noise blasts of all! Compared with the other two groups, those who vented their anger at Steve through the punching bag continued to punish him when they had the chance. In contrast, Group 2, those who hit the punching bag for exercise, subjected him to less noise (not as loud or as long). Those who sat quietly for 2 minutes punished Steve the least of all. So much for the catharsis hypothesis. When the researchers set up the comparison groups, they found the opposite result: People's anger subsided more quickly when they sat in a room quietly than if they tried to vent it. Figure 2.4 shows the study results in graph form.

Notice the power of systematic comparison here. In a controlled study, researchers can set up the conditions to include at least one comparison group. Contrast the researcher's larger view with the more subjective view, in which each person consults only his or her own experience. For example, if you had asked some of the students in the catharsis group whether using the punching bag helped their anger subside, they could only consider their own, idiosyncratic experiences. When Bushman looked at the pattern overall—taking into account all three groups—the results indicated that the catharsis group still felt the angriest.

[Figure 2.4 is a bar graph plotting subsequent aggression toward the partner (z score, ranging from −0.25 to 0.25) for Group 1 (sit quietly), Group 2 (punching bag, exercise), and Group 3 (punching bag, Steve's face).]

    FIGURE 2.4 Results from controlled research on the catharsis hypothesis. In this study, after Steve (the confederate) insulted all the students in three groups by criticizing their essays, those in Group 1 sat quietly for 2 minutes, Group 2 hit a punching bag while thinking about exercise, and Group 3 hit a punching bag while imagining Steve’s face on it. Later, students in all three groups had the chance to blast Steve with loud noise. (Source: Adapted from Bushman, 2002, Table 1.)


The researcher thus has a privileged view—the view from the outside, including all possible comparison groups. In contrast, when you are the one acting in the situation, yours is a view from the inside, and you only see one possible condition.

Researchers can also control for potential confounds. In Bushman's study, all three groups felt equally angry at first. Bushman even separated the effects of aggression only (using the punching bag for exercise) from the effects of aggression toward the person who made the participant mad (using the punching bag as a stand-in for Steve). In real life, these two effects—exercise and the venting of anger—would usually occur at the same time.
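
To make the logic of random assignment concrete, here is a minimal sketch in Python. It is not Bushman's actual procedure or materials; the condition labels and the sample size of 600 come from the description above, but the function itself is only an illustration of how chance, rather than the researcher, can divide participants into comparable groups.

```python
import random

def randomly_assign(participant_ids, conditions):
    """Shuffle participants, then deal them evenly into the conditions."""
    ids = list(participant_ids)
    random.shuffle(ids)  # chance, not the researcher, decides each assignment
    return {cond: ids[i::len(conditions)] for i, cond in enumerate(conditions)}

groups = randomly_assign(
    range(600),  # sample size from the study described above
    ["sit_quietly", "punching_bag_exercise", "punching_bag_steves_face"],
)
for condition, members in groups.items():
    print(condition, len(members))  # 200 participants per condition
```

Because each participant's condition is determined by chance, preexisting differences (such as how angry or strong a person already is) get spread roughly evenly across the three groups, which is what makes the groups comparable.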

    Bushman’s study is, of course, only one study on catharsis, and scientists always dig deeper. In other studies, researchers have made people angry, presented them with an opportunity to vent their anger (or not), and then watched their behavior. Research results have repeatedly indicated that people who physically express their anger at a target actually become more angry than when they started. Thus, practicing aggression only seems to teach people how to be aggressive (Berkowitz, 1973; Bushman et al., 2001; Feshbach, 1956; Geen & Quanty, 1977; Lohr et al., 2007; Tavris, 1989).

The important point is that the results of a single study, such as Bushman's, are certainly better evidence than experience. In addition, consistent results from several similar studies mean that scientists can be confident in the findings. As more and more studies amass evidence on the subject, theories about how people can effectively regulate their anger gain increasing support. Finally, psychologist Todd Kashdan applied this research when he was interviewed for a story about the "rage room" concept, in which people pay to smash objects. He advised the journalist that "it just increases your arousal and thus makes you even more angry. What you really need is to reduce or learn to better manage that arousal" (Dart, 2016).

Research Is Probabilistic

Although research is usually more accurate than individual experience, sometimes our personal stories contradict the research results. Personal experience is powerful, and we often let a single experience distract us from the lessons of more rigorous research. Should you disagree with the results of a study when your own experience is different? Should you continue to play online games when you're angry because you believe they work for you? Should you disregard Consumer Reports because your cousin had a terrible experience with her Honda Fit?

At times, your experience (or your cousin's) may be an exception to what the research finds. In such cases, you may be tempted to conclude: The research must be wrong. However, behavioral research is probabilistic, which means that its findings are not expected to explain all cases all of the time. Instead, the conclusions of research are meant to explain a certain proportion (preferably a high proportion) of the possible cases. In practice, this means scientific conclusions are based on patterns that emerge only when researchers set up comparison groups and test many people.

    ❮❮ For more on the value of conducting multiple studies, see Chapter 14, pp. 425–433.


Your own experience is only one point in that overall pattern. Thus, for instance, even though bloodletting does not cure illness, some sick patients did recover after being bled. Those exceptional patients who recovered do not change the conclusion derived from all of the data. And even though your cousin's Honda needed a lot of repairs, her case is only one out of 1,001 Fit owners, so it doesn't invalidate the general trend. Similarly, just because there is a strong general trend (that Honda Fits are reliable), it doesn't mean your Honda will be reliable too. The research may suggest there is a strong probability your Honda will be reliable, but the prediction is not perfect.
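
To see what "probabilistic" means in practice, here is a toy simulation in Python. The 90% reliability rate is a made-up number for illustration only, not a figure from Consumer Reports; the point is that even under a strong general trend, some individual cases will be exceptions.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

RELIABILITY_RATE = 0.90  # hypothetical: 90% of this model need no major repairs
OWNERS = 1001            # sample size borrowed from the example above

trouble_free = sum(random.random() < RELIABILITY_RATE for _ in range(OWNERS))

print(f"{trouble_free} of {OWNERS} owners had a trouble-free car")
print(f"{OWNERS - trouble_free} owners (like the cousin) were exceptions")
```

A handful of unlucky owners does not overturn the overall pattern, and the overall pattern cannot guarantee any single owner's outcome.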

    CHECK YOUR UNDERSTANDING

    1. What are two general problems with basing beliefs on experience? How does empirical research work to correct these problems?

    2. What does it mean to say that research is probabilistic?

1. See pp. 26–31. 2. See pp. 31–32.

THE RESEARCH VS. YOUR INTUITION

Personal experience is one way we might reach a conclusion. Another is intuition—using our hunches about what seems "natural," or attempting to think about things "logically." While we may believe our intuition is a good source of information, it can lead us to make less effective decisions.

Ways That Intuition Is Biased

Humans are not scientific thinkers. We might be aware of our potential to be biased, but we often are too busy, or not motivated enough, to correct and control for these biases. What's worse, most of us think we aren't biased at all! Fortunately, the formal processes of scientific research help prevent these biases from affecting our decisions. Here are five examples of biased reasoning.

    BEING SWAYED BY A GOOD STORY

One example of a bias in our thinking is accepting a conclusion just because it makes sense or feels natural. We tend to believe good stories—even ones that are false. For example, to many people, bottling up negative emotions seems unhealthy, and expressing anger is sensible.


As with a pimple or a boiling kettle of water, it might seem better to release the pressure. One of the early proponents of catharsis was the neurologist Sigmund Freud, whose models of mental distress focused on the harmful effects of suppressing one's feelings and the benefits of expressing them. Some biographers have speculated that Freud's ideas were influenced by the industrial technology of his day (Gay, 1989). Back then, engines used the power of steam to create vast amounts of energy. If the steam was too compressed, it could have devastating effects on a machine. Freud seems to have reasoned that the human psyche functions the same way. Catharsis makes a good story because it draws on a metaphor (pressure) that is familiar to most people.

The Scared Straight program is another commonsense story that turned out to be wrong. As you read in Chapter 1, such programs propose that when teenagers susceptible to criminal activity hear about the difficulties of prison from actual inmates, they will be scared away from committing crimes in the future. It certainly makes sense that impressionable young people would be frightened and deterred by such stories. However, research has consistently found that Scared Straight programs are ineffective; in fact, they sometimes even cause more crime. The intuitive appeal of such programs is strong (which accounts for why many communities still invest in them), but the research warns against them. One psychologist estimated that the widespread use of the program in New Jersey might have "caused 6,500 kids to commit crimes they otherwise would not have committed" (Wilson, 2011, p. 138). Faulty intuition can even be harmful.

    Sometimes a good story will turn out to be accurate, of course, but it’s important to be aware of the limitations of intuition. When empirical evidence contradicts what your common sense tells you, be ready to adjust your beliefs on the basis of the research. Automatically believing a story that may seem to make sense can lead you astray.

    BEING PERSUADED BY WHAT COMES EASILY TO MIND

    Another bias in thinking is the availability heuristic, which states that things that pop up easily in our mind tend to guide our thinking (Tversky & Kahneman, 1974). When events or memories are vivid, recent, or memorable, they come to mind more easily, leading us to overestimate how often things happen.

    Here’s a scary headline: “Woman dies in Australian shark attack.” Dramatic news like this might prompt us to change our vacation plans. If we rely on our intuition, we might think shark attacks are truly common. However, a closer look at the frequency of reported shark attacks reveals that they are incredibly rare. Being killed by a shark (1 in 3.7 million) is less likely than dying from the flu (1 in 63) or in a bathtub (1 in 800,000; Ropeik, 2010).
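
Using the figures quoted above, a quick calculation makes the gap between intuition and the actual risks explicit:

```python
# Reported odds of dying from each cause (Ropeik, 2010)
p_shark = 1 / 3_700_000
p_flu = 1 / 63
p_bathtub = 1 / 800_000

print(f"Flu death is about {p_flu / p_shark:,.0f} times more likely than a shark attack")
print(f"Bathtub death is about {p_bathtub / p_shark:.1f} times more likely")
```

The flu turns out to be tens of thousands of times more likely to kill you than a shark, yet the shark attack is the risk that comes to mind.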

Why do people make this mistake? Death by shark attack is certainly more memorable and vivid than getting the flu or taking a bath, so people talk about it more.


It comes to mind easily, and we inflate the associated risk. In contrast, more common causes of death don't get much press. Nevertheless, we are too busy (or too lazy) to think beyond the easy answer. We decide the answer that comes to mind easily must be the correct one. We avoid swimming in the ocean, but neglect to get the flu vaccine.

The availability heuristic might lead us to wrongly estimate the number of something or how often something happens. For example, if you visited my campus, you might see some women wearing a head covering (hijab) and conclude there are lots of Muslim women here. The availability heuristic could lead you to overestimate, simply because Muslim women stand out visually. People who practice many other religions do not stand out, so you may underestimate their frequency.

Our attention can be inordinately drawn to certain instances, leading to overestimation. A professor may complain that "everybody" uses a cell phone during his class, when in fact only one or two students do so; it's just that their annoying behavior stands out. You might overestimate how often your kid sister leaves her bike out in the rain, only because it's harder to notice the times she put it away. When driving, you may complain that you always hit the red lights, only because you spend more time at them; you don't notice the green lights you breeze through. What comes to mind easily can bias our conclusions about how often things happen (Figure 2.5).

    FAILING TO THINK ABOUT WHAT WE CANNOT SEE

The availability heuristic leads us to overestimate events, such as how frequently people encounter red lights or die in shark attacks. A related problem prevents us from seeing the relationship between an event and its outcome. When deciding if there's a pattern, for example, between bleeding a patient and the patient's recovery, or between using kinesio-tape and feeling better, people forget to seek out the information that isn't there.

    In the story “Silver Blaze,” the fictional detective Sherlock Holmes investigates the theft of a prize racehorse. The horse was stolen at night while two stable hands and their dog slept, undisturbed, nearby. Holmes reflects on the dog’s “curious” behavior that night. When the other inspectors protest that “the dog did nothing in the night-time,” Holmes replies, “That was the curious incident.” Because the dog did not bark, Holmes deduces that the horse was stolen by someone familiar to the dog at the stable (Doyle, 1892/2002, p. 149; see Gilbert, 2005). Holmes solves the crime because he notices the absence of something.

When testing relationships, we often fail to look for absences; in contrast, it is easy to notice what is present. This tendency, referred to as the present/present bias, is a name for our failure to consider appropriate comparison groups (discussed earlier).

    FIGURE 2.5 The availability heuristic. Look quickly: Which color candy is most common in this bowl? You might have guessed yellow, red, or orange, because these colors are easier to see—an example of the availability heuristic. Blue is the most prevalent, but it doesn’t stand out in this context.


Dr. Rush may have fallen prey to the present/present bias when he was observing the effects of bloodletting on his patients. He focused on patients who did receive the treatment and did recover (the first cell in Table 2.1, where the bleeding treatment was "present" and the recovery was also "present"). He did not fully account for the untreated patients or those who did not recover (the other three cells in Table 2.1, in which treatment was "absent" or recovery was "absent").

Did you ever find yourself thinking about a friend and then get a text message or phone call from him? "I must be psychic!" you think. No; it's just the present/present bias in action. You noticed the times when your thoughts coincided with a text message and concluded there was a psychic relationship. But you forgot to consider all the times you thought of people who didn't subsequently text you or the times when people texted you when you weren't thinking about them.

In the context of managing anger, the present/present bias means we will easily notice the times we did express frustration at the gym, at the dog, or in an e-mail, and subsequently felt better. In other words, we notice the times when both the treatment (venting) and the desired outcome (feeling better) are present, but we are less likely to notice the times when we didn't express our anger and just felt better anyway; in those cases, the treatment was absent but the outcome was still present (Table 2.4). When thinking intuitively, we tend to focus only on experiences that fall in the present/present cell, the instances in which catharsis seemed to work. But if we think harder and look at the whole picture, we would conclude catharsis doesn't work well at all.

The availability heuristic plays a role in the present/present bias because instances in the "present/present" cell of a comparison stand out. But the present/present bias adds the tendency to ignore "absent" cells, which are essential for testing relationships. To avoid the present/present bias, scientists train themselves always to ask: Compared to what?

TABLE 2.4 The Present/Present Bias

                                   EXPRESSED FRUSTRATION     DID NOTHING
                                   (TREATMENT PRESENT)       (TREATMENT ABSENT)

Felt better (outcome present)      5  (present/present)      10 (absent/present)
Felt worse (outcome absent)        10 (present/absent)       5  (absent/absent)

Note: The number in each cell represents the number of times the two events coincided. We are more likely to focus on the times when two factors were both present or two events occurred at the same time (the present/present cell), rather than on the full pattern of our experiences.
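
Asking "compared to what?" can be made concrete by computing the rate of feeling better within each column of Table 2.4. Here is a short sketch using the table's illustrative counts (these are not real data):

```python
# Cell counts from Table 2.4 (illustrative numbers)
felt_better = {"expressed frustration": 5, "did nothing": 10}
felt_worse = {"expressed frustration": 10, "did nothing": 5}

for treatment in felt_better:
    total = felt_better[treatment] + felt_worse[treatment]
    rate = felt_better[treatment] / total
    print(f"{treatment}: felt better {rate:.0%} of the time")

# expressed frustration: felt better 33% of the time
# did nothing: felt better 67% of the time
```

Looking only at the present/present cell (the 5 times venting was followed by feeling better) suggests catharsis works; using all four cells shows people felt better twice as often when they did nothing.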


    FOCUSING ON THE EVIDENCE WE LIKE BEST

    During an election season, you might check opinion polls for your favorite candidate. What if your candidate lags behind in the first opinion poll you see? If you’re like most people, you will keep looking until you find a poll in which your candidate has the edge (Wolfers, 2014).

The tendency to look only at information that agrees with what we already believe is called the confirmation bias. We "cherry-pick" the information we take in—seeking and accepting only the evidence that supports what we already think. A lyric by the songwriter Paul Simon captures this well: "A man hears what he wants to hear and disregards the rest."

    One study specifically showed how people select only their preferred evidence. The participants took an IQ test and then were told their IQ was either high or low. Shortly afterward, they all had a chance to look at some magazine articles about IQ tests. Those who were told their IQ was low spent more time looking at articles that criticized the validity of IQ tests, whereas those who were told their IQ was high spent more time looking at articles that supported IQ tests as valid measures of intelligence (Frey & Stahlberg, 1986). They all wanted to think they were smart, so they analyzed the available information in biased ways that supported this belief. People keep their beliefs intact (in this case, the belief that they are smart) by selecting only the kinds of evidence they want to see.

One way we enact the confirmation bias is by asking questions that are likely to give the desired or expected answers. Take, for example, a study in which the researchers asked students to interview fellow undergraduates (Snyder & Swann, 1978). Half the students were given the goal of deciding whether their target person was extroverted, and the other half were given the goal of deciding whether their target person was introverted.

Before the interview, the students selected their interview questions from a prepared list. As it turned out, when the students were trying to find out whether their target was extroverted, they chose questions such as "What would you do if you wanted to liven things up at a party?" and "What kind of situations do you seek out if you want to meet new people?" You can see the problem: Even introverts will look like extroverts when they answer questions like these. The students were asking questions that would tend to confirm that their targets were extroverted. The same thing happened with the students who were trying to find out if their target was introverted. They chose questions such as "In what situations do you wish you could be more outgoing?" and "What factors make it hard for you to really open up to people?" Again, in responding to these questions, wouldn't just about anybody seem introverted? Later, when the students asked these questions of real people, the targets gave answers that supported the expectations. The researchers asked some judges to listen in on what the targets said during the interviews.


Regardless of their personality, the targets who were being tested for extroversion acted extroverted, and the targets who were being tested for introversion acted introverted.

Unlike the hypothesis-testing process in the theory-data cycle (see Chapter 1), confirmation bias operates in a way that is decidedly not scientific. If interviewers were testing the hypothesis that their target was an extrovert, they asked the questions that would confirm that hypothesis and did not ask questions that might disconfirm it. Indeed, even though the students could have chosen neutral questions (such as "What do you think the good and bad points of acting friendly and open are?"), they hardly ever did. In follow-up studies, Snyder and Swann found that student interviewers chose hypothesis-confirming questions even if they were offered a big cash prize for being the most objective interviewer, suggesting that even when people are trying to be accurate, they cannot always be.

Without scientific training, we are not very rigorous in gathering evidence to test our ideas. Psychological research has repeatedly found that when people are asked to test a hypothesis, they tend to seek the evidence that supports their expectations (Copeland & Snyder, 1995; Klayman & Ha, 1987; Snyder & Campbell, 1980; Snyder & White, 1981). As a result, people tend to gather only a certain kind of information, and then they conclude that their beliefs are supported. This bias is one reason clinical psychologists and other therapists are required to get a research methods education (Figure 2.6).

    BIASED ABOUT BEING BIASED

    Even though we read about the biased ways people think (such as in a research methods textbook like this one), we nevertheless conclude that those biases do not apply to us. We have what’s called a bias blind spot, the belief that we are unlikely to fall prey to the other biases previously described (Pronin, Gilovich, & Ross, 2004; Pronin, Lin, & Ross, 2002). Most of us think we are less biased than others, so when we notice our own view of a situation is different from that of somebody else, we conclude that “I’m the objective one here” and “you are the biased one.”

In one study, researchers interviewed U.S. airport travelers, most of whom said the average American is much more biased than themselves (Pronin et al., 2002). For example, the travelers said that while most others would take personal credit for successes, the travelers themselves would not.

    FIGURE 2.6 Confirmation bias. This therapist suspects her client has an anxiety disorder. What kinds of questions should she be asking that would both potentially confirm and potentially disconfirm her hypothesis?


Respondents believed other Americans would say a person is smart and competent just because he is nice; however, they themselves do not have this bias. People believed other Americans would tend to "blame the victim" of random violence for being in the wrong place at the wrong time, even though they would do no such thing themselves (Figure 2.7).

The bias blind spot might be the sneakiest of all the biases in human thinking. It makes us trust our faulty reasoning even more. In addition, it can make it difficult for us to initiate the scientific theory-data cycle. We might say, "I don't need to test this conclusion; I already know it is correct." Part of learning to be a scientist is learning not to use feelings of confidence as evidence for the truth of our beliefs. Rather than thinking what they want to, scientists use data.

The Intuitive Thinker vs. the Scientific Reasoner

When we think intuitively rather than scientifically, we make mistakes. Because of our biases, we tend to notice and actively seek information that confirms our ideas. To counteract your own biases, try to adopt the empirical mindset of a researcher. Recall from Chapter 1 that empiricism involves basing beliefs on systematic information from the senses. Now we have an additional nuance for what it means to reason empirically: To be an empiricist, you must also strive to interpret the data you collect in an objective way; you must guard against common biases.


Researchers—scientific reasoners—create comparison groups and look at all the data. Rather than base their theories on hunches, researchers dig deeper and generate data through rigorous studies. Knowing they should not simply go along with the story everyone believes, they train themselves to test their intuition with systematic, empirical observations. They strive to ask questions objectively and collect potentially disconfirming evidence, not just evidence that confirms their hypotheses. Keenly aware that they have biases, scientific reasoners allow the data to speak more loudly than their own confidently held—but possibly biased—ideas. In short, while researchers are not perfect reasoners themselves, they have trained themselves to guard against the many pitfalls of intuition—and they draw more accurate conclusions as a result.

    FIGURE 2.7 The bias blind spot. A physician who receives a free gift from a pharmaceutical salesperson might believe she won’t be biased by it, but she may also believe other physicians will be persuaded by such gifts to prescribe the drug company’s medicines.


    CHECK YOUR UNDERSTANDING

    1. This section described several ways in which intuition is biased. Can you name all five?

    2. Why might the bias blind spot be the most sneaky of all the intuitive reasoning biases?

    3. Do you think you can improve your own reasoning by simply learning about these biases? How?

    1. See pp. 32–38. 2. See pp. 37–38. 3. Answers will vary.

TRUSTING AUTHORITIES ON THE SUBJECT

You might have heard statements like these: "We only use 10% of our brains" and "People are either right-brained or left-brained." People—even those we trust—make such claims as if they are facts. However, you should be cautious about basing your beliefs on what everybody says—even when the claim is made by someone who is (or claims to be) an authority. In that spirit, how reliable is the advice of guidance counselors, TV talk show hosts, or psychology professors? All these people have some authority—as cultural messengers, as professionals with advanced degrees, as people with significant life experience. But should you trust them?

    Let’s consider this example of anger management advice from a person with a master’s degree in psychology, several published books on anger management, a thriving workshop business, and his own website. He’s certainly an authority on the subject, right? Here is his advice:

Punch a pillow or a punching bag. Yell and curse and moan and holler. . . . If you are angry at a particular person, imagine his or her face on the pillow or punching bag, and vent your rage. . . . You are not hitting a person, you are hitting the ghost of that person . . . a ghost alive in you that must be exorcised in a concrete, physical way. (Lee, 1993, p. 96)

    Knowing what you know now, you probably do not trust John Lee’s advice. In fact, this is a clear example of how a self-proclaimed “expert” might be wrong.

Before taking the advice of authorities, ask yourself about the source of their ideas. Did the authority systematically and objectively compare different conditions, as a researcher would do?


Or maybe they have read the research and are interpreting it for you; they might be practitioners who are basing their conclusions on empirical evidence. In this respect, an authority with a scientific degree may be better able to accurately understand and interpret scientific evidence (Figure 2.8). If you know this is the case—in other words, if an authority refers to research evidence—their advice might be worthy of attention. However, authorities can also base their advice on their own experience or intuition, just like the rest of us. And they, too, might present only the studies that support their own side.

Keep in mind, too, that not all research is equally reliable. The research an expert uses to support his or her argument might have been conducted poorly. In the rest of this book, you will learn how to interrogate others' research and form conclusions about its quality. Also, the research someone cites to support an argument may not accurately and appropriately support that particular argument. In Chapter 3, you'll learn more about what kinds of research support different kinds of claims. Figure 2.9 shows a concept map illustrating the sources of information reviewed in this chapter. Conclusions based on research, outlined in black on the concept map, are the most likely to be correct.

    FIGURE 2.8 Which authority to believe? Jenny McCarthy (left), an actress and celebrity, claims that giving childhood vaccines later in life would prevent autism disorders. Dr. Paul Offit (right), a physician-scientist who has both reviewed and conducted scientific research on childhood vaccines, says that early vaccines save lives and that there is no link between vaccination and autism diagnosis.


[Figure 2.9 is a concept map of sources of information. Beliefs can be based on experience (no comparison group; has confounds), on intuition (good story, availability, present/present bias, confirmation bias, bias blind spot), on authority ("This, I believe . . ."; which could itself rest on the authority's intuition, the authority's personal experience, or the authority's research), or on research. Research appears in scientific sources, written by psychologists for psychologists (journal articles, both empirical and review; chapters in edited books; full-length books), and in other sources written by psychologists, journalists, or laypeople for a popular audience (wikis: think critically; magazine and newspaper articles: look for research; trade books: look for references).]

    FIGURE 2.9 A concept map showing sources of information. People’s beliefs can come from several sources. You should base your beliefs about psychological phenomena on research, rather than experience, intuition, or authority. Research can be found in a variety of sources, some more dependable than others. Ways of knowing that are mentioned in outlined boxes are more trustworthy.


    CHECK YOUR UNDERSTANDING

    1. When would it be sensible to accept the conclusions of authority figures? When might it not?

    1. See p. 40. When authorities base their conclusions on well-conducted research (rather than experience or intuition), it may be reasonable to accept them.

FINDING AND READING THE RESEARCH

In order to base your beliefs on empirical evidence rather than on experience, intuition, or authority, you will, of course, need to read about that research. But where do you find it? What if you wanted to read studies on venting anger? How would you locate them?

Consulting Scientific Sources

Psychological scientists usually publish their research in three kinds of sources. Most often, research results are published as articles in scholarly journals. In addition, psychologists may describe their research in single chapters within edited books. Some researchers also write full-length scholarly books.

    JOURNAL ARTICLES: PSYCHOLOGY’S MOST IMPORTANT SOURCE

Scientific journals come out monthly or quarterly, as magazines do. Unlike popular magazines, however, scientific journals usually do not have glossy, colorful covers or advertisements. You are most likely to find scientific journals in college or university libraries or in online academic databases, which are generally available through academic libraries. For example, the study by Bushman (2002) described earlier was published in the journal Personality and Social Psychology Bulletin.

    Journal articles are written for an audience of other psychological scientists and psychology students. They can be either empirical articles or review articles. Empirical journal articles report, for the first time, the results of an (empirical) research study. Empirical articles contain details about the study’s method, the statistical tests used, and the results of the study. Figure 2.10 is an example of an empirical journal article.

Review journal articles provide a summary of all the published studies that have been done in one research area. A review article by Anderson and his colleagues (2010), for example, summarizes 130 studies on the effects of playing violent video games on the aggressive behavior of children. Sometimes a review article uses a quantitative technique called meta-analysis, which combines the results of many studies and gives a number that summarizes the magnitude, or the effect size, of a relationship.

❯❯ For a full discussion of meta-analysis, see Chapter 14, pp. 433–437.


    Does Venting Anger Feed or Extinguish the Flame? Catharsis, Rumination, Distraction, Anger, and Aggressive Responding

    Brad J. Bushman Iowa State University

Does distraction or rumination work better to diffuse anger? Catharsis theory predicts that rumination works best, but empirical evidence is lacking. In this study, angered participants hit a punching bag and thought about the person who had angered them (rumination group) or thought about becoming physically fit (distraction group). After hitting the punching bag, they reported how angry they felt. Next, they were given the chance to administer loud blasts of noise to the person who had angered them. There also was a no punching bag control group. People in the rumination group felt angrier than did people in the distraction or control groups. People in the rumination group were also most aggressive, followed respectively by people in the distraction and control groups. Rumination increased rather than decreased anger and aggression. Doing nothing at all was more effective than venting anger. These results directly contradict catharsis theory.

The belief in the value of venting anger has become widespread in our culture. In movies, magazine articles, and even on billboards, people are encouraged to vent their anger and "blow off steam." For example, in the movie Analyze This, a psychiatrist (played by Billy Crystal) tells his New York gangster client (played by Robert De Niro), "You know what I do when I'm angry? I hit a pillow. Try that." The client promptly pulls out his gun, points it at the couch, and fires several bullets into the pillow. "Feel better?" asks the psychiatrist. "Yeah, I do," says the gunman. In a Vogue magazine article, female model Shalom concludes that boxing helps her release pent-up anger. She said,

I found myself looking forward to the chance to pound out the frustrations of the week against Carlos's (her trainer) mitts. Let's face it: A personal boxing trainer has advantages over a husband or lover. He won't look at you accusingly and say, "I don't know where this irritation is coming from." . . . Your boxing trainer knows it's in there. And he wants you to give it to him. ("Fighting Fit," 1993, p. 179)

    In a New York Times Magazine article about hate crimes, Andrew Sullivan writes, “Some expression of prejudice serves a useful purpose. It lets off steam; it allows natural tensions to express themselves incrementally; it can siphon off conflict through words, rather than actions” (Sullivan, 1999, p. 113). A large billboard in Missouri states, “Hit a Pillow, Hit a Wall, But Don’t Hit Your Kids!”

    Catharsis Theory

The theory of catharsis is one popular and authoritative statement that venting one's anger will produce a positive improvement in one's psychological state. The word catharsis comes from the Greek word katharsis, which literally translated means a cleansing or purging. According to catharsis theory, acting aggressively or even viewing aggression is an effective way to purge angry and aggressive feelings.

Sigmund Freud believed that repressed negative emotions could build up inside an individual and cause psychological symptoms, such as hysteria (nervous outbursts). Breuer and Freud (1893-1895/1955) proposed that the treatment of hysteria required the discharge of the emotional state previously associated with trauma. They claimed that for interpersonal traumas, such as

Author's Note: I would like to thank Remy Reinier for her help scanning photo IDs of students and photographs from health magazines. I also would like to thank Angelica Bonacci for her helpful comments on an early draft of this article. Correspondence concerning this article should be addressed to Brad J. Bushman, Department of Psychology, Iowa State University, Ames, IA 50011-3180; e-mail: bushman@iastate.edu.

    PSPB, Vol. 28 No. 6, June 2002 724-731 © 2002 by the Society for Personality and Social Psychology, Inc.

    724

    FIGURE 2.10 Bushman’s empirical article on catharsis. The first page is shown here, as it appeared in Personality and Social Psychology Bulletin. The inset shows how the article appears in an online search in that journal. Clicking “Full Text pdf” takes you to the article shown. (Source: Bushman, 2002.)

STRAIGHT FROM THE SOURCE


In the Anderson review (2010), the authors computed the average effect size across all the studies. This technique is valued by psychologists because it weighs each study proportionately and does not allow cherry-picking particular studies.
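
As a rough illustration of what it means to weigh each study proportionately, here is a minimal sketch of a weighted average effect size. The effect sizes and sample sizes below are invented for the example, not taken from Anderson et al. (2010), and real meta-analyses typically use more refined (inverse-variance) weights.

```python
# Hypothetical studies: (effect size, sample size)
studies = [(0.45, 40), (0.20, 250), (0.35, 120)]

# Weight each effect size by its study's sample size,
# so larger studies count more toward the summary number.
total_n = sum(n for _, n in studies)
weighted_mean = sum(d * n for d, n in studies) / total_n

unweighted_mean = sum(d for d, _ in studies) / len(studies)

print(f"Weighted average effect size: {weighted_mean:.2f}")   # 0.27
print(f"Unweighted (naive) average:   {unweighted_mean:.2f}")  # 0.33
```

Because every study enters the summary, weighted by its size, the result reflects the whole literature rather than a cherry-picked subset.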

Before being published in a journal, both empirical articles and review articles must be peer-reviewed (see Chapter 1). Both types are considered the most prestigious forms of publication because they have been rigorously peer-reviewed.

    CHAPTERS IN EDITED BOOKS

An edited book is a collection of chapters on a common topic; each chapter is written by a different contributor. For example, Michaela Riediger and Kathrin Klipker published a chapter entitled "Emotional Regulation in Adolescence" in an edited book, the Handbook of Emotion Regulation (2014). There are over 30 chapters, all written by different researchers. The editor, James Gross, invited all the other authors to contribute. Generally, a book chapter is not the first place a study is reported; instead, the scientist is summarizing a collection of research and explaining the theory behind it. Edited book chapters can therefore be a good place to find a summary of a set of research a particular psychologist has done. (In this sense, book chapters are similar to review articles in journals.) Chapters are not peer-reviewed as rigorously as empirical journal articles or review articles. However, the editor of the book is careful to invite only experts—researchers who are intimately familiar with the empirical evidence on a topic—to write the chapters. The audience for these chapters is usually other psychologists and psychology students (Figure 2.11).

    FULL-LENGTH BOOKS

In some other disciplines (such as anthropology, art history, or English), full-length books are a common way for scholars to publish their work. However, psychologists do not write many full-length scientific books for an audience of other psychologists. Those books that have been published are most likely to be found in academic libraries. (Psychologists may also write full-length books for a general audience, as discussed below.)

Finding Scientific Sources

You can find trustworthy, scientific sources on psychological topics by starting with the tools in your college or university's library. The library's reference staff can be extremely helpful in teaching you how to find appropriate articles or chapters.

    FIGURE 2.11 The variety of scientific sources. You can read about research in empirical journal articles, review journal articles, edited books, and full-length books.


Working on your own, you can use databases such as PsycINFO and Google Scholar to conduct searches.

    PsycINFO

One comprehensive tool for sorting through the vast number of psychological research articles is a search engine and database called PsycINFO; it is maintained and updated weekly. Doing a search in PsycINFO is like using Google, but instead of searching the Internet, it searches only sources in psychology, plus a few sources from related disciplines, including communication, marketing, and education. PsycINFO's database includes more than 2.5 million records, mostly peer-reviewed articles.

PsycINFO has many advantages. It can show you all the articles written by a single author (e.g., "Brad Bushman") or under a single keyword (e.g., "autism"). It tells you whether each source was peer-reviewed. One of the best features of PsycINFO is that it shows other articles that have cited each target article (listed under "Cited by") and other articles each target article has cited (listed under "References"). If you've found a great article for your project in PsycINFO, the "Cited by" and "References" lists can be helpful for finding more papers just like it.

The best way to learn to use PsycINFO is simply to try it yourself; alternatively, a reference librarian can show you the basic steps in a few minutes.

One disadvantage is that you cannot use PsycINFO unless your college or university library subscribes to it. Another challenge—true for any search—is translating your curiosity into the right keywords. Sometimes the search you run will give you too many results to sort through easily. Other times your search words won't yield the kinds of articles you were expecting to see. Table 2.5 presents some strategies for turning your questions into successful searches.

TABLE 2.5 Tips for Turning Your Question into a Successful Database Search

1. Find out how psychologists talk about your question. Use the Thesaurus tool in the PsycINFO search window to help you find the proper search term.
   • Example question: Do eating disorders happen more frequently in families that eat dinner together? Instead of "eating disorders," you may need to be more specific; the Thesaurus tool suggests "binge-eating disorder" or "binge eating." Instead of "eating dinner together," you may need to be broader; Thesaurus terms include "family environment" and "home environment."
   • Example question: What motivates people to study? Search terms to try: "achievement motivation," "academic achievement motivation," "academic self concept," "study habits," "homework," "learning strategies."
   • Example question: Is the Mozart effect real? Search terms to try: "Mozart-effect," "music," "performance," "cognitive processes," "reasoning."

2. An asterisk retrieves all related terms. Example: "adolescen*" searches for "adolescence," "adolescents," and "adolescent."

3. If you get too few hits, combine terms using "or" ("or" gives you more). Examples: "anorexia" or "bulimia" or "eating disorder"; "false memory" or "early memory."

4. If you get too many hits, restrict the search using "and" or "not." Examples: "anorexia" and "adolescen*"; "repressed memory" and "physical abuse"; "repressed memory" not "physical abuse."

5. Did you find a suitable article? Great! Find similar ones by looking through that article's References and by clicking "Cited by" to find other research that has used it.


    GOOGLE SCHOLAR

If you want to find empirical research but don't have access to PsycINFO, you can try the free tool Google Scholar. It works like the regular Google search engine, except the search results are only in the form of empirical journal articles and scholarly books. In addition, by visiting the User Profile for a particular scientist, you can see all of that person's publications. The User Profile list is updated automatically, so you can easily view each scientist's most recent work, as well as his or her most cited publications.

    One disadvantage of Google Scholar is that it doesn’t let you limit your search to specific fields (such as the abstract). In addition, it doesn’t categorize the articles it finds, for example, as peer-reviewed or not, whereas PsycINFO does. And while PsycINFO indexes only psychology articles, Google Scholar contains articles from all scholarly disciplines. It may take more time for you to sort through the articles it returns because the output of a Google Scholar search is less well organized.

    When you find a good source in Google Scholar, you might be able to immediately access a PDF file of the article for free. If not, then look up whether your university library offers it. You can also request a copy of the article through your college’s interlibrary loan office, or possibly by visiting the author’s university home page.

Reading the Research

Once you have found an empirical journal article or chapter, then what? You might wonder how to go about reading the material. At first glance, some journal articles contain an array of statistical symbols and unfamiliar terminology. Even the titles of journal articles and chapters can be intimidating. Take this one, for example: "Object Substitution Masking Interferes with Semantic Processing: Evidence from Event-Related Potentials" (Reiss & Hoffman, 2006). How is a student supposed to read this sort of thing? It helps to know what you will find in an article and to read with a purpose.

    COMPONENTS OF AN EMPIRICAL JOURNAL ARTICLE

    Most empirical journal articles (those that report the results of a study for the first time) are written in a standard format, as recommended by the Publication Manual of the American Psychological Association (APA, 2010). Most empirical journal articles include certain sections in the same order: abstract, introduction, method, results, discussion, and references. Each section contains a specific kind of information. (For more on empirical journal articles, see Presenting Results: APA-Style Reports at the end of this book.)

    Abstract. The abstract is a concise summary of the article, about 120 words long. It briefly describes the study’s hypotheses, method, and major results. When you are collecting articles for a project, the abstracts can help you quickly decide whether each article describes the kind of research you are looking for, or whether you should move on to the next article.


    Introduction. The introduction is the first section of regular text, and the first paragraphs typically explain the topic of the study. The middle paragraphs lay out the background for the research. What theory is being tested? What have past studies found? Why is the present study important? Pay attention to the final paragraph, which states the specific research questions, goals, or hypotheses for the current study.

Method. The Method section explains in detail how the researchers conducted their study. It usually contains subsections such as Participants, Materials, Procedure, and Apparatus. An ideal Method section gives enough detail that if you wanted to repeat the study, you could do so without having to ask the authors any questions.

Results. The Results section describes the quantitative and, as relevant, qualitative results of the study, including the statistical tests the authors used to analyze the data. It usually provides tables and figures that summarize key results. Although you may not understand all the statistics used in the article (especially early in your psychology education), you might still be able to understand the basic findings by looking at the tables and figures.

Discussion. The opening paragraph of the Discussion section generally summarizes the study's research question and methods and indicates how well the results of the study supported the hypotheses. Next, the authors usually discuss the study's importance: Perhaps their hypothesis was new, or the method they used was a creative and unusual way to test a familiar hypothesis, or the participants were unlike others who had been studied before. In addition, the authors may discuss alternative explanations for their data and pose interesting questions raised by the research.

    References. The References section contains a full bibliographic listing of all the sources the authors cited in writing their article, enabling interested readers to locate these studies. When you are conducting a literature search, reference lists are excellent places to look for additional articles on a given topic. Once you find one relevant article, the reference list for that article will contain a treasure trove of related work.

    READING WITH A PURPOSE: EMPIRICAL JOURNAL ARTICLES

Here's some surprising advice: Don't read every word of every article, from beginning to end. Instead, read with a purpose. In most cases, this means asking two questions as you read: (1) What is the argument? (2) What is the evidence to support the argument? The obvious first step toward answering these questions is to read the abstract, which provides an overview of the study. What should you read next?

Empirical articles are stories from the trenches of the theory-data cycle (see Figure 1.5 in Chapter 1). Therefore, an empirical article reports on data that are generated to test a hypothesis, and the hypothesis is framed as a test of a particular theory. After reading the abstract, you can skip to the end of the introduction to find the primary goals and hypotheses of the study.


After reading the goals and hypotheses, you can read the rest of the introduction to learn more about the theory that the hypotheses are testing. Another place to find information about the argument of the paper is the first paragraph of the Discussion section, where most authors summarize the key results of their study and state how well the results supported their hypotheses.

Once you have a sense of what the argument is, you can look for the evidence. In an empirical article, the evidence is contained in the Method and Results sections. What did the researchers do, and what results did they find? How well do these results support their argument (i.e., their hypotheses)?

    READING WITH A PURPOSE: CHAPTERS AND REVIEW ARTICLES

    While empirical journal articles use predetermined headings such as Method, Results, and Discussion, authors of chapters and review articles usually create headings that make sense for their particular topic. Therefore, a way to get an overview of a chapter or review article is by reading each heading.

    As you read these sources, again ask: What is the argument? What is the evi- dence? The argument will be the purpose of the chapter or review article—the author’s stance on the issue. In a review article or chapter, the argument often presents an entire theory (whereas an empirical journal article usually tests only one part of a theory). Here are some examples of arguments you might find in chapters or review articles:

    • Playing violent video games causes children to be more aggressive (Anderson et al., 2010).

    • While speed reading is possible, it comes at the cost of comprehension of the text (Rayner, Schotter, Masson, Potter, & Treiman, 2016).

    • “Prolonged exposure therapy” is effective for treating most people who suffer from posttraumatic stress disorder, though many therapists do not yet use this therapy with their clients (Foa, Gillihan, & Bryant, 2013).

    In a chapter or review article, the evidence is the research that the author reviews. How much previous research has been done? What have the results been? How strong are the results? What do we still need to know? With practice, you will get better at reading efficiently. You’ll learn to categorize what you read as argument or evidence, and you will be able to evaluate how well the evidence supports the argument.

Finding Research in Less Scholarly Places

Reading about research in its original form is the best way to get a thorough, accurate, and peer-reviewed report of scientific evidence. There are other sources for reading about psychological research, too, such as nonacademic books written for the general public, websites, and popular newspapers and magazines.


These can be good places to read about psychological research, as long as you choose and read your sources carefully.

    THE RETAIL BOOKSHELF

    If you browse through the psychology section in a bookstore, you will mostly find what are known as trade books about psychology, written for a general audience (Figure 2.12). Unlike the scientific sources we’ve covered, these books are written for people who do not have a psychology degree. They are written to help people, to inform, to entertain, and to make money for their authors.

The language in trade books is much more readable than the language in most journal articles. Trade books can also show how psychology applies to your everyday life, and in this way they can be useful. But how well do trade books reflect current research in psychology? Are they peer-reviewed? Do they contain the best research, or do they simply present an uncritical summary of common sense, intuition, or the author's own experience?

 
"Looking for a Similar Assignment? Get Expert Help at an Amazing Discount!"