Duped


  A number of other students, from high school students to PhD candidates, have contributed in various ways to the research reported here. These include (but are not limited to) Yasuhiro Daiku, Hillary Shulman, Allison Shaw, Doug Messer, Frankie Carey, Kelli Asada, Mikala Hughes, Dave DeAndrea, Chris Carpenter, Alex Davis, Darcy Dietrich, April Li, David Yuan, Tracie Greene, Eric Jickling, and the students of my Com 399 and UGS200H deception classes. Yasuhiro made many of the high-resolution figures for the book.

  Thanks also to the National Security Agency, the Baton Rouge District Attorney’s Office, and the US Customs and Border Protection office in Detroit for their involvement in the research. Dan Baxter was especially helpful. Big thanks to the National Science Foundation for the initial funding and to the FBI for additional financial support.

  Much of my own work summarized in this book originally appeared in journal articles over the past twenty-eight years. Numerous journal editors and journal reviewers have provided invaluable feedback over the years. Credit is given to specific journals in notes and the bibliography. Here, I express my appreciation and recognition to all the journals, editors, and reviewers who contributed to the ideas, presentation, and dissemination of TDT and research behind it.

  My friend Tom Hove provided valuable feedback on many of the ideas presented in the book. Dave DeAndrea has also read early drafts, provided wonderful feedback, and even assigned drafts in his classes. Torsten Reimer coined the terms “truth-default” and “trigger.” Mark Knapp, Michael Beatty, and Howie Giles have been very supportive of my work and my career. Harry David provided initial editing and proofreading.

  Dan Waterman is my editor at the University of Alabama Press. Dan has been terrifically supportive and helpful. He really got behind the project. I am very fortunate to have Dan as an editor. Thanks for everything, Dan!

  Credit also goes to former professors who taught me well: Buddy Wheeless, Jim McCroskey, G. R. Miller, Frank Boster, and Jack Hunter. Finally, credit also goes to four individuals who have especially influenced me through their wonderful scholarship: Paul Meehl, Gerd Gigerenzer, Bob Cialdini, and Dan Gilbert.

  TIM LEVINE

  BIRMINGHAM, ALABAMA

  List of Studies and Experiments

  Chapter Eight

  IMT Study One: The First IMT Study

  IMT Study Two: Replication and Inclusion of a Dichotomous Assessment

  IMT Study Three: Effects of Deception Severity

  IMT Study Four: Relationship among IMT Dimensions

  Chapter Nine

  TDT Study One: The Prevalence of Lying in America

  TDT Study Two: Diary Studies Reexamined

  TDT Study Three: A Replication with College Students

  TDT Study Four: Teens Lie a Lot

  TDT Study Five: The Prevalence of Lying in the UK

  Chapter Ten

  TDT Experiment Six: Selecting Truths and Lies

  TDT Experiment Seven: Generating Truths and Lies

  TDT Experiment Eight: Introducing the NSF Cheating Experiments

  TDT Study Nine: Toward a Pan-Cultural List of Deception Motives

  TDT Experiment Ten: Projected Motives One

  TDT Experiment Eleven: Projected Motives Two

  TDT Experiment Twelve: Projected Motives Three

  Chapter Eleven

  TDT Experiment Thirteen: Lovers Just Aren’t Leery

  TDT Experiment Fourteen: Rachel Kim and Suspicion Redux

  TDT Experiment Fifteen: David Clare and Evidence for the Truth-Default

  TDT Experiment Sixteen: David Clare and Evidence for the Truth-Default Two

  Chapter Twelve

  TDT Experiment Seventeen: The First Base-Rate Experiment

  TDT Experiment Eighteen: Accuracy Is a Predictable Linear Function of Base-Rates

  TDT Experiment Nineteen: The First Interactive Base-Rate Experiment

  TDT Experiment Twenty: The Second Interactive Base-Rate Experiment

  TDT Experiment Twenty-one: The Third Interactive Base-Rate Experiment

  TDT Experiment Twenty-two: The First Korean Base-Rate Replication

  TDT Experiment Twenty-three: The Second Korean Base-Rate Replication

  Chapter Thirteen

  TDT Experiment Twenty-four: Cues-Biased Lie Detection Training Lacks Efficacy (Bogus Training One)

  TDT Experiment Twenty-five: Bogus Training Two

  TDT Experiment Twenty-six: Lie to Me

  TDT Experiment Twenty-seven: Senders Vary Much More than Judges

  TDT Experiment Twenty-eight: Looking for a Transparent Liar

  TDT Experiment Twenty-nine: Gaming Accuracy by Stacking the Demeanor Deck

  TDT Experiment Thirty: Replicating Demeanor Effects across Different Types of Judges (Students)

  TDT Experiment Thirty-one: Replicating Demeanor Effects across Different Types of Judges (Professors)

  TDT Experiment Thirty-two: Replicating Demeanor Effects across Different Types of Judges (Korean Sample)

  TDT Experiment Thirty-three: Replicating Demeanor Effects across Different Types of Judges (Experts)

  TDT Study Thirty-four: Uncovering the BQ

  TDT Study Thirty-five: Uncovering the BQ Two

  TDT Experiment Thirty-six: A Full Cross-Validation of the BQ

  Chapter Fourteen

  TDT Study Thirty-seven: How People Really Detect Lies

  TDT Experiment Thirty-eight: A First Peek at Situational Familiarity

  TDT Experiment Thirty-nine: Content in Context (Students, Cheating)

  TDT Experiment Forty: Content in Context (Students, Mock Crime)

  TDT Experiment Forty-one: Content in Context (Students, Real Crime)

  TDT Experiment Forty-two: Content in Context (Experts, Cheating)

  TDT Experiment Forty-three: Content in Context (Experts, Mock Crime)

  TDT Experiment Forty-four: Content in Context (Experts, Real Crime)

  TDT Experiment Forty-five: A (Failed?) Questioning Style Experiment

  TDT Study Forty-six: Improved Accuracy with the Fourth Question Set

  TDT Study Forty-seven: Improved Accuracy with the Fourth Question Set Two

  TDT Study Forty-eight: Improved Accuracy with the Fourth Question Set Three

  TDT Experiment Forty-nine: Head-to-Head Comparisons

  TDT Experiment Fifty: Head-to-Head Comparisons Two

  TDT Experiment Fifty-one: Head-to-Head Comparisons Three

  TDT Experiment Fifty-two: Smashing the Accuracy Ceiling with Expert Questioning (Pete does the questioning)

  TDT Experiment Fifty-three: Smashing the Accuracy Ceiling with Expert Questioning (Students watch Pete’s Interviews)

  TDT Experiment Fifty-four: Smashing the Accuracy Ceiling with Expert Questioning (New Experts)

  TDT Experiment Fifty-five: Smashing the Accuracy Ceiling with Expert Questioning (Student Judges of Expert Interviews)

  PART I

  The Social Science of Deception

  1

  The Science of Deception

  ON THE MORNING OF JULY 19, 2012, I listened with interest as two former CIA agents, Philip Houston and Michael Floyd, were interviewed during an episode of The Diane Rehm Show on National Public Radio about their new book, Spy the Lie.¹ The book was a popular-press attempt to share the secrets of lie detection with the book-buying public. It seemed to me that some of what they were saying was right on target, while some of their other advice was pure bunk. What really caught my ear, however, was their response to one particular caller who asserted that we might not be able to detect lies very well. Some liars, this caller said, were really good at lying. The caller referenced Robert Hanssen, the FBI agent who spied against the United States for Russia and successfully evaded detection for more than twenty years. Didn’t Robert Hanssen exemplify our inability to catch some lies and liars?

  The authors replied that their approach to detecting lies was not based on scientific research but on anecdotes and personal experience. Experience had taught them that their approach was highly effective.

  Wow! I found this response really curious. First, the caller had not referenced academic research. The caller’s comments involved anecdote, not science. As we will see in chapter 3, much research shows that people are poor lie detectors. Science clearly contradicts some of these authors’ assertions. But why bring scientific research up at that time? Second, I did not expect the authors to volunteer information implying that their view might contradict scientific evidence, and to openly express their hopeful desire that listeners trust their anecdotes over science. Is that really persuasive? Presumably, they were on the radio to promote their book. To my ear, they had just undercut themselves. Third, while they might have had good reasons to believe that the lies they detected were valid, how could they have known how many lies they had missed? They had no way of knowing how often they had been suckered. The best lies are never detected. In the lab, researchers know what is truth and what is lie. In everyday life, we often cannot know what is truth and what is lie with 100 percent certainty. Sometimes we are wrong and never know we are wrong. This was the caller’s point, and it was a good one. As we will see in chapter 13, some liars are really good at telling convincing lies.

  Here is an observation from my own research that applies to using examples as evidence for what works in lie detection. From 2007 to 2010, I had funding from the National Science Foundation to create a collection of videotaped truths and lies for use in deception detection research. I ended up creating more than three hundred taped interviews during that time. (I have made more since then with funding from the FBI.) I have watched these three hundred interviews many times over the years. For every liar you can point to multiple things that seem to give the lie away. If you watch the tapes carefully, the clues are almost always there to see.

  There are, however, a couple of important catches. First, what gives away one liar is usually different from the signals that reveal the next liar. The signs seem to be unique to the particular lie, and the science discussed later in the book backs this up. Second, if you go through the tapes carefully, for every liar who seems to be exposed by a telltale sign or collection of clues, there are honest people who act the same way and do the same things. That is, most of the behaviors that seem to give away liars also serve to misclassify honest people.

  That honest people can appear deceptive is one of the many insights I have gained from watching so many tapes where the actual truth (called “ground truth” by researchers) is known for certain. If you know some statement is a lie, you can usually point to clues indicating deceit. That is because of hindsight.² Pointing out clues with hindsight is one thing. Using those same behaviors to correctly distinguish truthful from deceptive communication is quite another. Different liars do different things, and some honest people do those things too. When I watch tapes of truths and lies without knowing in advance which is which, whether some behavior indicates a lie is not entirely clear. I find that when I don’t know who is a liar beforehand, I miss some lies, and I falsely suspect some honest people of deceit. Many times, I am just not sure one way or the other. Cherry-picking examples is easy and makes for persuasive and appealing anecdotes. Cherry-picked examples informed by hindsight do not lead to useful knowledge and understanding that extend beyond the specific example.

  This makes me suspicious of knowledge by common sense, anecdote, and personal experience. I want scientifically defensible evidence. If what I think does not mesh with the data, we have to question what I think. This principle, by the way, is not only useful in assessing the quality of advice. It is also a good way to detect lies. There is much more on the use of evidence in lie detection in chapter 14.

  What the scientific research says (at least up through 2006; current findings are more nuanced) is that people are typically not very good at accurately distinguishing truths from lies. The research behind this conclusion is extensive and solid, and that will be covered in chapter 3.

  Nevertheless, we should not be too quick to dismiss professionals with expertise in interrogation and interviewing who believe there are ways to catch liars and detect lies. Just because their evidence is anecdotal does not mean that it is false or necessarily incorrect. In fact, my research described in this book is, in part, an effort to use science to reconcile research findings with practitioners’ experience. That is, rather than trying to debunk practitioners, as so many of my fellow academics try to do, I began designing experiments to explain the differences between successful interrogations and typical deception detection laboratory experiments. Much of the research, I have come to believe, tells us more about lie detection in the lab than in real life.³

  A more accurate scientific conclusion is that people were not very good at detecting lies in lie-detection experiments published prior to 2006. The research did not prove that lies can’t be detected! The research showed that lies cannot be accurately detected in the type of experiments that were used to study lie detection. Conclusions from research are always limited by how the research was done. In the case of research on the accuracy of deception detection, I have come to believe that this is a critical, game-changing point (see chapters 12, 13, and 14, and compare the research reviewed in chapter 14 to that reviewed in chapter 3).

  I have come to believe that many approaches to lie detection are ineffectual, especially those that involve what I call “cues.” Some approaches do have more promise. Improving accuracy involves understanding what works, what does not work, and why. Because I am a social scientist, anecdotes, yarns, and good stories are not going to cut it as evidence. Scientific evidence is required. Real-world observations are critical in generating the ideas that I research, but such observations are only the starting point. I prove my points with controlled experiments. I insist that my results replicate. I think my readers should expect this. And it is this insistence on scientific evidence that can be replicated that makes my approach better than the alternative approaches and theories out there.

  This, however, does not mean I have an aversion to good stories. I began the chapter with a story about listening to the radio one morning. Then there was a second story about my repeated viewing of the NSF deception tapes. Stories are great for explaining ideas, making ideas understandable, and generating research ideas. Stories are essential for making points interesting and engaging. I will tell plenty of stories throughout the book. I will also present hard data that are scientifically defensible and have passed the dual tests of replication and publication in peer-reviewed academic journals. At the end of the day, I am doing science, and this book is about a scientific approach to deception and deception detection.

  Speaking of stories, here is another, and while it is a digression, I think it addresses a question many readers may have at the outset. People tend to find deception interesting. People are naturally curious about lies and lie detection. And it’s not every day that people meet a deception detection researcher. Actually, there are not many of us around to meet. People who have sustained careers studying the social science of deception probably number fewer than two or three dozen worldwide. Anyway, when people find out that I study deception, one common question is how and why I got into deception research. This is a question that I have been asked too many times to count, and this is a good time and place to answer it.

  The truth is that I stumbled into deception research. I became a deception researcher largely out of serendipity. Remaining a deception researcher was opportunistic. Back in grade school, I was very interested in the physical sciences. Other little kids wanted to be policemen or firemen or astronauts, but I wanted to be a geologist when I grew up. That changed in junior high and high school. I developed a strong fascination with why people did things and with social dynamics. One of my nicknames in high school was Freud; but I wasn’t interested in psychological disorders. I was curious about normal, everyday social behavior. And I still am. What I call truth-default theory (TDT) is about deception in the lives of normal people in their everyday social interactions.
  When I was in high school, it was unclear whether I could go to college. I’m dyslexic. A psychologist told my parents it would be a waste of money to send me to college. I was sure to flunk out. Fortunately for me, my grades were good enough, and I scored well enough on the ACT test, to be admitted to all the colleges to which I applied. My parents agreed to give me a chance, as long as I selected a public, in-state university with low tuition. I chose Northern Arizona University.

  When I went off to college, I knew I would be a psychology major. As I learned more, I gravitated toward persuasion as a topic. I grew up the son of a real estate salesperson, and sales and social influence intrigued me. During my third year in college, I learned that persuasion was a topic of research, and when I went on to graduate school, that was the topic that drew me in. I switched from psychology to communication mostly for practical reasons. It was easier to get into top-rated graduate programs with full-ride funding in communication than in the more competitive field of psychology. I did both my MA thesis and my PhD thesis on the topic of persuasion. I teach classes on persuasion to this day. It was early in graduate school that I started picking up interpersonal-communication processes as a second area of focus.

  By the time I finished my first semester of graduate school, I pretty much knew my career choice was in academia and that I wanted to be a professor.⁴ It turned out that I had some talent in research and that I enjoyed teaching. I managed to get into the highly regarded PhD program at Michigan State University, where two leaders in persuasion (Gerry Miller and Frank Boster) were on the faculty. Miller’s health was in decline, and I ended up studying under Boster.