So, as we usually do, we decided to run a simple experiment. And here’s how it went. If you were in the experiment, I would pass you a sheet of paper with 20 simple math problems that everybody could solve, but I wouldn’t give you enough time. When the five minutes were over, I would say, “Pass me the sheets of paper, and I’ll pay you a dollar per question.” People did this. On average, people solved four problems, so I would pay them four dollars for the task.
Other people we tempted to cheat. I would pass them their sheet of paper. When the five minutes were over, I would say, “Please shred the piece of paper, put the little pieces in your pocket or in your backpack, and tell me how many questions you got correct.” People now solved seven questions on average.
Now, it wasn’t as if there were a few bad apples, a few people who cheated a lot. Instead, what we saw was a lot of people cheating a little bit.
Now, in economic theory, cheating is a very simple cost-benefit analysis. You ask: what’s the probability of being caught? How much do I stand to gain from cheating? And how much punishment would I get if I’m caught? You weigh these options out, you do the simple cost-benefit analysis, and you decide whether it’s worthwhile to commit the crime or not. So, we tried to test this.
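The rational model just described boils down to a simple expected-value calculation. Here is a minimal sketch of it; the function names and the numbers plugged in are illustrative assumptions, not values from the experiment.

```python
# A minimal sketch of the rational cost-benefit model of cheating:
# a rational agent cheats whenever the expected payoff is positive.
# All numbers below are illustrative, not data from the study.

def expected_payoff(gain, p_caught, punishment):
    """Expected value of cheating: keep the gain if not caught,
    pay the punishment if caught."""
    return (1 - p_caught) * gain - p_caught * punishment

def should_cheat(gain, p_caught, punishment):
    return expected_payoff(gain, p_caught, punishment) > 0

# Low chance of being caught: EV = 0.9*10 - 0.1*50 = 4, so cheat.
print(should_cheat(gain=10, p_caught=0.1, punishment=50))  # True

# High chance of being caught: EV = 0.5*10 - 0.5*50 = -20, so don't.
print(should_cheat(gain=10, p_caught=0.5, punishment=50))  # False
```

On this model, raising the gain or lowering the probability of being caught should increase cheating, which is exactly the prediction the experiments below fail to confirm.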
For some people, we varied how much money they could get away with — how much money they could steal. We paid them $0.10 per correct question, $0.50, $1, $5, $10 per correct question.
You would expect that as the amount of money on the table increases, people would cheat more, but in fact that wasn’t the case. We still got a lot of people cheating by a little bit.
What about the probability of being caught? Some people shredded half the sheet of paper, so there was some evidence left. Some people shredded the whole sheet of paper. Some people shredded everything, went out of the room, and paid themselves from the bowl of money that had over $100.
You would expect that as the probability of being caught goes down, people would cheat more, but again, this was not the case. Again, a lot of people cheated just a little bit, and they were insensitive to these economic incentives.
So we said, “If people are not sensitive to the economic rational theory explanations, to these forces, what could be going on?” And we thought maybe what is happening is that there are two forces. On the one hand, we all want to look at ourselves in the mirror and feel good about ourselves, so we don’t want to cheat. On the other hand, we can cheat a little bit and still feel good about ourselves. So, maybe what is happening is that there’s a level of cheating we can’t go over, but we can still benefit from cheating at a low degree, as long as it doesn’t change our impression of ourselves. We call this the personal fudge factor.
Now, how would you test the personal fudge factor? Initially we asked, what can we do to shrink the fudge factor? So, we brought people into the lab, and we said, “We have two tasks for you today.” First, we asked half the people to recall either 10 books they read in high school or the Ten Commandments, and then we tempted them with cheating.
It turns out that the people who tried to recall the Ten Commandments, given the opportunity to cheat, did not cheat at all, even though in our sample nobody could recall all ten. And it wasn’t that the more religious people, the ones who remembered more of the Commandments, cheated less, while the less religious people, the ones who could remember almost no Commandments, cheated more.
The moment people thought about trying to recall the Ten Commandments, they stopped cheating. In fact, even when we gave self-declared atheists the task of swearing on the Bible and then gave them a chance to cheat, they didn’t cheat at all.
Now, the Ten Commandments are hard to bring into the education system, so we said, “Why don’t we get people to sign an honor code?” So, we got people to sign, “I understand that this short survey falls under the MIT Honor Code.” Then they shredded it. No cheating whatsoever. And this is particularly interesting, because MIT doesn’t have an honor code. So, all this was about decreasing the fudge factor.
What about increasing the fudge factor? In the first experiment, I walked around MIT and distributed six-packs of Coke in refrigerators, the common refrigerators for the undergrads. Then I came back to measure what we technically call the half-lifetime of Coke: how long does it last in the refrigerators? As you might expect, it didn’t last very long; people took it.
In contrast, I took plates with six one-dollar bills each, and I left those plates in the same refrigerators. Not a single bill ever disappeared.
Now, this is not a good social science experiment, so to do it better I ran the same experiment I described to you before. A third of the people we passed the sheet to gave it back to us. A third of the people we passed it to shredded it, came to us, and said, “Mr. Experimenter, I solved X problems. Give me X dollars.” The final third, when they finished shredding the piece of paper, came to us and said, “Mr. Experimenter, I solved X problems. Give me X tokens.” We did not pay them with dollars; we paid them with something else. They then took that something else, walked 12 feet to the side, and exchanged it for dollars.