WEBVTT

1
00:00:09.437 --> 00:00:13.205
So far, we have talked about
the importance of understanding absolute

2
00:00:13.205 --> 00:00:16.760
levels of rewards and
what rewards people value.

3
00:00:16.760 --> 00:00:20.730
We also discussed the importance of
understanding relative rewards and

4
00:00:20.730 --> 00:00:23.680
the notion of equity, where we
socially compare ourselves to our peers.

5
00:00:23.680 --> 00:00:29.850
I would like to expand our discussion by
talking about schedules of reinforcement,

6
00:00:29.850 --> 00:00:33.719
which is when and
how a given reward is distributed.

7
00:00:35.490 --> 00:00:37.020
Consider the following problem.

8
00:00:37.020 --> 00:00:38.910
You're trying to motivate your
daughter to do her homework.

9
00:00:40.290 --> 00:00:43.730
Which schedule of reinforcement would most
effectively motivate your child to do her

10
00:00:43.730 --> 00:00:46.330
homework and
keep doing her homework in the future?

11
00:00:47.510 --> 00:00:50.940
Option A, you give your daughter
a piece of her favorite candy after

12
00:00:50.940 --> 00:00:53.110
every 20 minutes of studying.

13
00:00:53.110 --> 00:00:57.090
Option B, you give your daughter three
pieces of her favorite candy after sixty

14
00:00:57.090 --> 00:00:58.710
minutes of studying.

15
00:00:58.710 --> 00:01:02.110
Option C, you give your daughter two
pieces of candy after the first thirty

16
00:01:02.110 --> 00:01:06.060
minutes of studying,
no candy after the next thirty minutes and

17
00:01:06.060 --> 00:01:08.889
then one piece of candy after
every fifteen minutes of studying.

18
00:01:10.720 --> 00:01:11.367
What would you choose?

19
00:01:14.067 --> 00:01:18.110
Now let's assume that your daughter's
willing to study for about two hours.

20
00:01:18.110 --> 00:01:21.890
So the absolute level of reward
distributed is exactly the same.

21
00:01:23.270 --> 00:01:26.110
What varies is when and how it's given out.

22
00:01:27.250 --> 00:01:30.080
So we'll come back to this
question shortly, but

23
00:01:30.080 --> 00:01:34.700
I would like to first of all say that the
most prevalent schedules of reinforcement

24
00:01:34.700 --> 00:01:38.050
in contemporary organizations
are fixed interval and fixed ratio.

25
00:01:39.150 --> 00:01:43.380
By fixed interval I mean you receive
a reward after a fixed time interval

26
00:01:43.380 --> 00:01:45.448
such as receiving a paycheck at the end
of the month or biweekly.

27
00:01:45.448 --> 00:01:50.570
A fixed ratio is when you receive a reward
after a fixed number of responses,

28
00:01:50.570 --> 00:01:52.830
such as you get paid after
every five cars sold.

29
00:01:54.850 --> 00:01:58.200
And even though these are the most
prevalent schedules of reinforcement,

30
00:01:58.200 --> 00:01:59.755
they're not necessarily
the most effective.

31
00:01:59.755 --> 00:02:04.764
In the words of B.F.
Skinner, no one works on Monday morning

32
00:02:04.764 --> 00:02:09.420
because he is reinforced by
a paycheck on Friday afternoon.

33
00:02:09.420 --> 00:02:13.326
I'm gonna show you a video that will
give us insight into a vastly different

34
00:02:13.326 --> 00:02:14.880
schedule of reinforcement.

35
00:02:15.970 --> 00:02:16.514
Take a look.

36
00:02:24.702 --> 00:02:26.782
One of the reasons slot machines are so

37
00:02:26.782 --> 00:02:31.540
addictive is that they're based on
a variable schedule of reinforcement.

38
00:02:31.540 --> 00:02:35.320
Specifically, a variable ratio
schedule of reinforcement.

39
00:02:35.320 --> 00:02:38.610
What I mean by this is that
the probability of winning is constant.

40
00:02:38.610 --> 00:02:41.350
But the number of lever presses
needed to win is variable.

41
00:02:43.000 --> 00:02:46.430
So again, the variable ratio
schedule of reinforcement

42
00:02:46.430 --> 00:02:50.170
is when the number of units produced
to receive a reward varies.

43
00:02:50.170 --> 00:02:52.340
An example would be a lottery for
your employees.

44
00:02:52.340 --> 00:02:53.980
Or introducing probabilistic rewards.

45
00:02:53.980 --> 00:02:55.450
I'll give an example of those shortly.

46
00:02:57.000 --> 00:03:00.340
There's also the variable interval
schedule of reinforcement,

47
00:03:00.340 --> 00:03:04.020
where you receive a reward after time
intervals of varying length.

48
00:03:04.020 --> 00:03:05.980
An example would be receiving
praise only now and

49
00:03:05.980 --> 00:03:07.980
then, a surprise inspection, or a pop quiz.

50
00:03:09.510 --> 00:03:11.830
So what we know from
research is the following.

51
00:03:11.830 --> 00:03:15.090
First of all,
ratio reinforcement schedules

52
00:03:15.090 --> 00:03:18.100
typically outperform interval
reinforcement schedules.

53
00:03:20.660 --> 00:03:25.220
And secondly, variable reinforcement
schedules typically outperform their

54
00:03:25.220 --> 00:03:26.760
fixed counterparts.

55
00:03:26.760 --> 00:03:32.100
So, variable interval schedule
outperforms fixed interval schedule.

56
00:03:32.100 --> 00:03:35.370
The variable ratio schedule
outperforms the fixed ratio schedule.

57
00:03:36.940 --> 00:03:40.790
When I say outperforms, I mean it leads
to higher levels of motivation, engagement,

58
00:03:40.790 --> 00:03:41.550
and performance.

59
00:03:41.550 --> 00:03:47.190
Let me give you an example of a study
that directly compares the performance

60
00:03:47.190 --> 00:03:52.190
effects of a fixed ratio reinforcement
schedule, relative to a variable ratio.

61
00:03:53.940 --> 00:03:56.990
So in this study participants
were asked to do a simple task,

62
00:03:56.990 --> 00:03:58.840
which was grading exams.

63
00:03:58.840 --> 00:04:04.591
And for the first week all they received
was fixed compensation, so $1.50 per hour.

64
00:04:05.700 --> 00:04:10.430
In the second week, that's where it gets
interesting, bonuses were introduced.

65
00:04:10.430 --> 00:04:14.445
For the first group
the bonus was as follows,

66
00:04:14.445 --> 00:04:19.840
$0.50 for each exam sheet graded if
you correctly guess a coin flip.

67
00:04:19.840 --> 00:04:21.930
So you grade an exam sheet,
you submit it,

68
00:04:21.930 --> 00:04:23.960
and you have to guess
a coin flip correctly.

69
00:04:23.960 --> 00:04:25.850
If you guess it correctly, you get $0.50.

70
00:04:25.850 --> 00:04:27.800
If you don't guess it,
you receive nothing.

71
00:04:29.670 --> 00:04:33.653
In the second condition, the second group,
you get $0.25 guaranteed for

72
00:04:33.653 --> 00:04:35.710
every exam sheet graded.

73
00:04:35.710 --> 00:04:36.720
No coin flip involved.

74
00:04:37.840 --> 00:04:40.340
So in the first group, you can see
this is a probabilistic reward.

75
00:04:40.340 --> 00:04:46.900
Now the first thing to recognize here,
is that the absolute levels of reward,

76
00:04:46.900 --> 00:04:50.350
of compensation, are very much
comparable across the two groups,

77
00:04:50.350 --> 00:04:53.500
assuming that you can guess coin
flips with about 50% probability.

78
00:04:53.500 --> 00:04:55.910
And that's exactly what happened here.

79
00:04:55.910 --> 00:04:58.740
So most people guessed coin flips
with about a 50% probability.

80
00:04:59.980 --> 00:05:04.880
The only thing that's different
across the two bonuses, is that

81
00:05:04.880 --> 00:05:09.390
we have a variable ratio reinforcement
schedule in the first condition, and

82
00:05:09.390 --> 00:05:11.150
fixed ratio in the second condition.

83
00:05:12.850 --> 00:05:17.620
I call the first condition a variable
ratio reinforcement schedule,

84
00:05:17.620 --> 00:05:21.910
because I can guess the coin flip
correctly on the first exam sheet graded,

85
00:05:21.910 --> 00:05:25.400
then miss it on the second and third, then
guess it again correctly on the fourth,

86
00:05:25.400 --> 00:05:28.490
and miss it again on the fifth, and
guess it correctly on the sixth.

87
00:05:28.490 --> 00:05:35.150
So the rewards were coming after
different numbers of units graded.

88
00:05:35.150 --> 00:05:36.790
Let's look at the performance results.

89
00:05:36.790 --> 00:05:41.840
So what this graph shows on the y axis,
is the increase

90
00:05:41.840 --> 00:05:46.520
in the number of exams graded per day
following the introduction of bonuses.

91
00:05:48.190 --> 00:05:51.330
And as you can see, for
the fixed-ratio reinforcement schedule,

92
00:05:51.330 --> 00:05:56.320
where people had $0.25 per exam sheet
guaranteed, no probabilities involved,

93
00:05:56.320 --> 00:05:57.720
no coin flip, no uncertainty.

94
00:05:58.780 --> 00:06:02.158
The productivity went up by about 36.5%.

95
00:06:02.158 --> 00:06:07.080
And in the first condition, for
the first group, which had the variable

96
00:06:07.080 --> 00:06:12.224
ratio schedule of reinforcement,
the productivity went up by 44.8%.

97
00:06:14.690 --> 00:06:17.380
So, you can see that the schedule
of reinforcement matters.

98
00:06:20.250 --> 00:06:26.170
A variable ratio schedule of reinforcement
can increase productivity and engagement.

99
00:06:26.170 --> 00:06:26.820
So keep that in mind.

100
00:06:26.820 --> 00:06:28.890
Let me give you another example.

101
00:06:28.890 --> 00:06:31.360
New York Life Insurance.

102
00:06:31.360 --> 00:06:32.940
What they started is this:

103
00:06:32.940 --> 00:06:37.290
employees with perfect attendance
are entered into a lottery system.

104
00:06:37.290 --> 00:06:40.380
The odds of winning are extremely low,
and people compete for

105
00:06:40.380 --> 00:06:44.530
such prizes as small cash prizes or
extra days of vacation.

106
00:06:44.530 --> 00:06:47.770
But what they found was a stunning
effect: it reduced absenteeism

107
00:06:47.770 --> 00:06:50.070
throughout the entire organization by 21%.

108
00:06:50.070 --> 00:06:55.435
[SOUND] How many of you have
just looked at your phones?

109
00:06:55.435 --> 00:07:00.120
Now our phones are a great example
of a variable reinforcement schedule.

110
00:07:00.120 --> 00:07:03.510
And just like with gambling,
we sometimes get addicted to our phones and

111
00:07:03.510 --> 00:07:06.630
smartwatches, and
hear these phantom buzzes and rings.

112
00:07:08.530 --> 00:07:11.710
Now coming back to the question I posed
for you at the beginning of the session.

113
00:07:12.840 --> 00:07:16.430
I'm sure you recognize by
now that options A and

114
00:07:16.430 --> 00:07:20.950
B are fixed interval
reinforcement schedules.

115
00:07:20.950 --> 00:07:24.800
Option C is a variable interval
reinforcement schedule.

116
00:07:24.800 --> 00:07:28.350
And so, out of these three choices,
you might consider giving option C a shot.

117
00:07:28.350 --> 00:07:32.530
Because on average, it tends to outperform
fixed interval reinforcement schedules.

118
00:07:33.720 --> 00:07:36.800
Now if you want to reward your daughter
for the work actually done and not for

119
00:07:36.800 --> 00:07:39.900
the time spent studying,
think of the folly

120
00:07:39.900 --> 00:07:43.210
of rewarding one thing while expecting
something else, which we just discussed.

121
00:07:43.210 --> 00:07:47.410
You may consider a variable-ratio
reinforcement schedule.

122
00:07:47.410 --> 00:07:51.280
That would look like giving her a piece
of candy for the first problem solved,

123
00:07:51.280 --> 00:07:54.930
another piece of candy for the second
problem solved, no piece of candy for

124
00:07:54.930 --> 00:07:57.000
the third, maybe four pieces of candy for
the fourth.

125
00:08:00.560 --> 00:08:04.470
So, to reflect on the key insights
from this discussion so far.

126
00:08:05.620 --> 00:08:09.370
I think the most important thing to
recognize is that in addition to attending

127
00:08:09.370 --> 00:08:13.960
to absolute levels of rewards, and
what rewards people value, and

128
00:08:13.960 --> 00:08:18.370
to relative levels of rewards, as we
discussed in the equity conversation.

129
00:08:19.450 --> 00:08:22.990
It's really important to understand that
the schedule of reinforcement, when and

130
00:08:22.990 --> 00:08:25.850
how a given reward is
distributed matters greatly.

131
00:08:26.880 --> 00:08:32.480
What we know from research is that ratio
schedules typically are more effective

132
00:08:32.480 --> 00:08:33.970
than interval schedules.

133
00:08:33.970 --> 00:08:37.420
But what's even more insightful
is that fixed interval and

134
00:08:37.420 --> 00:08:42.290
fixed ratio schedules, even though they're
the most prevalent in organizations,

135
00:08:42.290 --> 00:08:44.490
are not necessarily the most effective.

136
00:08:44.490 --> 00:08:49.300
Consider using a variable interval over
a fixed interval reinforcement schedule.

137
00:08:49.300 --> 00:08:51.960
And a variable ratio over a fixed
ratio reinforcement schedule.

138
00:08:53.570 --> 00:08:56.470
And also think about the fact
that incentives are pervasive.

139
00:08:58.330 --> 00:09:02.260
Think about how you can use incentives
to improve not just your work life, but

140
00:09:02.260 --> 00:09:03.240
also life outside of work.