INFORMS Annual Meeting - Gumball
11/17/2011 12:00 AM Posted by: Jeremy Walker, Decision Frameworks Advisor and Consultant

From Sunday, November 7 through Wednesday, November 10, 2010, we participated in the 2010 INFORMS Annual Meeting at the Conference Center in Austin, Texas. We had a booth, ran a pre-meeting seminar, and showcased our latest decision framing (DTrio) and decision tree (TreeTop) software at the booth and in a Tuesday morning demonstration.
During the conference, we ran another in a series of tests inspired by The Wisdom of Crowds, James Surowiecki's best-selling book. In it, he contends that we put too much emphasis on expert opinion, and that if you solicit estimates from a group of individuals independently and aggregate their responses, you may be very pleasantly surprised by the results.
This year's experiment featured an oddly shaped glass container standing at the corner of our booth, filled with red, blue, green and yellow gumballs, along with an unknown number of rubber squishy owls. Visitors were asked to guess the number of gumballs in the container. They were first asked for their P10 and P90 estimates; in other words, they should be 80% confident that the actual number of gumballs falls within that range. That should be pretty easy to do, right?
At the bottom of the ballot, they were asked for their best guess on the exact number of gumballs in the container. In an effort to reward the best estimator in this experiment, we offered an iPod to the individual with the closest guess.
So without another wasted breath, here are the results:
Number of estimates: 78
Range of estimates: 45-11075
Average of all estimates: 1914
Actual number of gumballs: 1371
P50 estimate of gumballs (calculated): 1350
Closest Best Estimate: 1400
Winner: Justin Mao-Jones
Figure A: Distribution of Gumball Estimates
(Shown in Decision Frameworks' new decision tree tool - TreeTop)
Let's take a look at the results. First, two people guessed that there were 1400 gumballs in the container. Justin was judged the winner because he handed in his ballot earliest. (We knew that the small print on the back of the ballot explaining how the winner would be determined would come in handy one day.) Upon hearing the news, he mentioned donating the prize to a worthy cause, which he named as himself. To the winner go the spoils! Good job.
Secondly, let's step back and look at what the data is telling us. We had a fairly wide range of estimates: 45 to 11,075. What is noteworthy is how much the EV (Expected Value) was skewed by the "fat tail" on the high side. There was an equal number of estimates above and below the closest answer, so those who thought they saw more gumballs really thought they saw lots more gumballs, thereby skewing the average to the upside. Does anyone have any evidence showing that when people are confronted with a large number of objects, they tend to disproportionately overestimate rather than underestimate the correct number?
As stated above, when you order the estimates from lowest to highest, Justin's estimate was, interestingly, at the exact mid-point: his was the 39th of 78 sequentially ordered estimates. His was the mid-point (median) estimate, and it was also the closest. Does this support Surowiecki's assertions about the wisdom of crowds? Though this is not the statistical average of all estimates, it is smack-dab in the middle of everyone's wisdom. A gumball for your thoughts on this little tidbit of information…
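The mean-versus-median gap above is a textbook effect of a fat right tail. As a minimal sketch (using made-up estimates, not the actual 78 ballots, which we only know through their summary statistics), a handful of very high guesses is enough to drag the mean well above the median:

```python
import statistics

# Hypothetical stand-in for the ballots; NOT the real contest data.
# Most guesses cluster near the truth, but a few extreme high guesses
# mimic the fat right tail seen in the contest.
estimates = [45, 600, 900, 1100, 1200, 1300, 1350, 1400, 1500, 1800,
             2500, 4000, 7000, 11075]

mean = statistics.mean(estimates)      # pulled upward by the extreme guesses
median = statistics.median(estimates)  # robust to them

print(f"mean   = {mean:.0f}")
print(f"median = {median:.0f}")
```

Dropping or adding a single extreme guess barely moves the median, while it can shift the mean by hundreds of gumballs, which is why the mid-point estimate landed so much closer to the truth than the average did.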
Here's another interesting finding from the data. If you remember, when you submitted your estimate you were asked to write down a bounded range with your P10 and P90 limits. When we look at the results, only 34.1% of you had a range wide enough to contain the actual number. This means that roughly 2 out of 3 people were not able to create a reasonable 80% confidence range around this uncertainty. This confirms what we see in our daily consulting and training courses: people seem to have an inherent blind spot when it comes to creating broad enough ranges. Perhaps because people are often rewarded for the exactness/precision of their answers, there is a perceived weakness in giving too broad a range as an answer. Alternatively, maybe gumballs are just too tricky to count.
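The calibration check behind that 34.1% figure is simple to state: a well-calibrated P10–P90 range should contain the truth about 80% of the time, so you count how many ranges actually bracket the answer. A sketch with hypothetical ranges (again, not the real ballots):

```python
# Illustrative P10-P90 calibration check with made-up ranges.
actual = 1371  # the true gumball count from the contest

ranges = [          # (p10, p90) pairs -- hypothetical examples
    (500, 1000),    # too low and too narrow: misses
    (1000, 1300),   # too narrow: misses
    (800, 2000),    # wide enough: contains 1371
    (1200, 1500),   # contains 1371
    (2000, 5000),   # anchored too high: misses
    (100, 10000),   # very wide: contains 1371
]

hits = sum(1 for p10, p90 in ranges if p10 <= actual <= p90)
coverage = hits / len(ranges)
print(f"coverage = {coverage:.0%}")  # well short of the 80% target
```

In the contest, this coverage came out at 34.1% instead of the 80% it should have been, which is the signature of overconfident (too-narrow) ranges.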
Regardless, what it shows is that proper care needs to be taken when it comes around to gathering range estimates. Do your best to de-bias and create more realistic extremes. There are several techniques we use to do this including structured interviews with expert interview templates. We can teach you if you are unfamiliar with these tools and techniques.
There is much more that bigger brains than mine might glean from this data. If you would like a copy of the raw Excel file, shoot me a note and I'll send you a copy.
Thanks to everyone who participated in this contest. We hope to see many of you before the next INFORMS Conference. And remember: great decision making is not divined - it is derived!
All Content © Decision Frameworks, L.P.