I'm a student of unusual markets and forecasts, and this tournament-based forecasting approach is described in some detail in a recent Edge article. Quite an eminent group of scientists involved in the project provide commentary. Note the mention of the Good Judgment Project, and the interesting connection to crowdsourcing methods.
" .... When IARPA originally launched this project, they thought that beating the unweighted average of the crowd by 20 percent would be an ambitious goal in year one, 30 percent in year two, 40 percent in year three, and 50 percent in year four. The Good Judgment Project, for reasons that are interesting, was able to beat IARPA's fourth year benchmark in the first year and in all subsequent years. For reasons that are also maybe a little less interesting, other teams were not. I say the reasons are less interesting, I don’t think it was due to them not having the right research expertise. There were issues of mismanagement, of how they went about it. We had a way better project manager.
Putting that to the side, the Good Judgment Project was able to do far better than IARPA or any of the other researchers who were consulted on the design of the project thought possible. We were able to knock out some pretty formidable competitors. Slide twenty-nine tells you what the four big drivers of performance were in the tournament: getting the right people on the bus, the benefits of interaction, the benefits of training, and the benefits of that strange algorithm that I call the "extremizing algorithm."
... "