don't just agree to disagree ...
"Well, I believe there is a very critical statistical difference here which, before we get very far down the road, I think requires resolution. You three gentlemen are talking about different sets of reality, and you’re using the same basic data system."
That was then-Fed-Chairman Greenspan after a presentation in June 2005 on house prices and monetary policy. I re-read that transcript whenever I need a dose of forecaster humility ... all the pieces of the puzzle were there, but it did not come together.
What caught my eye today was Greenspan's complaint about "different sets of reality." Feels so 2016. Maybe you've seen Nate Silver (538) and David Rothschild (Predictwise) duke it out over election forecasts, or Justin Wolfers and Adam Ozimek reading the tea leaves of the stock market differently. Or what about Paul Romer and his critique of post-real macro? There are less dorky examples out there too ... but how do "different sets of reality" even exist when we all have 24/7 access to the same facts?
one reality in most macro models
Say what you will about mainstream macro models, but most dodge the tricky Facebook exchanges with friends and relatives or the ugliness in Twitter mentions ... since the models assume away disagreement. Nice. If all you have is a forward-looking, sensible, all-knowing representative agent ... well, you end up with one reality. If not, check your math. No disagreements, no noisy data, no parameter uncertainty. Of course, that model is not reality, but it can be a useful benchmark and give us clues as to why we're stuck arguing.
Forward-looking, sensible, and all-knowing is a tall order. Plenty of research has found gaps in the first two criteria, whether it's signs of myopia, hyperbolic discounting, cognitive biases, or irrationality. And yet, it's the third criterion ... imperfect information ... that feels most relevant when thinking about our disagreements. (The representative agent is something of an aggregation trick, so I won't attack him directly, but he doesn't help our intuition on why we individually can hold onto different realities.)
I am not always fond of the tone in 'what's wrong with macro' posts. But I do enjoy how these debates kick up counterexamples of macro research trying to relax the usual assumptions. I wrote earlier about my guarded optimism with heterogeneous agent models, and I'm always on the lookout for good empirical macro-consumption work. (As an aside, check out how many rejections of the permanent-income hypothesis are catalogued in Appendix Table 1 of this forthcoming macro handbook chapter. Sometimes it makes me wonder what it would take to change our models.)
imperfect information and disagreement
Most recently, I have been reading about macro models with imperfect information. There are many flavors and vintages of these models. The two main versions are sticky information (some costs to acquire info), such as Mankiw and Reis (2002), and noisy information (true state of the world unknown) such as Woodford (2001) or Sims (2003). It's not hard to see how disagreements arise when we have different information sets. And given the costs of getting information and/or interpreting it ... the disagreements are entirely sensible. I highly recommend two empirical papers testing these models ... Mankiw, Reis, and Wolfers (2004) and Coibion and Gorodnichenko (2015). Not only does this research find disagreement across professional forecasters and households alike in their expectations ... but the degree of disagreement changes over time. The imperfectness of our information, at least in terms of the economy, appears to be related to its underlying conditions. (State dependence in informational rigidities, if you prefer jargon.) So on the bright side, times with lots of disagreement may be exactly when we have lots to learn.
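To build intuition for how sticky information generates disagreement, here is a toy simulation in the spirit of Mankiw and Reis (2002). It is my own illustrative sketch, not code from any of the papers above, and the parameter values (persistence, update probability, shock size) are made up for the example. Each period only a fraction of forecasters refresh their information; the rest forecast off stale data, so a big shock makes the cross-section of forecasts fan out.

```python
import random

random.seed(0)

RHO = 0.9       # AR(1) persistence of "inflation" (assumed value)
SIGMA = 0.5     # std dev of ordinary shocks (assumed value)
LAMBDA = 0.25   # per-period chance an agent refreshes their information
N_AGENTS = 500
T = 120

pi = 0.0                      # current inflation
last_info = [0.0] * N_AGENTS  # inflation level each agent last observed
staleness = [0] * N_AGENTS    # periods since each agent last updated

dispersion = []  # cross-sectional std dev of forecasts, each period
for t in range(T):
    # inflation follows an AR(1); a one-time large shock hits at t == 60
    shock = 5.0 if t == 60 else random.gauss(0.0, SIGMA)
    pi = RHO * pi + shock

    forecasts = []
    for i in range(N_AGENTS):
        if random.random() < LAMBDA:      # agent pays the cost and updates
            last_info[i], staleness[i] = pi, 0
        else:
            staleness[i] += 1
        # best guess of current inflation given stale info: iterate the AR(1)
        forecasts.append(RHO ** staleness[i] * last_info[i])

    mean_f = sum(forecasts) / N_AGENTS
    dispersion.append(
        (sum((f - mean_f) ** 2 for f in forecasts) / N_AGENTS) ** 0.5
    )

print(f"avg disagreement before the shock: {sum(dispersion[40:60]) / 20:.2f}")
print(f"avg disagreement after the shock:  {sum(dispersion[61:81]) / 20:.2f}")
```

Disagreement spikes right after the shock and then fades as more agents update ... a crude version of the state dependence those empirical papers document: dispersion is not constant, it moves with underlying conditions.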
On the less bright side, that learning can be painful. The challenge of inference is often underappreciated. I love data, moar data, but data alone does not tell us how the world works or where it is headed. Plus, it takes effort to gather data, understand its flaws, combine it with other data, and interpret its meaning. And those costs only rise when the information runs against your world view, your model of the world, your identity. So we do our best to balance the costs and benefits of new information. Surveys, online recommendations, search tools, and markets can help with the filtering. But even when you have many, many people working hard on a question, as with the Fed and the housing market, inference and judgment are required ... we are not all-knowing.
don't just agree to disagree ... learn
Disagreement should push us to think harder and seek out more information. It's easy to imagine a less productive discussion of house prices in mid-2005 ... one in which everyone agreed it was going to be a-okay. The actual discussion could have been much better too, but at least different viewpoints were considered and the groundwork was laid for more learning. In contrast, it's not hard in 2016 to tailor your news feed to your reality (especially when algorithms quietly do it for you). And too often disagreement is explained away as a character flaw, the other person's. That's unfortunate, and it doesn't get us any closer to the roots of our disagreement and to learning.
Addendum: Noah Smith has a new post with more details on tests of imperfect information models. One can quibble with Noah's rational expectations framing of the results, see Twitter convo, but his is a good summary of some very cool research.
**Opinions here are mine and should not be attributed to anyone with whom I work.**