In a VoxEU eBook, Refet Gürkaynak and Cédric Tille collect the views of central bank and academic economists on DSGE models. In the introduction to the eBook, Gürkaynak and Tille summarize these views as follows:
… there is agreement on the place of DSGE models in policy analysis. All see these models as part of the policymaker tool kit, while understanding their limitations and perceiving a similar road ahead.
The “trouble with macroeconomics,” according to a working paper by Paul Romer posted on his website, lies in untrustworthy identification assumptions, particularly in the DSGE models used for policy analysis. Romer singles out calibration, assumptions about distribution functions, and strong priors as culprits.
Romer argues that
[b]eing a Bayesian means that your software never barfs
I agree with the harsh judgment by Lucas and Sargent (1979) that the large Keynesian macro models of the day relied on identifying assumptions that were not credible. The situation now is worse. Macro models make assumptions that are no more credible and far more opaque.
Romer also offers a meta-model of himself as a critic of “post-real models” and “facts with unknown truth value.”
Noah Smith wonders in a blog post why, if DSGE models are useful, the private sector does not use them for forecasting. He writes:
As far as I’m aware, private-sector firms don’t hire anyone to make DSGE models, implement DSGE models, or even scan the DSGE literature. There are a lot of firms that make macro bets in the finance industry – investment banks, macro hedge funds, bond funds. To my knowledge, none of these firms spends one thin dime on DSGE. I’ve called and emailed everyone I could think of who knows what financial-industry macroeconomists do, and they’re all unanimous – they’ve never heard of anyone in finance using a DSGE model.