
The SNB’s Financial Result, Currency Reserves, and Distribution Reserve

How are SNB profits and losses distributed and what issues are debated?

Annual Result Funds Two “Reserves”

The annual result (Jahresergebnis) of the Swiss National Bank (SNB) is split into two parts. The first part funds “provisions for currency reserves” (Zuweisungen an Rückstellungen für Währungsreserven), which are meant to provide a buffer against future losses on the SNB’s asset positions. The second part funds current and future profit distributions to the Confederation and cantons (Ausschüttungen an Bund und Kantone) and dividend payments to SNB shareholders. The ad hoc announcement regarding the SNB’s 2021 annual result (English, German) provides an overview.

Allocation Rules

The SNB decides how the annual result is split, subject to some guidance in the National Bank Law (NBG, English, German, e.g., Art. 30 (1) and Art. 42 (2d) NBG). In practice the SNB follows a mechanical rule to determine the provisions for currency reserves. This rule operates “on the basis of double the average nominal GDP growth rate over the previous five years” or “10% of the provisions at the end of the previous year,” whichever yields the higher provisions (source).
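For concreteness, the rule can be sketched in a few lines of code (my own illustration with made-up numbers; the actual SNB calculation may differ in its details):

```python
# Sketch of the mechanical provisioning rule described above: provisions grow
# either at twice the average nominal GDP growth rate of the previous five years
# or by 10%, whichever yields the higher provisions. Inputs are hypothetical.

def provisions_next(prev_provisions, gdp_growth_last_5y):
    """Return next year's provisions for currency reserves (CHF billions)."""
    avg_growth = sum(gdp_growth_last_5y) / len(gdp_growth_last_5y)
    growth_factor = max(1 + 2 * avg_growth, 1.10)  # the 10% rule acts as a floor
    return prev_provisions * growth_factor

if __name__ == "__main__":
    # Hypothetical inputs: CHF 90 bn of provisions, five years of roughly 2% nominal GDP growth.
    print(provisions_next(90.0, [0.02, 0.015, 0.025, 0.02, 0.01]))  # 10% floor binds: 99.0
```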

How the second part of the annual result is split between current and future distributions is governed by an agreement between the SNB and the Federal Department of Finance (English, German). The law prescribes that “[t]he Department and the National Bank shall, for a specified period of time, agree on the amount of the annual profit distribution with the aim of smoothing these distributions in the medium term” (Art. 31(2) NBG). In practice the SNB and the Federal Department of Finance have frequently revised the agreement, reflecting the SNB’s rapidly growing balance sheet and larger profits.

The current agreement determines the profit distributions and dividends to shareholders as follows: Define the “distributable annual result” (Ausschüttbares Jahresergebnis) as the annual result net of the allocation to provisions for currency reserves. The distribution reserve (Ausschüttungsreserve), a liability item in the SNB’s balance sheet, equals the cumulative past distributable annual results net of the payments to the Confederation, cantons, and shareholders. The sum of the distribution reserve and the distributable annual result yields the “net profit” (Bilanzgewinn). When the net profit is negative, the agreement prescribes zero distributions to the Confederation and the cantons. When it is positive, the agreement prescribes distributions that rise to up to CHF 6 billion, depending on the size of the net profit. Distributions may never be so high that they directly push the distribution reserve into negative territory.
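These mechanics can likewise be summarized in a short sketch (again my own illustration; the tiered payout schedule below is a placeholder, not the agreement’s actual thresholds):

```python
# Sketch of the distribution mechanics described above. The tier thresholds are
# placeholders; the actual agreement specifies its own schedule up to CHF 6 bn.

def distributable_result(annual_result, allocation_to_provisions):
    """Annual result net of the allocation to provisions for currency reserves (CHF bn)."""
    return annual_result - allocation_to_provisions

def net_profit(distribution_reserve, distributable):
    """Bilanzgewinn: distribution reserve plus distributable annual result (CHF bn)."""
    return distribution_reserve + distributable

def distribution(net_profit_chf_bn):
    """Distribution to the Confederation and cantons (CHF bn): zero if the net
    profit is negative, otherwise rising with the net profit up to CHF 6 bn
    (illustrative tiers), and never more than the net profit itself, so the
    distribution reserve cannot be pushed below zero."""
    if net_profit_chf_bn <= 0:
        return 0.0
    tiers = [(10, 2.0), (20, 3.0), (30, 4.0), (40, 5.0), (float("inf"), 6.0)]  # placeholders
    payout = next(p for threshold, p in tiers if net_profit_chf_bn <= threshold)
    return min(payout, net_profit_chf_bn)
```

With the placeholder tiers, a net profit of CHF 15 billion would imply a CHF 3 billion distribution, while a net profit of CHF 1.5 billion would be paid out in full.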

Discussion

That the SNB determines how the annual result is split certainly makes sense. After all, the SNB bears responsibility for monetary policy and thus needs to be able to employ its balance sheet as far as this has current and future monetary policy implications. It is doubtful, however, that the mechanical rule the SNB follows adequately reflects foreign exchange and investment risks as well as monetary policy needs going forward. Ideally, the SNB would determine the adequate provisions based on an analysis of risks and monetary policy needs and communicate its analysis and conclusions to the public (see my proposal from February 2021). In June 2021 the SNB Observatory made a similar proposal, arguing that the SNB should “[d]etermine a target ratio of provisions-to-balance sheet or provisions-to-foreign investments. Provisions should not be accumulated beyond this point.” More specifically, the SNB Observatory criticized the fact that the SNB never actually uses the provisions to cover losses when they occur; it proposed that the SNB “[u]se the provisions for foreign investments to cover losses when they occur. Replenish provisions with profits of subsequent years.”

The procedure to determine the split between current and future distributions is rather inflexible and thus requires frequent adjustment when the SNB’s balance sheet changes. The fact that the SNB smooths payouts from the distribution reserve (at too low a rate according to the SNB Observatory) suggests a lack of trust in the ability of decision makers at the federal and cantonal level to responsibly manage the funds received from the SNB. I find this questionable (see my comments from February 2021), but I realize that the law does require some degree of smoothing.

Finally, many of the political discussions surrounding the amount of SNB distributions are misguided. The debate neglects that profit distributions do not significantly alter the net worth of the Confederation or the cantons. After all, SNB profit distributions are not transfers from a third party—they just swap one asset item in the balance sheets of the Confederation and cantons against another one, like dividend payouts of a firm. The main effect of distributions is to temporarily relax restrictions such as the debt brake (see my explanations with links to further analysis); that might be the reason why some politicians and voters like them.

Details

  • The agreement between the SNB and the Federal Department of Finance states that “[t]he non-distributed amount of the annual result is allocated to this [distribution] reserve, and any shortfall for a distribution is drawn from it.” I think it should read “[t]he non-distributed amount of the annual result net of provisions for currency reserves is allocated …”
  • As of January 2022, the provisions for currency reserves amounted to CHF 95 billion. The distribution reserve amounted to CHF 103 billion.
  • Between 2005 and 2020 the return rates on SNB investments never fell below -6% (source).
  • As of mid-2022 the return rate appears to be on the order of -8% (balance sheet length approximately CHF 1 000 billion, first-quarter loss of CHF 33 billion (source), prospective second-quarter loss of CHF 50 billion); see the back-of-the-envelope check after this list.
  • Swiss net foreign assets amount to roughly CHF 600 billion.
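As a rough cross-check of the -8% figure (my own back-of-the-envelope arithmetic using the numbers in the list above):

\[
\frac{\text{CHF } 33\,\text{bn} + \text{CHF } 50\,\text{bn}}{\text{CHF } 1\,000\,\text{bn}} \approx 8.3\% .
\]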

Updates: Minor editorial changes, 29 July.

Fabio Panetta on the Digital Euro

In a speech, the ECB’s Fabio Panetta argues that a digital Euro is necessary because

[i]n the digital age … banknotes could lose their role as a reference value in payments, undermining the integrity of the monetary system. Central banks must therefore consider how to ensure that their money can remain a payments anchor in a digital world.

He argues that

outsourcing the provision of central bank money [to stable coin providers] … would endanger monetary sovereignty [as would the absence of a national digital currency].

Panetta also argues that a digital Euro could

  • improve the confidentiality of digital payments and
  • increase choice and reduce costs

and should

  • avoid interfering with the functioning of the financial system and
  • be available within private payment solutions.

Panetta does not discuss

  • seignorage and
  • time consistency motivations.

Aggregation

David Baqaee in SED Newsletter, November 2021.

Hulten’s theorem:

… the elasticity of aggregate TFP to a microeconomic TFP shock is equal to the sales of the producer being shocked divided by GDP. … Furthermore, if labor supply is inelastic or if the definition of GDP is expanded to include the market value of leisure, then this irrelevance result also applies to real GDP (or under some additional assumptions to welfare).

This result, oftentimes known as Hulten’s Theorem (Hulten, 1978), is a consequence of the first welfare theorem, and therefore, is remarkably general. … As with other irrelevance results in economics, like the Modigliani-Miller Theorem or Ricardian Equivalence, much of the economics of aggregation can be understood in terms of deviations from Hulten’s theorem.
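In symbols (my notation; this merely restates the result quoted above): letting \(A_i\) denote producer \(i\)'s TFP and \(\lambda_i\) its Domar weight,

\[
\frac{\partial \ln \text{TFP}}{\partial \ln A_i} = \lambda_i \equiv \frac{\text{sales}_i}{\text{GDP}},
\qquad
d\ln \text{TFP} = \sum_i \lambda_i \, d\ln A_i \quad \text{(to first order)} .
\]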

Nonlinearities:

… disaggregated details … that do not matter to a first-order, do matter for understanding the nonlinear effect of shocks.

The key conceptual breakthrough is to recognize that nonlinearities are captured by changes in sales shares. Intuitively, in response to a negative shock to oil or electricity, we expect the sales shares of oil or electricity to skyrocket. On the other hand, in response to a negative shock to Walmart, we expect the sales share of Walmart to decline (perhaps rapidly). The sign and magnitude of changes in sales shares tell us that output is very concave with respect to energy shocks and convex with respect to Walmart shocks. … we characterize in very general and abstract terms the equations that determine changes in sales shares

Changes in sales shares are determined by what we call forward and backward propagation equations. Forward propagation equations show how a shock to the marginal cost of a producer propagates through forward linkages, from suppliers to consumers, to change prices downstream. The backward equations show how a shock to the sales of a producer propagates through backward linkages, from consumers to their suppliers, to change sales upstream …

These equations not only help answer questions about the nonlinearities in output in efficient environments, but they can also be used to answer microeconomic questions including, for example, how shocks propagate from one firm to another in general equilibrium, or how the distribution of factor income shares responds to shocks … Furthermore, unlike Hulten’s theorem itself, the forward and backward propagation equations straightforwardly generalize to more complex environments where the first welfare theorem does not hold, and these generalizations will allow us to extend our analysis beyond efficient equilibria.

… nonlinearities magnify negative shocks and attenuate positive shocks, resulting in an aggregate output distribution that is asymmetric (negative skewness) and fat-tailed (excess kurtosis), with a negative mean, even when shocks are symmetric around zero and thin-tailed. Average output losses due to short-run sectoral shocks are an order of magnitude larger than the welfare cost of business cycles calculated by Lucas (1987)
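The asymmetry described in this passage is easy to reproduce in a toy example (mine, not Baqaee’s): a two-sector CES economy with an elasticity of substitution below one, hit by symmetric log-productivity shocks, already generates a negative mean and negative skewness of log output.

```python
# Toy illustration: with strong complementarities (elasticity of substitution < 1),
# log aggregate output is concave in log sectoral productivities, so symmetric
# shocks translate into negatively skewed output with a negative mean.
import math, random, statistics

def log_output(a1, a2, sigma=0.2):
    """Log of a two-sector CES aggregate with equal weights; sigma < 1 means complements."""
    rho = (sigma - 1.0) / sigma
    return (1.0 / rho) * math.log(0.5 * math.exp(rho * a1) + 0.5 * math.exp(rho * a2))

random.seed(0)
draws = [log_output(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50_000)]
mu = statistics.fmean(draws)
skew = statistics.fmean([(d - mu) ** 3 for d in draws]) / statistics.pstdev(draws) ** 3
print(f"mean log output: {mu:.4f}, skewness: {skew:.2f}")  # both come out negative
```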

Frictions:

Hulten’s theorem derives its deceptive simplicity from two facts: (i) marginal-cost pricing ensures that the expenditures by firms on every input measures the elasticity of output with respect to that input (Shephard’s lemma); (ii) marginal-cost pricing ensures production is efficient, meaning that reallocating resources from one user to another does not change real GDP to a first order. Since reallocation effects can safely be ignored to a first-order, (ii) implies that the elasticity of aggregate output to shocks can be computed by assuming that the allocation of resources stays constant and resources simply scale up or down proportionally according to initial shares. From (i) we know that this will change each firm’s output by that firm’s expenditure share on the input being scaled. This “mechanical” effect of scaling resources by initial shares when summed over all input users yields sales, which is the Hulten formula.

Inefficient economies break Hulten’s theorem in two ways. First, sales shares no longer capture the “mechanical” effect of scaling up input usage because of wedges between output elasticities and expenditures shares. Second, reallocation effects, which are first-order irrelevant in efficient equilibria, now matter to a first-order and must be solved for.

… In other words, when a producer becomes more productive, the impact on aggregate TFP can be broken down into two components.

First, given the initial distribution of resources, the producer increases its output, and this, in turn, increases the output of its direct and indirect customers; this is the mechanical effect that would be equal to sales shares in the absence of wedges. Second, there are reallocation effects that can raise or lower aggregate output holding fixed the level of technology. We show that this reallocation effect can be measured by a specific weighted average of changes in wedges and changes in factor income shares (in an economy with a single factor, say labor, this is simply the labor income share). Intuitively, if a shock reallocates resources in such a way that boosts aggregate output, then this shock will “save” on factor usage. This reallocation makes factors less scarce and causes factor prices and, ceteris paribus, factor income shares to decline on average. The fact that factor income shares decline on average therefore captures changes in aggregate TFP due to reallocation effects.

… average markups have been increasing primarily due to a between-firm composition effect, whereby firms with high markups have been getting larger, and not to a within-firm increase in markups. From a social perspective, these high-markup firms were too small to begin with, and so the reallocation of resources towards them increases aggregate TFP over time.

… we find that in the U.S. in 2015, eliminating markups would raise aggregate TFP by about 20% (depending on the markup series). This increases the estimated cost of monopoly distortions by two orders of magnitude compared to the famous estimate of 0.1% of Harberger (1954).

… changes in aggregate demand, for example, monetary policy shocks, can naturally affect an economy’s TFP due to reallocation effects. In particular, we propose a supply-side channel for the transmission of aggregate demand shocks by showing that in an economy with heterogeneous firms and endogenous markups, demand shocks can have first-order effects on aggregate productivity.

Intuitively, if high-markup firms have lower pass-throughs than low-markup firms, as is consistent with the empirical evidence, then an aggregate demand shock, like a monetary easing, generates an endogenous positive “supply shock” that amplifies the positive “demand shock” on output. The result is akin to a flattening of the Phillips curve. We derive a tractable four-equation dynamic model, disciplined by four sufficient statistics from the distribution of firms, and use it to show that a monetary easing generates a procyclical hump-shaped response in aggregate TFP and countercyclical dispersion in firm-level TFPR.

Non-convexities:

Unlike first-best policies, which are independent of network structure and simply ensure efficiency market-by-market, the effects of second-best policies are network-dependent. In particular, for economies with increasing returns to scale, we rationalize and revise Hirschman’s influential argument that policy should encourage expansion in sectors with the most forward and backward linkages, and we give precise formal definitions for these concepts. We show that the optimal marginal intervention aims to boost the sales of sectors that have strong scale economies, but are also upstream of other sectors with strong scale economies.

Household heterogeneity:

… we provide a modified version of Hulten’s theorem that does answer welfare questions in general equilibrium economies with non-homothetic, non-aggregable, and unstable preferences. We show that calculating changes in welfare in response to a shock only requires knowledge of expenditure shares and elasticities of substitution and (given these elasticities) does not require income elasticities and taste shocks. We also characterize the gap between changes in welfare and changes in real consumption.

Interview, Riksbank RN, 2021

Riksbank Research News 2021, December 2021. PDF (pp. 2–3), HTML.

Q: You have been leader of the CEPR Research and Policy Network on FinTech and Digital Currencies since 2021 and explored issues at the heart of monetary theory and payment systems in your research. What do you think is new about digital central bank money and what makes it different from other digital means of payment?

A: Societies have been using digital means of payment for decades. Commercial banks use digital claims against the central bank, “reserves,” to pay each other. Households and firms use digital claims against commercial banks, “deposits,” as well as claims on such deposits, as money. Financial innovations typically improved the convenience for users or helped build additional layers of claims on top of each other, fostering fractional reserve banking and raising money multipliers.

Recently, new digital instruments have appeared on the fringes of the financial system. Some think of them as currencies, others as mere database entries. These instruments exploit the fact that smart ways of managing information, and even smarter approaches to providing incentives in anonymous, decentralized networks, can replicate some functions of conventional monies. Monetary theorists are not surprised. They have debated for decades to what extent money is, or is not, a substitute for a large societal database. The information technology revolution has made this debate much less theoretical.

Of course, the new entrants such as Bitcoin have not been very successful so far when it comes to actually creating substitute monies. But they have been quite successful in terms of creating new assets, mostly bubbles. Bubbles are also a great mechanism for their creators to extract resources from other people.

What is new about digital central bank money for the general public (central bank digital currency, CBDC) is that households and firms would no longer be restricted to cash when they wanted to pay using a central bank (i.e., government) liability. That is, banks would lose a privilege and households and firms would gain an option. CBDC, which I like to think of as “Reserves for All,” seems natural when you consider the history of central banking. It also seems natural when you consider that many governments strongly discourage the use of cash. Nevertheless, compared with the status quo, “Reserves for All” would amount to a major structural change.

Q: What do you think are the main challenges of issuing a CBDC?

A: From a macroeconomic perspective, introducing “Reserves for All” could have major implications. The balance sheets of central banks would likely expand while commercial banks would likely lose some deposits as a source of funding. Mechanically, they would reduce their asset holdings or attract other sources of funding. The question is which assets they would shed and on which terms and conditions they would attract new funding. These are important questions because banks play a key role in the transmission of monetary policy to Main Street.

While many central bankers are concerned about the implications of CBDC for bank assets and funding costs academic research conveys a mixed picture. To assess the consequences of “Reserves for All” it is natural to first ask what it would take to perfectly insulate banks and the real economy from the effects of CBDC issuance. As it turns out, the answer is “not much:” Under fairly general conditions the central bank holds a lot of power and can neutralize the implications of CBDC for macroeconomic outcomes.

Of course, central banks might choose to implement policies other than the neutral ones. In my view, this is in fact very likely, for reasons related to the political economy of banking and central banking. On the one hand, CBDC would make it even harder for central banks to defend their independence. On the other hand, CBDC would increase the transparency of the monetary system and trigger questions about the fair distribution of seignorage. On top of this, “Reserves for All” might trigger demands for the removal of other “bank privileges”: interest groups might request LOLR support, arguing that they are systemically important and just temporarily short of liquidity. Others might want to engage in open market operations with the central bank.

Beyond macroeconomics and political economy, CBDC could substantially change the microeconomics of banking and finance. In the current, two-tiered system there is ample room for complementarities between financing, lending, and payments. The information technology revolution strengthens these complementarities but it also generates new risks or inefficiencies. How the connections between money and information currently change is the subject of ongoing research. I don’t think we have been able to draw robust conclusions yet as to what role CBDC would play in this respect.

Q: Should we, and will we have CBDCs in the near future?

A: Some countries have already decided in favor. Others, like the Riksbank I believe, are still on the sidelines, thinking about the issues, watching, and preparing. Yet others have only recently taken the issue more seriously, mostly because of the Libra/Diem shock in June 2019, which made it clear to everybody that the status quo had ceased to be an option.

I think the normative question is still unanswered. Not only does CBDC have many consequences, which we would like to better understand. There are also the unknown consequences that we might want to prepare ourselves for. Moreover, many of the problems that CBDC could potentially address might also allow for different solutions; the fact that CBDC could work does not mean that CBDC is the best option.

In a recent CEPR eBook* several authors share that view, which suggests a case-by-case approach. CBDC might be appropriate for one country but not for another: cash use has strongly declined in Sweden, for instance, which may favor CBDC there (as Martin Flodén and Björn Segendorf discuss in their chapter), while the same does not apply in the US or elsewhere.

Regarding the positive question, I think that many more countries will decide to introduce “Reserves for All,” and quite a few of them in the next five years. One reason is that it is politically difficult to wait when others are moving ahead. Another is the fear of “dollarization,” not only in countries with less developed financial markets. The strongest factor, I believe, is the fear that central banks might lose their standing in financial markets. This is connected with the important question, which the Riksbank has been asking early on, of whether, in the absence of CBDC, declining cash circulation could undermine trust in central bank money.

Among the eBook authors, most but far from all expect that a CBDC in a developed economy would resemble deposits in terms of user experience. Almost everyone expects that private banks and service providers rather than the central bank itself would interact with end-users. I share these views. But there is disagreement as to whether digital currencies would be interest bearing and how strictly they would protect privacy. I believe that it is also unclear how strictly central banks would enforce KYC regulation or holding restrictions on foreigners. These two factors might critically affect the threats to monetary sovereignty in other countries, and as a consequence they might shape the chain reaction of adoptions.

What seems clear to me is that the implications of CBDC go far beyond the remit of central banks. Parliaments and voters therefore should have the final say.

* Dirk Niepelt (2021), editor: “CBDC: Considerations, Projects, Outlook”, CEPR eBook.

David Graeber’s “Debt”

Goodreads rating 4.19.

Graeber’s book contains many interesting historical observations but lacks a concise argument to convince a brainwashed neoclassical economist looking for coherent arguments on money and debt. After 60 pages, 340 more seemed too much.

Chapter one:

… the central question of this book: What, precisely, does it mean to say that our sense of morality and justice is reduced to the language of a business deal? What does it mean when we reduce moral obligations to debts? … debt, unlike any other form of obligation, can be precisely quantified. … to become simple, cold, and impersonal … transferable.

… money’s capacity to turn morality into a matter of impersonal arithmetic—and by doing so, to justify things that would otherwise seem outrageous or obscene. … the violence and the quantification—are intimately linked. … the threat of violence, turns human relations into mathematics.

…The United States was one of the last countries in the world to adopt a law of bankruptcy: despite the fact that in 1787, the Constitution specifically charged the new government with creating one, all attempts were rejected, or quickly reversed, on “moral grounds” until 1898.

… historically, credit money comes first [before bullion, coins]

… ages of virtual credit money almost invariably involve the creation of institutions designed to prevent everything going haywire—to stop the lenders from teaming up with bureaucrats and politicians to squeeze everybody dry … by the creation of institutions designed to protect debtors. The new age of credit money we are in seems to have started precisely backwards. It began with the creation of global institutions like the IMF designed to protect not debtors, but creditors.

… the book begins by attempting to puncture a series of myths—not only the Myth of Barter … but also rival myths about primordial debts to the gods, or to the state … Historical reality reveals [that the state and the market] have always been intertwined. … all these misconceptions … tend to reduce all human relations to exchange … [but] the very principle of exchange emerged largely as an effect of violence … the real origins of money are to be found in crime and recompense, war and slavery, honor, debt, and redemption. … an actual history of the last five thousand years of debt and credit, with its great alternations between ages of virtual and physical money …

… many of Adam Smith’s most famous arguments appear to have been cribbed from the works of free market theorists from medieval Persia …

Chapter two (“The Myth of Barter”) contains questionable claims about economics as well as interesting historical facts (or claims?):

When economists speak of the origins of money … debt is always something of an afterthought. First comes barter, then money; credit only develops later. …

Barter … was carried out between people who might otherwise be enemies …

… “truck and barter” [in many languages] literally meant “to trick, bamboozle, or rip off.”

What we now call virtual money came first. Coins came much later, … never completely replacing credit systems. Barter, in turn, … has mainly been what people who are used to cash transactions do when for one reason or another they have no access to currency.

Chapter three (“Primordial Debts”) argues that the myth of barter is central to the discourse of economics, which according to Graeber downplays the state as opposed to markets, exchange, and individual choice. He tries to confront this view with Alfred Mitchell-Innes’ credit theory of money, Georg Friedrich Knapp’s state theory of money, the Wizard of Oz (i.e., “ounce”), and John Maynard Keynes’ (original?) claim that banks create money.

In all Indo-European languages, words for “debt” are synonymous with those for “sin” or “guilt,” illustrating the links between religion, payment and the mediation of the sacred and profane realms by “money.” [money-Geld, sacrifice-Geild, tax-Gild, guilt]

Wikipedia article on the book:

A major argument of the book is that the imprecise, informal, community-building indebtedness of “human economies” is only replaced by mathematically precise, firmly enforced debts through the introduction of violence, usually state-sponsored violence in some form of military or police.

A second major argument of the book is that, contrary to standard accounts of the history of money, debt is probably the oldest means of trade, with cash and barter transactions being later developments.

Debt, the book argues, has typically retained its primacy, with cash and barter usually limited to situations of low trust involving strangers or those not considered credit-worthy. Graeber proposes that the second argument follows from the first; that, in his words, “markets are founded and usually maintained by systematic state violence”, though he goes on to show how “in the absence of such violence, they… can even come to be seen as the very basis of freedom and autonomy”.

Reception of the book was mixed, with praise for Graeber’s sweeping scope from earliest recorded history to the present; but others raised doubts about the accuracy of some statements in Debt, as outlined below in the section on “critical reception”.

 

Max Horkheimer and Theodor Adorno’s “Dialektik der Aufklärung”

Goodreads rating 4.09.

Enlightenment:

Myth is already enlightenment, and: enlightenment reverts to mythology. …

Enlightenment, understood in the widest sense as the advance of thought, has always aimed at taking fear away from human beings and installing them as masters. Yet the wholly enlightened earth radiates under the sign of triumphant disaster. …

Enlightenment recognizes as being and event only what can be grasped through unity; its ideal is the system from which each and every thing follows. … As masters over nature, the creating god and the ordering mind are alike. … Myths, like magical rites, refer to nature as repeating itself.

Odysseus, myth, enlightenment:

The cunning loner is already homo oeconomicus, whom all reasonable people will one day resemble: hence the Odyssey is already a Robinsonade. The two prototypical castaways turn their weakness – that of the individual itself, who separates from the collective – into their social strength. Delivered up to the randomness of the waves, helplessly isolated, their isolation dictates to them the ruthless pursuit of atomistic self-interest.

Juliette, enlightenment, morals:

The burgher who assents to the Kantian categorical imperative of the Critique of Practical Reason … follows not scientific reason but folly if he thereby forgoes a material gain. …

In de Sade as in Nietzsche, the “scientistic principle is escalated into the annihilating.” … Everything remains, devoid of meaning, captive to the arbitrariness of the criminal pleasure principle. … Thus Sade’s Juliette dismantles all conventions and values – family, religion, law, morality – and nothing that once held society together survives; everything falls victim to the business of the economy and the unrestrained economics of the enlightened gang of criminals.

Wikipedia on the book:

… the thesis that already at the beginning of human history, with the self-assertion of the subject against a threatening nature, an instrumental reason took hold, which consolidated itself as domination over external and internal nature and finally in the institutionalized domination of humans over humans. Starting from this “domination character” of reason, Horkheimer and Adorno observed a resurgence of mythology, the “return of enlightened civilization to barbarism in reality” …

… set in motion not a process of liberation but a universal process of self-destruction of enlightenment. …

According to Horkheimer and Adorno, abstraction is the tool by which logic is separated from the mass of things. The manifold is subsumed quantitatively under an abstract magnitude and made uniform in order to render it manageable. … Everything that eludes instrumental thinking is suspected of superstition. Modern positivism banishes it to the sphere of the non-objective, of mere semblance.

But this logic is a logic of the subject, one that acts upon things under the sign of domination, of the domination of nature. This domination now confronts the individual as reason, which organizes the objective view of the world.

… Scientific domination of the world turns against the thinking subjects and, in industry, planning, the division of labor, and the economy, reifies human beings into objects. … in place of enlightenment’s liberation from immaturity comes the economic and political interest in manipulating people’s consciousness. Enlightenment becomes mass deception. …

In Horkheimer and Adorno’s view, industrially produced culture robs people of imagination … The “culture industry” delivers its “goods” in such a way that the only role left to people is that of the consumer. … the aim is to imitate the real world as closely as possible. Drives are stoked to the point where sublimation is no longer possible. …

The goal of the culture industry is – as in every branch of industry – economic. All effort is directed toward commercial success.

Authentic culture, by contrast, is not goal-directed; it is an end in itself. … it fosters human imagination by providing stimulation while, unlike the culture industry, leaving room for independent human thought. Authentic culture does not seek to replicate reality but to go far beyond it.

English Wikipedia article.

Richard Bandler and John Grinder’s “The Structure of Magic”

Goodreads rating 4.06.

Human beings have their personal models of the world. These models are wrong, sometimes very wrong, leaving people with the impression that they have no choice, are being excluded, etc. The authors argue that successful psychotherapies and psychotherapists all use similar methods to help clients change and correct their models, opening new perspectives for them. In the book the authors systematize this argument.

They emphasize errors that humans make when mistaking models for reality—errors due to inadequate generalization, deletion, or distortion—and they use the language and tools from linguistics (transformational grammar)—distinguishing between the deep structure and the surface structure of sentences—to provide a toolkit for psychotherapists to help identify and correct these errors. Essentially, the therapist and the client are meant to identify the errors in the client’s model by insisting on well-formed sentences.

This quote is from the end of ch. 3:

This set, the set of sentences which are well formed in therapy and acceptable to us as therapists, are sentences which:
(1) Are well formed in English, and
(2) Contain no transformational deletions or unexplored deletions in the portion of the model in which the client experiences no choice.
(3) Contain no nominalization (process -> event).
(4) Contain no words or phrases lacking referential indices.
(5) Contain no verbs incompletely specified.
(6) Contain no unexplored presuppositions in the portion of the model in which the client experiences no choice.
(7) Contain no sentences which violate the semantic conditions of well-formedness.

Edwin Abbott’s “Flatland”

Goodreads rating 3.81.

For someone living in two dimensions and becoming aware of three, it might be easier to think of four than for someone living in three dimensions.

The cherished feeling of oneness might be misleading …

That Point is a Being like ourselves, but confined to the non-dimensional Gulf. He is himself his own World, his own Universe; of any other than himself he can form no conception; he knows not Length, nor Breadth, nor Height, for he has had no experience of them; he has no cognizance even of the number Two; nor has he a thought of Plurality; for he is himself his One and All, being really Nothing. Yet mark his perfect self-contentment, and hence learn this lesson, that to be self-contented is to be vile and ignorant, and that to aspire is better than to be blindly and impotently happy. (ch. 20)

Hermann Hesse’s “Siddharta”

Goodreads rating 4.04.

His wound blossomed, his suffering shone, his self had flowed into the oneness. …

Wisdom cannot be communicated. Wisdom that a wise man attempts to communicate always sounds like foolishness. …

My sole concern, however, is to be able to love the world, not to despise it, not to hate it and myself …

 

SNB Profit Distributions

The Federal Department of Finance and the SNB have agreed on a new scheme for the distribution of SNB profits. Agreement for the period 2020-2025, Explanations. Some comments (also available in German as a PDF):

Do the Confederation and the cantons benefit financially from the higher SNB distributions?

  • Higher profit distributions today imply lower distributions in the future.
  • To a first approximation, the net worth of the Confederation and the cantons remains unchanged, because it also reflects the value of future claims on the SNB.
  • See, e.g., “Die Volkswirtschaft” 8-9 2020, HTML.

Why, then, the positive reactions from representatives of the Confederation and the cantons?

  • Politicians and voters focus on the government’s reported debt. Higher distributions allow for lower borrowing. Hence the reactions.
  • Net worth is more relevant than reported debt, and it is (to a first approximation) unaffected by distributions.

What is the primary effect of higher distributions?

  • The debt brake is relaxed today and tightened in the future.
  • If the debt brake binds, earlier and higher distributions increase the scope for government spending today, but not in the future.
  • Through their effect on the SNB’s equity, high distributions could also influence its monetary policy decisions.

How should the new profit distribution formula be assessed economically?

  • Distributions should not undermine monetary policy.
  • Monetary policy uses the size and structure of the SNB’s balance sheet as instruments. Its credibility may depend on the SNB’s equity.
  • Accordingly, distributions should depend on the size and structure of the SNB’s balance sheet and on its equity, not on the previous year’s profit.

How should the new profit distribution formula be assessed from a political economy perspective?

  • It is problematic that the impression may arise that a new formula is haggled over every few years under political pressure. Commitment to a sensible rule would counteract this impression.
  • At the same time, the new formula reduces political pressure. It signals the SNB’s willingness to engage in discussion.

What could an alternative distribution model look like?

  • The SNB periodically explains what balance sheet size and structure it needs to fulfill its mandate. It exposes itself to criticism but decides on its own responsibility.
  • Distributions are not earmarked. This avoids the emergence of interest groups that systematically push for higher distributions.

Which fundamental problem would remain?

  • An economically grounded distribution policy leads to fluctuating distributions. These can conflict with the debt brake.
  • If the SNB instead smooths its distributions without a monetary policy rationale, it arrogates to itself a degree of control over fiscal policy that lies outside its mandate.

Reading List on ‘Free’ or ‘Not-so-free’ Public Debt

Risk, Discounting, and Dynamic Efficiency

In the presence of risk, a comparison of the risk-free interest rate and the expected growth rate is insufficient to assess whether an economy is dynamically efficient or inefficient. Stochastic discount factors—not risk-free interest rates—enter the government’s budget constraint, even if debt is safe.
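Schematically, and in my own notation rather than that of the papers cited below: with \(B_t\) the market value of outstanding debt, \(S_{t+j}\) the primary surplus, and \(M_{t,t+j}\) the stochastic discount factor, the relevant present-value relation (under the usual transversality condition) is

\[
B_t = \mathbb{E}_t \sum_{j=1}^{\infty} M_{t,t+j}\, S_{t+j} ,
\]

which reduces to discounting at the risk-free rate only when surpluses are (conditionally) uncorrelated with the discount factor, since \(\mathbb{E}_t[M_{t,t+j}]\) equals the price of a j-period riskless zero-coupon bond.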

These points are made, for example, by Andrew Abel, N. Gregory Mankiw, Lawrence Summers, and Richard Zeckhauser (Assessing Dynamic Efficiency: Theory and Evidence, REStud 56(1), 1989),

the issue of dynamic efficiency can be resolved by comparing the level of investment with the cash flows generated by production after the payment of wages … dynamic efficiency cannot be assessed by comparing the safe rate of interest and the average growth rate of the capital stock, output, or any other accounting aggregate,

or Henning Bohn (The Sustainability of Budget Deficits in a Stochastic Economy, JMCB 27(1), 1995),

discounting at the safe interest rate is usually incorrect. … popular fiscal policy “indicators” like deficit levels or debt-GNP ratios may provide very little information about sustainability. … the intertemporal budget constraint imposes very few restrictions on the average primary balance.

Recent work in which these themes appear includes papers by Zhengyang Jiang, Hanno Lustig, Stijn Van Nieuwerburgh, and Mindy Xiaolan (Manufacturing Risk-free Government Debt, NBER wp 27786, 2020), Robert Barro (r Minus g, NBER wp 28002, 2020), and Stan Olijslagers, Nander de Vette, and Sweder van Wijnbergen (Debt Sustainability when r−g<0: No Free Lunch after All, CEPR dp 15478, 2020).

Intergenerational Risk Sharing

With overlapping generations the way the government manages its debt has implications for intergenerational risk sharing, see for example Henning Bohn (Risk Sharing in a Stochastic Overlapping Generations Economy, mimeo, 1998), Robert Shiller (Social Security and Institutions for Intergenerational, Intragenerational, and International Risk Sharing, Carnegie-Rochester Conference on Public Policy 50, 1999), or Gabrielle Demange (On Optimality of Intergenerational Risk Sharing, Economic Theory 20(1), 2002).

Long-Run Debt Dynamics and Fiscal Space

Dmitriy Sergeyev and Neil Mehrotra (Debt Sustainability in a Low Interest World, CEPR dp 15282, 2020) offer an analysis of long-run debt dynamics under the assumption that the primary surplus responds systematically and strongly to the debt-to-GDP ratio, such that the government’s intertemporal budget constraint is necessarily satisfied:

Population growth and productivity growth have opposing effects on the debt-to-GDP ratio due to their opposing effects on the real interest rate. Lower population growth leaves the borrowing rate unchanged while directly lowering output growth, shifting the average debt-to-GDP ratio higher. By contrast, when the elasticity of intertemporal substitution is less than one, a decline in productivity growth has a more than one-for-one effect on the real interest rate, lowering the cost of servicing the debt and thereby reducing the average debt-to-GDP ratio. To the extent that higher uncertainty accounts for low real interest rates, we find that the variance of the log debt-to-GDP ratio unambiguously increases with higher output uncertainty. However, uncertainty also has an effect on the mean debt-to-GDP ratio that depends on the coefficient of relative risk aversion. Higher uncertainty lowers the real interest rate but this effect may be outweighed by an Ito’s lemma term due to Jensen’s inequality that works in the opposite direction.

Sergeyev and Mehrotra also consider the effects of rare disasters as well as of a maximum primary surplus, which implies that debt becomes defaultable and the interest rate on debt features an endogenous risk premium, generating the possibility of a “tipping point” with a slow-moving debt crisis as in Guido Lorenzoni and Ivan Werning (Slow Moving Debt Crises, AER 109(9), 2019).

Ricardo Reis (The Constraint on Public Debt when r<g But g<m, mimeo, 2020) analyzes a non-stochastic framework under the assumption that the marginal product of capital, m, exceeds the growth rate, g, which in turn exceeds the risk-free interest rate, r. Reis considers the case where m is the relevant discount rate, for example because r features a liquidity premium:

there is still a meaningful government budget constraint once future surpluses and debt are discounted by the marginal product of capital.

He shows the following:

  • The debt due to a one-time primary deficit can be rolled over indefinitely and disappears asymptotically as long as r<g.
  • With permanent primary deficits that grow at the same rate as debt and output, the government’s intertemporal budget constraint features a bubble component due to r<m. This corresponds to the usual seignorage revenue measure (see p. 173 in Niepelt, Macroeconomic Analysis, 2019).
  • Suppose that from tomorrow on, the primary deficit and debt quotas are given by d and b, respectively. Then the present value of total net revenues in the government’s budget constraint equals [-d + (m-r)*b] / (m-g). Both m>g and g>r relax the constraint, as does a lower r.
  • Along a balanced growth path, b = [-d + (m-r)*b] / (m-g) and thus d = (g-r)*b, where d is assumed to be positive (see the short derivation below). Reis argues that b cannot be larger than total assets relative to GDP. Accordingly, the deficit cannot exceed total assets times (g-r).
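The balanced-growth relation in the last bullet can be verified in one line:

\[
b = \frac{-d + (m-r)\,b}{m-g}
\;\Longleftrightarrow\;
(m-g)\,b = -d + (m-r)\,b
\;\Longleftrightarrow\;
d = \big[(m-r)-(m-g)\big]\,b = (g-r)\,b ,
\]

so a positive permanent deficit indeed requires g>r, and the cap on b translates directly into the cap on the deficit stated above.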

Reis concludes that most of the bubble component “has already been used.” In addition to developing a model that yields m>g>r in equilibrium he also discusses the role of inflation (stable inflation generates fiscal space because it renders debt safer and thus increases demand for debt) and inequality (more inequality increases fiscal space).

Blanchard’s Presidential Address

In his presidential address, Olivier Blanchard (Public Debt and Low Interest Rates, AER 109(4), 2019) argues that the risk-free interest rate has fallen short of the average US growth rate (and similarly in other countries). Importantly—and implicitly addressing Abel, Mankiw, Summers, Zeckhauser, and Bohn (see above)—he also argues that risk is not that much of an issue as far as the sustainability of public debt is concerned:

Jensen’s inequality is thus not an issue here. In short, if we assume that the future will be like the past (admittedly a big if), debt rollovers appear feasible. While the debt ratio may increase for some time due to adverse shocks to growth or positive shocks to the interest rate, it will eventually decrease over time. In other words, higher debt may not imply a higher fiscal cost.
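The rollover logic in this quote can be illustrated with a toy simulation (entirely my own and far simpler than Blanchard’s model: i.i.d. safe and growth rates, no feedback from debt to r or g, parameters purely illustrative):

```python
# Toy debt-rollover experiment: a one-time debt issuance of 15% of GDP is rolled
# over with no subsequent primary surpluses, so b_{t+1} = b_t * (1 + r_t) / (1 + g_t).
# With E[r] < E[g] the debt ratio tends to shrink, but adverse draws can push it up.
import random

def simulate_rollover(b0=0.15, mean_r=0.00, mean_g=0.02, sd=0.02, years=200, seed=1):
    random.seed(seed)
    b, path = b0, [b0]
    for _ in range(years):
        r = random.gauss(mean_r, sd)   # safe interest rate this period
        g = random.gauss(mean_g, sd)   # output growth this period
        b *= (1 + r) / (1 + g)         # debt-to-GDP with zero primary surplus
        path.append(b)
    return path

if __name__ == "__main__":
    path = simulate_rollover()
    print(f"debt/GDP: start {path[0]:.1%}, end {path[-1]:.1%}, peak {max(path):.1%}")
```

In such runs the ratio typically drifts down, though a string of bad draws can raise it for a while; none of this, of course, settles the welfare questions that Blanchard’s formal analysis addresses.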

Most of his formal analysis does not focus on debt, though. Instead he analyzes the effects of risk-free social security transfers from young to old in a stochastic OLG economy. (There are close parallels between debt and such transfers to the old when they are financed by contemporaneous taxes on the young.) In a steady state with very low interest rates, higher transfers affect welfare through two channels: (i) they provide an attractive substitute for savings, and (ii) they reduce capital accumulation, thereby lowering wages and raising the interest rate. If the economy is initially dynamically inefficient, both channels are welfare improving: (i) capital accumulation with a low return is replaced by higher-yielding intergenerational transfers, and (ii) lower wages and higher interest rates are attractive when starting from a situation with a low interest rate. In a stochastic economy the first channel yields welfare gains as long as the growth rate exceeds the risk-free rate, and the second channel yields welfare gains (approximately) when the growth rate exceeds the marginal product of capital. Blanchard argues

[b]e this as it may, the analysis suggests that the welfare effects of a transfer may not necessarily be adverse, or, if adverse, may not be very large.

In the corresponding case with debt there is another effect because the intergenerational transfer is not risk-free; the size of this additional effect depends on the path of the risk-free interest rates (Blanchard assumes that the debt level is stabilized which requires net tax payments by the young to reflect the contemporaneous risk-free rate). In the slightly different case where debt is increased once and then rolled over, without adjusting taxes in the future, the sustainability and welfare implications are ambiguous and critically depend on the production function:

In the linear case, debt rollovers typically do not fail [my emphasis] and welfare is increased throughout. For the generation receiving the initial transfer associated with debt issuance, the effect is clearly positive and large. For later generations, while they are, at the margin, indifferent between holding safe debt or risky capital, the inframarginal gains (from a less risky portfolio) imply slightly larger utility. But the welfare gain is small … . In the Cobb-Douglas case however, this positive effect is more than offset by the price effect, and while welfare still goes up for the first generation (by 2 percent), it is typically negative thereafter. In the case of successful debt rollovers, the average adverse welfare cost decreases as debt decreases over time. In the case of unsuccessful rollovers, the adjustment implies a larger welfare loss when it happens. If we take the Cobb-Douglas example to be more representative, are these Ponzi gambles, as Ball, Elmendorf, and Mankiw (1998) have called them, worth it from a welfare viewpoint? This clearly depends on the relative weight the policymaker puts on the utility of different generations [my emphasis].

Blanchard argues that the marginal product of capital may be smaller than commonly assumed, implying that it is more likely that the welfare effects working through (ii) are positive (those working through (i) are very likely positive). Finally, he also presents some additional potential arguments for and against higher public debt.

Blanchard’s work has attracted substantial criticism, for instance at the January 2020 ASSA meetings (see this previous post). In a short paper presented at the meetings, Johannes Brumm, Laurence Kotlikoff, and Felix Kubler (Leveraging Posterity’s Prosperity?) point out that a negative difference between average interest and growth rates is not necessarily indicative of dynamic inefficiency (see the discussion above) and that Blanchard’s analysis disregards tax distortions as well as the welfare effects from intergenerational risk sharing (again, see above):

To see the distinction between risk-sharing and a Ponzi scheme, modify B’s two-period model to include agents working when old if they don’t randomly become disabled. Now workers face second-period asset income and labor earnings risk. The government has no safe asset in which to invest. If it borrows, invests in capital, and taxes bond holders its excess return, “safe” debt is identical to risky capital. But if the net taxes are only levied on the non-disabled, bonds become a valued risk-mitigating asset and their return can be driven far below zero. This scheme could be, and to some extent it is, implemented through progressive taxation. If, observing this gap between growth and safe rates, the government decides to institute an “efficient” Ponzi scheme with a fixed pension benefit financed on a pay-go basis by taxes on workers, net wages when young will be more variable, raising generation-specific risk and potentially producing an outcome in which no generation is better off and at least one is worse off.

Brumm, Kotlikoff, and Kubler also note that the effective interest rate at which US households borrow is much higher than the government’s borrowing rate; this undermines Blanchard’s approach to gauging the welfare implications. And they point out that the scheme suggested by Blanchard could harm other countries by reducing global investment.

Jasmina Hasanhodzic (Simulating the Blanchard Conjecture in a Multi-Period Life-Cycle Model) simulates a richer OLG model and rejects the Blanchard conjecture of Pareto gains due to higher transfers:

It shows that the safe rate on government debt can, on average, be far less than the economy’s growth rate without its implying that ongoing redistribution from the young to the old is Pareto improving. Indeed, in a 10-period, OLG, CGE model, whose average safe rate averages negative 2 percent on an annual basis, welfare losses to future generations resulting from the introduction of pay-go Social Security, financed with a 15 percent payroll tax, are enormous—roughly 20 percent measured as a compensating variation relative to no policy.

Relative to Blanchard’s simulations, her model implies more negative consequences of crowding out on wages, a higher tax burden from the transfer scheme, and more induced old-age consumption risk.

Michael Boskin (How, When and Why Deficits Are Dangerous) offers a broad discussion of potential weaknesses of Blanchard’s analysis. Richard Evans (Public Debt, Interest Rates, and Negative Shocks) questions Blanchard’s simulations on calibration grounds and notes that he couldn’t replicate some of Blanchard’s findings.

On his blog, John Cochrane argues along similar lines as Ricardo Reis: Even if r<g, expected primary deficits are so large that debt quotas will explode nevertheless.

Olivier Blanchard on Markus’ Academy.

More work by Johannes Brumm, Xiangyu Feng, Laurence J. Kotlikoff, and Felix Kubler: When Interest Rates Go Low, Should Public Debt Go High? (NBER working paper 28951), and Deficit Follies (28952).

Note: This post was updated several times.

Robert Pirsig’s “Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values”

Quality, not subject or object, as the elementary fabric. ἀρετή. A rehabilitation of the sophists.

Some quotes:

If the purpose of scientific method is to select from among a multitude of hypotheses, and if the number of hypotheses grows faster than experimental method can handle, then it is clear that all hypotheses can never be tested. If all hypotheses cannot be tested, then the results of any experiment are inconclusive and the entire scientific method falls short of its goal of establishing proven knowledge. …

God, I don’t want to have any more enthusiasm for big programs full of social planning for big masses of people that leave individual Quality out. These can be left alone for a while. There’s a place for them but they’ve got to be built on a foundation of Quality within the individuals involved. We’ve had that individual quality in the past, exploited as a natural resource without knowing it, and now it’s just about depleted. Everyone’s just about out of gumption. And I think it’s about time to return to the rebuilding of this American resource – individual worth. There are political reactionaries who’ve been saying something close to this for years. I’m not one of them, but to the extent they’re talking about real individual worth and not just an excuse for giving more money to the rich, they’re right. We do need a return to individual integrity, self-reliance and old-fashioned gumption. We really do. …

What is good, Phaedrus, and what is not good – need we ask anyone to tell us these things?

Anthony McWatt’s discussion of Pirsig’s philosophy in Philosophy Now:

In Zen & the Art of Motorcycle Maintenance, Pirsig first explored the history of the term ‘Quality’, or what the Ancient Greeks called arête, tracing it all the way back to Plato (428-348 BCE). He concluded that the strange position of Quality in today’s West originated with Plato’s division of the human soul into its reason and emotion aspects, in his dialogue the Phaedrus. In this dialogue, Plato gave primary place to reason over emotion. Soon afterwards Aristotle was similarly emphasizing analysis over rhetoric. And as Hugh Lawson-Tancred confirms in the Introduction to his 1991 translation of Aristotle’s Rhetoric: “There are few things that are more to be deplored in Greek culture, and notably in the legacy of Plato, than the wholly forced and unnatural division between… [the] two sister studies” of rhetoric and philosophy (p.57). Eventually this division grew into the ‘subjective versus objective’ way of thinking now largely dominant in the West. So now in the West we have objectivity, reason, logic, and dialectic on the one hand; and subjectivity, emotion, imagination, intuition, and rhetoric on the other. The former terms suggest scientific respectability, while the latter are often assumed to be artistic terms, having little place in science or rationality. It is this Platonic conception of rationality that Pirsig sought to challenge by reconciling the spiritual (for example, Zen), artistic (for example, art) and scientific (for example, motorcycle maintenance) realms within the unifying paradigm of the Metaphysics of Quality. …

Plato was perhaps a little too over-confident in how usable his theory of Forms is in practice. I wonder if it ever crossed his mind that his mentor, Socrates, might have been hinting to him and the other young philosophy students in Athens that the Good and Beauty are actually indefinable? The idea of Forms was, of course, invented by Plato, not Socrates. Unfortunately, as a consequence of Plato’s thinking that reality can be basically defined, Western philosophy is in the state that it is in today: more a handmaiden of science rather than its master. Assuming that words can capture all aspects of reality is an understandable error to make at the very beginning of the Western philosophical tradition… but having said that, it was a metaphysical error avoided by East Asian philosophy. Think about Plato’s allegory of the Cave of Ignorance and escaping from it to see the Sun of the Good, then compare it with the following quote:

“Not by its rising is there light,
Not by its sinking is there darkness
Unceasing, continuous
It cannot be defined…
The image of no-thingness…
Meet it and you do not see its face
Follow it and you do not see its back.”

Dao De Jing, Laozi, quoted in ZMM, pp. 253-54

If you think about it long enough, then you’ll see that there was no ‘Cave of Ignorance’ until Plato put Western culture inside its metaphysical darkness for 2,400 years!

Obituary by Paul Vitello in the New York Times:

One of Mr. Pirsig’s central ideas is that so-called ordinary experience and so-called transcendent experience are actually one and the same — and that Westerners only imagine them as separate realms because Plato, Aristotle and other early philosophers came to believe that they were.

But Plato and Aristotle were wrong, Mr. Pirsig said. Worse, the mind-body dualism, soldered into Western consciousness by the Greeks, fomented a kind of civil war of the mind — stripping rationality of its spiritual underpinnings and spirituality of its reason, and casting each into false conflict with the other.

Obituary by Michael Carlson in The Guardian.

Edward Snowden’s “Permanent Record”

An intriguing description of America’s intelligence community and the industry surrounding it; the slippery slopes; and Snowden’s motivation for following his conscience rather than the money. From the book, how we got here:

[After 9/11] [n]early a hundred thousand spies returned to work at the agencies with the knowledge that they’d failed at their primary job, which was protecting America. …

In retrospect, my country … could have used this rare moment of solidarity to reinforce democratic values and cultivate resilience in the now-connected global public. Instead, it went to war. The greatest regret of my life is my reflexive, unquestioning support for that decision. I was outraged, yes, but that was only the beginning of a process in which my heart completely defeated my rational judgment. I accepted all the claims retailed by the media as facts, and I repeated them as if I were being paid for it. … I embraced the truth constructed for the good of the state, which in my passion I confused with the good of the country.

And what to make of it:

Ultimately, saying that you don’t care about privacy because you have nothing to hide is no different from saying you don’t care about freedom of speech because you have nothing to say. Or that you don’t care about freedom of the press because you don’t like to read. … Just because this or that freedom might not have meaning to you today doesn’t mean that it doesn’t or won’t have meaning tomorrow, to you, or to your neighbor – or to the crowds of principled dissidents I was following on my phone who were protesting halfway across the planet, hoping to gain just a fraction of the freedom that my country was busily dismantling. …

Any elected government that relies on surveillance to maintain control of a citizenry that regards surveillance as anathema to democracy has effectively ceased to be a democracy.

Buy the book from a key contractor of the intelligence community. Reviews on Goodreads. YouTube video of the 2013 presentation by CIA CTO Gus Hunt, which Snowden discusses in the book.

Jack Kerouac’s “On the Road”

280 pages of frantic search for an end. New York, Denver, San Francisco, New Orleans, Mexico City, and the miles in between. Music, drugs, talk, sex.

Wikipedia:

Inspired by a 10,000-word rambling letter from his friend Neal Cassady, Kerouac in 1950 outlined the “Essentials of Spontaneous Prose” and decided to tell the story of his years on the road with Cassady as if writing a letter to a friend in a form that reflected the improvisational fluidity of jazz. In a letter to a student in 1961, Kerouac wrote: “Dean and I were embarked on a journey through post-Whitman America to find that America and to find the inherent goodness in American man. It was really a story about 2 Catholic buddies roaming the country in search of God. And we found him.”

Arnold Kling’s “Specialization and Trade, A Re-Introduction to Economics”

Arnold Kling (2016), Specialization and Trade, A Re-Introduction to Economics, Washington, DC, Cato Institute.

Kling’s central theme in this short book of nine main chapters is that specialization, trade, and the coordination of individual plans by means of the price system and the profit motive play fundamental roles in modern economies. Most mainstream economists would agree with this assessment. Their models of trade, growth, and innovation certainly include these four elements (specialization, trade, the price system, and the profit motive), with varying emphasis.

But Kling criticizes the methodological approach adopted by post-World War II economics, which he associates with “MIT economics.” An MIT PhD himself, he argues that economics, and specifically macroeconomics, should adopt less of a mechanistic and more of an evolutionary perspective to gain relevance. In the second chapter, entitled “Machine as Metaphor,” Kling asserts that under the leadership of Paul Samuelson, post-war (macro)economics framed economic issues as programming problems that resemble resource allocation problems in a wartime economy. Even as the discipline evolved, Kling contends, the methodology remained the same, pretending controllability by economist-engineers; in the process, the role of specialization was sidelined in the analysis.

I think that Kling is too harsh in his assessment. Economics, and macroeconomics in particular, has changed dramatically since the times of Paul Samuelson. The notion that, given enough instruments, any economic problem can be solved as easily as a system of equations has lost its appeal. Modern macroeconomic models are based on microeconomic primitives; they take gains from trade seriously; they involve expectations and frictions; and they do not suggest easy answers. The task of modern macroeconomics is not to spit out a roadmap for the economist-engineer but to understand mechanisms and identify problems that arise from misaligned incentives.

Kling is right, of course, when he argues that many theoretical models are too simplistic to be taken at face value. But this is not a critique of economic research, which must focus and abstract in order to clarify. Rather, it is a critique of professional policy advisors and forecasters, “economic experts” say. These “experts” face the difficult task of surveying the vast variety of mechanisms identified by academic research and of applying judgement when weighing their relevance for a particular real-world setting. To be useful, “experts” must not rely on a single framework and extrapolation. Instead, they must base their analysis on a wide set of frameworks to gain independent perspectives on a question of interest.

In chapters three to five, Kling discusses in more detail the interplay of myriad specialized trading partners in a market economy and how prices and the profit motive orchestrate it. In the chapter entitled “Instructions and Incentives,” Kling emphasizes that prices signal scarcity and that opportunity costs are subjective. In the chapter entitled “Choices and Commands,” he discusses how command-and-control approaches to organizing a society face information, incentive, and innovation problems, unlike approaches that rely on a functioning price mechanism. And in the chapter “Specialization and Sustainability,” Kling makes the point that well-defined property rights and a functioning price mechanism offer the best possible protection for scarce resources and a guarantee of their efficient use. Sustainability additionally requires mechanisms to secure intergenerational equity.

I agree with Kling’s point that we should be humble when assessing whether market prices, which reflect the interplay of countless actors, are “right” or “wrong.” However, I would probably be prepared more often than Kling to acknowledge market failures of the type that call for corrective taxes. The general point is that Kling’s views expressed in these three chapters seem entirely mainstream. While we may debate how often and how strongly market prices fail to account for social costs and benefits, the economics profession widely agrees that a price system functions well only if prices do reflect those costs and benefits.
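
To make the corrective-tax point concrete, here is the textbook Pigouvian condition (my illustration, not an argument from Kling’s book). Suppose production of quantity \(q\) at private cost \(c(q)\) causes external damage \(d(q)\), and a competitive firm facing price \(p\) and a per-unit tax \(t\) chooses \(q\) such that \(p = c'(q) + t\). Efficiency requires \(p = c'(q^*) + d'(q^*)\), so the corrective tax
\[
t = d'(q^*)
\]
makes the private calculation coincide with the social one. The disagreement is then about how often and how strongly \(d'(q)\) differs from zero, not about the formula.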

In the sixth chapter, entitled “Trade and Trust,” Kling argues that specialization rests on cultural evolution and learning and, more broadly, that modern economic systems require institutions that promote trust. Independently of the norms a particular society adopts, it must implement the basic social rule,

[r]eward cooperators and punish defectors.

How this is achieved (even if it is against the short-run interest of an individual) varies. Incentive mechanisms may be built on the rule of law, religion, or reputation. And as Kling points out, societies almost always rely on some form of government to implement the basic social rule. In turn, this creates problems of abuse of power as well as “deception” and “demonization.” Mainstream economists would agree. In fact, incentive and participation constraints, lack of commitment, enforcement, and self-enforcement are at center stage in many of their models of partial or general equilibrium. Similarly, the role of government, whether benevolent or representing the interests of lobby groups and elites, is a key theme in modern economics.
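
As a minimal illustration of how reputation alone can implement the basic social rule, consider the standard repeated prisoner’s dilemma argument (my sketch, not taken from Kling’s book). Let mutual cooperation pay \(R\), unilateral defection \(T > R\), and mutual punishment \(P < R\) per period, and let \(\delta\) denote the discount factor. Under a grim-trigger norm (cooperate until someone defects, then punish forever), cooperation is self-enforcing if
\[
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{T-R}{T-P}.
\]
Patient players (high \(\delta\)), harsher punishments (lower \(P\)), or institutions that reduce the gain from defection (lower \(T\)) relax this condition; this is the sense in which the rule of law, religion, and reputation can all do the same job.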

Chapter seven, entitled “Finance and Fluctuations,” deals with the role of the financial sector. Kling argues that finance is a key prerequisite for specialization and, since trust is a prerequisite for finance, swings in trust (waves of optimism and pessimism) affect the economy. No mainstream macroeconomist will object to the notion that the financial sector can amplify shocks. Seminal articles (all published well before the most recent financial crisis) make exactly that point. But Kling is probably right that the profession’s workhorse models have not yet been able to incorporate moods, fads, and manias, the reputation of intermediaries, and the confidence of their clients in satisfactory and tractable ways, in spite of recent path-breaking work on the role of heterogeneous beliefs.

In chapter eight, Kling focuses on “Policy in Practice.” He explains why identifying market failure in a model is not the same as convincingly arguing for government intervention: first, the model may be wrong, and second, there is no reason to expect government intervention to be frictionless. I don’t know any well-trained academic economist who would disagree with this assessment (but I know many “experts” who are frighteningly confident about their level of understanding). The profession is well aware of the insights from Public Choice and Political Economics, although these insights might not be as widely taught as they deserve. And Kling is right that economists could explain better why real-world policy selection and implementation can give rise to new problems, rather than focusing solely on how an ideal policy might improve outcomes.

To me, the most interesting chapters of the book are the first and the last, entitled “Filling in Frameworks” and “Macroeconomics and Misgivings,” respectively. In the first chapter, Kling discusses the difference between the natural sciences and economics. He distinguishes between scientific propositions, which a logical flaw or a contradictory experiment can falsify, and “interpretive frameworks” (a.k.a. Kuhn’s paradigms), which cannot easily be falsified. Kling argues that

[i]n natural science, there are relatively many falsifiable propositions and relatively few attractive interpretive frameworks. In the social sciences, there are relatively many attractive interpretive frameworks and relatively few falsifiable propositions.

According to Kling, economic models are interpretive frameworks, not scientific propositions, because they incorporate a plethora of auxiliary assumptions and because experiments of the type run in the natural sciences are beyond reach in the social sciences. Anomalies or puzzles do not lead economists to reject their models right away as long as the latter remain useful paradigms to work with. And rightly so, according to Kling: an interpretive framework, with all its anomalies, is less flawed than intuition uninformed by any framework. At the same time, economists should remain humble, acknowledge the risk of confirmation bias, and remain open to competing interpretive frameworks.

In the chapter entitled “Macroeconomics and Misgivings,” Kling criticizes macroeconomists’ reliance on models with a representative agent. I agree that representative agent models are irrelevant for applied questions when the model implications strongly depend on the assumption that households are literally alike, or that markets are complete such that heterogeneous agents can perfectly insure each other. When “experts” forecast macroeconomic outcomes based on models with a homogeneous household sector, these forecasts rest on heroic assumptions, as any well-trained economist will readily acknowledge. Is this a problem for macroeconomics, which, by the way, has made a lot of progress in modeling economies with heterogeneous agents and incomplete markets? I don’t think so. But it is a problem when “experts” use such inadequate models for policy advice.
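
To see why the complete-markets assumption carries so much weight here, recall the textbook full risk-sharing result (my illustration, not an argument from Kling’s book). With complete markets, identical CRRA preferences \(u(c) = c^{1-\sigma}/(1-\sigma)\), and Pareto weights \(\alpha_i\), the efficient allocation in every state \(s\) solves
\[
\max_{\{c_i(s)\}} \; \sum_i \alpha_i \frac{c_i(s)^{1-\sigma}}{1-\sigma}
\quad \text{s.t.} \quad \sum_i c_i(s) = C(s),
\]
with first-order conditions \(\alpha_i c_i(s)^{-\sigma} = \lambda(s)\). These imply
\[
c_i(s) = \theta_i\, C(s), \qquad \theta_i = \frac{\alpha_i^{1/\sigma}}{\sum_j \alpha_j^{1/\sigma}},
\]
so each household’s consumption moves one-for-one with aggregate consumption and the economy aggregates into a representative agent. Once markets are incomplete, this link breaks and the distribution of wealth and income starts to matter for aggregate outcomes.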

Kling argues that the dynamic process of creative destruction that characterizes modern economies requires ongoing change in the patterns of specialization and trade, and that this generates unemployment. Mainstream models of innovation and growth capture this process, at least partially; they explain how investment in new types of capital and “ideas” can generate growth and structural change. And the standard framework for modeling labor markets features churn and unemployment (as well as search and matching), although, admittedly, it does not contain a detailed description of the sources of churn. The difference between the mainstream’s and Kling’s views of how the macroeconomy operates thus appears to be one of degree rather than substance. And the difference between these views and existing models clearly also reflects the fact that modeling creative destruction and its consequences is difficult.
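
For concreteness, the search-and-matching framework just mentioned delivers unemployment even in steady state; the following is the standard Diamond-Mortensen-Pissarides flow condition, not a model from Kling’s book. With a matching function \(m(u,v) = A u^{\eta} v^{1-\eta}\), labor market tightness \(\theta = v/u\), job-finding rate \(f(\theta) = m(u,v)/u = A \theta^{1-\eta}\), and separation rate \(s\), equating flows into and out of unemployment, \(s(1-u) = f(\theta)\,u\), yields
\[
u = \frac{s}{s + f(\theta)}.
\]
The separation rate \(s\), i.e. the churn, is taken as given here, which is precisely the sense in which the framework does not explain the sources of churn.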

Kling is a sharp observer when he talks about the difference between “popular Keynesianism” and “rigor-seeking Keynesianism.” The former is what underlies the thinking of many policy makers, central bankers, and journalists: a blend of the aggregate-demand logic taught to undergraduates and some supply-side elements. The latter is a tractable simplification of a micro-founded dynamic general equilibrium model with frictions, whose properties resemble some key intuitions of popular Keynesianism.

The two forms of Keynesianism help support each other. Popular Keynesianism is useful for trying to convince the public that macroeconomists understand macroeconomic fluctuations and how to control them. Rigor-seeking Keynesianism is used to beat back objections raised by economists who are concerned with the ways in which Keynesianism deviates from standard economics, even though the internal obsessions of rigor-seeking Keynesianism have no traction with those making economic policy.

There is truth to this. But in my view, this critique does not undermine the academic, rigor-seeking type of Keynesianism; it should, however, undermine our trust in “experts” who work with the popular sort, which, as Kling explains, is mostly confusing to a trained economist.

In the end, Kling concludes that it is the basics that matter most:

[B]etter economic outcomes arise when patterns of sustainable specialization and trade are formed. … It requires the creative, decentralized, trial-and-error efforts of thousands of entrepreneurs and millions of households … Probably the best thing that the government can do to encourage new forms of specialization is to rethink existing policies that restrict competition, discourage innovation, and retard mobility.

This is a reasonable conclusion. But it is neither a falsifiable proposition nor an interpretive framework. It is a synthesis of many interpretive frameworks, weighted by Kling. In my own view, that weighting rests on too harsh an assessment, namely that many modern macroeconomic models are irrelevant.

Kling’s criticism of contemporary macroeconomics reads like a criticism of the kind of macroeconomics still taught at the undergraduate level. But modern macroeconomics has moved on: it is general equilibrium microeconomics. Its primary objective is not to produce the one and only model for economist-engineers or “experts” to use, but rather to help us understand mechanisms. A good expert knows many models, is informed about institutions, and has the courage to judge which of the models (or the mechanisms they identify) are the most relevant in a specific context. We don’t need a new macroeconomics. But maybe we need better “experts.”