Old Blogs

A Model’s Crisis

Friedrich von Hayek described the economist’s task as demonstrating how little we really know about what we imagine we can design.

The fifth anniversary of the start of the most severe financial crisis since the Great Depression has us reflecting on how little we were able to predict from macroeconomic models – the very models we use to capture business cycle dynamics. While we should be careful not to overturn the mainstream body of economics wholesale, the fact that these models could not predict the most important economic phenomenon in nearly a century requires us to question how they perform.

The focus of most Dynamic Stochastic General Equilibrium (DSGE) models used by central banks is to keep inflation at its target value. Before the crisis, most countries were experiencing high rates of inflation, and central banks responded aggressively by raising interest rates to keep pace. In doing so, central banks restrained economic activity in these countries by following inflation-targeting models designed to deal with high inflation. Complications arise when we consider that this framework is focused extensively on price-level distortions associated with low-to-medium inflation rates. In reality, however, the damage created by the financial crisis is not comparable to the potential costs of high inflation.
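
To make this inflation-targeting logic concrete, the sketch below implements a textbook Taylor-type policy rule of the kind that typically closes a DSGE model. It is only an illustration with standard textbook coefficients and a hypothetical 2 percent neutral real rate, not the rule of any particular central bank or model.

```python
def taylor_rule(inflation, output_gap, target_inflation=2.0,
                neutral_real_rate=2.0, phi_pi=0.5, phi_y=0.5):
    """Textbook Taylor-type rule: the policy rate rises more than
    one-for-one with inflation above target (illustrative coefficients)."""
    return (neutral_real_rate + inflation
            + phi_pi * (inflation - target_inflation)
            + phi_y * output_gap)

# Inflation running above target prompts an aggressive rate response,
# even when output is already weakening.
print(taylor_rule(inflation=4.0, output_gap=-1.0))  # -> 6.5 percent
```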

Several factors have led macroeconomists to this level of disarray. Decades ago, economists started noticing that there was no clear-cut connection between macroeconomics and microeconomics, which led to increasing attempts to connect the two fields. We saw the rise of macroeconomic models built on microeconomic foundations that incorporate assumptions of perfect competition and perfect information in the market. This is the concept behind DSGE models.

These assumptions, and above all the “representative agent” assumption, which rules out lending between agents, turned out to be critically wrong. The problem with models built on a representative agent is that they contain literally no financial sector: either the whole economy defaults or no one does. Furthermore, in these models every market participant is as creditworthy as the government, so a bank’s IOUs can be exchanged as money. There is no role for money or central banks in these models.

Not surprisingly, this buildup of debt proved to be one of the strongest explanatory variables for predicting the crisis.

The nature of the financial system links the crisis to the debate about the role of central banks and finance companies and the inadequacy of their regulatory systems. While DSGE models address cyclical dynamics, the everyday decisions of financial institutions involve liquidity management practices, namely buying and selling securities and operating in the short-term money market. And the crisis came as a product of individual banks’ liquidity shocks. Even though the connection between banks’ balance sheet shocks and monetary policy was widely recognized by economists ahead of the crisis, the regulatory implications of this insight, which might have prevented those shocks, were disregarded.

How has this changed since then? And have predictive models attempted to incorporate such aspects?

If banks can still largely fund their assets at their pleasure with borrowed money, other systemic crises will follow. In addition, dysfunctional regulatory frameworks will add fuel to the fire.

In light of such considerations, and with the 2008 crisis having proved the amplifying effects of the financial sector on business-cycle fluctuations, the G20 has taken a new approach to risk and regulation in the financial sector by endorsing the new Basel III capital and liquidity requirements. The concept of liquidity buffers was introduced, and in order to avoid a repeat of a “liquidity fortress” scenario the liquidity coverage ratio was established. Asymmetric information, in the form of financial frictions and macroprudential policy concepts, has been incorporated into new general equilibrium models.

For researchers, the critical task going forward is to identify the unrealistic assumptions behind macroeconomic models, to correct for them, and possibly to include the financial sector in all DSGE models. We also need more focused microeconomic data analysis of related macroeconomic questions.

In the wake of the current financial crisis, we have learned a lot about macroeconomic modeling, which makes us optimistic about the future of modeling in the macroeconomic research agenda. However, the learning process is still at an early stage when viewed against what will be remembered as a historic financial moment. Therefore this conversation needs to continue.

Our Hansen Moment

By Elham Saeidinezhad

The main goal of the macroeconomist is to understand the sources behind business cycles and the behavior of financial markets in the modern economy.

As in any science, economics offers many ways to accomplish its tasks. What sets economics apart, however, is that this year two models that stand in sharp contrast with each other in explaining the dynamics of financial assets, created by two high-profile economists, were selected to receive the discipline’s highest honor – the Nobel Prize in economics.

The two theories in question are Eugene Fama’s Efficient Market Hypothesis, which posits that asset prices fully reflect all of the available information in the market, and Robert Shiller’s belief that prices in financial markets are not rational because they are driven by human psychology.

Since the awards were announced in October, much has been made of the differences between Fama’s and Shiller’s ideas, and the contrast has brought into question economics’ standing as a science. But there is another implication to be drawn from this year’s Nobel Prize selections. And it comes from an understanding of the third winner of the 2013 Nobel – Lars Peter Hansen.

It is fashionable in macroeconomic research for economists either to confirm current mainstream models or to establish new models that contrast with the most popular ways of thinking. This year’s Nobel Prizes indeed represented this trend.

Put differently, the question raised by the Nobel announcement is: Are these contradictions a natural byproduct of intellectual debate? Or do they mean that we as economists have been stuck producing the same results, and that this has prevented us from developing new thinking about how financial markets perform?

In fact, we have been warned by a few macro-econometricians that there is a danger that economists have become stuck producing the same results, albeit with different numbers attached. We need to take these warnings seriously, since we, and capitalism, can no longer afford to be so confused about how financial markets really work. How much do we have to pay in the form of another crisis?

Although Fama and Shiller deserve congratulations for receiving the Nobel Prize, the confusion surrounding their award is a warning sign we must heed. The problem that has not been talked about much is that the debate is getting prolonged, and the possibility of a financial crisis arising from our misunderstanding of how markets perform is very real.

The point is we do not yet know if our financial decisions are rational or emotional, which is the critical question in understanding the macroeconomy. Both Shiller’s and Fama’s theories are incomplete in explaining the behavior of asset prices, although Shiller was able to predict a couple of crises and Fama’s model is applied in real-world finance.

But it is here that the often-overlooked Hansen comes in. Hansen has chosen a path between Fama and Shiller. In his work, he incorporates the implications of Fama’s theory into his methodology, even though he has never fully accepted the efficient market hypothesis. He adapted the EMH model when introducing his sophisticated Generalized Method of Moments (GMM) econometric technique. Hansen’s contribution is a robust way of standing on the shoulders of giants rather than throwing a theory away completely. Perhaps Hansen’s way should be used more often in creating and producing new theories. And maybe this is the key to preventing another crisis, or at least predicting it ahead of time.
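
To give a flavor of the mechanics behind Hansen’s Generalized Method of Moments, here is a deliberately simple sketch: estimating the mean and variance of a simulated return series by driving the corresponding sample moment conditions toward zero through a quadratic-form objective. It only illustrates the GMM machinery in its simplest exactly-identified form, not Hansen’s asset-pricing applications, and the data and starting values are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(loc=0.05, scale=0.2, size=5_000)  # simulated returns

def moment_conditions(theta, r):
    """Two moment conditions: E[r - mu] = 0 and E[(r - mu)^2 - sigma^2] = 0."""
    mu, sigma2 = theta
    return np.column_stack([r - mu, (r - mu) ** 2 - sigma2])

def gmm_objective(theta, r, weight):
    gbar = moment_conditions(theta, r).mean(axis=0)  # average sample moments
    return gbar @ weight @ gbar                      # quadratic form in the moments

weight = np.eye(2)  # identity weighting matrix (first-step GMM)
result = minimize(gmm_objective, x0=np.array([0.0, 1.0]),
                  args=(returns, weight), method="Nelder-Mead")
print(result.x)  # approximately [0.05, 0.04]: the sample mean and variance
```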

Although the economics community has focused on Shiller and Fama because their approaches and theories are in sharp contrast with each other, it is Hansen’s methodology that contains elements of both schools of thought. There is no doubt that we have learned a great deal from Fama and Shiller and their very contrasting and unique points of view. But there is more to learn from Hansen’s approach of creating new thinking in economics by coming up with something more comprehensive that does not belong to a specific school of thought and does not specifically aim to reject the ideas of other schools.

As economists, we can learn from Hansen’s vision in producing new methodologies and models. Creating new economic thinking can be done by incorporating factors from different schools of economics. The point is that the interaction between Fama’s and Shiller’s theories could be more enlightening than looking at each of them in isolation. Hansen did this and received the Nobel Prize, even though he did not attract nearly as much attention as his better-known co-winners.

But being right is not a popularity contest.

The financial market is the heart of the economy. Not being confident about how to interpret asset price behavior may be seriously harmful and could consequently lead to another crisis. Even though it is important to be careful with multidisciplinary approaches, economists also must pay close attention to the implications of the economic models being introduced by their colleagues in academia who belong to other schools of thought. This is our only way forward.

The time for absolutism has passed. Yet when it comes to economic theories, we too often rule out past methods of thinking from another school of thought in order to put forward a completely new set of ideas. The fact is that there can be correct aspects even in ideas that are doomed to fail.

That’s why economists should pay close attention to Hansen’s work. Now is the time for his way of thinking.

Modeling a World of Imperfect Knowledge

By Elham Saeidinezhad

Does it matter if the Rational Expectations Hypothesis is unrealistic?

Not according to New York University Prof. Roman Frydman, head of the Institute for New Economic Thinking’s research group on Imperfect Knowledge Economics (IKE).

Speaking at a seminar organized by the Institute and Columbia University and chaired by Columbia Prof. Joseph Stiglitz, who is a member of the Institute’s advisory board, Frydman stated that the Rational Expectations Hypothesis, or REH, is simply an abstraction of reality, and that it is therefore misleading to criticize it for abstracting from many significant aspects of the real-world economy. To Frydman, the more pressing questions raised by REH are: What kind of world is it an abstraction of? And can this explain why the models fail so dramatically?

Frydman’s point is that the kind of world that REH models abstract from does not actually exist. An REH model assumes that people need to make decisions about how the economy works, and the problem they face is how to forecast when there are so many ways to do so. The relevant economic model, according to John Muth (1961), could serve this purpose. But the current version of REH modeling is empirically non-testable, which creates significant controversy.

Having said that, there also is no proof that other versions of REH models can work. Although we do not know whether the model works empirically, we can determine in which “domain” the model works. This means we have to investigate the following question: How does change proceed in REH models over time? As Popper (1957) argues, “If there is such a thing as ‘growing human knowledge,’ we cannot anticipate today what we shall know tomorrow.”

Frydman is focusing his work on understanding “imperfect knowledge” to find a solution to the REH’s irrelevance in representing rational decision-making in the real-world economy. Frydman and his co-author, Michael D. Goldberg, also argue that they can test this model empirically by formalizing market understanding. Rational forecasting, therefore, needs to be related to the market’s understanding of the price process. Market understanding is, in turn, a function of a set of causal factors. One could therefore formalize the market’s understanding of price changes in order to test the model empirically.

The domain that one could choose for the models is macroeconomics or the financial market. By choosing a specific domain we are actually making assumptions about the different processes of change. Mathematically speaking, choosing the domain keeps the model open to unobserved change.

One characteristic of the models widely used in the current economic literature is that they are closed: it is assumed that the structure of the model does not change over time and that the model is predetermined. In the REH model, therefore, it is assumed that human knowledge does not change. This is deeply contradictory, because REH models were invented precisely to represent the growth of human knowledge. Put differently, economic agents in the REH model assume that there will be no unexpected change. If the market does not assume the same thing, the agent is considered incoherent.

Behavioral economists also tend to forget about this fundamental contradiction in the REH model, since REH plays a crucial role in their models as well. In their view, the REH model adequately represents rationality, but market participants are irrational. They therefore ignore the fact that knowledge does not grow in these models, claiming instead that market participants actually behave according to determinate models.

In his critique of this approach, Frydman creates new economic thinking by arguing that the key to incorporating the insights of REH models and behavioral economics is to jettison the determinate models. By eliminating determinacy, Frydman puts forward a model that, even though it displays some regularity, is neither completely open nor completely determinate. The Contingent Expectations Hypothesis, or CEH, introduced in the latest paper by Frydman and Goldberg, sits somewhere “in between.”

Ultimately, Frydman argues that in order to push the boundaries of the economic literature there is no need to throw out everything we have learned. Instead, we just need to discard the currently popular assumptions that we know how markets work and that we have the same information as the market does.

Of course, behind this methodological shift from the REH model to the CEH model lies the abandonment of sharp predictions of all future outcomes and their probabilities. It is therefore important to understand that the ultimate ambition of the REH model is beyond the reach of economic analysis.

The balanced approach, as identified by Frydman, is that markets are essential to the modern economy, but they have a tendency to create extreme results, such as extreme booms or busts. It takes new economic thinking like Frydman’s CEH model to break through and change current standards.

The Exchange Rate as a Monetary Phenomenon

By Elham Saeidinezhad and Jay Pocklington

What exactly is an exchange rate?

We all know that one of the main challenges in the study of international finance is the setting of exchange rates. But Barnard College Prof. Perry Mehrling, who also is a member of the Institute for New Economic Thinking’s Curriculum Committee, attempts to answer this more basic question by putting forward a new way of thinking about this “thing” called an exchange rate [Essential hybridity: A money view of FX].

For Mehrling, the exchange rate is determined in the moment when one state confronts another state and two financial systems meet. Thus, the foreign exchange market has a hybrid aspect that is overlooked by the standard academic perspective, where economics and finance are seen as distinct from political science and the legal system.

But what Mehrling calls “the money view” embraces both market and state aspects of the exchange rate. For Mehrling, the key to bridging this divide is “to conceptualize the exchange rate as the price of one money in terms of another money; [with] money itself being essentially a hybrid entity part market and part state.”

This hybrid nature becomes clear in the distinction between “state” money and “private” money. Currency, or state money, is considered legal tender and is issued by the state. Meanwhile, private profit-seeking entities create private money by issuing payment obligations in one or another currency.

The important element of hybridity in practice is that, inside a particular currency area, par exchange between state money and private money maintains a quantitative equivalence between these two qualitatively different kinds of money. Central banks are the force that keeps the rate between state money and private money at par. Seen through Mehrling’s lens, exchange rates likewise maintain quantitative equivalence between what are in fact disparate kinds of money.

To understand the practice of setting exchange rates, let’s look at two extreme scenarios.

At one extreme, a central bank, acting on behalf of a government, sets the exchange rate as a matter of policy. At the other, a central bank lets the exchange rate be completely determined by profit-seeking private dealers. Today’s exchange rate system is a hybrid sitting somewhere between these two extremes: private dealers are responsible for most of the day-to-day trading, whereas central banks play the role of “dealer of last resort” in the forex market.

The key to comprehending how the exchange rate is determined is in the relationship between the profit-seeking forex dealer and the central bank. In this context, central banks are viewed as a special kind of forex dealer that pursues stability rather than profit.

The conventional economics view of the forex market is to regard the exchange rate as the relative price of tradable goods, while finance understands the exchange rate as the relative price of tradable assets. Neither sees the market as a trade in money. Rather, exchange is understood as a kind of barter that abstracts from money.

By contrast, what Mehrling labels “the money view” takes seriously the “reserves constraint” on end-of-day clearing in a multilateral payments system. Mehrling applies Minsky’s survival constraint to the international exchange of money by examining the means through which deficit countries must settle with surplus countries.

Nowadays, deficit countries have to acquire dollars in the global forex market to settle their debts. The key point in the “money view” is to understand how the requirement to obtain dollars shapes the behavior of the deficit country. Different types of dealers must participate in the market to provide the deficit country with enough dollars to meet its obligations.

In Mehrling’s view, the international monetary system constitutes a hierarchy, with the dollar at the top and other currencies below it. The public dollar functions as money, and all non-dollar currencies can be regarded as a kind of credit with implicit promises to pay dollars. Integrating major and minor currencies into the picture, “the money view” presents a relationship in which minor currencies are promises to pay major currencies, and major currencies are promises to pay dollars, the ultimate world reserve currency.

From this perspective, the starting point for analysis is always Minsky’s survival constraint, which highlights the need for all financial entities to settle payment each day. Central banks that handle overnight interest rates come into the picture as foreign exchange dealers of last resort.

It is worth emphasizing, though, that the central bank of a deficit country is not in an equivalent position to the central bank of a surplus country. The deficit country faces the survival constraint at clearing and therefore is obliged to intervene in the supply of its own currency once the private dealer is no longer willing to participate in the market. In this way, it is actually acting as speculative dealer of last resort. The critical difference between the central bank and the private dealer, of course, is that the central bank is not a profit-seeking entity. Thus, it can take positions to improve the liquidity of its currency even at a “loss.”

In the end, Mehrling’s point of view provides a framework to see exchange rates as what they actually are: a market in money that has to be viewed as a hybrid between private and state actors. His paper is important because it provides an opening to incorporate the actual mechanics of foreign exchange and the interests of public and private entities in this market. This is beyond an issue of finance. Rather, it is a legal and political phenomenon.

A Fight Over Inequality: The 5% Vs. The Rest

By Elham Saeidinezhad

In late 2007, the United States started feeling the effects of the Great Recession. And over the ensuing two years the economic disaster spread across the globe.

Precisely when and if the recession ended remains open for debate. Recently, the National Bureau of Economic Research announced that the economic recovery began in June 2009. However, the resumption of growth has been surprisingly slow, to the point where few people can feel the recovery happening regardless of what the statistics say.

One of the key characteristics of the Great Recession was its relationship to consumer borrowing and spending. Indeed, the crisis followed a fall in consumer saving and an increase in spending fueled by rising individual debt levels. The recession was also preceded by a sharp increase in the income gap between those at the top of the income distribution and the rest.

None of this is a coincidence, according to Steven Fazzari of Washington University in St. Louis, who is studying the issue under a research grant from the Institute for New Economic Thinking. In fact, a paper written by Fazzari and Barry Cynamon, a visiting scholar at the Federal Reserve Bank of St. Louis, digs into this issue by studying the relationship between household spending, consumer debt, and rising income inequality, going as far back as the 1980s.

Fazzari and Cynamon studied new data that break down income and other important macroeconomic measures between the bottom 95% and top 5% of the income distribution. Macroeconomists, starting with Michal Kalecki, have challenged the viability of growing income inequality, arguing that high-income households, whose income largely comes from profits, typically save more and spend smaller percentages of their income than wage earners. This being the case, a further increase in inequality should lead to a decrease in demand, followed by higher unemployment or even secular stagnation. Two 2013 studies, one by DeBacker et al. and the other by Alvaredo et al., both show that income inequality rose dramatically over the 20 years from 1987 to 2006, reflecting a permanent shift of income across households rather than merely a change in transitory shocks.

However, despite experiencing a significant shift in income distribution that theory suggests might cause a fall in demand, the U.S. actually performed reasonably well in the decades leading up to the Great Recession.

The other surprising aspect of this era, considering the substantial change in income distribution, is that personal consumption expenditure (PCE) was both the largest and fastest-growing part of gross domestic product. This trend, along with the substantial increase in inequality, demonstrates a paradox that is a core feature of Fazzari’s paper, specifically that, “Rising inequality should theoretically reduce the consumption-income ratio if affluent households spend a smaller part of their growing share of aggregate demand. But the period of rising inequality, starting roughly in the early 1980s, corresponds almost exactly with a historic increase in American household spending relative to income.”

How can this be true?

To answer this question, the authors carefully examined how rising income inequality influences income growth rates and how this, in turn, changes household balance sheets. In their conceptual framework, Fazzari and Cynamon show that stagnating income growth for any group of households need not lead to an immediate drag on its spending. However, the decision to maintain consumption growth at a rate higher than declining income growth eventually reduces saving and increases the vulnerability of wage earners’ balance sheets. More importantly, net worth cannot decrease forever and debt levels cannot increase indefinitely relative to income. So eventually rising debt levels force households with lower income growth to reduce consumption to meet their intertemporal budget constraints.
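
A stylized back-of-the-envelope simulation helps show why such a path cannot last. The parameter values below (1 percent income growth, 3 percent consumption growth, a 5 percent borrowing rate) are purely illustrative assumptions, not figures from Fazzari and Cynamon’s paper; the point is simply that a household whose consumption grows faster than its income sees its debt-to-income ratio rise without bound until spending adjusts.

```python
# Illustrative debt dynamics for a household whose consumption growth
# outpaces its income growth (all parameter values are hypothetical).
income, consumption, debt = 100.0, 95.0, 20.0
income_growth, consumption_growth, interest_rate = 0.01, 0.03, 0.05

for year in range(1, 31):
    debt = debt * (1 + interest_rate) + (consumption - income)  # borrow any shortfall
    income *= 1 + income_growth
    consumption *= 1 + consumption_growth
    if year % 10 == 0:
        print(f"year {year}: debt-to-income ratio = {debt / income:.2f}")
```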

Fazzari and Cynamon pushed the discussion even further by studying how changes in income growth, spending, and balance sheets differ between the bottom 95% and top 5% of the U.S. population. They compared the income growth of the two groups and found that inequality widened mostly because of a significant drop in real income growth for wage earners. This combination of slow income growth for the bottom 95% and rapid aggregate consumption growth indicates that rising leverage was likely a significant factor for households in this group.

Furthermore, the researchers considered the extent to which the rise in the debt-to-income ratio is influenced by the change in assets.

Between 1989 and 2000, there is no evidence of a rise in the change in assets that would offset the rapid increase in the debt-to-income ratio of the bottom 95%. This suggests that faster asset accumulation was not the reason for rising balance sheet fragility in this group. From 2000 to 2007, however, the story is different. Suddenly, the change in the ratio of assets to disposable income for the bottom 95% rose by an average of about four percentage points relative to the 1990s. This is most likely due to the ramping up of house construction and renovation after 2000.

The researchers also show that, perhaps not surprisingly, the wage-earning group consumed a considerably larger share of disposable income prior to 2008. Before the Great Recession, the average consumption rate for the bottom 95% was eight percentage points higher than the consumption rate of the top 5%. Following the Great Recession, however, the U.S. population started seeing large changes in its consumption-income ratios.

Fazzari and Cynamon interpreted these observations to mean that prior to 2008 the spending trend of the bottom 95% was unsustainable. It is worthwhile to note that the fall in the consumption rate of the bottom 95% occurred simultaneously with the stall in debt-to-income growth.

On the other hand, the consumption rate of the profit-earning group behaved very differently. In line with the consumption-smoothing hypothesis, the rate for the top 5% is relatively volatile. From 1994 to 2000, when the real income growth of this group accelerated, their consumption rate decreased. This exact pattern repeated itself in the 2001 recession and the subsequent swift recovery of the top 5%’s income during the mid-2000s.

Fazzari and Cynamon argue that this contrast between the spending patterns of the top 5% and bottom 95% is striking, especially during the Great Recession. The researchers note that: “The collapse of the 95% spending rate, consistent with a forced end to this group’s balance sheet expansion, is the exact opposite of the significant consumption smoothing evident for the top 5%, a group that did not appear to have balance sheet fragility problems on the eve of the Great Recession.”

This contrasting impact was so significant that from 2009 to 2011 the top 5% spent a bigger share of their disposable income than the bottom 95%.

This change in consumption patterns has significant macroeconomic implications. The evidence implies that wage earners reacted to slower income growth in large part by continuing to consume at the expense of saving. In a sense, this temporarily saved the U.S. economy from the fall in demand that many theories suggest would result from increasing inequality.

However, the research by Fazzari and Cynamon also shows that the worsening balance sheets of the bottom 95% eventually led to the Great Recession. This being the case, it is clear that the problem of inequality goes far beyond questions of social justice. In fact, it is a significant reason for the current economic stagnation that the world is experiencing.

Short-sighted regulators may create more risk than they prevent 

By Elham Saeidinezhad

Regulators’ fragmented approach to mitigating risk since the banking crisis can prevent them from seeing the adverse consequences a new rule may have on the financial system as a whole.

This regulatory myopia may explain the Financial Stability Board’s recommendation that central counterparties (CCPs) — the intermediaries between buyers and sellers in derivatives trading — expand their liquidity agreements with financial institutions, most of them banks.

The problem is that larger commitments to CCPs may strain the ability of banks to service other parts of the market (e.g., make markets and lend) and perform other functions during a widespread disruption. 

The recommendation, made in July to G-20 leaders, highlights the strategic role regulators assigned to CCPs following the crisis. The role of the clearinghouses is to reduce risk by guaranteeing both sides of a transaction against losses arising from a default. They settle about 75 percent of all swaps, compared with 15 percent before 2007. 

By requiring buyers and sellers to use CCPs, regulators intended to create a buffer to minimize the risk that a default might trigger a domino-like reaction among financial institutions. But as the role of CCPs grew, so did the concern that the clearinghouses themselves had become so indispensable that the collapse of one could also endanger the system.

Although the FSB’s recommendation attempts to reduce that risk, it does not consider all the liquidity demands banks are required to meet. For example, banks are themselves the primary members of CCPs. Each member must help shoulder the losses of others by contributing to clearinghouse default funds.

The liquidity agreements that the FSB wants to expand are separate contracts for banks to supply additional liquidity in a timely manner if CCPs face significant shortages. But in a crisis, banks must also provide liquidity to the rest of the financial system.

For instance, some CCP customers, such as pension funds, borrow from banks to raise collateral, or “margin,” collected by the CCPs to reduce exposure to market turmoil.

The liquidity provided by banks is a key factor in determining the default risks of a CCP customer. A recent Milken Institute white paper argues that even with the higher capital and liquidity requirements imposed by Dodd-Frank, these competing obligations may place too much pressure on banks at the worst time. CCP losses often occur during — and can be induced by — widespread illiquidity.

The FSB’s apparent failure to consider the full range of banks’ obligations points to a shortcoming in the current regulatory framework. Macroprudential policy aims to mitigate risk across the entire financial system.

Most of the macroprudential assessments for CCPs are, in effect, microprudential tools that regulators apply to a single segment of the financial ecosystem. We need a broader evaluation of how a regulation designed for one part of the system affects the rest.

The FSB is right that CCPs need greater resilience, but before accepting the board’s recommendation, G-20 regulators should look carefully at the big picture. Pressured to act quickly to shore up the CCPs’ ability to withstand a shock, the FSB made a politically expedient yet risky decision. 

Addressing the legacy of our partial, rather than systemic, post-crisis approach to regulation is especially critical now. The Department of the Treasury is examining the expanded role of CCPs and the complex interdependencies that link them to other financial institutions.

In the short term, the Treasury’s report, expected as early as this month, should provide clarity on CCPs’ liquidity needs. The more important, longer-term lesson policymakers should learn from the analysis is that effective financial regulation requires a broad, comprehensive view of markets and institutions.

Without it, the goal of making financial institutions safer and stronger, and the economy more resilient, will remain elusive.

This article originally appeared in The Hill on September 15, 2017.

Central Counterparties Help, But Do Not Assure Financial Stability 

By Elham Saeidinezhad

Central counterparties (CCPs) play a pivotal role in the post-crisis reforms of derivative markets. By stepping into the middle of trades, a CCP becomes “the buyer to every seller and seller to every buyer,” thereby reducing bilateral counterparty risk in the financial market. Policymakers recognized the growing importance of CCPs and introduced regulations to ensure CCP resilience to financial shocks. In a recent note, we highlight some commonly held misconceptions and overly-complacent conclusions about CCPs’ ability to stabilize financial markets, especially in the presence of systemic shocks. 

Since the reform, CCPs have become an indispensable part of the infrastructure for derivative trading. Approximately 75 percent of swaps are now cleared through clearinghouses, compared with just 15 percent before the 2007-2009 financial crisis. Such an increase in the concentration of trading exposure led regulators and market participants to worry about the resilience of CCPs to systemic shocks. The first regulatory reaction was to designate the largest CCPs as systemically important financial market utilities (SIFMUs). This was followed by stress tests designed to evaluate their robustness and identify vulnerabilities. Furthermore, a default waterfall, a cascade of risk-mutualization backstops, is being developed to minimize the risk and the impact of a CCP member’s failure.
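
As a rough illustration of how such a waterfall absorbs losses, the sketch below walks a hypothetical default loss through the conventional layers: the defaulter’s initial margin, the defaulter’s default fund contribution, the CCP’s own “skin in the game,” and finally the mutualized default fund of surviving members. The layer sizes and the loss figure are stylized assumptions, not the rulebook of any actual CCP.

```python
# Stylized CCP default waterfall (layer names and sizes are hypothetical).
waterfall = [
    ("defaulter's initial margin",            50.0),
    ("defaulter's default fund contribution", 20.0),
    ("CCP skin-in-the-game capital",          10.0),
    ("survivors' mutualized default fund",    80.0),
]

def allocate_loss(loss, layers):
    """Consume each layer in order until the loss is fully absorbed."""
    for name, size in layers:
        absorbed = min(loss, size)
        loss -= absorbed
        print(f"{name}: absorbs {absorbed:.1f}")
        if loss == 0:
            break
    if loss > 0:
        print(f"uncovered loss, further recovery tools needed: {loss:.1f}")

allocate_loss(95.0, waterfall)  # a loss big enough to dip into the mutualized fund
```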

Despite their critical role in ensuring trades, regulators may become over-reliant on CCPs as a bulwark to safeguard the financial system. The current framework has been successful in reducing bilateral counterparty risk and securing CCPs’ ability to clear securities trading. Yet, still missing is a full assessment of the consequences of CCP operations on the functioning of other segments of the financial ecosystem, apart from their impact on derivatives trading.

Strengthening CCPs is a necessary but hardly sufficient condition for ensuring financial system stability. For instance, as CCP members become more interconnected among themselves and with other parts of the financial system, policymakers should evaluate the potential for CCP margin requirements to be pro-cyclical. Policies that impose added responsibilities on CCPs may tax their ability to raise additional capital or liquidity during stressed market conditions. It is vital that, in implementing new policies, assessments include how changes in CCP and market behavior affect third parties. Indeed, the new policies may induce undesirable and destabilizing system-wide behaviors.

Despite the potential for CCPs to ensure that derivative markets function smoothly, vigilant public-sector oversight of other systemically important aspects of CCPs may still be needed to ensure the continuity of trading activities. At a minimum, regulators and supervisors need to monitor market liquidity conditions, ensure effective price discovery, and stand ready to support illiquid financial intermediaries if CCPs and markets threaten to seize up.

Every Trade Counts: Dark Pools as Alternative Infrastructures 

By Elham Saeidinezhad

The growing presence of high-frequency traders in Alternative Trading Systems (ATSs), or dark pools, raises concern about market fragmentation causing pockets of illiquidity and hindering price discovery. In response to these concerns, international regulators have started reviewing existing regulations and their impact on dark pools. While the U.S. Securities and Exchange Commission (SEC) has only begun its review, European and Canadian regulators are ready to implement new regulations. In our judgement, the simple market-based Canadian approach is a better template for future U.S. regulatory reform than the more arbitrary and bureaucratic European approach.

Alternative Trading Systems, also known as “dark pools,” are trading venues where transactions take place and are settled outside of public exchanges and trading platforms. Access to many dark pools is controlled by their owners, often large investment banks (acting as market makers) such as Goldman Sachs, Credit Suisse, Barclays, and asset managers including Fidelity, Vanguard and UBS.

One catalyst for their rise is the need for institutional investors to lower the transaction costs associated with trading large blocks of securities while minimizing the impact on market prices. For market makers and large asset managers, there is often sufficient “in-house” turnover among a firm’s diversified portfolio of funds that changes in the supply and demand for certain securities can be satisfied internally, without having to go out to the public exchanges. Such transactions are considered “dark” because the underlying bid and ask prices are not recorded on the public exchange, since the buying and selling are confined to the firm’s internal marketplace.

Dark pools are attractive for portfolio managers because they lower costs by allowing large blocks to be traded internally, often by resizing them according to the needs of various in-house portfolio managers. By comparison, a large block of securities either offered or bid in the public exchanges is transacted on an all-or-nothing basis and may arouse undesirable speculation about potential changes in the strategic outlook of the firm’s portfolio manager. Dark pool transactions prevent such speculation and the risk of contagion.

Critics of dark pool transactions argue that the lack of transparency may hinder efficient price discovery and limit market liquidity[1]. The fragmentation between stock exchanges and off-exchange platforms raises concern that the unrecorded trades may prevent public exchange prices from accurately reflecting underlying supply and demand conditions for all securities, and possibly disadvantage investors without access to the dark pools (e.g., small retail investors). Isolated dark pools may also amplify the buildup of excess or deficient liquidity in the public exchanges.

Furthermore, the evidence suggests that pricing in dark pools is being disrupted by high-frequency trading firms[2]. Their presence in these alternative trading venues (often front-running large block trades) has the potential to undermine the usefulness of dark pools for institutional investors. Front-running makes it more expensive for fund managers to place large block trades.

In response to these concerns, European regulators have already started the process of replacing regulations on markets in financial instruments (Mifid I) with a more restrictive approach, known as Mifid II[3]. Its aim is to limit off-exchange trading and restrict the presence of high-frequency trading.

The purpose of Mifid II is to restore market integrity by having administrators simultaneously regulate activities, types of financial instruments, and trading venues in securities markets[4]. In the meantime, it provides various exemptions, known as “special rulings,” that are based entirely on regulators’ assessment of the market. The two key exemptions from Mifid II allow regulators to remove the volume caps based on the type of security, or on the size of the trade when it is “considerably” large. These special rulings seem to favor non-HFT activities. Such special rulings, however, may create legal loopholes and result in an even more fragmented market with less reliable liquidity.

By contrast, Canadian regulators adopted a more minimalistic approach. Their aim is to achieve market integrity by providing the right incentives and targeting a single mandate for all trades: trades are allowed if they contribute to “price improvement.” The price improvement criterion means that regulators allow trades to be routed to the dark pool if the dark pool provides a “better price” for the investor initiating the trade than is available on an exchange. The location of the transaction will be determined by such a market mechanism.
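
A minimal sketch of such a routing test might look like the following: an order may go to the dark pool only if the pool’s quote improves on the best price available on the lit exchange for the side initiating the trade. The tick-size threshold and quote values are hypothetical, and the actual Canadian rules also carve out large block trades, which this sketch ignores.

```python
def route_order(side, dark_price, exchange_bid, exchange_ask, min_improvement=0.01):
    """Send an order to the dark pool only if it gives the initiating
    investor a meaningfully better price than the lit exchange (illustrative rule)."""
    if side == "buy":
        improves = dark_price <= exchange_ask - min_improvement
    else:  # sell
        improves = dark_price >= exchange_bid + min_improvement
    return "dark pool" if improves else "exchange"

# A buyer facing a 10.00 / 10.04 lit market: a 10.02 dark quote qualifies.
print(route_order("buy", dark_price=10.02, exchange_bid=10.00, exchange_ask=10.04))
```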

It is unlikely that fragmentation or the questions of price discovery will disappear. However, among the options under consideration, we believe a simpler, less restrictive, and more market-based tactic seems to be the better choice for the U.S. This means walking away from the complex European strategy and moving toward the simpler and more pragmatic Canadian approach.


[1] Market liquidity is the ability to buy and sell whale-sized trades without changing the prices.

[2] A lawsuit brought by the New York State attorney general in 2014, as well as SEC charges in 2016, heightened regulators’ concern.

[3] Both MiFID II and MiFIR entered into force on 2 July 2014 and must generally apply within Member States by 3 January 2018.

[4] One year before its full implementation, the European Securities and Markets Authority (Esma) has started to collect data on 15 million financial instruments from more than 300 trading venues.

Orderly Resolution: Dodd-Frank’s Title II vs. Chapter 14

By Elham Saeidinezhad

Bailing out big institutions during the financial crisis was unpopular from the beginning. It was done in part because the bankruptcy code provision for the resolution of large, troubled institutions was widely considered inadequate to preserve the nation’s financial stability.1 Congress approved Title II of Dodd-Frank in 2010 to provide better safeguards by enhancing the FDIC’s authority and creating the Orderly Liquidation Fund. However, the changes remain unpopular in the financial world.2 Title II opponents in Congress now propose amending the bankruptcy code to include a new Chapter 14 to create special provisions for the bankruptcy of large financial firms.3

This article compares the proposed Chapter 14 and Title II on four core issues:

  • Covered Companies: Title II qualifies any financial company for FDIC receivership regardless of the company’s designation and supervision status. The main criterion is whether the secretary of the treasury concludes that the company’s failure would undermine financial stability.4 Under Title II, existing bankruptcy code provisions would continue to be used for other companies. With Chapter 14, only financial companies and subsidiaries that hold more than $100 billion in consolidated assets would qualify for the chapter’s special resolution procedure.
  • Regulators, Judges and Financial Stability: Title II and Chapter 14 require different roles for bankruptcy judges. Under Title II, a judge becomes involved only when the Treasury Department files a petition to appoint the FDIC as the receiver. The FDIC is then responsible for the resolution process. Under Chapter 14, the bankruptcy judge would oversee the case. However, Chapter 14 would prohibit the judge from making decisions based on either a need to reduce systemic risk or minimize moral hazard. Although Chapter 14 would permit the Federal Reserve to be heard on matters of financial stability, regulators’ roles would be limited and subject to the judge’s discretion.
  • Reorganization: Is it Necessary? Title II requires the FDIC to liquidate the failing company to mitigate risk and remove moral hazard, but prohibits the agency from reorganizing the insolvent firm for the benefit of the creditors. Furthermore, it mandates that decisions by the Fed and FDIC to liquidate a firm’s assets and to transfer them to a bridge institution should be based solely on financial stability concerns. In contrast, reorganization would be the main feature of resolution under Chapter 14. Furthermore, the receiver would openly seek to maximize the business’s value for the benefit of creditors.5
  • Potential for Bailouts: The Dodd-Frank Act forbids the Fed from bailing out insolvent companies. However, when Title II is triggered, the FDIC can finance the resolution of the covered company through the Orderly Liquidation Fund. Under Chapter 14, the practice of debtor-in-possession financing (DIP loans) would allow the main regulator to finance the claims of “critical” creditors at the beginning of the case.6

To conclude, there is a philosophical difference between Title II and Chapter 14 on how to best achieve the “orderly resolution” of big financial companies. Title II explicitly has the financial stability of the U.S. as its primary goal,7 and to accomplish that, it provides a mechanism for an orderly liquidation of financial companies. Title II’s secondary goal is to ensure that creditors and other counterparties bear the proportionate risks of their investments. In contrast, Chapter 14 maintains as its focal point the Bankruptcy Code’s goal of reorganizing to protect creditors. Its objective is to provide a charter for an orderly bankruptcy of large firms to cover creditors’ claims according to their contractual priorities.

Key Features of Chapter 11, Chapter 14 and Title II of the Act
| | Chapter 11 | Chapter 14 | Title II of the Act |
| --- | --- | --- | --- |
| Focus | Creditors | Taxpayers | Financial stability |
| Covered companies | Large corporations and individuals | Financial firms with more than $100 billion in assets | Bank holding companies, financial companies and non-bank SIFIs |
| Who triggers the process | Debtor and/or creditors | U.S. Treasury secretary and/or creditors | U.S. Treasury secretary |
| Resolution (liquidation) | Orderly liquidation of the debtor’s assets to ensure creditors’ claims are covered by priority | Orderly liquidation of the debtor’s assets to ensure creditors’ claims are covered by priority | Orderly liquidation of the debtor’s assets to ensure financial stability |
| Reorganization | Yes | Yes | No |
| Bailout | N/A | Regulators can use U.S. Treasury funds at the very early stages to pay critical creditors | FDIC can use U.S. Treasury funds at the very beginning through the Orderly Liquidation Fund |

1 Chapter 11 is the chapter in the bankruptcy code employed by large companies to reorganize their debts and continue their business.

2 The Orderly Liquidation Fund is a segregated fund at the U.S. Department of the Treasury to finance the resolution of covered financial companies by the FDIC. It has been characterized by its critics as paving the way for future bailouts.

3 The Taxpayer Protection and Responsible Resolution Act of 2014 is commonly known as “Chapter 14.” The “Chapter 14” proposal was originally put forth by the Hoover Institution.

4 The main exceptions are government-sponsored entities (most notably Fannie Mae and Freddie Mac), broker-dealers and insurance companies.

5 The court could appoint the FDIC as a trustee with an authority to reorganize or liquidate the failing company.

6 DIP loans are a special form of loan provided to distressed companies, especially during restructuring under corporate bankruptcy law.

7 It also aims to reduce moral hazard and end the use of U.S. Treasury resources upon the failure of a systemically important financial company. 

Asset Managers and Collateral: Control of the Center

By Elham Saeidinezhad

*Control of the center is a chess strategy. It is considered significant because tactical battles often happen around the central squares, from which pieces can access most of the board.

In post-crisis lending markets, asset managers have played an increasingly important role in providing collateral. Their importance to the system makes the Financial Stability Board’s (FSB) long-awaited policy recommendations especially significant.

Recommendations for asset management address four structural vulnerabilities that could upset financial stability: liquidity risks, leverage, operational risks and securities lending by managers and funds. While the board focuses primarily on liquidity and leverage risks, the section on securities lending merits attention.1 The recommendations are limited to a need to monitor asset managers’ lending activities, but there is a risk that they will lead regulators to a mindset of “when in doubt, prohibit.” If this happens, markets will suffer.

Securities lending enhances liquidity, risk management, and price discovery for the traded securities. Mutual funds and pension plans, for example, accounted for 66 percent of the reported €14 trillion of securities that institutional investors made available for lending in the global pool last year, according to the International Securities Lending Association (ISLA).2

Asset managers usually participate in this market as beneficial owners and the actual lending is done through an “agent” such as a bank. This means that asset managers merely give the agent guidelines on behalf of their clients, which include mutual funds and exchange traded funds (ETF).

The FSB warns of a very limited yet growing number of managers who act as agents themselves. They inject collateral into the system by lending securities directly and providing borrower or counterparty indemnifications, an insurance-like promise to clients to insure them against potential defaults. These commitments, the FSB warns, could potentially contribute to instability. The board urges more scrutiny and makes broad recommendations on appropriate policy responses to address this potential systemic risk.

However, the role of asset managers as agents — and providers of capital — is crucial because it enables participants to get funding from the wholesale money market.3 To draw an analogy from chess, they represent the critical center squares that provide access to the rest of the board. In repurchase agreements (repos), for instance, collateral is necessary for market liquidity. Furthermore, regulations such as the Liquidity Coverage Ratio (LCR) have created a need in the market for high-quality liquid assets (HQLA) that did not exist before. Banks hunt for high-quality collateral to comply with LCR requirements and to reduce capital charges for equity securities.
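
For readers unfamiliar with the mechanics, the Liquidity Coverage Ratio is simply the stock of high-quality liquid assets divided by projected net cash outflows over a 30-day stress period, and it must stay at or above 100 percent. The balance sheet figures below are invented purely to show the arithmetic, including Basel III’s cap on counting inflows at 75 percent of outflows.

```python
# Illustrative LCR check (all figures hypothetical, in billions of dollars).
hqla = 120.0           # stock of high-quality liquid assets
outflows_30d = 150.0   # projected cash outflows over a 30-day stress period
inflows_30d = 40.0     # projected inflows, counted only up to 75% of outflows

net_outflows = outflows_30d - min(inflows_30d, 0.75 * outflows_30d)
lcr = hqla / net_outflows
print(f"LCR = {lcr:.0%}")  # 109% here, above the 100% minimum
```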

It is important to emphasize the vital, constructive contribution of securities lending as regulators begin to consider policy responses to their potential to create systemic risk. There is a tendency among prudential regulators to prohibit certain activities in order to enhance the resilience of financial entities. The best-known example in the U.S. is the Volcker Rule, which bars banking entities from proprietary trading as well as from retaining any ownership or sponsorship of hedge funds and private equity funds. The UK’s “ring-fencing” regulation serves a similar purpose.

The FSB’s policy recommendations will provide an intellectual foundation for designing prudential regulations, but they should not lead to new rules that prevent asset managers from supplying collateral. If they are unable to fill this role, the scarcity of collateral and its impact on liquidity may itself create systemic risks.


1 For further assessment of the issue, see http://www.milkeninstitute.org/publications/view/821

2 http://www.isla.co.uk/wp-content/uploads/2016/03/ISLA-SL-Market-Report-Dec-2015c.pdf

3 Market liquidity can be obtained from wholesale money market through different instruments such as repurchase agreement (repos). The importance of repos is that they are secured as they use securities as collateral.

The Securities Settlement System: Without Regulation, Contagion? 

By Elham Saeidinezhad

After the 2008 financial crisis, the demand for regulatory reform gained momentum on an international scale, expressing the belief among regulators that loose oversight had contributed to the catastrophe.

Regulating financial market infrastructures (FMIs) has been among the most important aspects of this post-crisis agenda. Payment systems, securities settlement systems, and central counterparty clearinghouses are the primary components of FMIs. Before the meltdown, payment systems and securities settlement were regulated separately.[1] Payment was recognized as the main channel connecting money markets to the “real economy.” Securities settlement systems, on the other hand, were considered a key part of the capital markets infrastructure with no direct impact on the real economy. The latter market thus received less regulatory attention.

The events of 2008, however, painfully demonstrated that capital markets and money markets are more intertwined than regulators had assumed. Post-crisis regulatory architecture acknowledges their inseparable linkage, with officials putting both markets at the heart of the new legislation. Yet in practice, capital markets remain underregulated compared to money markets and the banking sector, which have received a great deal of attention in recent years.

FMI is an area where this regulatory imbalance is pertinent. After the crisis, central bankers and securities regulators joined together in an effort to harmonize their recommendations for payment and securities settlement systems based on their recognition that the two markets are closely interconnected. Again, though, efforts to secure banks’ payment systems have overshadowed the safety of security settlements in practice.

In the European Union, for instance, the regulation of payment systems, which are largely owned by the banks, is in a more advanced stage than the regulation of securities settlements. The Directive on Payment Services provides the legal foundation for the creation of an EU-wide market for payments. The agreed final rule was published in 2015 and entered into force in 2016. The Central Securities Depositories Regulation (CSDR), on the other hand, is not effectively regulating the market yet. Despite CSDR becoming law in 2014, the adoption of the settlement system’s technical standards has been delayed.

Before the crisis, “financial intermediation” was assumed to be the essence of banking, with banks acting as intermediaries transferring loanable funds from non-bank savers to non-bank borrowers. Payment systems, as a result, stood alone as the key element of financial infrastructure. But the crisis exposed that view as an illusion. In modern finance, credit creation is at the heart of banking. Banks are significant players in both money markets and capital markets, as they finance most of their lending by borrowing in the capital markets. It is the repurchase agreement market, “repo” for short, and not necessarily bank deposits, that ultimately finances bank customers. As a result, receiving and settling securities in the repo market helps to smooth lending to the real economy.

Bank lending has other impacts on the securities market too. Interest rate changes significantly affect movements in securities. By creating new monetary purchasing power, banks affect the price of money, i.e., interest rates, which influence the valuations of capital assets such as derivatives. For one thing, the rate on risk-free assets such as bank deposits is a key determinant of the value of risky assets, such as securities. Also, a substantial category of derivatives, including futures, are “marked to market,” which pegs their value to that assigned by investors rather than book value.

As cash and securities circulate among buyers and sellers, the prices of assets will fluctuate, sometimes wildly, creating risk that one or more of the parties to a transaction will not be able to fulfill these obligations. The primary role of securities settlement systems is to reduce this liquidity risk and prevent it from becoming credit risk and potentially a systemic peril.

Bank lending sets off a series of cascading effects that ultimately involve settlement systems. The financial crisis was the devastating manifestation of this interconnectedness. In the capital markets, securities settlement is crucial to managing risk and preserving liquidity, and it should be secured with as much robustness and sophistication as payment settlement in the banking system. We will be in a much better position to prevent another crisis if we ensure the protection of both.


[1] The International Organization of Securities Commissions developed the Objectives and Principles of Securities Regulation (IOSCO, 1998), and the Committee on Payment and Settlement Systems of the Group of Ten countries’ central banks created the Core Principles for Systemically Important Payment Systems (BIS, 2001).

Banking, Europe, Finance, IFM, Regulation, Systemic Risk

The Tales of Dodd-Frank: Too Divided to Rule?

By Elham Saeidinezhad

Two recent developments in the financial world highlight the divergence between legislators and regulators when it comes to implementing Dodd-Frank and dealing with too-big-to-fail institutions. On one hand, Congress is considering new legislation that could lead to passage of the Financial Institutions Bankruptcy Act of 2016, which would help justify the continued existence of large financial institutions. On the other hand, the Financial Stability Oversight Council has removed GE Capital from its list of Systemically Important Financial Institutions (SIFIs) after the firm significantly downsized in the face of what it perceived as a prohibitive cost structure.

Let us provide a quick refresher on the too-big-to-fail concept. With the financial crisis came the realization that some institutions had become so big that allowing them to fail would have a domino effect on the system. During the crisis, the government had to intervene using taxpayers’ money to backstop the market. One direct outcome was the formation of a universal movement, championed by the Dodd-Frank Act, to tackle the so-called too-big-to-fail problem.

Eight years later, with many of the bill’s original supporters in Congress no longer in office and the urgency of the crisis fading from public memory, regulators’ and legislators’ stances have begun to diverge. Two strategies dominate the discussion. Capitol Hill wants to solve the issue by making SIFIs’ orderly resolution feasible, which requires reworking the bankruptcy code to better handle large financial firms. The preference here is to maintain the current size and activities of large institutions. By contrast, the Fed, which plans to tighten its yearly stress test for the biggest and most complex banks, openly expresses its ambition to make the cost of being large prohibitively high. In its eyes, GE Capital is a success story.

Interestingly, both parties seem to agree with our assessment of Dodd-Frank, recently released as an MI Viewpoints white paper. The SIFI framework can only work if the three required steps (identification, prudential enhancement, and resolution plans) are successfully implemented. By significantly increasing the cost of compliance, the Fed’s ultimate ambition may be to break SIFIs into smaller entities. This would make step 3, the “living will” (a resolution plan designed to allow a bank to go through bankruptcy without disrupting the financial system), obsolete. That would be welcome, since only one of the eight living wills submitted by U.S. banks was deemed credible by both the Federal Reserve and the Federal Deposit Insurance Corp. under the current bankruptcy code.

As an alternative, Congress intends to facilitate SIFIs’ ability to fulfill the three criteria by adapting the current law to include a Financial Institutions Bankruptcy Act, which would also enable SIFIs to demonstrate a clear path to an orderly failure under bankruptcy at no cost to taxpayers. Of course, the two strategies would have different impacts on the real economy. The GE Capital case illustrates what it took to lose the too-big-to-fail tag: three years of patience and a significant reduction in lending and leasing activities.

Figure: Key Systemic Risk Metrics—Assets ($ billions)
Source: GE Capital FR Y-9C; GE Capital internal data.

Finally, if the Financial Institutions Bankruptcy Act passes, could it be a way for SIFIs to circumvent criticisms that they remain too big to fail without a taxpayer bailout, or will it be a useless tool because it will be too costly for big banks to pass the Fed stress test? Then, of course, there is the fundamental question when it comes to the bankruptcy of SIFIs: ultimately, will it remain a domestic exercise or become a multinational procedure? Osborne’s intervention in the US-HSBC money laundering case is a telling example.

Interconnectedness: A Source of Systemic Risk or Resilience in Financial Markets?

By Elham Saeidinezhad

After the 2008 financial crisis, the global economic paradigm shifted from “too big to fail” to “too interconnected to fail.” But does interconnectedness really add a layer of resilience to the financial system, or is it a source of systemic risk? This question was at the heart of most of the finance panels at the Milken Institute Global Conference.

Opinions among conference panelists were mixed. Some argued that interconnectedness enables market players to quickly pick up the business of a crisis-stricken financial institution, for instance by purchasing its assets or attracting its customers. This substitutability effect, they say, leads to a higher level of market stability and prevents a risk from becoming systemic. Others argued that interconnectedness creates a web of knotted chains that intimately links different segments of the economy, such as the money, capital, and commodity markets; the performance of each segment thus becomes crucially dependent on the others. Yet historical evidence suggests that, in fact, their activities are not necessarily substitutable.

Let us start by referring to a not-very-distant time in history: September 2007. Just before the 2008 financial crisis, capital markets faced great turbulence. This was the result of what happened in structured investment vehicles (SIVs): often huge, mainly bank-run programs designed to profit from the difference between short-term borrowing rates and the longer-term returns on structured-product investments. A structured product takes a traditional security and replaces its usual payment features with non-traditional payoffs derived from the performance of underlying assets, exposing investors to credit and liquidity risks. These programs usually invested in credit market instruments, such as U.S. subprime mortgage-backed bonds and collateralized debt obligations. In the event of a blip in the market, these instruments were supported by liquidity facilities from highly rated, mainstream banks. Commercial banks, therefore, backstopped the market for SIVs. Here we observe the hierarchical design embedded in the structure of the financial system.
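A stylized Python sketch of that carry trade, with hypothetical rates, amounts, and roll-over assumptions, may help fix ideas: the vehicle earns the spread between long-dated structured-product yields and short-term paper for as long as it can roll its funding, and when investors stop rolling the paper, the shortfall lands on the sponsoring bank’s liquidity facility.

```python
# Stylized SIV carry trade (all rates and amounts are hypothetical).

assets = 1_000.0          # structured-product portfolio, e.g. MBS/CDO tranches
asset_yield = 0.055       # long-term return on the portfolio
abcp_rate = 0.045         # short-term asset-backed commercial paper funding cost
equity = 50.0             # thin capital cushion
abcp_outstanding = assets - equity

# Normal times: the vehicle earns the spread on a highly leveraged book.
annual_carry = assets * asset_yield - abcp_outstanding * abcp_rate
print(round(annual_carry, 2))          # 12.25 of profit on 50.0 of equity

# Stress: investors refuse to roll maturing paper, so the sponsoring bank's
# liquidity facility must absorb the funding gap.
share_not_rolled = 0.60
call_on_bank_facility = abcp_outstanding * share_not_rolled
print(round(call_on_bank_facility, 2)) # 570.0 lands on the backstopping bank
```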

During the very same period, the European Central Bank and the U.S. Federal Reserve, which sit at the top of this hierarchy, were forced to intervene in the money market through emergency liquidity-boosting operations to rescue banks that did not have enough cash to meet requests and were unable to liquidate enough assets to plug the gap. The problem intensified when trouble in “asset-backed commercial paper,” or ABCP, became so serious that some European banks were preparing for additional calls on their credit lines to SIVs. The banks were also grappling with a backlog of unsold leveraged loans, placing additional pressure on their balance sheets. Interestingly, this specific incident created even more serious problems for their U.S. counterparties, as most of the European banks had borrowed in U.S. dollars and used U.S. financial institutions to fund these instruments. We all know the end result of the turmoil experienced in these markets in 2007: the most severe financial crisis since the Great Depression, which led to extensive use of unconventional monetary policy packages and bailouts.

Some regulators, when looking for systemic risk, indeed search for the kind of volatility in the system that forces a government agency to intervene with taxpayers’ money to backstop the market. By this account, our very recent history shows that interconnectedness creates systemic risk. The underlying reasons are credit, maturity, and liquidity transformation, which lead to credit, maturity, and liquidity mismatches. All of these arise from the activities that this level of interconnectedness between different parts of the economy makes possible.

History provides more evidence for the conclusion that interconnectedness adds to systemic risk than that it provides a safety net. Globalization, however, is a dynamic and unstoppable force. Given that the world is becoming a more complex and interconnected place, the future might provide different lessons. The point to emphasize here is that whether or not interconnectedness creates “substitutability” is a key factor in determining whether it is stabilizing or destabilizing. In either case, interconnectedness seems here to stay. The right starting point is to understand its underlying mechanisms and characteristics. When we have done that, we will have a sound analytical foundation to debate whether interconnectedness is an issue to be resolved or a phenomenon to be embraced.

IFM, Systemic Risk