A blog on financial markets and their regulation
Last week, Bill Hwang’s family office, Archegos, imploded as it was unable to meet the margin calls triggered by steep declines in the prices of stocks that Hwang had bought with huge leverage. Mark to market is a very powerful discipline that spares nobody, however rich or powerful. This ruthless discipline makes financial markets self-correcting, unlike many other social institutions.
Academic literature in particular is much more insulated from the discipline of mark to reality. Old papers discredited by subsequent developments or even subsequent research continue to be cited and quoted (this is the replication crisis in economics and finance). To borrow accounting terminology, the academic community tends to carry the old literature at historical cost without sufficiently stringent periodic impairment tests.
There is a large stream of finance and accounting literature which is probably badly impaired by last week’s developments. I refer to the literature that uses the percentage of institutional shareholding in a company as a proxy for various things including corporate governance. What we are learning now is that Archegos used over-the-counter derivatives like swaps and contracts for difference to invest in a range of companies with very high leverage. The banks that sold these derivatives to Archegos bought shares in the companies to hedge the derivatives that they had sold. The shareholding pattern of these companies would then show the Archegos counterparties (banks) as the principal shareholders, though in economic terms the real owner of the shares was Archegos. Media reports suggest that this includes companies which were targeted by short sellers (and presumably had corporate governance concerns).
In the case of these companies with possibly dubious corporate governance, academics and investors might have been reassured on observing that, say, two-thirds of the shares were owned by institutions, without realizing that much of the holding was on behalf of the family office of a person who had committed insider trading. I think this is another illustration of Goodhart’s law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” The lesson that the academic literature must learn from that law is that the longer established a proxy measure is, the more ruthlessly one must apply an impairment test and mark it to reality.
A month ago, the National Stock Exchange (NSE), India’s largest stock exchange, suffered a software glitch and suspended trading about four hours prior to the scheduled end of the trading session. As the clock ticked close to the scheduled end of the trading day, there was no news about resumption of trading, and stock brokers decided to close out the outstanding positions of their clients on the other exchange (BSE) to avoid exposure to overnight price risk. About 13 minutes before the scheduled close of the trading session, the NSE announced that normal market trading would resume 15 minutes after the scheduled close and would continue for 75 minutes thereafter. Yesterday, the NSE put out a self-congratulatory press release providing some details of what happened on February 24, 2021. This is a vast improvement on the very limited information that they released a month ago (24th morning, 24th afternoon and 25th).
It appears that the regulators are also investigating the matter and, perhaps, much effort will be expended on apportioning blame between NSE and its various technology vendors. I wish to take a different approach here and argue that the regulators should simply lay down a downtime target. The computing industry works with Four Nines (99.99%) availability (less than an hour of downtime a year) and Five Nines (99.999%) availability (about five minutes of downtime a year). Let us assume that Five Nines is out of reach for stock exchanges and settle for Four Nines. There would then be no penalty for the first hour of downtime permitted under Four Nines, and the penalty per hour thereafter would be calibrated so that the entire profits of the stock exchange are wiped out if availability drops below Three Nines (99.9%), corresponding to a downtime of about nine hours per year.
Based on the most recent financial statements of the NSE, the penalty for that exchange would be about Rs 2.2 billion (around $30 million) per hour beyond the first hour. The penalty is designed to be large enough to ensure that the shareholders of the exchange weep when the exchange suffers an outage. They would then force the management to invest in technology, and also design management bonuses in such a way that they all get zeroed out when there is a large outage. The exchange would then negotiate large penalty clauses with their vendors so that if a telecom link fails, the telecom company pays a large penalty to the exchange. That provides the incentives to the telecom company to build redundancies. The regulators do not have to do any root cause analysis or apportion blame; they just have to collect the penalty, and use that to compensate the investors.
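To make the proposed schedule concrete, here is a minimal sketch of the calibration. This is my reading of the proposal above; the annual profit figure is an assumption backed out of the Rs 2.2 billion per hour number, not taken from NSE’s accounts.

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(nines: int) -> float:
    """Downtime per year permitted by an availability target of `nines` nines."""
    availability = 1 - 10 ** (-nines)
    return HOURS_PER_YEAR * (1 - availability)

def penalty(downtime_hours: float, annual_profit: float) -> float:
    """No penalty up to the Four Nines allowance (under an hour a year);
    beyond that, a per-hour rate calibrated to wipe out the exchange's
    entire annual profit once availability falls below Three Nines."""
    free = allowed_downtime_hours(4)        # about 0.88 hours/year
    wipeout = allowed_downtime_hours(3)     # about 8.76 hours/year
    rate = annual_profit / (wipeout - free)
    return max(0.0, downtime_hours - free) * rate

nse_profit = 17.6e9  # Rs; assumed figure implied by the Rs 2.2bn/hour above
print(allowed_downtime_hours(4))     # downtime allowance under Four Nines
print(penalty(5, nse_profit) / 1e9)  # penalty in Rs billion for a 5-hour outage
```

With these assumptions the per-hour rate works out to roughly Rs 2.2 billion, matching the figure above.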
The other thing that the regulators need to do is to provide greater predictability about resumption of trading after a glitch. I would propose a simple set of rules here:
Stock exchange software glitches have been a favourite topic on this blog as far back as fifteen years ago, and I suspect that they will continue to provide material for this blog for many, many years to come.
The internet is a wonderful place: it knows that I have posted on Zoom’s negative beta and it also knows that I have posted on Gamestop and r/wallstreetbets. So it quite correctly concludes that I would have some interest in whether Gamestop has a negative beta. Yesterday, I received a number of comments on my blog on this question and my blog post also got referenced at r/GME. According to r/GME, several commercial sources (Bloomberg, Financial Times, Nasdaq) that provide beta estimates are reporting negative betas for Gamestop (GME).
I began by running an ordinary least squares (OLS) regression of GME returns on the S&P 500 returns. Using data from the beginning of the year till March 16, I obtained a large negative beta which is statistically significant at the 1% level. (If you wish to replicate the following results, you can download the data and the R code from my website).
       Estimate  Std. Error  t value  Pr(>|t|)
beta  -10.54182     3.84200   -2.744   0.00851
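The influence of a couple of extreme days is easy to reproduce without the actual data. The sketch below is a toy Python example with synthetic returns (all numbers invented, not GME or S&P 500 data): fifty days of unrelated noise for a stock whose true beta is zero, plus two extreme days of the kind discussed in this post, are enough to produce a large negative OLS beta.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fifty days of unrelated noise: a high-volatility stock with true beta = 0.
mkt = rng.normal(0.0, 0.01, 50)   # market returns
stk = rng.normal(0.0, 0.05, 50)   # stock returns, independent of the market
# Two hypothetical extreme days: biggest stock gain on a down-market day,
# biggest stock loss on an up-market day.
mkt = np.append(mkt, [-0.025, 0.025])
stk = np.append(stk, [1.3, -0.6])

def ols_beta(y, x):
    """Slope from an OLS regression of y on x (with an intercept)."""
    design = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[1]

print(ols_beta(stk, mkt))            # large and negative, driven by two points
print(ols_beta(stk[:-2], mkt[:-2]))  # much smaller once those days are dropped
```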
The next step is to look at the scatter plot below which shows points all over the space, but does give a visual impression of a negative slope. But if one looks more closely, it is apparent that the visual impression is due to the two extreme points: one point at the top left corner showing that GME’s biggest positive return this year came on a day that the market was down, and the other point towards the bottom right showing that GME’s biggest negative return came on a day that the market was up. These two extreme points stand out in the plot and the human eye joins them to get a negative slope. If you block these two dots with your fingers and look at the plot again, you will see a flat line.
Like the human eye, least squares regression is also quite sensitive to extreme observations, and that might contaminate the OLS estimate. So, I ran the regression again after dropping these two dates (January 27, 2021 and February 2, 2021). The beta is no longer statistically significant at even the 10% level. While the point estimate is hugely negative (-5), its standard error is of the same order.
      Estimate  Std. Error  t value  Pr(>|t|)
beta  -4.97122     3.62363   -1.372     0.177
However, dropping two observations arbitrarily is not a proper way to answer the question. So I ran the regression again on the full data (without dropping any observations), but using statistical methods that are less sensitive to outliers. The simplest and perhaps best way is to use least absolute deviation (LAD) estimation which minimizes the absolute values of the errors instead of minimizing the squared errors (squaring emphasizes large values and therefore gives undue influence to the outliers). The beta is now even less statistically significant: the point estimate has come down and the standard error has gone up.
      Estimate  Std. Error  Z value  p-value
beta   -2.7999      5.0462  -0.5548   0.5790
Another alternative is to retain least squares but use a robust regression that limits the influence of outliers. Using the bisquare weighting method of robust regression provides an even smaller estimate of beta that is again not statistically significant.
        Value  Std. Error  t value
beta  -1.5378      2.3029  -0.6678
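Both robust estimates above can be approximated with nothing more than iteratively reweighted least squares (IRLS). The sketch below uses the same kind of synthetic data as before (invented numbers, not the GME data): weights of 1/|r| make IRLS approximate LAD estimation, while Tukey bisquare weights smoothly zero out gross outliers.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic returns: true beta is zero, plus two extreme outlier days.
mkt = np.append(rng.normal(0.0, 0.01, 50), [-0.025, 0.025])
stk = np.append(rng.normal(0.0, 0.05, 50), [1.3, -0.6])

def wls_fit(y, x, w):
    """Coefficients (intercept, slope) of a weighted least-squares fit."""
    design = np.column_stack([np.ones_like(x), x])
    dw = design * w[:, None]
    return np.linalg.solve(design.T @ dw, dw.T @ y)

def irls_beta(y, x, weight_fn, iters=50):
    """Robust slope via iteratively reweighted least squares (iters=0 is OLS)."""
    coef = wls_fit(y, x, np.ones_like(y))
    for _ in range(iters):
        resid = y - coef[0] - coef[1] * x
        coef = wls_fit(y, x, weight_fn(resid))
    return coef[1]

def lad_weights(r, eps=1e-8):
    # 1/|r| weights turn squared loss into (approximate) absolute loss.
    return 1.0 / np.maximum(np.abs(r), eps)

def bisquare_weights(r, c=4.685):
    # Tukey bisquare: downweights large residuals, zeroing out gross outliers.
    scale = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD-based scale
    u = r / (c * scale)
    return np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)

print(irls_beta(stk, mkt, lad_weights, iters=0))  # OLS: dominated by outliers
print(irls_beta(stk, mkt, lad_weights))           # LAD-style: much smaller
print(irls_beta(stk, mkt, bisquare_weights))      # bisquare: outliers ignored
```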
Commercial beta providers use a standard statistical procedure on thousands of stocks and have neither the incentive nor the resources to think carefully about the specifics of any situation. Fortunately, each of us now has the resources to adopt a DIY approach when something appears amiss. Data is freely available on the internet, and R is a fantastic open source programming language with packages for almost any statistical procedure that we might want to use.
Much has been written about how a group of investors participating in the sub-reddit r/wallstreetbets has caused the prices of stocks like GameStop to surge to levels not justified by fundamentals. I spent a fair amount of time reading the material that is posted on that forum and am convinced that most of these Redditors are perfectly rational and disciplined investors, and have no delusions about the fundamentals of the company.
Rationality in economics requires utility maximization, but does not constrain the nature of that utility function. It does not demand that the goals be rational as perceived by somebody else. Rationality of goals is the province of religion and philosophy: for example, Plato’s Form of the Good, Aristotle’s Highest Good, Hinduism’s four proper goals (puruṣārthas), and Buddhism’s right aspiration (sammā-saṅkappa). Economics concerns itself only with the efficient attainment of whatever goals the individual has. Even the Stigler-Becker maximalist view of economics (Stigler, G.J. and Becker, G.S., 1977. De gustibus non est disputandum. The American Economic Review, 67(2), pp.76-90) does not seek to impose our goals on anybody else, and does not require that the goals be pecuniary in nature (consider, for example, the Stigler-Becker discussion about music appreciation).
It is perfectly consistent with economic rationality for a person to buy a Tesla car as a status symbol and not as a means of going from A to B. Equally, it is perfectly consistent with economic rationality for a person to buy a Tesla share as a status symbol and not as a means of earning dividends or capital gains. Buddha and Aristotle might take a dim view of such status symbols, but the economist has no quarrel with them.
It is in this light that I find the Redditors at r/wallstreetbets to be highly rational. There is a clear understanding and Stoic acceptance of the consequences of their investment decisions. In this sense, there is greater awareness and understanding than in much of mainstream finance. When Redditors knowingly pay prices far beyond what is justified by fundamentals in the pursuit of non-pecuniary goals, they are only indulging in a more extreme form of the behaviour of an environmentally conscious investor who knowingly buys a green bond at a low yield.
There is overwhelming evidence throughout r/wallstreetbets that these Redditors are focused on non-pecuniary goals:
/r/wallstreetbets is a community for making money and being amused while doing it. Or, realistically, a place to come and upvote memes when your portfolio is down.
Yo, health check time: Get proper sleep, Eat proper food, Stretch occasionally, HYDRATE. I’m sure we’ve all been glued to our screens all week, but please make sure you take care of yourselves.
There is a crystal clear understanding that most trades will lose money:
Buy High Sell low – what you do as a newcomer.
First one is free – A phenomena where you are so retarded and don’t know what the [expletive deleted] your doing you somehow make money on your first trade.
… if you don’t know any of this there is really no reason for you to be throwing 10k at weeklies you’ll lose 99% of the time.
We don’t have billionaires to bail us out when we mess up our portfolio risk and a position goes against us. We can’t go on TV and make attempts to manipulate millions to take our side of the trade. If we mess up as bad as they did, we’re wiped out, have to start from scratch and are back to giving handjobs behind the dumpster at Wendy’s.
… and also for the most part, they’re playing with their own money that they can actually afford to lose even if it hurts for a year or two.
Options are like lottery tickets in that you can pay a flat price for a defined bet that will expire at some point.
Indeed mainstream regulators could borrow some ideas from r/wallstreetbets on how to disclose risk factors in an offer document. When a risky company does an IPO, a prominent disclosure on the front page “This IPO was created for you to lose money” would be far better than the pages and pages of unintelligible risk factors that nobody reads.
SPACs and Capital Structure Arbitrage
Special Purpose Acquisition Companies (SPACs) have become quite popular recently as an attractive alternative to Initial Public Offerings (IPOs) for many startups trying to go public. Instead of going through the tortuous process of an IPO, the startup just merges into a SPAC which is already listed. The SPAC itself would of course have done an IPO, but at that time it would not have had any business of its own, and would have gone public with only the intention of finding a target to take public through the merger. Both seasoned investors and researchers take a dim view of this vehicle. Last year, Michael Klausner and Michael Ohlrogge wrote a comprehensive paper (A Sober Look at SPACs) documenting how bad SPACs were for investors who chose to stay invested at the time of the merger. Smart investors avoid losses by bailing out before the merger, and the biggest and smartest investors make money by sponsoring SPACs and collecting various fees for their effort.
As I kept thinking about the SPAC structure, it occurred to me that at the heart of it is a capital structure arbitrage by smart investors at the expense of naive investors. The capital structure of the SPAC prior to its merger consists of shares and warrants. However, in economic terms, the share is actually a bond because at the time of the merger, the shareholders are allowed to redeem and get back their investment with interest. It is the warrant that is the true equity. If the share were treated as equity, it would have a lot of option value arising from the possibility that the SPAC might find a good merger candidate, and the greater the volatility, the greater the option value. A part of the upside (option value) would rest with the warrants. But if the shares are really bonds, then all the option value resides in the warrants which are the true equity. Naive investors are perhaps misled by the terminology, and think of the share as equity rather than a bond; hence, they ascribe a significant part of the option value to the shares. Based on this perception, they perhaps sell the detachable warrants too cheap, and hold on to the shares.
From the perspective of capital structure arbitrage, this is a simple mispricing of volatility between the two instruments. Volatility is underpriced in the warrants because only a part of the asset volatility is ascribed to it. At the same time, volatility is overpriced in the shares since a lot of volatility (that rightfully belongs to the warrant) is wrongly ascribed to the share. One way for smart investors in SPACs to exploit this disconnect is to sell (or redeem) the share and hold onto the warrant, while naive investors hold on to the share and possibly sell the warrant.
Capital structure arbitrage suggests a different (smarter?) way to do this trade. If, at bottom, the SPAC conundrum is a mispricing of the same asset volatility in two markets, then capital structure arbitrage would seek to buy volatility where it is cheap and sell it where it is expensive. In other words, buy warrants (cheap volatility) and sell straddles on the share (expensive volatility). At least some smart investors seem to be doing this. A recent post on Seeking Alpha mentions all three elements of the capital structure arbitrage trade: (a) sell puts on the share, (b) write calls on the share and (c) buy warrants. But because the post treats each as a standalone trade (possibly applied to different SPACs), it does not see them as a single capital structure arbitrage. Or perhaps, finance professors like me tend to see capital structure arbitrage everywhere.
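The mispricing claim can be illustrated with a plain Black-Scholes calculation. Everything below is hypothetical: a stylized five-year warrant struck at $11.50 on a $10 SPAC share (numbers chosen only to look SPAC-like, not any actual deal’s terms), valued as a simple European call at two implied volatilities.

```python
from math import erf, exp, log, sqrt

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

S, K, T, r = 10.0, 11.5, 5.0, 0.01   # hypothetical SPAC-like warrant terms

# Volatility ascribed to the warrant if the share is (wrongly) seen as equity
# absorbing most of the risk, versus if the warrant is seen as the true equity.
cheap = bs_call(S, K, T, r, 0.20)
fair = bs_call(S, K, T, r, 0.60)
print(cheap, fair)
```

In this stylized example the warrant at 20% volatility is worth only about a third of its value at 60% volatility, which is the sense in which naive holders may sell the detachable warrants too cheap.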
Earlier this month, the United Kingdom Treasury published the Report of the Independent Investigation into the Financial Conduct Authority’s (FCA’s) Regulation of London Capital & Finance (LCF). I read it with high expectations, but must say I found it deeply disappointing. I take perverse pleasure in reading investigation reports into frauds and disasters around the world (so long as they are in English). Beginning with Enron nearly two decades ago, there has been no dearth of such high-quality reports, except in my own country where unbiased factual post-mortem reports are quite rare. So it was with much anticipation that I read the report on LCF, which involved a number of novel issues about the risk posed by unregulated businesses carried out by regulated entities. Unfortunately, the Investigation Report did not meet my expectations: instead of providing an unbiased and dispassionate analysis of what happened, it indulges in indiscriminate and often unwarranted criticism of the Financial Conduct Authority (FCA). In the process, the report very quickly loses all credibility.
The LCF debacle is described well in the report of the Joint Administrators under the Insolvency Act, from which this paragraph is drawn. LCF was set up in 2012 as a commercial finance provider to UK companies. From 2013, the Company sold mini-bonds, with trading significantly increasing from 2015 onwards. LCF was granted “ISA Manager” status by the UK taxation authorities (HMRC) in 2017, and LCF started selling its mini-bonds under this rubric. (The necessary requirements to qualify for ISA Manager status are fairly limited; it is not a rigorous application process; and ISA Managers are not routinely monitored by HMRC). About 11,500 bond holders invested in excess of £237m in LCF mini-bonds. The vast majority of LCF’s assets are loans made to a number of borrowers, a large number of whom do not appear to have sufficient assets with which to repay LCF. At present, the Administrators estimate a return to the Bondholders from the assets of the Company of as low as 20% of their investment.
It is evident from the above that the most important issue in the LCF debacle is a failure of regulation rather than supervision. In the UK, mini-bonds (illiquid debt securities marketed to retail investors) are subject to very limited regulation, unlike in many other countries. (In India, for example, regulations on private placement of securities, collective investment schemes and acceptance of deposits severely restrict marketing of such instruments to retail investors). To compound the problem, the UK allows mini-bonds to be held in an Innovative Finance ISA (IFISA). ISAs (Individual Savings Accounts) are popular tax-sheltered investment vehicles for retail investors. The UK has taken a conscious decision to allow these high-risk products to be sold to retail investors in the belief that the benefits in terms of innovation and financing for small enterprises outweigh the investor protection risks. While cash ISAs and Stocks and Shares ISAs are eligible for the UK’s deposit insurance and investor compensation scheme (FSCS), IFISAs are not eligible for this cover. Many investors may think that ISAs are regulated from a consumer protection perspective, but the UK tax department thinks of approval of ISAs as purely a taxation issue. To make matters worse, the UK has had extremely low interest rates ever since the Global Financial Crisis, and yield-hungry investors have been attracted to highly risky mini-bonds, especially when they are marketed to retail investors under the veneer of a quasi-regulated product – the IFISA. After the LCF debacle, some regulatory steps have been taken to alleviate this problem.
The Investigation Report is concerned with supervision more than regulation, and here the key question is the regulatory perimeter: when an entity carries out both a regulated and an unregulated business, to what extent should the regulators examine the unregulated business? There are some financial businesses like banking where there is intrusive regulation of the unregulated business (the bank holding company). But what should the regulatory stance be on small regulated entities that carry out very limited regulated businesses (for example, confined mainly to financial marketing)? The Investigation Report simply points to the regulatory powers of the FCA to look at the unregulated business, and blithely asserts that the FCA should have been doing this routinely. This is unrealistic and would confer excessive and unacceptable powers on the financial regulators, making them overlords of society at large. Imagine that the publisher of the largest circulation newspaper in the country also publishes an investment newsletter that could be construed as financial promotions and is therefore regulated by the financial regulators. Do we want the regulator to have the power to take some regulatory action because it does not like the editorial stance of the newspaper? If you think that I am insane to consider such possibilities, you should examine the criminal prosecution that German financial regulators launched against two Financial Times journalists for their reporting on the Wirecard fraud. The Investigation Report does not reveal any such nuanced understanding, and therefore represents a missed opportunity to improve our perspective on such matters.
Since the issuance of mini-bonds is itself not a regulated activity, the role of the FCA is mainly in the area of the marketing of the bonds by LCF as a regulated entity authorized to carry on credit broking and some corporate finance activities. I would have expected the Investigation Report to focus on whether the FCA monitored LCF’s marketing (financial promotions) adequately. The Investigation Report documents that the FCA received a few complaints on this, and in each instance, the FCA required changes in the website to conform to FCA requirements. In my understanding, it is quite common for regulators worldwide to require changes in financial promotions, ranging from the font size and placement of a statement, to changes in wording, to more substantive issues. The question of interest is where LCF’s breaches lay on this spectrum (some of them were clearly technical breaches) and how the frequency of serious breaches compared with that of other entities of similar size that the FCA regulates. Unfortunately, the Investigation Report does not provide an adequate analysis of this matter, other than saying that repeat breaches should have led to severe actions including an outright ban on LCF. That is not how regulation works or is expected to work anywhere in the world.
But these two inadequacies of analysis are not the main grounds for my disappointment with the Investigation Report. What troubled me was the repeated instances of what struck me as prima facie evidence of bias. At first, I brushed these aside and kept reading the report with an open mind, but slowly the indicia of bias kept piling up, and I began to question the objectivity and credibility of the report. At every twist and turn, wherever there was a grey area, the Investigation Report unfailingly ended up resolving it against the FCA. In the process, the report’s credibility was eroded bit by bit; by the time I reached the end, it had been destroyed completely.
One of the most glaring examples of apparent bias is the discussion about a letter purported to have been sent by one Liversidge to the FCA. The only evidence for this is the statement by Liversidge that he did post the letter. A detailed search of all records at the FCA failed to find any evidence that the letter was in fact received by the FCA. One of the first things taught in all basic courses on logic is that it is impossible to prove a negative statement (like the statement that the letter was not received), and that is essentially what the FCA quite honestly told the Investigation Team. The Investigation Report first states that whether this letter was received or not is not relevant to the Investigation. That should have been the end of the matter. But then it goes on to make the statement that “if it had been incumbent on the Investigation to have reached a decision on this point, it would have concluded on the balance of probabilities that the Liversidge Letter was received by the FCA”. This is unreasonable in the extreme: there is no evidence other than the sender’s testimony that the letter was sent at all (let alone received), while there is some evidence that it was not received. The balance of probabilities clearly points the other way.
The Report goes to great lengths to criticize the FCA for the extended timelines of the DES programme, which attempted a very significant transformation of the structure, governance, systems, processes and risk frameworks of supervision at the FCA. This was initiated around the end of 2016 or early 2017 with a target completion date of March 2018, but was concluded only by December 2018. Having been involved in exercises of this kind in many organizations, I think spending a couple of years to accomplish something like this is quite reasonable (in fact, it strikes me as a rather aggressive timeline). The original timeline of March 2018 appears to me to have been utterly unrealistic. The Investigation Report suggests that the FCA should instead have resorted to some “quick wins, reviews or easy fixes”. I think this suggestion is utterly misguided. “Easy fixes” are precisely the kind of thing that an organization should not do under such conditions. I think it is to the credit of the FCA Board that it did not undertake such a stupid course of action.
Actually, the FCA discovered the fraud on its own from two different angles. First, LCF filed a prospectus with the FCA and the Listing Team had a number of serious concerns about it. Second, during the course of a review of an external database (only accessible to a limited group within the FCA and on strict conditions of use) concerned with another firm, the Intelligence Team found some information on LCF and immediately escalated the matter. While the Investigation Report commends these actions, it states that if other employees at the FCA had similar levels of expertise in understanding financial statements, they would have uncovered the fraud earlier. I was aghast on reading this. Expertise in financial statements is a highly sought-after skill that is in short supply in the market. That the FCA manages to hire people with that skill in some critical departments is great. To expect that people in the call centre or those running authorizations would have this skill is absurd. If people with such skills thought that they might be transferred to such postings, they would probably not join the FCA in the first place.
The Investigation Report finds fault with the FCA for giving LCF permissions to carry out regulated businesses that it did not in fact use. I do not find this unusual at all. To give an analogy, the objects clause in the corporate charter (Memorandum of Association) typically contains a lot of things that the company has no intention of undertaking; it includes these things because of the severe consequences of finding that the company does not have the power under its charter to do something that has suddenly become desirable. Similarly, a regulated business would often want to have a range of regulatory authorizations that it does not expect to use. All the more so because regulators often take a restrictive view of things and take companies to task for all kinds of technical violations. For example, a stock broker who provides only execution services might want to have an advisory licence to guard against the risk that some incidental service that it provides could be regarded as advisory. Similarly, an advisory firm might worry that a minor service like collecting a document from the customer and delivering it to a stock broker might be interpreted as going beyond purely advisory services. That LCF obtained a licence but did not carry out the regulated activity is not in my view a red flag at all. The Investigation Report makes a song and dance about this despite having observed one fact that demonstrates its triviality: the FCA created a system that produced an automated alert whenever a firm did not generate income from regulated activities, and because of the high volume of alerts so generated, the FCA had to allow these alerts to be closed without review!
It is indeed distressing that this deeply flawed report is all that we will ever get on this episode which raises so many interesting regulatory issues of interest across the world.
When I started this blog over 15 years ago, one of my earliest posts was entitled Are Financial Centres Worthwhile? The conclusion was that though the annual benefits from a financial centre appear to be meagre, they may perhaps be worthwhile because these benefits continue for a very long time as leading centres retain their competitive advantage for centuries. At least, most countries seemed to think so as they all eagerly tried to promote financial centres within their territories. But that was before (a) the Global Financial Crisis and (b) the current process of deglobalization.
Yesterday, the United Kingdom finalized the Brexit deal with the European Union, and the UK government rejoiced that they had got a trade deal without surrendering too much of their sovereignty. There was no regret about there being no deal for financial services. The UK seems quite willing to impair London as a financial centre in the pursuit of its political goals. China seems to be going further when it comes to Hong Kong. It has been willing to do things that would damage Hong Kong to a much greater extent than Brexit would damage London. Again political considerations have been paramount.
Decades ago, both these countries looked at financial centres very differently. After World War I, the UK inflicted massive pain on its economy to return to the gold standard at the pre-war parity. In some sense, the best interests of London prevailed over the prosperity of the rest of the country. Similarly, during the Asian Crisis, when Hong Kong’s currency peg to the US dollar seemed to be on the verge of collapse, then Chinese Premier Zhu Rongji declared at a press conference that Beijing would “spare no efforts to maintain the prosperity and stability of Hong Kong and to safeguard the peg of the Hong Kong dollar to the U.S. dollar at any cost” (emphasis added). The major elements of that “at any cost” promise were (a) the tacit commitment of the mainland’s foreign exchange reserves to the defence of the Hong Kong peg, and (b) the decision not to devalue the renminbi when devaluations across East Asia were posing a severe competitive threat to China. In some sense, the best interests of Hong Kong prevailed over the prosperity of the mainland.
Clearly, times have changed. The experience of Iceland and Ireland during the Global Financial Crisis demonstrated that a major financial centre was a huge contingent liability that could threaten the solvency of the nation itself. Switzerland was among the first to see the writing on the wall; it forced its banks to downsize by imposing punitive capital requirements. Other countries are coming to terms with the same problem. Deglobalization adds to the disillusionment about financial centres.
Today, countries are eager to become technology centres rather than financial centres. How that infatuation will end, only time will tell.
Jon Frost, Hyun Song Shin and Peter Wierts at the Bank for International Settlements wrote a paper last month, An early stablecoin? The Bank of Amsterdam and the governance of money, which disparages both past models of (proto) central banking and incipient new forms of central banking to conclude that the modern central bank is the only worthwhile model. They criticize the Bank of Amsterdam (1609–1820) for the flawed governance that led to its eventual failure, and extrapolate from that to dismiss newly emerging stablecoins which (according to Frost, Shin and Wierts) share the same governance problems. The authors think that, by contrast, modern central banks have the right governance structures and the right fiscal backstops.
My biggest grouse with this paper is that if we want to criticize an institution that thrived for 170 years and survived for two centuries, we must compare it against an institution that has been successful for even longer. Unfortunately, I am not aware of even one major central bank today that has been successful for the last 100 years, let alone 170 years.
It appears to me to be the height of hubris for an association of failed central banks and central banks that are too young to have experienced failure to point fingers at the Bank of Amsterdam, whose track record for 170 years was far better than that of any of these banks. In fact, I think that the Bank of Amsterdam’s track record even at the point of failure was better than the stated goal of most central banks today. The best central banks currently target an annual inflation rate of 2%. Over a period of 200 years, this inflation rate will lead to a 98% loss of purchasing power: 200 years from now, a dollar would be worth only 2¢ in today’s money (1.02^−200 ≈ 0.02). By contrast, at the point when the French revolutionary armies invaded the Netherlands in 1795, and the true state of the balance sheet of the Bank of Amsterdam was revealed, the money issued by the Bank of Amsterdam fell to a 30% discount to gold (Frost, Shin and Wierts, page 24). In other words, over the two centuries of its existence, the money issued by the Bank of Amsterdam lost only 30% of its value, for an annual depreciation of less than 0.2% (1.002^−(1795−1609) ≈ 0.7). At the point of failure, the performance of the Bank of Amsterdam was equivalent to an annual inflation rate one-tenth of what the best central banks promise today.
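The compounding arithmetic above can be checked in a few lines. This is a back-of-the-envelope sketch; the 2% target, the 30% cumulative loss, and the 1609–1795 dates are taken from the discussion above.

```python
# Modern target: 2% annual inflation compounded over 200 years.
modern_value = 1.02 ** -200  # value of a dollar after 200 years, ~0.019 (a ~98% loss)

# Bank of Amsterdam: a 30% cumulative loss of value over its 1609-1795 life.
years = 1795 - 1609                            # 186 years
annual_depreciation = 1 - 0.7 ** (1 / years)   # ~0.0019, i.e. under 0.2% per year

print(modern_value, annual_depreciation)
```

The two annual rates differ by roughly a factor of ten, which is the comparison made in the text.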
If modern central bankers think that they are better than the Bank of Amsterdam (either in its heyday, or on average over its entire life including the point of failure), they need to introspect long and hard on whether they suffer from overconfidence or amnesia.
When I read the recent BIS working paper Low price-to-book ratios and bank dividend payout policies by Leonardo Gambacorta, Tommaso Oliviero and Hyun Song Shin, I was immediately reminded of the paper Bankruptcy hardball (Ellias and Stark (2020), Calif. L. Rev., 108, p.745) though that is not how Gambacorta, Oliviero and Shin analyse the issue.
Ellias and Stark documented the tendency of distressed firms to declare dividends or otherwise move assets out of reach of the creditors for the benefit of shareholders. This strategy which they called bankruptcy hardball is most closely associated with private equity owners.
As I reflected on the Gambacorta, Oliviero and Shin paper, it struck me that what makes bankruptcy hardball attractive is the high level of leverage rather than private equity ownership. If the assets are worth 100 and debt is 80% of assets (so that equity is 20%), then shifting 10 of assets to the shareholders reduces the value of the debt by only 12.5% but increases the value of the equity by 50%. If debt is 90% of assets, then the same shift of 10 would be only an 11% loss to the lenders but a 100% gain to the shareholders. Since private equity is characterized by high leverage, the incentives are much greater in their case. On the verge of bankruptcy, it is true that leverage would shoot up for all companies (as the equity becomes close to worthless), and it might appear that every firm can play hardball. However, the tactics are more likely to survive legal challenges when they are implemented at a time when the company appears to be solvent, and ideally years before a bankruptcy filing. So the greatest opportunity to play hardball is for those companies that have high levels of leverage in normal times, when they are still notionally solvent but possibly distressed.
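The asymmetry in the example above is easy to parameterize. The sketch below uses the same stylized numbers as the text (assets of 100, a shift of 10) and the simplifying assumption that creditors absorb the shifted value one-for-one.

```python
# Proportional effect of shifting `shift` of assets to shareholders,
# assuming creditors bear the full loss of the shifted assets.
def hardball_transfer(assets, debt, shift):
    equity = assets - debt
    loss_to_debt = shift / debt       # creditors' proportional loss
    gain_to_equity = shift / equity   # shareholders' proportional gain
    return loss_to_debt, gain_to_equity

print(hardball_transfer(100, 80, 10))  # (0.125, 0.5): 12.5% loss vs 50% gain
print(hardball_transfer(100, 90, 10))  # ~11% loss vs 100% gain
```

As leverage rises, the creditors' percentage loss shrinks while the shareholders' percentage gain explodes, which is why highly levered firms face the strongest hardball incentives.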
Apart from firms owned by private equity, there is another example of a business with high levels of leverage in normal times – banking. Banks typically operate with leverage levels that exceed those of typical private equity owned businesses. One would therefore expect banks to also play the hardball game – pay large dividends when they are distressed but still notionally solvent. And that is what Gambacorta, Oliviero and Shin find.
Their key finding is that banks with a low price-to-book ratio (the ratio of the market price of the share to the book value per share) tend to pay higher dividends, and this tendency becomes even more pronounced when the price-to-book ratio drops below 0.7. Price-to-book has been associated with financial distress in the finance literature since the original papers by Fama and French. But the link is even stronger in banking, where a low price-to-book ratio is often driven by the market’s belief that the asset quality of the bank is a lot worse than the accounting statements indicate. In other words, while for non-financial companies a low price-to-book reflects low profitability, for banks it often indicates that book equity is overstated (due to hidden bad loans) and the capital adequacy of the bank is a lot worse than what the accounting statements suggest. Price-to-book is therefore an even more direct indicator of distress for banks than for non-financial companies.
For a bank which is already highly levered and whose true leverage is even higher because of overvalued assets, dividends become an attractive device to transfer value to shareholders from creditors. The fact that the bank meets the capital adequacy standards set by the regulators (aided by overvaluation of assets) acts as a cover for the hardball tactic. The fact that many creditors (especially the depositors) are protected by deposit insurance means that creditor resistance is muted.
Gambacorta, Oliviero and Shin talk about the wider social benefits of curtailing dividends (increasing lending capacity), but there is a more direct corporate governance and prudential regulation argument for doing so. Regulators have already recognized the role of market discipline in regulating banks (Pillar 3 of the Basel framework). From this it is a short step to linking dividend and other capital distributions to a market signal (price to book ratio).
One of the questions that comes up every time I teach the Capital Asset Pricing Model (CAPM) in a basic finance course is whether there are any negative beta stocks, and if so, what would be their expected return. My standard answer has been that negative beta stocks are a theoretical possibility but possibly non-existent in practice. Every time I have found a negative beta in practice, there was either a data error or the sample size was too small for the negative beta to be statistically significant. I would also often joke that a bankruptcy law firm would possibly have a negative beta, but fortunately or unfortunately, such firms are typically not listed. (The answer to the second part of the question is easier: if the beta is negative, the expected return is less than the risk-free rate, because the stock hedges the risk of the portfolio and investors are willing to pay for this hedging benefit.)
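The expected-return answer follows directly from the CAPM formula E[r] = rf + β(E[rm] − rf). A minimal sketch, with an illustrative 3% risk-free rate and 6% market risk premium (both hypothetical numbers, not from the text):

```python
# CAPM expected return: E[r] = rf + beta * (E[rm] - rf)
def capm_expected_return(rf, beta, market_premium):
    return rf + beta * market_premium

rf = 0.03               # assumed risk-free rate
premium = 0.06          # assumed market risk premium
positive = capm_expected_return(rf, 1.2, premium)   # 0.102, above rf
negative = capm_expected_return(rf, -0.5, premium)  # 0.0, below rf
```

With β = −0.5 the expected return falls to 0%, below the risk-free rate: the investor gives up return in exchange for the hedging benefit.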
But now there is an interesting real-life case of a negative beta stock: Zoom Video Communications, Inc. Not only is this a large company by market capitalization, but it is also a familiar company, with so many online classes taking place on Zoom. During the Covid-19 pandemic, a plausible argument has been going around for why Zoom should have a negative beta. The argument is that if the pandemic rages, the economy collapses while Zoom soars; and if the pandemic retreats, the economy recovers, people go back to face-to-face meetings, and the Zoom boom is over.
Interestingly, the data supports this nice theory.
A better example of beta changing dramatically (going from around two to negative and then back to around two) within a few months without any change in the business mix of the company would be hard to find.
Negative betas may be a once-in-a-century event (the last global pandemic of comparable severity was in 1918), but the Zoom example illustrates the importance of estimating betas more carefully using shrinkage estimators and Bayesian methods, as I explained in detail in a blog post ten years ago.
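A minimal sketch of what such shrinkage looks like, on simulated data: the sample (OLS) beta is pulled toward a cross-sectional prior of 1.0, with the weight set by the relative precision of the two estimates (a Vasicek-style adjustment). All the numbers below — the true beta of −0.8, the noise levels, the prior variance — are illustrative assumptions, not estimates for Zoom or any real stock.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                           # five years of monthly returns
market = rng.normal(0.01, 0.04, n)               # market excess returns
stock = -0.8 * market + rng.normal(0, 0.08, n)   # true beta -0.8, plus noise

# Sample (OLS) beta: cov(stock, market) / var(market)
beta_ols = np.cov(stock, market, ddof=1)[0, 1] / np.var(market, ddof=1)

# Cross-sectional prior: beta ~ N(1.0, 0.5^2)
prior_beta, prior_var = 1.0, 0.25

# Sampling variance of the OLS beta estimate
resid = stock - beta_ols * market
beta_var = np.var(resid, ddof=1) / (n * np.var(market, ddof=1))

# Precision-weighted shrinkage toward the prior
w = prior_var / (prior_var + beta_var)           # weight on the sample beta
beta_shrunk = w * beta_ols + (1 - w) * prior_beta
print(beta_ols, beta_shrunk)
```

The noisier the sample estimate, the more the shrunk beta is pulled toward the prior; an extreme sample beta (including a negative one) is taken seriously only when it is estimated precisely.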