A blog on financial markets and their regulation
(This was posted on my blog yesterday but due to an oversight was not copied to mirror sites until now.)
Adriana Robertson argues in a recent paper that index investing is not passive investing; it only delegates the active management to the index provider. (Passive in Name Only: Delegated Management and ‘Index’ Investing (November 2018). Yale Journal on Regulation, Forthcoming. Available at SSRN.) This is a problem because mutual funds are regulated, but index providers are not. The paper presents data showing that the vast majority of indices in the United States are used as a benchmark by only one or two mutual funds, so it is hard to argue that these index providers are subject to strong market discipline.
She offers an ingenious suggestion for solving this problem without new intrusive regulation.
While a mutual fund cannot deviate from its fundamental policies, as stated in its registration statement, without a shareholder vote, there is no restriction on an index’s ability to change its methodology.
Fortunately, there is a simple solution to this problem. Once we recognize that delegating to an index is no different from delegating to a fund manager, we can craft a solution based on the existing rules: Any time the underlying index makes a change that, if made by the fund manager in a comparable actively managed fund, would trigger a vote, the fund manager is required to hold a vote on retaining the index. This simple change would harmonize the protections offered to investors in the two types of funds.
I can think of at least two significant index changes that would qualify under this rule, and on both of these, I think Adriana Robertson’s solution makes eminent sense:
I have written many times about the Equifax data breach arguing that the credit bureau business should be subject to the doctrine of strict liability, that society should not hesitate to impose punitive penalties on them (including shutting down errant entities), and that modern cryptography makes existing credit bureaus obsolete. My excuse for writing about them again is that I just finished reading the US Congress (Committee on Oversight and Government Reform) Majority Staff Report on The Equifax Data Breach.
This report makes it clear that things were even worse at Equifax than I thought. But what I found most interesting is that when the breach occurred, Equifax had initiated the process of making the hacked system compliant with PCI-DSS (Payment Card Industry Data Security Standard) and doing so “would have largely addressed the security concerns flagged”, and would have likely prevented the hack.
PCI DSS compliance requirements include: the use of file integrity monitoring; strong access control measures; retention of logs for at least one year, with the last three months of logs immediately available for analysis; installation of patches for all known vulnerabilities; and maintenance of an up-to-date inventory of system components.
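Of the requirements listed above, file integrity monitoring is the easiest to make concrete. As a rough sketch (pure-stdlib hashing over a hypothetical file list, not Equifax’s or any vendor’s actual tooling), it amounts to comparing current file digests against a trusted baseline:

```python
import hashlib
from pathlib import Path


def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files do not have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def detect_changes(baseline: dict[str, str], paths: list[str]) -> list[str]:
    """Compare current digests against a trusted baseline; report mismatches."""
    return [p for p in paths if fingerprint(p) != baseline.get(p)]
```

A monitoring job would record the baseline for critical system files at a known-good time and alert whenever `detect_changes` returns a non-empty list.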
None of this is rocket science, and even tiny mom-and-pop stores are required to comply with these requirements before they can accept credit card payments. Yet one of the largest credit bureaus in the world did not comply with them. The reason is something that Bruce Schneier has been saying for a long time (Eliminating Externalities in Financial Security):
It’s an important security principle: ensure that the person who has the ability to mitigate the risk is responsible for the risk.
If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions. They’ve pushed most of the actual costs onto the merchants. And almost no security centers around trying to authenticate the cardholder.
Equifax was so terrible at computer security because it had no incentives to do a better job: even after one of the worst breaches in history, Equifax faced only minor penalties.
Three decades ago, New Zealand was the first country in the world to adopt a formal inflation target for its central bank. At around the same time, it also broke new ground in bank regulation with a focus on self-discipline and market-discipline with the regulator focusing mainly on systemic risks (a good summary is available here). Today, the Reserve Bank of New Zealand may be showing the way again with its proposal last week to almost double bank capital requirements.
More than the proposal itself, it is the approach that is interesting and likely to be influential. The fact that New Zealand is not a Basel Committee member gives it greater freedom to start from first principles. That is what they have done, starting with their mandate to promote a sound and efficient financial system. First, they express the soundness goal in risk appetite terms: “a banking crisis in New Zealand shouldn’t happen more than once every two hundred years”. Second, they interpret the efficiency goal in terms of the literature on optimal capital requirements. This means that they begin by computing the capital requirements that would reduce the probability of a crisis to less than 0.5% per year, and then go on to ask whether the optimal capital may be even higher. The capital requirement is thus the higher of the levels determined from the soundness and efficiency goals.
Another welcome thing about the proposal is that higher capital is seen as a way for the Reserve Bank of New Zealand to maintain its emphasis on self-discipline and market-discipline:
Capital requirements are the most important component of our overall regulatory arrangements. In the absence of stronger capital requirements, other rules and monitoring of banks’ activities would need to be much tougher.
They end up with Tier-1 capital of 16% as opposed to the existing 8.5% (6% + 2.5% conservation buffer). The 16% includes a countercyclical capital buffer, but unlike in other countries, this buffer would have a positive value at all times, except following a financial crisis. The 16% also includes a 1% D-SIB buffer for the large banks, but excludes the 2% Tier-2 capital requirement (which they are maintaining for the time being, though they would prefer to have only Tier-1 capital).
What is interesting is that 16% is not the regulatory minimum (that remains at the current 6% level). Their idea seems to be that above 16%, it is all self-discipline and market-discipline, but as capital falls below that level, the regulator starts getting involved according to a “framework of escalating supervisory responses based on objective triggers that can provide clarity and much more certainty”. On the other side, when banks are operating above 16%, the Reserve Bank will impose relatively less of a regulatory burden on them. They are even ready to consider allowing banks to change their internal risk models without any regulatory approval. Below 16%, the supervisory responses escalate as follows:
One of the dangers of international harmonization of financial sector regulation under the auspices of Basel, the FSB and the G20 has been the risk of a regulatory mono-culture. New Zealand, located at the edge of the world and outside the Basel system, is providing a good antidote to this risk.
Last week, the US District Court for the Southern District of New York issued a judgement dismissing the US CFTC’s complaint of market manipulation against Donald R. Wilson and DRW Investments (h/t Matt Levine). Describing the CFTC’s theories as little more than an “earth is flat” style conviction, the court wrote:
It is not illegal to be smarter than your counterparties in a swap transaction, nor is it improper to understand a financial product better than the people who invented that product. In the summer and fall of 2010, Don Wilson believed that he comprehended the true value of the Three-Month Contract better than anyone else, including IDCH, MF Global, and Jeffries. He developed a trading strategy based on that conviction, and put his firm’s money at risk to test it. He didn’t need to manipulate the market to capitalize on that superior knowledge, and there is absolutely no evidence to suggest that he ever did so in the months that followed.
In August 2011, DRW unwound its swap futures trade at a profit of $20 million, and the CEO of the biggest firm on the other side, Jeffries, emailed Wilson: “You won big. We lost big.” The mathematics behind this trade is well described in a paper by a well known academic quant and two quants who worked for DRW:
Rama Cont, Radu Mondescu and Yuhua Yu, “Central Clearing of Interest Rate Swaps: A Comparison of Offerings”, available on SSRN.
The purpose of this blog post is to ask a different question: how common is it for traders to make money simply through a better knowledge of the mathematics than other participants have? My sense is that this is relatively rare; traders usually make money by having a better understanding of the facts.
Perhaps the best known mathematical formula in the financial markets is the Black-Scholes option pricing formula, and Black has described his attempts to make money using this formula:
The best buy of all seemed to be National General new warrants. Scholes, Merton, and I and others jumped right in and bought a bunch of these warrants. For a while, it looked as if we had done just the right thing. Then a company called American Financial announced a tender offer for National General shares. The original terms of the tender offer had the effect of sharply reducing the value of the warrants. In other words, the market knew something that our formula didn’t know.
Black, F., 1989. “How we came up with the option formula”. Journal of Portfolio Management, 15(2), pp.4-8.
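The standard version of the formula (for a European call on a non-dividend-paying stock, not the dilution-adjusted variant needed for warrants like those in Black’s story) is short enough to state as code:

```python
from math import log, sqrt, exp, erf


def norm_cdf(x: float) -> float:
    """Standard normal CDF expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call on a non-dividend-paying stock.

    S: spot price, K: strike, T: time to expiry in years,
    r: continuously compounded risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For example, with spot 100, strike 100, one year to expiry, a 5% rate and 20% volatility, the formula gives a call price of about 10.45.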
Many years later, Black did make money with superior knowledge of the mathematics of option pricing. The well known finance academic Jay Ritter has described the sad story of being on the losing side of this trade:
I lost more in the futures market than I made from my academic salary. … Years later, I found out who was on the other side of the trades in the summer of 1986. It was Goldman Sachs, with Fischer Black advising the traders, that took me to the cleaners as the market moved from one pricing regime to another. In the first four years of the Value Line futures contract, the market priced the futures using the wrong formula. After the summer of 1986, the market priced the Value Line futures using the right formula. The September 1986 issue of the Journal of Finance published an article (Eytan and Harpaz, 1986) giving the correct formula for the pricing of the Value Line futures. In the transition from one pricing regime to the other, I was nearly wiped out.
Ritter, J.R., 1996. “How I helped to make Fischer Black wealthier”. Financial Management, 25(4), pp.104-107.
One person who did make money by understanding the mathematics of option pricing was Ed Thorp, who kept his knowledge secret till Black and Scholes discovered their formula and published it. Decades later, Thorp said in an interview:
… with blackjack, … I thought it was mathematically very interesting, so as an academic, I felt an obligation to publicize my findings so that people would begin to think differently about some of these games. … Moving on to the investment world, when I began Princeton/Newport Partners in 1969, I had this options formula, this tool that nobody else had, and I felt an obligation to the investors to basically be quiet about it. … I spent a lot of time and energy trying to stay ahead of the published academic frontier.
“Putting the Cards on the Table: A Talk with Edward O. Thorp, PhD”, Journal of Investment Consulting, Vol. 12, No. 1, pp. 5-14, 2011. Available at SSRN.
Academics in general have been content to publish their results even when they think those results are worth a billion dollars:
Longstaff, F.A., Santa-Clara, P. and Schwartz, E.S., 2001. “Throwing away a billion dollars: The cost of suboptimal exercise strategies in the swaptions market”. Journal of Financial Economics, 62(1), pp.39-66.
Using unpublished mathematical results to make money often has the effect of destroying the underlying market. Nasdaq (which owned IDCH) delisted the swap futures contract within months of DRW unwinding its profitable trade. Similarly, Fischer Black effectively destroyed the Value Line index contract through his activities. Markets work best when the underlying mathematical knowledge is widely shared. It is very unlikely that the option markets would have grown to their current size and complexity if the option pricing formulas had remained the secret preserve of Ed Thorp. Mathematics is at its best when it is the market that wins and not individual traders.
PS: One of the things that has puzzled me about the DRW case is that DRW was a founding member of Eris which offered a competing Swap Futures product. Why didn’t anybody raise a concern that DRW and Eris were conspiring to destroy IDCH? Of course, DRW would have the compelling defence that with $20 million of profits to be made from the arbitrage, they did not need any other motive to do the trade. But still it bothers me that the matter does not seem to have come up at all.
There is a large body of literature (mainly from the US) showing that a lot of the trading activity in response to earnings information happens in the options market. (The seminal paper in this field is Roll, R., Schwartz, E., & Subrahmanyam, A. (2010). O/S: The relative trading activity in options and stock. Journal of Financial Economics, 96(1), 1–17.) Unfortunately, the US and most other countries do not have a liquid single stock futures market, and so we do not know whether the options market was the preferred choice of the informed traders or whether it was the second best choice, substituting for the missing first choice (the futures market). If what the informed trader wants is leverage and the ability to sell short, futures are a much better vehicle because there is no option premium and no delta rebalancing cost. On the other hand, if the trader believes, for example, that there is a high probability of a large upside surprise in the earnings, counterbalanced by a more modest risk of a downside surprise, then the sensible way to express that view would be a bull-biased strangle (buy a substantial number of out-of-the-money calls and a somewhat smaller number of out-of-the-money puts). It would be too risky to trade this view in the futures market without the downside protection provided by options.
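The contrast between the two instruments is easy to see from their at-expiry payoffs. A minimal sketch (all strikes, premiums and quantities below are hypothetical, chosen only for illustration):

```python
def futures_pnl(entry: float, spot: float, qty: float = 1.0) -> float:
    """Linear P&L of a long futures position: unlimited upside and downside."""
    return qty * (spot - entry)


def strangle_pnl(spot: float, call_k: float, put_k: float,
                 n_calls: float, n_puts: float,
                 call_prem: float, put_prem: float) -> float:
    """At-expiry P&L of a bull-biased strangle: buy OTM calls plus fewer OTM puts.

    Worst case is losing the premiums paid; a large move in either
    direction (especially upward) is profitable.
    """
    calls = n_calls * (max(spot - call_k, 0.0) - call_prem)
    puts = n_puts * (max(put_k - spot, 0.0) - put_prem)
    return calls + puts
```

With the stock at 100, strikes of 105 and 95, three calls against one put, and a premium of 1 per option, the strangle’s loss is capped at the total premium of 4 even if the stock collapses, whereas a long futures position loses point-for-point all the way down.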
India provides the perfect setting to resolve this issue because it has liquid single stock futures and single stock options markets (both are among the largest such markets in the world). In a recent paper, my doctoral student Sonali Jain, my colleagues Prof. Sobhesh Agarwalla and Prof. Ajay Pandey, and I investigate this question (Jain S, Agarwalla SK, Varma JR, Pandey A. Informed trading around earnings announcements – Spot, futures, or options?. J Futures Markets. 2018. https://doi.org/10.1002/fut.21983). We find that in India, single stock futures play the role that the options market plays in the US, implying that informed traders seek the leverage benefits of derivatives rather than the nonlinear payoffs of options. We also find patterns in the data that are best explained by information leakage. Though Indian derivative markets are often disparaged as gambling dens dominated by noise traders, our results suggest that the futures markets are also venues for trading based on fundamentals.
Craig Pirrong writes on his Streetwise Professor blog that “Spreads price constraints.” Though Pirrong is talking about natural gas calendar spreads, I think this is an excellent way of thinking about many other spreads even for financial assets. In commodities, the constraints are obvious: for calendar spreads, the constraint is that you cannot move supply from the future to the present, for location spreads, the constraints are transportation bottlenecks, for quality spreads, technological constraints limit the elasticity of substitution between different grades (in case of intermediate goods), while inflexible tastes constrain the elasticity in case of final goods.
But the idea that “spreads price constraints” is also true for financial assets where the physical constraints of commodities are not applicable. The constraints here are more about limits to arbitrage — capital, funding, leverage and short-sale constraints, regulatory constraints on permissible investments, and constraints on the skilled human resources required to implement certain kinds of arbitrage.
Thinking of the spread as the shadow price of a constraint makes it much easier to understand the otherwise intractable statistical properties of the spread. Forget about normal distributions: even the popular fat tailed distributions (like the Student-t with 3-10 degrees of freedom) are completely inadequate for modelling these spreads. Modelling the two prices and computing the spread as their difference does not help, because modelling the dependence relationship (the copula) is fiendishly difficult (see my blog post about Nordic power spreads). But thinking about the spread as the shadow price of a constraint allows us to frame the problem in terms of standard optimization theory. Shadow prices can be highly non linear (even discontinuous) functions of the parameters of an optimization problem. For example, if the constraint is not binding, then the shadow price is zero, and changing the parameters makes no difference to the shadow price until the constraint becomes binding, at which point the shadow price might jump to a large value and might also become very sensitive to changes in various parameters.
This is in fact quite often observed in derivative markets — a spread may be very small and stable for years, and then it can suddenly shoot up to very high levels (orders of magnitude greater than its normal value), and can also then become very volatile. If the risk managers had succumbed to the temptation to treat the spread as a very low risk position, they would now be staring at a catastrophic failure of the risk management system. Risk managers would do well to refresh their understanding about duality theory in linear (and non linear) programming.
The Aadhaar abuse that I described as a hypothetical possibility a year ago has indeed happened in reality. In July 2017, I described the scenario in a blog post as follows:
That is when I realized that the error message that I saw on the employee’s screen was not coming from the Aadhaar system, but from the telecom company’s software. … Let us think about why this is a HUGE problem. Very few people would bother to go through the bodily contortion required to read a screen whose back is turned towards them. An unscrupulous employee could simply get me to authenticate the finger print once again though there was no error and use the second authentication to allot a second SIM card in my name. He could then give me the first SIM card and hand over the second SIM to a terrorist. When that terrorist is finally caught, the SIM that he was using would be traced back to me and my life would be utterly and completely ruined.
Last week, the newspapers carried a PTI report about a case going on in the Delhi High Court about exactly this vulnerability:
The Delhi High Court on Thursday suggested incorporating recommendations, like using OTP authentication instead of biometric, given by two amicus curiae to plug a ‘loophole’ in the Aadhaar verification system that had been misused by a mobile shop owner to issue fresh SIM cards in the name of unwary customers for use in fraudulent activities. The shop owner, during Aadhaar verification of a SIM, used to make the customer give his thumb impression twice by saying it was not properly obtained the first time and the second round of authentication was then used to issue a fresh connection which was handed over to some third party, the high court had earlier noted while initiating a PIL on the issue.
This vindicates what I wrote last year:
Using Aadhaar (India’s biometric authentication system) to verify a person’s identity is relatively secure, but using it to authenticate a transaction is extremely problematic. Every other form of authentication is bound to a specific transaction: I sign a document, I put my thumb impression to a document, I digitally sign a document (or message as the cryptographers prefer to call it). In Aadhaar, I put my thumb (or other finger) on a finger print reading device, and not on the document that I am authenticating. How can anybody establish what I intended to authenticate, and what the service provider intended me to authenticate? Aadhaar authentication ignores the fundamental tenet of authentication that a transaction authentication must be inseparably bound to the document or transaction that it is authenticating. Therefore using Aadhaar to authenticate a transaction is like signing a blank sheet of paper on which the other party can write whatever it wants.
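The tenet stated above — that an authentication must be inseparably bound to the document it authenticates — is precisely what a message authentication code (or a digital signature) provides. A minimal sketch using an HMAC (the key and documents are hypothetical; this is the general cryptographic idea, not a description of how Aadhaar actually works):

```python
import hmac
import hashlib


def authenticate(key: bytes, document: bytes) -> bytes:
    """Produce a tag cryptographically bound to the exact document approved."""
    return hmac.new(key, document, hashlib.sha256).digest()


def verify(key: bytes, document: bytes, tag: bytes) -> bool:
    """The tag verifies only against the document it was computed over."""
    return hmac.compare_digest(authenticate(key, document), tag)
```

Because the tag is a function of the document itself, a service provider cannot take an authentication given for one SIM application and reuse it for a second one: the tag simply fails to verify against the substituted document. A bare fingerprint scan, by contrast, is the same bytes regardless of what is being approved.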
A recent paper by my doctoral student Sonali Jain, my colleague Prof. Sobhesh Agarwalla, and myself (Jain S, Varma JR, Agarwalla SK. Indian equity options: Smile, risk premiums, and efficiency. J Futures Markets. 2018;1–14. https://doi.org/10.1002/fut.21971) studies the pricing of single stock options in India, which is one of the world’s largest options markets.
Our findings are supportive of market efficiency: A parsimonious smile-adjusted Black model fits option prices well, and the implied volatility (IV) has incremental predictive power for future volatility. However, the risk premium embedded in IV for Single Stock Options appears to be higher than in other markets. The study suggests that even a very liquid market with substantial participation of global institutional investors can have structural features that lead to systematic departures from the behavior of a fully rational market while being “microefficient.”
The good news here is that (a) options with different strikes on the same stock are nicely consistent with each other (parsimonious smile), and (b) the option market predicts future volatility instead of blindly extrapolating past volatility. The troubling part is that the implied volatility of Indian single stock options consistently exceeds realized volatility by too large an amount to be easily explained as a rational risk premium. Globally, there is a substantial risk premium in index options but not so much in single stock options, in accordance with the intuition that changes in index volatility are a non diversifiable risk, while fluctuations in the idiosyncratic volatility of individual stocks are probably diversifiable. The large gap between Indian implied and realized volatility is therefore problematic. However, the phenomenon cannot be attributed entirely to an irrational market: we find that the single stock implied volatility has a strong systematic component responding to changes in market wide risk aversion (the index option smile).
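The gap in question is computed by comparing the option-implied volatility with the volatility subsequently realized by the stock. A minimal sketch of the realized side (annualized standard deviation of daily log returns; the 252-trading-day convention and the price series are illustrative, and this is a simplification of the estimators used in the paper):

```python
from math import log, sqrt


def realized_vol(prices: list[float], periods_per_year: int = 252) -> float:
    """Annualized standard deviation of daily log returns."""
    rets = [log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return sqrt(var * periods_per_year)


def vol_risk_premium(implied_vol: float, prices: list[float]) -> float:
    """Gap between option-implied and subsequently realized volatility.

    A persistently large positive value is the phenomenon discussed above.
    """
    return implied_vol - realized_vol(prices)
```

A persistently positive `vol_risk_premium` means option writers are, on average, being paid more than the subsequent movement of the stock would justify on a pure expected-loss basis.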
There is a puzzle here that demands further research. There is some anecdotal evidence that option writers demand a risk premium for expiry day manipulation by the promoters of the company. I also think that there is a shortage of capital devoted to option writing despite the emergence of a few alternative investment funds in this area. Perhaps there are other less well understood barriers to implementing a diversified option writing strategy in India.
I had the opportunity to engage in a conversation with Nobel Laureate Robert Merton after he delivered the R H Patil Memorial Lecture as part of the Silver Jubilee celebrations of the National Stock Exchange last week. The video is available here, and a large part of the conversation is about whether financial markets can be trusted more than financial institutions particularly in the Indian context.
Last month, the loss caused by the default of a single trader in a Nordic power spread contract cleared by Nasdaq Clearing consumed the entire €7 million contribution of Nasdaq to the default waterfall and then wiped out more than two thirds of the €168 million default fund of the Commodities Market segment of Nasdaq (the diagram on page 7 of this document shows the entire default waterfall for this episode).
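The waterfall logic itself is simple: a default loss is absorbed by each layer in order of seniority until it is exhausted. A minimal sketch (the layer names and the loss figure in the test are hypothetical round numbers, not the exact amounts from the Nasdaq episode):

```python
def allocate_loss(loss: float, layers: list[tuple[str, float]]) -> dict[str, float]:
    """Consume a default loss through waterfall layers in order of seniority.

    `layers` lists (name, size) pairs from most junior to most senior,
    e.g. defaulter's margin, then the CCP's own contribution, then the
    mutualized default fund. Returns the amount taken from each layer.
    """
    consumed = {}
    remaining = loss
    for name, size in layers:
        take = min(remaining, size)
        consumed[name] = take
        remaining -= take
    return consumed
```

In the episode described above, the loss blew straight through the clearing house’s own €7 million layer and deep into the mutualized fund, which is exactly what this ordering implies once the junior layers are exhausted.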
Nasdaq explained its margin methodology as follows:
The margin model is set to cover stressed market conditions, covering at least 99.2% of all 2-day market movements over the recent 12 month period. In the final step of the margin curve estimation a pro-cyclicality buffer of 25% is applied.
The MPOR (Margin Period of Risk) for the relevant products is two days.
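Taken at face value, the stated methodology is an empirical-quantile rule: find the loss level that covers 99.2% of historical 2-day moves, then scale it up by 25%. A rough reconstruction (our reading of the public description, not Nasdaq’s actual model, and the sample data in the test are invented):

```python
def margin_requirement(two_day_moves: list[float],
                       coverage: float = 0.992,
                       buffer: float = 0.25) -> float:
    """Empirical-quantile margin per the stated methodology.

    Take the absolute 2-day price moves over the lookback window, find the
    level covering at least `coverage` of them, then apply the
    pro-cyclicality buffer.
    """
    losses = sorted(abs(m) for m in two_day_moves)
    # Index of the observation at (or just above) the coverage level.
    idx = min(len(losses) - 1, int(coverage * len(losses)))
    return losses[idx] * (1.0 + buffer)
```

The weakness this episode exposed is visible in the code: the margin is driven entirely by the recent 12-month window, so a spread that has been quiet for a year produces a small margin right up until the moment it explodes.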
It also provided the following historical data:
There has been a lot of excellent commentary on this episode:
The episode highlights a number of important lessons about risk management that we knew even before this default happened: