Prof. Jayanth R. Varma’s Financial Markets Blog

A blog on financial markets and their regulation

Revolving door and favouring future employers

Canayaz, Martinez and Ozsoylev have a nice paper showing that the pernicious effect of the revolving door (at least in the US) is largely about government employees favouring their future private sector employers. It is not so much about government employees favouring their past private sector employers or about former government employees influencing their former colleagues in the government to favour their current private sector employers.

Their methodology relies largely on measuring the stock market performance of the private sector companies whose employees have gone through the revolving door (in either direction) and comparing these returns with those of a control group of companies that have not used the revolving door. The abnormal returns are computed using the Fama-French-Carhart four factor model.
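
For readers who want to see the mechanics, here is a minimal sketch (not the authors' code) of how four-factor abnormal returns are typically estimated: regress a firm's excess returns on the market, size, value and momentum factors and read the abnormal return off the intercept. The factor column names and data layout below are my assumptions.

    import pandas as pd
    import statsmodels.api as sm

    def carhart_alpha(firm_returns: pd.Series, factors: pd.DataFrame) -> float:
        """Estimate the abnormal return (alpha) of one firm.

        firm_returns: the firm's periodic returns, indexed by date
        factors: DataFrame with columns 'RF', 'Mkt-RF', 'SMB', 'HML', 'MOM'
        """
        excess = firm_returns - factors["RF"]                   # return over the risk-free rate
        X = sm.add_constant(factors[["Mkt-RF", "SMB", "HML", "MOM"]])
        fit = sm.OLS(excess, X, missing="drop").fit()
        return fit.params["const"]                              # intercept = abnormal return

The paper's comparison is then between the average abnormal return of the revolver firms and that of the control firms.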

The advantage of the methodology is that it avoids subjective judgements about whether, for example, US Treasury Secretary Hank Paulson favoured his former employer, Goldman Sachs, during the financial crisis of 2008. It also avoids having to identify the specific favours that were done. The sample size also appears to be reasonably large: they have 23 years of data (1990-2012), with an average of 62 revolvers working in publicly traded firms each year.

The negative findings in the paper are especially interesting, and if true, could make it easy to police the revolving door. All that is required is a rule that when a (former) government employee joins the private sector, a special audit would be carried out of all decisions taken by that employee during the past couple of years that might have favoured the prospective private sector employer. In particular, the resistance in India to hiring private sector professionals for important government positions (because they might favour their former employer) would appear to be misplaced.

One weakness in the methodology is that companies which anticipate financial distress in the immediate future might hire former government employees to help them lobby for some form of bailout. This might ensure that though their stock price declines due to the distress, it does not decline as much as it otherwise would have. The excess return methodology would not, however, show any gain from hiring the revolver, because the Fama-French excess returns would be negative rather than positive. Similarly, companies which anticipate financial distress might take steps (for example, campaign contributions) that make it more likely that their employees are recruited into key government positions. Again, the excess return methodology would not pick up the resulting benefit.

Just in case you are wondering what all this has to do with a finance blog, the paper says that “[t]he financial industry, … is a substantial employer of revolvers, giving jobs to twice as many revolvers as any other industry.” (Incidentally, Table A1 in their paper shows that including or excluding the financial industry in the sample makes no difference to their key findings.) And of course, the methodology is pure finance, and shows how much information can be gleaned from a rigorous examination of asset prices.

On may versus must and suits versus geeks

On Monday, the Basel Committee on Banking Supervision published its Regulatory Consistency Assessment Programme (RCAP) Assessment of India’s implementation of Basel III risk-based capital regulations. While the RCAP Assessment Team assessed India as compliant with the minimum Basel capital standards, they had a problem with the Indian use of the word “may” where the rest of the world uses “must”:

The team identified an overarching issue regarding the use of the word “may” in India’s regulatory documents for implementing binding minimum requirements. The team considers linguistic clarity of overarching importance, and would recommend the Indian authorities to use the word “must” in line with international practice. More generally, authorities should seek to ensure that local regulatory documents can be unambiguously understood even in an international context, in particular where these apply to internationally active banks. The issue has been listed for further reflection by the Basel Committee. As implementation of Basel standards progresses, increased attention to linguistic clarity seems imperative for a consistent and harmonised transposition of Basel standards across the member jurisdiction.

Section 2.7 lists over a dozen instances of such usage of the word “may”. For example:

Basel III paragraph 149 states that banks “must” ensure that their CCCB requirements are calculated and publicly disclosed with at least the same frequency as their minimum capital requirements. The RBI guidelines state that CCCB requirements “may” be disclosed at table DF-11 of Annex 18 as indicated in the Basel III Master Circular.

Ultimately, the RCAP Assessment Team adopted a pragmatic approach of reporting this issue as an observation rather than a finding. They were no doubt swayed by the fact that:

Senior representatives of several Indian banks unequivocally confirmed to the team during the on-site discussions that there is no doubt that the intended meaning of “may” in Indian banking regulations is “shall” or “must” (except where qualified by the phrase “may, at the discretion of” or similar terms).

The Indian response to the RCAP Assessment argues that “may” is perfectly appropriate in the Indian context.

RBI strongly believes that communication, including regulatory communications, in order to be effective, must necessarily follow the linguistics and social characteristics of the language used in the region (Indian English in this case), which is rooted in the traditions and customs of the jurisdiction concerned. What therefore matters is how the regulatory communications have been understood and interpreted by the regulated entities. Specific to India, the use of word “may” in regulations is understood contextually and construed as binding where there is no qualifying text to convey optionality. We are happy that the Assessment Team has appreciated this point.

I tend to look at this whole linguistic analysis in terms of the suits versus geeks divide. It is true that in Indian banking, most of the suits would agree that when RBI says “may” it means “must”. But increasingly in modern finance, the suits do not matter as much as the geeks. In fact, humans matter less than the computers and the algorithms that they execute. I like to joke that in modern finance the humans get to decide the interesting things like when to have a tea break, while the computers decide the important things like when to buy and sell.

For any geek worth her salt, the bible on the subject of “may” and “must” is RFC 2119, which states that “must” means that the item is an absolute requirement; “should” means that there may exist valid reasons in particular circumstances to ignore a particular item; and “may” means that an item is truly optional. I will let Arnold Kling have the last word: “Suits with low geek quotients are dangerous”.

Back from vacation

My long vacation provided the ideal opportunity to reflect on the large number of comments that I received on my last blog post about the tenth anniversary of my blog. These comments convinced me that I should not only keep my blog going but also try to engage more effectively with my readers. Over the next few weeks and months, I intend to implement many of the excellent suggestions that you have given me.

First of all, I have set up a Facebook page for this blog. This post and all future blog posts will appear on that page so that readers can follow the blog from there as well. My blog posts have been on Twitter for over six years now, and this will continue.

Second, I have started a new blog on computing with its own Facebook page which will over a period of time be backed up by a GitHub presence. I did not want to dilute the focus of this blog on financial markets and therefore decided that a separate blog was the best route to take. At the end of every month, I intend to post on each blog a list of posts on the sister blog, but otherwise this blog will not be contaminated by my meanderings in fields removed from financial markets.

Third, I will be experimenting with different kinds of posts that I have not done so far. This will be a slow process of learning and you might not observe any difference for many months.

Reflections on tenth anniversary

My blog reaches its tenth anniversary tomorrow: over ten years, I have published 572 blog posts at a frequency of approximately once a week.

My first genuine blog post (not counting a test post and a “coming soon” post) on March 29, 2005 was about an Argentine creditor (NML Capital) trying to persuade a US federal judge (Thomas Griesa) to attach some bonds issued by Argentina. The idea that a debtor’s liabilities (rather than its assets) could be attached struck me as funny. Ten years on, NML and Argentina are still battling it out before Judge Griesa, but things have moved from the comic to the tragic (at least from the Argentine point of view).

The most fruitful period for my blog (as for many other blogs) was the global financial crisis and its aftermath. The blog posts and the many insightful comments that my readers posted on the blog were the principal vehicle through which I tried to understand the crisis and to formulate my own views about it. During the last year or so, things have become less exciting. The blogosphere has also become a lot more crowded than it was when I began. Many times, I find myself abandoning a potential blog post because so many others have already blogged about it.

When I look back at the best bloggers that I followed in the mid and late 2000s, some have quit blogging because they found that they no longer had enough interesting things to say; a few have sold out to commercial organizations that turned their blogs into clickbait; at least one blogger has died; some blogs have gradually declined in relevance and quality; and only a tiny fraction have remained worthwhile blogs to read.

The tenth anniversary is therefore less an occasion for celebration, and more a reminder of senescence and impending mortality for a blog. I am convinced that I must either reinvent my blog or quit blogging. April and May are the months during which I take a long vacation (both from my day job and from my blogging). That gives me enough time to think about it and decide.

If you have some thoughts and suggestions on what I should do with my blog, please use the comments page to let me know.

How does a bank say that its employees are a big security risk?

Very simple. Describe them as your greatest resource!

In my last blog post, I pointed out that the Carbanak/Anunak hack was mainly due to the recklessness of the banks’ own employees and system administrators. Now that they are aware of this, banks have to disclose this as another risk factor in their regulatory filings. Here is how one well known US bank made this disclosure in its Form 10-K (page 39) last week (h/t the ever diligent Footnoted.com):

We are regularly the target of attempted cyber attacks, including denial-of-service attacks, and must continuously monitor and develop our systems to protect our technology infrastructure and data from misappropriation or corruption.

Notwithstanding the proliferation of technology and technology-based risk and control systems, our businesses ultimately rely on human beings as our greatest resource, and from time-to-time, they make mistakes that are not always caught immediately by our technological processes or by our other procedures which are intended to prevent and detect such errors. These can include calculation errors, mistakes in addressing emails, errors in software development or implementation, or simple errors in judgment. We strive to eliminate such human errors through training, supervision, technology and by redundant processes and controls. Human errors, even if promptly discovered and remediated, can result in material losses and liabilities for the firm.

Carbanak/Anunak: Patient Bank Hacking

There was a spate of press reports a week back about a group of hackers (referred to as the Carbanak or Anunak group) who had stolen nearly a billion dollars from close to a hundred different banks and financial institutions around the world. I got around to reading the technical reports about the hack only now: the Kaspersky report and blog post as well as the Group-IB/Fox-IT report of December 2014 and their recent update. A couple of blog posts by Brian Krebs also helped.

The two technical analyses differ on a few details: Kaspersky suggests that the hackers had a Chinese connection, while Group-IB/Fox-IT suggests that they were Russian. Kaspersky also seems to have had access to some evidence discovered by law enforcement agencies (including files on the servers used by the hackers). Group-IB/Fox-IT talk only about Russian banks as the victims, while Kaspersky reveals that some US-based banks were also hacked. But by and large the two reports tell a similar story.

The hackers did not resort to the obvious ways of skimming money from a bank. To steal money from an ATM, they did not steal customer ATM cards or PINs. Nor did they tamper with the ATM itself. Instead they hacked into the personal computers of bank staff, including system administrators, and used these hacked machines to send instructions to the ATM using the banks’ ATM infrastructure management software. An ATM, for example, uses Windows registry keys to determine which tray of cash contains 100 ruble notes and which contains 5000 ruble notes: the CASH_DISPENSER registry key might have VALUE_1 set to 5000 and VALUE_4 set to 100. A system administrator can change these settings to tell the ATM that the cash has been loaded into different trays by setting VALUE_1 to 100 and VALUE_4 to 5000 and restarting Windows to let the new values take effect. The hackers did precisely that (using the system administrators’ hacked PCs), so that an ATM which thinks it is dispensing 1000 rubles in the form of ten 100 ruble notes would actually dispense 50,000 rubles (ten 5000 ruble notes).
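
Here is a toy illustration of that denomination swap. The names CASH_DISPENSER, VALUE_1 and VALUE_4 come from the technical reports quoted above; everything else is hypothetical and is certainly not real ATM software.

    # What is physically loaded in the trays versus what the hacked ATM believes
    config_true   = {"VALUE_1": 5000, "VALUE_4": 100}   # tray 1 really holds 5000-ruble notes
    config_hacked = {"VALUE_1": 100,  "VALUE_4": 5000}  # hacked registry says tray 1 holds 100s

    def dispense(amount_requested, believed, actual, tray="VALUE_1"):
        # The ATM counts out notes from the tray it *believes* holds small notes
        notes = amount_requested // believed[tray]        # 1000 // 100 = 10 notes
        return notes * actual[tray]                       # but each of those notes is really a 5000

    print(dispense(1000, config_hacked, config_true))     # 50000 rubles, as described above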

Similarly, an ATM has a debug functionality that allows a technician to test the functioning of the ATM. With the ATM vault door open, a technician can issue a command to the ATM to dispense a specified amount of cash. There is no hazard here because, with the vault door open, the technician has access to all the cash anyway without issuing any command. With access to the system administrators’ machines, the hackers simply deleted the piece of code that checked whether the vault door was open. All they then needed was a mole standing in front of the ATM when they issued a command to dispense a large amount of cash.

Of course, ATMs were not the only way to steal money. Online fund transfer systems could be used to transfer funds to accounts owned by the hackers. Since the hackers had compromised the administrators’ accounts, they had no difficulty getting the banks to transfer the money. The only problem was to prevent the money from being traced back to the hackers after the fraud was discovered. This was achieved by routing the money through several layers of legal entities before loading it onto hundreds of credit cards which had been prepared in advance.

It is a very effective way to steal money, but it requires a lot of patience. “The average time from the moment of penetration into the financial institutions internal network till successful theft is 42 days.” Using emails with malicious attachments to hack a bank employee’s computer, the hackers patiently worked their way laterally, infecting the machines of other employees until they succeeded in compromising a system administrator’s machine. Then they patiently collected data about the banks’ internal systems using screenshots and videos sent from the administrators’ machines by the hackers’ malware. Once they understood the internal systems well, they could use those systems to steal money.

The lesson for banks and financial institutions is that it is not enough to ensure that the core computer systems are defended in depth. The Snowden episode showed that the most advanced intelligence agencies in the world are vulnerable to subversion by their own administrators. The Carbanak/Anunak incident shows that well defended bank systems are vulnerable to the recklessness of their own employees and system administrators using unpatched Windows computers and carelessly clicking on malicious email attachments.

Loss aversion and negative interest rates

Loss aversion is a basic tenet of behavioural finance, particularly prospect theory. It says that people are averse to losses and become risk seeking when confronted with certain losses. There is a huge amount of experimental evidence in support of loss aversion, and Daniel Kahneman won the Nobel Prize in Economics mainly for his work in prospect theory.

What are the implications of prospect theory for an economy with pervasive negative interest rates? As I write, German bund yields are negative up to a maturity of five years. Swiss yields are negative out to eight years (until a few days back, it was negative even at the ten year maturity). France, Denmark, Belgium and Netherlands also have negative yields out to at least three years.

A negative interest rate represents a certain loss to the investor. If loss aversion is as pervasive in the real world as it is in the laboratory, then investors should be willing to accept an even more negative expected return in risky assets if these risky assets offer a good chance of avoiding the certain loss. For example, if the expected return on stocks is -1.5% with a volatility of 15%, then there is a 41% chance that the stock market return is positive over a five year horizon (assuming a normal distribution). If the interest rate is -0.5%, a person with sufficiently strong loss aversion would prefer the 59% chance of loss in the stock market to the 100% chance of loss in the bond market. Note that this is the case even though the expected return on stocks in this example is less than that on bonds. As loss averse investors flee from bonds to stocks, the expected return on stocks should fall and we should have a negative equity risk premium. If there are any neo-classical investors in the economy who do not conform to prospect theory, they would of course see this as a bubble in the equity market; but if laboratory evidence extends to the real world, there would not be many of them.
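
The 41% figure is easy to reproduce with a few lines of code, using the same illustrative assumptions as above (normally distributed cumulative returns, a -1.5% expected annual return, 15% annual volatility and a five-year horizon):

    from scipy.stats import norm

    mu, sigma, horizon = -0.015, 0.15, 5
    mean_5y = mu * horizon              # -7.5% expected cumulative return
    vol_5y = sigma * horizon ** 0.5     # about 33.5% five-year volatility
    p_positive = 1 - norm.cdf(0, loc=mean_5y, scale=vol_5y)
    print(f"P(five-year stock return > 0) = {p_positive:.0%}")   # roughly 41%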

The second consequence would be that we would see a flipping of the investor clientele in equity and bond markets. Before rates went negative, the bond market would have been dominated by the most loss averse investors. These highly loss averse investors should be the first to flee to the stock markets. At the same time, it should be the least loss averse investors who would be tempted by the higher expected return on bonds (-0.5%) than on stocks (-1.5%) and would move into bonds overcoming their (relatively low) loss aversion. During the regime of positive interest rates and positive equity risk premium, the investors with low loss aversion would all have been in the equity market, but they would now all switch to bonds. This is the flipping that we would observe: those who used to be in equities will now be in bonds, and those who used to be in bonds will now be in equities.

This predicted flipping is a testable hypothesis. Examination of the investor clienteles in equity and bond markets before and after a transition to negative interest rates will allow us to test whether prospect theory has observable macro consequences.

Bank deposits without those exotic swaptions

Yesterday, the Reserve Bank of India did retail depositors a favour: it announced that it would allow banks to offer “non-callable deposits”. Currently, retail deposits are callable (depositors have the facility of premature withdrawal).

Why can the facility of premature withdrawal be a bad thing for retail depositors? It would clearly be a good thing if the facility came free. But in a free market, it would be priced. The facility of premature withdrawal is an embedded American-style swaption, and a callable deposit is just a non-callable deposit bundled with that swaption, whether the depositor wants that bundle or not. You pay for the swaption whether you need it or not.

Most depositors would not exercise that swaption optimally, for the simple reason that optimal exercise is a difficult optimization problem. Fifteen years ago, Longstaff, Santa-Clara and Schwartz wrote a paper showing that Wall Street firms were losing billions of dollars because they were using oversimplified (single-factor) models to exercise American-style swaptions (“Throwing away a billion dollars: The cost of suboptimal exercise strategies in the swaptions market”, Journal of Financial Economics 62.1 (2001): 39-66). Even those simplified (single-factor) models would be far beyond the reach of most retail depositors. It is safe to assume that almost all retail depositors behave suboptimally in exercising their premature withdrawal option.

In a competitive market, the callable deposits would be priced using a behavioural exercise model and not an optimal exercise strategy. Still, the problem remains. Some retail depositors would exercise their swaptions better than others. A significant fraction might just ignore the swaption unless they have a liquidity need to withdraw the deposits. These ignorant depositors would subsidize the smarter depositors who exercise it frequently (though still suboptimally). And it makes no sense at all for the regulator to force this bad product on all depositors.
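
To see why the exercise decision is genuinely an option problem, here is a deliberately simple sketch. It assumes a stylized penalty structure in which premature withdrawal earns the contracted rate minus a penalty for the elapsed period; the rates, penalty and convention are illustrative assumptions, not any bank's actual terms.

    def hold_to_maturity(principal, r_old, total_years):
        # Value at maturity if the depositor never exercises the withdrawal option
        return principal * (1 + r_old) ** total_years

    def withdraw_and_redeposit(principal, r_old, penalty, elapsed, remaining, r_new):
        # Penalised payout today, reinvested for the remaining term at the currently available rate
        proceeds = principal * (1 + r_old - penalty) ** elapsed
        return proceeds * (1 + r_new) ** remaining

    p = 100_000
    print(hold_to_maturity(p, 0.08, 5))                                                # about 146,933
    print(withdraw_and_redeposit(p, 0.08, 0.01, elapsed=2, remaining=3, r_new=0.095))  # about 150,318

Whether switching pays depends on the penalty, the time remaining and, crucially, on where rates might go next, which is what makes the embedded swaption hard to exercise well.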

Post global financial crisis, there is a push towards plain vanilla products. The non callable deposit is a plain vanilla product. The current callable version is a toxic/exotic derivative.

The politics of SEC enforcement or is it data mining?

Last month, Jonas Heese published a paper on “Government Preferences and SEC Enforcement”, which purports to show that the US Securities and Exchange Commission (SEC) refrains from taking enforcement action against companies for accounting restatements when such action could cause large job losses, particularly in an election year and particularly in politically important states. The results show that:

  • The SEC is less likely to take enforcement action against firms that employ relatively more workers (“labour intensive firms”).
  • This effect is stronger in a year in which there is a presidential election.
  • The election year effect, in turn, is stronger in the politically important states that determine the electoral outcome.
  • Enforcement action is also less likely if the labour intensive firm is headquartered in the district of a senior congressman who serves on a committee that oversees the SEC.

All the econometrics appear convincing:

  • The data includes all enforcement actions pertaining to accounting restatements over a 30 year period from 1982 to 2012: nearly 700 actions against more than 300 firms.
  • A comprehensive set of control variables has been used, including the F-score, which has been used in previous literature to predict accounting restatements.
  • A variety of robustness and sensitivity tests have been used to validate the results.

But then, I realized that there is one very big problem with the paper – the definition of labour intensity:

I measure LABOR INTENSITY as the ratio of the firm’s total employees (Compustat item: EMP) scaled by current year’s total average assets. If labor represents a relatively large proportion of the factors of production, i.e., labor relative to capital, the firm employs relatively more employees and therefore, I argue, is less likely to be subject to SEC enforcement actions.

Seriously? I mean, does the author seriously believe that politicians would happily attack a $1 billion company with 10,000 employees (because it has a relatively low labour intensity of 10 employees per $1 million of assets), but would be scared of targeting a $10 million company with 1,000 employees (because it has a relatively high labour intensity of 100 employees per $1 million of assets)? Any politician with such a weird electoral calculus is unlikely to survive for long in politics. (But a paper based on this alleged electoral calculus might even get published!)
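
Restating the arithmetic in the paper's own employees-per-assets terms (the two firms are the hypothetical ones in the paragraph above):

    firms = {
        "large firm": {"employees": 10_000, "assets_musd": 1_000},  # $1 billion of assets
        "small firm": {"employees": 1_000,  "assets_musd": 10},     # $10 million of assets
    }
    for name, f in firms.items():
        intensity = f["employees"] / f["assets_musd"]
        print(name, intensity, "employees per $1 million of assets")
    # large firm: 10, small firm: 100 -- the measure labels the 1,000-employee
    # firm far more "labour intensive" than the 10,000-employee one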

I now wonder whether the results are all due to data mining. Hundreds of researchers are trying many things: they are choosing different subsets of SEC enforcement actions (say accounting restatements), they are selecting different subsets of companies (say non financial companies) and then they are trying many different ratios (say employees to assets). Most of these studies go nowhere, but a tiny minority produce significant results and they are the ones that we get to read.

Why did the Swiss franc take half a million milliseconds to hit one euro?

Updated

In high frequency trading, nine minutes is an eternity: it is half a million milliseconds – enough time for five billion quotes to arrive in the hyperactive US equity options market at its peak rate. On a human time scale, nine minutes is enough time to watch two average online content videos.
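
For the record, the arithmetic behind those numbers, using the figures quoted in this post:

    minutes = 9
    milliseconds = minutes * 60 * 1000     # 540,000 ms, roughly half a million
    quotes = 5_000_000_000
    print(milliseconds, "milliseconds")
    print(quotes / (minutes * 60), "quotes per second at the implied peak rate")  # about 9.3 million/sec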

So what puzzles me about the soaring Swiss franc last week (January 15) is not that it rose so much, nor that it massively overshot its fair level, but that the initial rise took so long. Here is the time line of how the franc moved:

  • At 9:30 am GMT, the Swiss National Bank (SNB) announced that it was “discontinuing the minimum exchange rate of CHF 1.20 per euro” that it had set three years earlier. I am taking the time stamp of 9:30 GMT from the “dc-date” field in the RSS feed of the SNB which reads “2015-01-15T10:30:00+01:00” (10:30 am local time which is one hour ahead of GMT).
  • The headline “SNB ENDS MINIMUM EXCHANGE RATE” appeared on Bloomberg terminals at 9:30 am GMT itself. Bloomberg presumably runs a super fast version of “if this then that”. (It took Bloomberg nine minutes to produce a human-written story about the development, but anybody who needs a human-written story to interpret that headline has no business trading currencies.)
  • At the end of the first minute, the euro had traded down to only 1.15 francs; at the end of the third minute, it still traded above 1.10. The next couple of minutes saw a lot of volatility, with the euro falling below 1.05 and recovering to 1.15. At the end of minute 09:35, the euro again dropped below 1.05 and started trending down. It was only around 09:39 that it fell below 1.00. It is these nine minutes (half a million milliseconds) that I find puzzling.
  • The euro hit its low (0.85 francs) at 09:49, nineteen minutes (1.1 million milliseconds) after the announcement. This overshooting is understandable because the surge in the franc would have triggered many stop loss orders and knocked out many barrier options.
  • Between 09:49 and 09:55, the euro recovered from its low and after that it traded between 1.00 and 1.05 francs.

It appears puzzling to me that no human trader was taking out every euro bid in sight at around 9:33 am or so. I find it hard to believe that somebody like a George Soros in his heyday would have taken more than a couple of minutes to conclude that the euro would drop well below 1.00. It would then make sense to simply hit every euro bid above 1.00 and then wait for the point of maximum panic to buy the euros back.

Is it that high frequency trading has displaced so many human traders that there are too few humans left who can trade boldly when the algorithms shut down? Or are we in a post crisis era of mediocrity in the world of finance?

Updated to correct 9:03 to 9:33, change eight billion to five billion and end the penultimate sentence with a question mark.
