Prof. Jayanth R. Varma’s Financial Markets Blog

A blog on financial markets and their regulation

Funding Value Adjustments

The global financial crisis led to a lot of turmoil in derivative markets and large players introduced a number of changes in their valuation models. Acronyms like CVA (Credit Value Adjustment), DVA (Debit Value Adjustment) and FVA (Funding Value Adjustment) became quite commonplace. Of these, CVA and DVA have strong theoretical foundations and have gained wide-ranging acceptance. But FVA remains controversial because it contradicts long-standing financial theories. Hull and White wrote an incisive article, “The FVA Debate”, explaining why it is a mistake to use FVA either for valuing derivative positions on the balance sheet or for trading decisions. But four years later, FVA shows no signs of just going away.

Three months back, Andersen, Duffie and Song wrote a more nuanced piece on Funding Value Adjustments arguing that FVA will influence traded prices, but not balance sheet valuations. I have written a simplified note explaining the Andersen-Duffie-Song model, but at bottom it is more a capital structure (debt overhang) issue than a derivative valuation issue.

Consider therefore a very simple capital structure problem of borrowing a small amount (say 1 unit) to invest in the risk free asset. The qualifier “small” is used to ensure that this borrowing itself does not change the company’s (risk neutral) Probability of Default (PD), Loss Given Default (LGD) or credit spread (s). From standard finance theory we get s = DL/(1 − DL), where the expected Default Loss (DL) is given by DL = PD × LGD. For simplicity, we assume that the interest rate is zero (which is probably not too far from the median interest rate in the world today).

  • At default (which happens with probability PD), the pre-existing creditors pay only (1 − LGD)(1 + s) to the new lender and receive 1 from the risk free asset, for a net gain of LGD − s + s × LGD. The expected gain to the pre-existing creditors is therefore PD (LGD − s + s × LGD), which after some tedious algebra (spelled out just after this list) reduces to (1 − PD) s.

  • If there is no default (which happens with probability 1 − PD), the shareholders pay 1 + s to the new lender but collect only 1 from the risk free asset. The expected loss to them is (1 − PD) s, which is the same as the expected gain to the pre-existing creditors.
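
For readers who want the intermediate steps, the “tedious algebra” in the first bullet can be spelled out as follows. It uses only the relations given above: the new lender breaks even in risk-neutral expectation, which is where s = DL/(1 − DL) comes from, and that relation can be rewritten as DL = s/(1 + s).

```latex
% Spread from the new lender's break-even condition, and the reduction to (1 - PD)s
\begin{align*}
(1+s)\,(1-\mathrm{DL}) = 1
  \;&\Longrightarrow\;
  s = \frac{\mathrm{DL}}{1-\mathrm{DL}}
  \;\Longleftrightarrow\;
  \mathrm{DL} = \frac{s}{1+s} \\[4pt]
\mathrm{PD}\,(\mathrm{LGD} - s + s\,\mathrm{LGD})
  &= \mathrm{PD}\,\mathrm{LGD}\,(1+s) - \mathrm{PD}\,s
   = \mathrm{DL}\,(1+s) - \mathrm{PD}\,s \\
  &= \frac{s}{1+s}\,(1+s) - \mathrm{PD}\,s
   = (1-\mathrm{PD})\,s
\end{align*}
```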

The transaction does not change the value of the firm, but there would be a transfer of wealth from shareholders to pre-existing creditors. Somebody who owns a vertical slice of the company (say 10% of the equity and 10% of the pre-existing debt) would be quite happy to buy the risk free asset at its fair value of 1, but if the shareholders are running the company, they would refuse to do so. (This is of course the standard corporate finance result that a debt overhang causes the firm to reject low-risk, low-return, positive-NPV projects because they transfer wealth to creditors.) The shareholders would be ready to buy the risk free asset only if it is available at a price of 1/(1+s). At this price, the shareholders are indifferent, the pre-existing creditors gain a benefit and the counterparty (the seller of the risk free asset) suffers a loss equal to s/(1+s). The price of 1/(1+s) includes an FVA because it is obtained by discounting the cash flows of the risk free asset not at the risk free rate of 0, but at the company’s funding cost of s.
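
A quick numeric check of this example is sketched below; the values of PD and LGD are purely illustrative and are not taken from the post.

```python
import math

PD, LGD = 0.04, 0.50                 # assumed risk-neutral default probability and loss given default
DL = PD * LGD                        # expected default loss
s = DL / (1 - DL)                    # credit spread, s = DL/(1 - DL)

# Borrow 1 at spread s and buy the risk free asset at its fair value of 1:
gain_at_default = 1 - (1 - LGD) * (1 + s)          # = LGD - s + s*LGD
expected_gain_creditors = PD * gain_at_default
expected_loss_shareholders = (1 - PD) * s
print(math.isclose(expected_gain_creditors, expected_loss_shareholders))   # True: a pure wealth transfer

# Buy instead at 1/(1+s): the debt repayment equals the asset's payoff of 1, so
# shareholders are indifferent, and the seller's loss relative to fair value is s/(1+s).
price = 1 / (1 + s)
print(round(price * (1 + s), 12))                   # 1.0
print(math.isclose(1 - price, s / (1 + s)))         # True
```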

Now consider a derivative dealer doing a trade with a risk free counterparty in which it has to make an upfront payment (for example, a prepaid forward contract or an off-market forward contract at a price lower than the market forward price). If the derivative is fairly valued, the counterparty would be expected to make a payment to the dealer at maturity. From the perspective of the dealer, the situation is very much like investing in a risk free asset (note that we assume that the counterparty is risk free). The shareholders of the derivative dealer would not agree to this deal unless there were a funding value adjustment so that the expected payment from the counterparty were discounted at s instead of 0.

Now consider the opposite scenario where the dealer receives an upfront payment and is expected to have to make payments to the counterparty at maturity. This is very much like the dealer taking a new loan to repay existing borrowing (Andersen-Duffie-Song assume that the dealer uses all cash inflows to retire existing debt and finances all outflows with fresh borrowings). There is no transfer of wealth between shareholders and creditors and no funding value adjustment.

The result is the standard FVA model: all expected future inflows from the derivative are discounted at the funding cost and all expected outflows are discounted at the risk free rate. This is because the future inflows require an upfront payment by the dealer (which requires FVA) and future outflows require upfront receipts by the dealer (which do not require FVA).
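
A minimal sketch of this rule follows; the cash flows, risk free rate and funding spread are assumptions made for illustration, not numbers from the post.

```python
# A sketch of the "standard" FVA rule described above: the dealer's expected
# inflows are discounted at the funding rate (risk free plus spread s), while
# expected outflows are discounted at the risk free rate.
def fva_adjusted_value(expected_cash_flows, r=0.0, s=0.02):
    """expected_cash_flows: list of (time_in_years, amount) from the dealer's
    point of view; positive amounts are inflows, negative amounts are outflows."""
    value = 0.0
    for t, cf in expected_cash_flows:
        rate = r + s if cf > 0 else r   # inflows at funding cost, outflows at risk free rate
        value += cf / (1 + rate) ** t
    return value

# Illustrative example: the dealer expects to receive 100 in two years and pay 40 in one year.
flows = [(2.0, 100.0), (1.0, -40.0)]
print(fva_adjusted_value(flows, r=0.0, s=0.02))   # FVA-adjusted value, below the risk free value
print(fva_adjusted_value(flows, r=0.0, s=0.0))    # setting s = 0 recovers risk free discounting
```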

Andersen, Duffie and Song correctly argue that (unlike CVA and DVA) the FVA is purely a transfer of wealth from shareholders to pre-existing creditors and is not an adjustment that should be made to the carrying value of the derivative in the books of the firm. This part of their paper therefore agrees with Hull and White. However, Andersen, Duffie and Song argue that in the real world, where shareholders are running the company, the FVA would be reflected in traded prices: dealers would buy only at fair value less FVA. They argue that this is quite similar to a bid-ask spread in market making. The market maker buys assets only below their fair value (the bid price is usually below fair value). Just as counterparties are willing, for liquidity or other reasons, to pay the bid-ask spread, they would also be willing to pay the FVA as a transaction cost for doing the trade.

I wonder whether this provides an alternative explanation for the declining liquidity in many markets after the crisis. Much of this has been attributed to enhanced regulatory costs (Basel 3, Dodd-Frank, the Volcker Rule and so on). Perhaps some of it is due to (a) the higher post-crisis credit spread s and (b) greater adoption of FVA. The increasing market share of HFT and other alternative liquidity providers may also be due to their lower leverage and therefore lower debt overhang costs.

SWIFT hacking threatens to erode confidence in financial sector

I had a short blog post on the Bangladesh Bank SWIFT hacking shortly before I went on a two-month-long vacation. Since then, the story has become more and more frightening. It is no longer about Bangladesh Bank and its cheap routers: the hacking now appears to be global in scope and sophisticated in approach:

  • BAE Systems have identified parts of the malware that was used in the Bangladesh Bank hacking. This malware “contains sophisticated functionality” and “appears to be just part of a wider attack toolkit”.

The tools are highly configurable and given the correct access could feasibly be used for similar attacks in the future.

The wider lesson learned here may be that criminals are conducting more and more sophisticated attacks against victim organisations, particularly in the area of network intrusions (which has traditionally been the domain of the ‘APT’ actor).

  • More than a year before the Bangladesh Bank hacking, a total of $12 million was stolen from Banco del Austro (BDA) in Ecuador through SWIFT instructions to Wells Fargo in the US to transfer funds to a number of accounts around the world. The matter came to light only when BDA sued Wells Fargo to recover the money.

Neither bank reported the theft to SWIFT, which said it first learned about the cyber attack from a Reuters inquiry.

  • In 2015, there had been an attempt to steal more than 1 million euros from Vietnam’s Tien Phong Bank through fraudulent SWIFT messages using infrastructure of an outside vendor hired to connect it to the SWIFT bank messaging system. TP Bank did not suffer losses because it detected the fraud quickly enough to stop the transfers.

  • SWIFT now admits that there were “a number of fraudulent payment cases where affected customers suffered a breach in their local payment infrastructure”. The whole set of press releases issued by SWIFT on this issue is worth reading.

The picture that emerges from this is that on one side there are well-organized criminals who are building sophisticated tools to attack the banks. They may or may not be linked to each other, but they are certainly borrowing and building on each other’s tools. Their arsenal is gradually beginning to rival that of the APT (Advanced Persistent Threat) actors (who are traditionally focused on espionage or strategic benefits rather than financial gains). Very soon global finance could be attacked by criminals wielding Stuxnet-like APT tools re-purposed for stealing money.

On the other side is a banking industry that is unable to get its act together. Instead of hiring computer security professionals to shore up their defences, the banks are busy hiring lawyers to try and deflect the losses onto each other. It is evident that the banks are not sharing information with each other. Worse, my experience is that information is not even being shared within the banks. I have heard horror stories in India of security firms that detected vulnerabilities in the IT systems of banks being told by the IT departments not to mention these to the top management. These IT people think that everything is fine so long as top management does not know about the problems. The top management in turn thinks that things are fine so long as the regulator does not know that there is a problem. I hear reports of banks quietly reimbursing a customer’s losses without either fixing the problem or reporting it to the regulators or other authorities. Most of the stories that I hear are from India, but the evidence suggests that the situation is not any different elsewhere in the world.

This state of denial and discord in the banking industry provides the hackers the perfect opportunity to learn the vulnerabilities of the banks, improve their hacking tools, and increase the scale and scope of their attacks. At some point, of course, the losses to the banking system would become too big to sweep under the carpet. That is when the confidence in the financial sector would begin to erode.

Another problem for the banks is that in such lawsuits against the paying banker, the victim bank raises the issue of “red flags” and “suspicious transactions” to argue that the paying banker should have halted the payment. With large amounts of money at stake, this argument would be made by skilled lawyers and may even be successful in court. If that happens, it would set a dangerous precedent against the banks themselves. So far, banks have taken the stand that their customers are responsible for transactions so long as valid authentication was provided. Bank customers typically do not have the resources and inside knowledge to challenge this stand. The inter-bank litigation is very different and has the potential to overturn the established distribution of liability.

I have not so far talked about nation state actors getting into the attack. Any nation state would love to hack the banks of an enemy country. Some rogue states that are excluded from global finance might even want to try and disrupt the global financial system. India is one of the countries at serious risk of an attack from a resourceful nation state, but as I look around, I see only complacency and no sense of concern let alone paranoia.

Regulatory priority: punish, deter or protect?

When a serious breach of market integrity is suspected, what should the regulator’s priority be: should it try to punish the guilty, should it seek to deter other wrongdoers, or should it focus on protecting the victims? Both bureaucratic and political incentives may be tilted towards the first and perhaps the second, but in fact it is the last that is most important. I have been thinking about these issues in the context of the order of the Securities and Exchange Board of India in the matter of Sharepro Services, a Registrar and Share Transfer Agent regulated by SEBI. The order, which is based on six months of investigation and runs to 98 pages, finds that:

  • Shares and dividends have been transferred from the accounts of the genuine investors to entities linked with the top management of Sharepro without any supporting documents

  • Records have been deliberately falsified to avoid audit trails.

  • Sharepro and its top management have authorized issuances of new certificates without any request or authorisation from shareholders.

  • The management of Sharepro has not cooperated with the investigation being carried out by SEBI and on several occasions has attempted to mislead the investigation in the matter.

If one assumes that these findings are correct, then the key regulatory priority must be to take operational control of Sharepro and thereby protect the interests of investors who might have been harmed. A Registrar and Share Transfer Agent is a critical intermediary whose honest functioning is essential to ensure market integrity and maintain the faith of investors in the capital markets. I think that SEBI’s powers under section 11B of the SEBI Act would be adequate to achieve this objective, but in case of need, resort could also be had to section 242 of the Companies Act 2013.

The SEBI order does take some steps to punish the top management of Sharepro but does too little to protect the investors who appear to have lost money. It does not even cancel or suspend the registration of Sharepro as a Registrar and Share Transfer Agent, but merely advises companies who are clients of Sharepro to switch over to another Registrar and Share Transfer Agent or to carry out these activities in-house. The only investor protection step in the order is the direction to companies who are clients of Sharepro to audit the records and systems of Sharepro. But if the records have been falsified, then only a regulator or other agency with statutory powers can carry out a meaningful audit by obtaining third party records.

When the Satyam fraud occurred in 2009, I was among the earliest to write that the government should simply take control of the company. I would argue the same in the case of Sharepro as well, assuming that the SEBI findings are correct.

Why would a manufacturer finance a bank’s capital expenditure?

Usually a manufacturing company goes to a bank to finance its capital expenditure. But last month witnessed a deal where the US manufacturing giant GE stepped forward to finance the capital expenditure of one of the largest banks in the world – JPMorgan Chase – when the latter decided to buy 1.4 million LED bulbs to replace the lighting across 5,000 branches in the world’s largest single-order LED installation to date.

As a finance person, the first explanation that I looked at was the credit rating. GE lost its much-vaunted AAA rating during the global financial crisis, but based on S&P long term unsecured ratings, GE’s AA+ rating is a full five notches above JPMorgan Chase’s A- rating. Based on Moody’s ratings, the gap is only two notches. Averaging the two and taking into account S&P’s negative outlook on GE, we could say that GE enjoys a rating that is a full letter grade (three notches) above JPMorgan. So perhaps it makes sense for the manufacturer to finance the bank.

Another possible explanation is that trade credit has a set of advantages that are not fully understood. Some of the alleged advantages of trade credit (like the idea that a business relationship leads to superior information on creditworthiness) strain credulity when the recipient of the credit is one of the largest banks in the world with hundreds of publicly traded bonds outstanding. Similarly, the idea that vendor financing is a superior form of performance guarantee is hard to believe when the vendor is a manufacturing giant with such a high reputation and credit rating.

In a Slate story, Daniel Gross explained the logic in terms of the inefficiency and inertia of corporate bureaucracies:

The second barrier – “the capital barrier,” as Irick call it – is more difficult to surmount. The economics of buying and installing them can be a challenge to corporate bureaucrats. Companies often produce multiyear budgets well in advance. Going LED means spending a lot of money in a single year to buy and install them, make sure they work, and dispose of the old ones. And it is difficult even for a company like Chase to make a decision quickly to write a check to buy 1.4 million new light bulbs and pay for their installation. GE, of course, has a long track record of helping to finance customers’ purchases of its capital goods, structuring payments over a period of years rather than upfront.

That perhaps makes more sense than any of the finance theory arguments.

Random Justice

Voltaire wrote that “His Sacred Majesty Chance Decides Everything”. Neustadter comes to a similar conclusion in a fascinating paper entitled “Randomly Distributed Trial Court Justice: A Case Study and Siren from the Consumer Bankruptcy World” (h/t Credit Slips):

Between February 24, 2010 and April 23, 2012, Heritage Pacific Financial, L.L.C. (‘Heritage’), a debt buyer, mass produced and filed 218 essentially identical adversary proceedings in California bankruptcy courts against makers of promissory notes who had filed Chapter 7 or Chapter 13 bankruptcy petitions. Each complaint alleged Heritage’s acquisition of the notes in the secondary market and alleged the outstanding obligations on the notes to be nondischargeable under the Bankruptcy Code’s fraud exception to the bankruptcy discharge. …

Because the proceedings were essentially identical, they offer a rare laboratory for testing the extent to which our entry-level justice system measures up to our aspirations for ‘Equal Justice Under Law.’ …

The results in the Heritage adversary proceedings evidence a stunning and unacceptable level of randomly distributed justice at the trial court level, generated as much by the idiosyncratic behaviors of judges, lawyers, and parties as by even handed application of law …

Neustadter summarizes the outcome of these proceedings as follows (Table 1, page 20):

Disposition                     Zero recovery by Heritage    Positive recovery by Heritage
Filed Settlement Agreements     49                           103 ($1m)
Heritage Requests Dismissal     26                           N/A
Dismissal for Other Reasons     12                           N/A
Default Judgments               N/A                          10 ($0.9m)
Summary Judgments               4                            1 ($0.06m)
Trials                          3                            2 ($0.2m)

I remember reading Max Weber’s Economy and Society decades ago and being fascinated by his argument that legal rights only increase the probability of certain outcomes (incidentally, Weber obtained a doctorate in law before becoming an economist and sociologist). Weber believed that the function of law in a modern economy was to make things more predictable, but by this too he meant only that probabilities could be attached to outcomes. I resisted Weber’s argument at that time, but over the course of time, I have come around to accepting it. In fact, I now think that it is only the conceit of false knowledge that leads to a belief that certainty is possible.

A greater degree of acceptance of randomness would make litigation a lot more efficient. In my view of things, a judge should be required to set a limit on the amount of time to be devoted to a particular dispute (depending on the importance of the dispute). When that time has been spent, the judge should be able to say that he thinks there is, say, a 40% probability that the plaintiff is right and a 60% chance that the defendant is right. He should then draw a random number between 0 and 1; if the number that is drawn is less than 0.4, he should rule for the plaintiff, otherwise for the defendant. All litigation could be resolved in a time-bound manner by this method. Even greater efficiency is possible by the use of the concepts of Expected Value of Perfect Information and Expected Value of Sample Information to decide when to terminate the hearings and proceed to drawing the random numbers. If an appeal process is desired, then of course the draw of the random number could be postponed until the appeal process is exhausted and the final value of the probability determined. In a blog post a couple of months ago, I discussed cryptographic techniques to draw the random number in a completely transparent and non-manipulable manner.
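
That earlier post is not reproduced here, but purely to illustrate the final mechanical step, here is a minimal sketch in which a publicly verifiable seed is hashed into a uniform draw that decides the case. The case identifier, seed string and probability below are all hypothetical.

```python
import hashlib

def verdict(case_id: str, public_seed: str, p_plaintiff: float) -> str:
    """Hash a publicly verifiable seed into a uniform draw in [0, 1) and decide.
    Because anyone can recompute the hash, the draw cannot be quietly manipulated."""
    digest = hashlib.sha256(f"{case_id}|{public_seed}".encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64   # uniform draw in [0, 1)
    return "plaintiff" if u < p_plaintiff else "defendant"

# Hypothetical usage: the judge announces a 40% probability for the plaintiff and
# commits in advance to a seed that neither party controls (say, a future public value).
print(verdict("case-2016-001", "some-pre-announced-public-value", 0.4))
```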

In my experience, there is enormous resistance to deciding anything by a draw of lots or other randomization technique, though I believe that it is the most rational way of making decisions. Instead, society creates very complex mechanisms that lead effectively to a process of randomization based on which judge gets to hear the matter and what procedural or substantive legal provisions the judge or the lawyer is aware of. In fact, one way of making sense of the bewildering complexity of modern law is that it is just a very costly way of achieving randomization – if the law is too complex to be remembered by any individual, then which provision is remembered and applied is a matter of chance. That is how I interpret Neustadter’s findings.

In case you are wondering why I am discussing all this in a finance blog, let me remind you that the litigation in question was about recovery of defaulted debt and that is definitely a finance topic.

Bangladesh Bank hacking is yet another wake-up call

A year ago, I blogged about the Carbanak hacking and thought that it was a wake-up call for financial organizations to improve their internal systems and processes to protect themselves from patient hackers. The alleged patient hacking reported this week at the central bank of Bangladesh shows that the lessons have not been learned. There is too much silo thinking in large organizations – cyber security is still thought to be the responsibility of some computer professionals. The reality is that security has to be designed into all systems and processes in the entire organization. Institutions like central banks that control vast amounts of money need to defend in depth at all levels of the organization. Physical security, hardware security, software security and robust internal systems and processes all contribute to a culture of security in the whole organization. In my experience, even senior management at large banking and financial organizations has a highly complacent attitude towards security that makes the organization highly vulnerable to a patient and determined hacker.

For example, there is no reason not to have a dedicated terminal for large (say $100 million) SWIFT transactions. Cues like dedicated hardware tend to make humans more alert to security considerations. In the paper world, we went to great lengths to institutionalize such cues. For example, the law on cheques permits them to be written on plain paper (the law only says “instrument in writing”), but in practice they were always written on special security paper. The importance of keeping blank security paper under lock and key was drilled into every person who worked in a bank, from the chairman to the messenger boy. I have yet to see any similar attempt to inculcate a culture of computer security in any bank.

Investment Banks and IPO Pricing Power

Krigman and Jeffus have an interesting paper on how issuers pay for their investment banks’ past mistakes. Their conclusions are based on the IPOs that came to market after the botched Facebook IPO of 2012, in which the stock fell below the IPO price and the investment banks had to buy shares in the market to stabilize the price. IPOs after this event were underpriced by an average of 20% compared to only 11% prior to the Facebook IPO. More interestingly:

We show that the entire increase in underpricing is concentrated in the IPOs of the Facebook lead underwriters. We find no statistical difference in underpricing pre- and post-Facebook for non-Facebook underwriters. We argue that investment bank loyalty to their institutional investor client base propelled the Facebook underwriters to increase underpricing to compensate for the perceived losses on Facebook.

“Loyalty to investor clients” sounds very nice in a scandal-dominated era where we have come to believe that bankers have no loyalty to anybody. Yet it must be remembered that the alleged generosity to investor clients did not come out of the bankers’ profits; it came out of the pockets of another bunch of clients – the issuers. This raises a very disturbing question: what gives these banks the pricing power to underprice issues relative to what their competitors were doing? The first possibility that came to my mind is that these were deals that the bankers had already won, and it was difficult for the clients to change their lead banks after they had already been chosen. However, the data seem to show that the effect lasted more than a year, and moreover there was a 41-day period following Facebook during which there were no IPOs at all. The other possibility is that this is not a competitive market at all and the investment banks have a lot of market power. Chen and Ritter wrote a famous paper about this at the turn of the century (“The seven percent solution.” The Journal of Finance 55.3 (2000): 1105-1131).

Were most crisis era bank acquisitions failures?

JPMorgan Chairman Jamie Dimon states in a Bloomberg interview that he now regards JPMorgan’s acquisitions of Bear Stearns and of Washington Mutual during the global financial crisis as “mistakes”. I used to think that these were among the better deals in the whole lot of crisis era acquisitions, which include such monumental disasters as Bank of America’s acquisition of Countrywide or Lloyds’ acquisition of HBOS. But Dimon says that the Bear Stearns purchase ended up costing JPMorgan $20 billion while, if I remember right, the headline acquisition cost was only a little over $1 billion (and that after Dimon raised the price per share from $2 to $10). If even JPMorgan’s relatively good deals ended up being big mistakes, then I wonder whether the only sound crisis era banking acquisition might be Wells Fargo’s acquisition of Wachovia. Of course, the very best deals were Warren Buffett’s minority stakes in Goldman Sachs and GE, but these do not count as acquisitions. On a purely accounting basis, the US government did make money on many of its rescue deals, but this accounting does not include the hidden costs that contribute heavily to the $20 billion price tag that Dimon now puts on the Bear deal. What all this means is that even at the depths of the global financial crisis, it would have made a lot of sense to heed the good old advice not to try to catch a falling knife.

Does Regulation Crowd out Private Ordering and Reputational Capital?

The crowdfunding portal Kickstarter commissioned an investigative journalist to write a report on the failure of Zano, which had raised $3.5 million on that platform, and the report has now been published on Medium. I loved reading this report for the quality of the information and the balance in the conclusions. It left me wondering why London’s AIM market never published something similar on many of the failures among the companies listed there, or why NASDAQ never commissioned something like this after the dotcom bust, or why the Indian exchanges never did anything like this about the vanishing companies of the mid-1990s.

Is it because these highly regulated exchanges are protected by a regulatory monopoly and they can safely leave this kind of thankless job to their regulators? Or are they worried that an honest investigative report might be used against them because of the regulatory burden that they face? Does regulation have the side effect of crowding out the private ordering that emerges in the absence of regulation? Does regulation weaken reputational incentives?

In the context of crowdfunding, the reputational incentives and private ordering are well described in Schwartz’s paper on “The Digital Shareholder”:

These intermediaries [funding portals] want investors to have a good experience so they will return to invest again on their website, making them sensitive to a reputation feedback system. A funding portal with lots of poorly rated companies will find it difficult to attract future users to its site. Importantly, this appears to be an effective constraint for existing reward crowdfunding sites, such as Indiegogo, which take care to avoid having their markets overrun by malfeasance.

It is true that regulation does have positive effects, but the challenge in framing regulations is to avoid weakening private ordering.