Prof. Jayanth R. Varma’s Financial Markets Blog

A blog on financial markets and their regulation

Carbanak/Anunak: Patient Bank Hacking

There was a spate of press reports a week back about a group of hackers (referred to as the Carbanak or Anunak group) who had stolen nearly a billion dollars from close to a hundred different banks and financial institutions from around the world. I got around to reading the technical reports about the hack only now: the Kaspersky report and blog post as well as the Group-IB/Fox-IT report of December 2014 and their recent update. A couple of blog posts by Brian Krebs also helped.

The two technical analyses differ on a few details: Kaspersky suggests that the hackers had a Chinese connection while Group-IB/Fox-IT suggests that they were Russian. Kaspersky also seems to have had access to some evidence discovered by law enforcement agencies (including files on the servers used by the hackers). Group-IB/Fox-IT talk only about Russian banks as the victims while Kaspersky reveals that some US based banks were also hacked. But by and large the two reports tell a similar story.

The hackers did not resort to the obvious ways of skimming money from a bank. To steal money from an ATM, they did not steal customer ATM cards or PINs. Nor did they tamper with the ATM itself. Instead they hacked into the personal computers of bank staff including system administrators and used these hacked machines to send instructions to the ATM using the banks’ ATM infrastructure management software. For example, an ATM uses Windows registry keys to determine which tray of cash contains 100 ruble notes and which contains 5000 ruble notes. In this scheme, the CASH_DISPENSER registry key might have VALUE_1 set to 5000 and VALUE_4 set to 100. A system administrator can tell the ATM that the cash has been loaded into different bins by setting VALUE_1 to 100 and VALUE_4 to 5000 and restarting Windows to let the new values take effect. The hackers did precisely that (using the system administrators’ hacked PCs), so that an ATM which thinks it is dispensing 1000 rubles in the form of ten 100 ruble notes would actually dispense 50,000 rubles (ten 5000 ruble notes).
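The effect of the registry swap can be sketched in a few lines of Python. This is purely illustrative: the key names mimic the pattern described in the Kaspersky report, but real ATM software is vendor specific.

```python
# Illustrative sketch only: key names mimic the pattern described in the
# Kaspersky report; real ATM software is vendor specific.
physical = {"VALUE_1": 5000, "VALUE_4": 100}   # denominations actually loaded in each tray
registry = {"VALUE_1": 5000, "VALUE_4": 100}   # what the registry tells the ATM

def payout(amount, tray):
    notes = amount // registry[tray]    # notes the ATM counts out
    return notes * physical[tray]       # value of the notes physically in the tray

# Honest configuration: the ATM dispenses exactly what was requested.
assert payout(1000, "VALUE_4") == 1000  # ten genuine 100-ruble notes

# The hackers swap the registry entries without touching the cash trays:
registry["VALUE_1"], registry["VALUE_4"] = 100, 5000

# The ATM now reaches for "100-ruble" notes in tray 1, which really holds 5000s:
print(payout(1000, "VALUE_1"))  # 50000
```

The point of the sketch is that nothing physical changes at the ATM: the theft is entirely a matter of lying to the software about which tray holds which denomination.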

Similarly, an ATM has debug functionality that allows a technician to test the functioning of the ATM. With the ATM vault door open, a technician can issue a command to the ATM to dispense a specified amount of cash. There is no hazard here because with the vault door open, the technician has access to all the cash anyway without issuing any command. With access to the system administrators’ machines, the hackers simply deleted the piece of code that checked whether the vault door was open. All they needed to do was to have a money mule stand in front of the ATM when they issued a command to the ATM to dispense a large amount of cash.
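A hypothetical sketch of the safety check the hackers removed. The function and flag names are invented; the real check lived inside proprietary ATM firmware, not Python.

```python
# Hypothetical sketch: names are invented, and the real check lived inside
# proprietary ATM firmware. The safety_check flag stands in for the code
# path the hackers deleted.
def debug_dispense(amount, vault_door_open, safety_check=True):
    if safety_check and not vault_door_open:
        raise PermissionError("debug dispense allowed only with vault door open")
    return amount  # cash pushed out of the presenter

# Legitimate use: a technician testing the machine with the vault open.
assert debug_dispense(100, vault_door_open=True) == 100

# With the check patched out, the same command works remotely against a
# closed, locked ATM -- all that is needed is a mule waiting to collect.
print(debug_dispense(1_000_000, vault_door_open=False, safety_check=False))
```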

Of course, ATMs were not the only way to steal money. Online fund transfer systems could be used to transfer funds to accounts owned by the hackers. Since the hackers had compromised the administrators’ accounts, they had no difficulty getting the banks to transfer the money. The only problem was to prevent the money from being traced back to the hackers after the fraud was discovered. This was achieved by routing the money through several layers of legal entities before loading it onto hundreds of cards that had been prepared in advance.

It is a very effective way to steal money, but it requires a lot of patience. “The average time from the moment of penetration into the financial institutions internal network till successful theft is 42 days.” Using emails with malicious attachments to hack a bank employee’s computer, the hackers patiently worked their way laterally, infecting the machines of other employees until they succeeded in compromising a system administrator’s machine. Then they patiently collected data about the banks’ internal systems from screenshots and videos sent from the administrators’ machines by the hackers’ malware. Once they understood the internal systems well, they could use those systems to steal money.

The lesson for banks and financial institutions is that it is not enough to ensure that the core computer systems are defended in depth. The Snowden episode showed that the most advanced intelligence agencies in the world are vulnerable to subversion by their own administrators. The Carbanak/Anunak incident shows that well defended bank systems are vulnerable to the recklessness of their own employees and system administrators using unpatched Windows computers and carelessly clicking on malicious email attachments.

Loss aversion and negative interest rates

Loss aversion is a basic tenet of behavioural finance, particularly prospect theory. It says that people are averse to losses and become risk seeking when confronted with certain losses. There is a huge amount of experimental evidence in support of loss aversion, and Daniel Kahneman won the Nobel Prize in Economics mainly for his work in prospect theory.

What are the implications of prospect theory for an economy with pervasive negative interest rates? As I write, German bund yields are negative up to a maturity of five years. Swiss yields are negative out to eight years (until a few days back, it was negative even at the ten year maturity). France, Denmark, Belgium and the Netherlands also have negative yields out to at least three years.

A negative interest rate represents a certain loss to the investor. If loss aversion is as pervasive in the real world as it is in the laboratory, then investors should be willing to accept an even more negative expected return in risky assets if these risky assets offer a good chance of avoiding the certain loss. For example, if the expected return on stocks is -1.5% with a volatility of 15%, then there is a 41% chance that the stock market return is positive over a five year horizon (assuming a normal distribution). If the interest rate is -0.5%, a person with sufficiently strong loss aversion would prefer the 59% chance of loss in the stock market to the 100% chance of loss in the bond market. Note that this is the case even though the expected return on stocks in this example is less than that on bonds. As loss averse investors flee from bonds to stocks, the expected return on stocks should fall and we should have a negative equity risk premium. If there are any neo-classical investors in the economy who do not conform to prospect theory, they would of course see this as a bubble in the equity market; but if laboratory evidence extends to the real world, there would not be many of them.
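The 41% figure can be checked with a few lines of Python, under the same simplifying assumptions as above: the five-year cumulative return is normal with mean 5μ and standard deviation σ√5.

```python
from math import sqrt
from statistics import NormalDist

mu, sigma, horizon = -0.015, 0.15, 5   # annual expected return, volatility, years

# Five-year cumulative return ~ N(mu * 5, sigma * sqrt(5)) under the
# normality assumption used in the post.
five_year = NormalDist(mu * horizon, sigma * sqrt(horizon))
p_positive = 1 - five_year.cdf(0)
print(round(p_positive, 2))  # 0.41
```

So even with a lower expected return than bonds, stocks offer a 41% chance of escaping loss altogether, which is exactly the gamble a sufficiently loss averse investor would take.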

The second consequence would be that we would see a flipping of the investor clientele in equity and bond markets. Before rates went negative, the bond market would have been dominated by the most loss averse investors. These highly loss averse investors should be the first to flee to the stock markets. At the same time, it should be the least loss averse investors who would be tempted by the higher expected return on bonds (-0.5%) than on stocks (-1.5%) and would move into bonds overcoming their (relatively low) loss aversion. During the regime of positive interest rates and positive equity risk premium, the investors with low loss aversion would all have been in the equity market, but they would now all switch to bonds. This is the flipping that we would observe: those who used to be in equities will now be in bonds, and those who used to be in bonds will now be in equities.

This predicted flipping is a testable hypothesis. Examination of the investor clienteles in equity and bond markets before and after a transition to negative interest rates will allow us to test whether prospect theory has observable macro consequences.

Bank deposits without those exotic swaptions

Yesterday, the Reserve Bank of India did retail depositors a favour: it announced that it would allow banks to offer “non-callable deposits”. Currently, retail deposits are callable (depositors have the facility of premature withdrawal).

Why can the facility of premature withdrawal be a bad thing for retail depositors? It would clearly be a good thing if the facility came free. But in a free market, it would be priced. The facility of premature withdrawal is an embedded American-style swaption, and a callable deposit is just a non-callable deposit bundled with that swaption, whether the depositor wants that bundle or not. You pay for the swaption whether you need it or not.

Most depositors would not exercise that swaption optimally for the simple reason that optimal exercise is a difficult optimization problem. Fifteen years ago, Longstaff, Santa-Clara and Schwartz wrote a paper showing that Wall Street firms were losing billions of dollars because they were using oversimplified (single factor) models to exercise American-style swaptions (“Throwing away a billion dollars: The cost of suboptimal exercise strategies in the swaptions market”, Journal of Financial Economics 62.1 (2001): 39-66). Even those simplified (single factor) models would be far beyond the reach of most retail depositors. It is safe to assume that almost all retail depositors behave suboptimally in exercising their premature withdrawal option.

In a competitive market, callable deposits would be priced using a behavioural exercise model and not an optimal exercise strategy. Still the problem remains. Some retail depositors would exercise their swaptions better than others. A significant fraction might simply ignore the swaption unless they have a liquidity need to withdraw the deposits. These ignorant depositors would subsidize the smarter depositors who exercise it more astutely (though still suboptimally). And it makes no sense at all for the regulator to force this bad product on all depositors.

Since the global financial crisis, there has been a push towards plain vanilla products. The non-callable deposit is a plain vanilla product; the current callable version is a toxic/exotic derivative.

The politics of SEC enforcement or is it data mining?

Last month, Jonas Heese published a paper on “Government Preferences and SEC Enforcement” which purports to show that the US Securities and Exchange Commission (SEC) refrains from taking enforcement action against companies for accounting restatements when such action could cause large job losses particularly in an election year and particularly in politically important states. The results show that:

  • The SEC is less likely to take enforcement action against firms that employ relatively more workers (“labour intensive firms”).
  • This effect is stronger in a year in which there is a presidential election.
  • The election year effect in turn is stronger in the politically important states that determine the electoral outcome.
  • Enforcement action is also less likely if the labour intensive firm is headquartered in the district of a senior congressman who serves on a committee that oversees the SEC.

All the econometrics appear convincing:

  • The data includes all enforcement actions pertaining to accounting restatements over a 30 year period from 1982 to 2012: nearly 700 actions against more than 300 firms.
  • A comprehensive set of control variables has been used, including the F-score, which previous literature has used to predict accounting restatements.
  • A variety of robustness and sensitivity tests validate the results.

But then, I realized that there is one very big problem with the paper – the definition of labour intensity:

I measure LABOR INTENSITY as the ratio of the firm’s total employees (Compustat item: EMP) scaled by current year’s total average assets. If labor represents a relatively large proportion of the factors of production, i.e., labor relative to capital, the firm employs relatively more employees and therefore, I argue, is less likely to be subject to SEC enforcement actions.

Seriously? I mean, does the author seriously believe that politicians would happily attack a $1 billion company with 10,000 employees (because it has a relatively low labour intensity of 10 employees per $1 million of assets), but would be scared of targeting a $10 million company with 1,000 employees (because it has a relatively high labour intensity of 100 employees per $1 million of assets)? Any politician with such a weird electoral calculus is unlikely to survive for long in politics. (But a paper based on this alleged electoral calculus might even get published!)
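The arithmetic behind the two hypothetical firms in that comparison:

```python
# The two hypothetical firms from the example above.
def labour_intensity(employees, assets_millions_usd):
    """Employees per $1 million of total assets (the paper's measure)."""
    return employees / assets_millions_usd

big_firm = labour_intensity(10_000, 1_000)   # $1bn firm with 10,000 employees
small_firm = labour_intensity(1_000, 10)     # $10m firm with 1,000 employees
print(big_firm, small_firm)  # 10.0 100.0
```

On the paper's measure, the tiny firm is ten times as "labour intensive" as the firm that employs ten times as many actual voters.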

I now wonder whether the results are all due to data mining. Hundreds of researchers are trying many things: they are choosing different subsets of SEC enforcement actions (say accounting restatements), they are selecting different subsets of companies (say non financial companies) and then they are trying many different ratios (say employees to assets). Most of these studies go nowhere, but a tiny minority produce significant results and they are the ones that we get to read.

Why did the Swiss franc take half a million milliseconds to hit one euro?

Updated

In high frequency trading, nine minutes is an eternity: it is half a million milliseconds – enough time for five billion quotes to arrive in the hyperactive US equity options market at its peak rate. On a human time scale, nine minutes is enough time to watch two average online content videos.

So what puzzles me about the soaring Swiss franc last week (January 15) is not that it rose so much, nor that it massively overshot its fair level, but that the initial rise took so long. Here is the timeline of how the franc moved:

  • At 9:30 am GMT, the Swiss National Bank (SNB) announced that it was “discontinuing the minimum exchange rate of CHF 1.20 per euro” that it had set three years earlier. I am taking the time stamp of 9:30 GMT from the “dc-date” field in the RSS feed of the SNB which reads “2015-01-15T10:30:00+01:00” (10:30 am local time which is one hour ahead of GMT).
  • The headline “SNB ENDS MINIMUM EXCHANGE RATE” appeared on Bloomberg terminals at 9:30 am GMT itself. Bloomberg presumably runs a super fast version of “if this then that”. (It took Bloomberg nine minutes to produce a human written story about the development, but anybody who needs a human written story to interpret that headline has no business trading currencies.)
  • At the end of the first minute, the euro had traded down to only 1.15 francs, at the end of the third minute, the euro still traded above 1.10. The next couple of minutes saw a lot of volatility with the euro falling below 1.05 and recovering to 1.15. At the end of minute 09:35, the euro again dropped below 1.05 and started trending down. It was only around 09:39 that it fell below 1.00. It is these nine minutes (half a million milliseconds) that I find puzzling.
  • The euro hit its low (0.85 francs) at 09:49, nineteen minutes (1.1 million milliseconds) after the announcement. This overshooting is understandable because the surge in the franc would have triggered many stop loss orders and also knocked many barrier options.
  • Between 09:49 and 09:55, the euro recovered from its low and after that it traded between 1.00 and 1.05 francs.
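The SNB timestamp quoted in the timeline above can be converted to GMT with Python's standard library; the string is the dc-date value as quoted, and `fromisoformat` parses the `+01:00` offset.

```python
from datetime import datetime, timezone

# dc-date value from the SNB RSS feed, as quoted above.
stamp = datetime.fromisoformat("2015-01-15T10:30:00+01:00")
gmt = stamp.astimezone(timezone.utc)
print(gmt.strftime("%H:%M"))  # 09:30
```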

It appears puzzling to me that no human trader was taking out every euro bid in sight at around 9:33 am or so. I find it hard to believe that somebody like a George Soros in his heyday would have taken more than a couple of minutes to conclude that the euro would drop well below 1.00. It would then make sense to simply lift every euro bid above 1.00 and then wait for the point of maximum panic to buy the euros back.

Is it that high frequency trading has displaced so many human traders that there are too few humans left who can trade boldly when the algorithms shut down? Or are we in a post crisis era of mediocrity in the world of finance?

Updated to correct 9:03 to 9:33, change eight billion to five billion and end the penultimate sentence with a question mark.

RBI is also concerned about two hour resumption time for payment systems

Two months back, I wrote a blog post on how the Basel Committee on Payments and Market Infrastructures was reckless in insisting on a two hour recovery time even from severe cyber attacks.

I think that extending the business continuity resumption time target to a cyber attack is reckless and irresponsible because it ignores Principle 16 which requires an FMI to “safeguard its participants’ assets and minimise the risk of loss on and delay in access to these assets.” In a cyber attack, the primary focus should be on protecting participants’ assets by mitigating the risk of data loss and fraudulent transfer of assets. In the case of a serious cyber attack, this principle would argue for a more cautious approach which would resume operations only after ensuring that the risk of loss of participants’ assets has been dealt with. … The risk is that payment and settlement systems in their haste to comply with the Basel mandates would ignore security threats that have not been fully neutralized and expose their participants’ assets to unnecessary risk. … This issue is all the more important for countries like India whose enemies and rivals include some powerful nation states with proven cyber capabilities.

I am glad that last month, the Reserve Bank of India (RBI) addressed this issue in its Financial Stability Report. Of course, as a regulator, the RBI uses far more polite words than a blogger like me, but it raises almost the same concerns (para 3.58):

One of the clauses under PFMIs requires that an FMI operator’s business continuity plans must ‘be designed to ensure that critical information technology (IT) systems can resume operations within two hours following disruptive events’ and that there can be ‘complete settlement’ of transactions ‘by the end of the day of the disruption, even in the case of extreme circumstances’. However, a rush to comply with this requirement may compromise the quality and completeness of the analysis of causes and far-reaching effects of any disruption. Restoring all the critical elements of the system may not be practically feasible in the event of a large-scale ‘cyber attack’ of a serious nature on a country’s financial and other types of information network infrastructures. This may also be in conflict with Principle 16 of PFMIs which requires an FMI to safeguard the assets of its participants and minimise the risk of loss, as in the event of a cyber attack priority may need to be given to avoid loss, theft or fraudulent transfer of data related to financial assets and transactions.

Heterogeneous investors and multi factor models

I read two papers last week that introduced heterogeneous investors into multi factor asset pricing models. The papers help produce a better understanding of momentum and value but they seem to raise as many questions as they answer. The easier paper is A Tug of War: Overnight Versus Intraday Expected Returns by Dong Lou, Christopher Polk, and Spyros Skouras. They show that:

100% of the abnormal returns on momentum strategies occur overnight; in stark contrast, the average intraday component of momentum profits is economically and statistically insignificant. … In stark contrast, the profits on size and value … occur entirely intraday; on average, the overnight components of the profits on these two strategies are economically and statistically insignificant.

The paper also presents some evidence that “is consistent with the notion that institutions tend to trade intraday while individuals are more likely to trade overnight.” In my view, their evidence is suggestive but by no means compelling. The authors also claim that individuals trade with momentum while institutions trade against it. If momentum is not a risk factor but a free lunch, then this would imply that individuals are smart investors.

The NBER working paper (Capital Share Risk and Shareholder Heterogeneity in U.S. Stock Pricing) by Martin Lettau, Sydney C. Ludvigson and Sai Ma presents a more complex story. They claim that rich investors (those in the highest deciles of the wealth distribution) invest disproportionately in value stocks, while those in lower wealth deciles invest more in momentum stocks. They then examine what happens to the two classes of investors when there is a shift in the share of income in the economy going to capital as opposed to labour. Richer investors derive most of their income from capital and an increase in the capital share benefits them. On the other hand, investors from lower deciles of wealth derive most of their income from labour and an increase in the capital share hurts them.

Finally, the authors show very strong empirical evidence that the value factor is positively correlated with the capital share while momentum is negatively correlated. This would produce a risk based explanation of both factors. Value stocks lose money when the capital share is moving against the rich investors who invest in value and therefore these stocks must earn a risk premium. Similarly, momentum stocks lose money when the capital share is moving against the poor investors who invest in momentum and therefore these stocks must also earn a risk premium.

The different portfolio choices of the rich and the poor are plausible but not backed by any firm data. The direction of causality may well run in the opposite direction: Warren Buffett became rich by buying value stocks; he did not invest in value because he was rich.

But the more serious problem with their story is that it implies that both rich and poor investors are irrational in opposite ways. If their story is correct, then the rich must invest in momentum stocks to hedge capital share risk. For the same reason, the poor should invest in value stocks. In an efficient market, investors should not earn a risk premium for stupid portfolio choices. (Even in a world of homogeneous investors, it is well known that a combination of value and momentum has a better risk-return profile than either by itself: see for example, Asness, C. S., Moskowitz, T. J. and Pedersen, L. H. (2013), Value and Momentum Everywhere. The Journal of Finance, 68: 929-985)

FCA Clifford Chance Report Part II: The Menace of Selective Briefing

Yesterday, I blogged about the Clifford Chance report on the UK FCA (Financial Conduct Authority) from the viewpoint of regulatory capture. Today, I turn to the issue of the selective pre-briefing provided by the FCA to journalists and industry bodies. Of course, the FCA is not alone in doing this: government agencies around the world indulge in this anachronistic practice.

In the pre internet era, government agencies had to rely on the mass media to disseminate their policies and decisions. It was therefore necessary for them to cultivate the mass media to ensure that their messages got the desired degree of coverage. One of the ways of doing this was to provide privileged access to select journalists in return for enhanced coverage.

This practice is now completely anachronistic. The internet has transformed the entire paradigm of mass communication. In the old days, we had a push channel in which the big media outlets pushed their content out to consumers. The internet is a pull channel in which consumers pull whatever content they want. For example, I subscribe to the RSS/Atom feeds of several regulators around the world. I also subscribe to the feeds of several blogs which comment on regulatory developments worldwide. My feed reader pulls all this content to my computer and mobile devices and provides me instant access to these messages without the intermediation of any big media gatekeepers.
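The pull model is easy to sketch with the standard library. The miniature feed below is invented for illustration, but the parsing is essentially what any feed reader does on the subscriber's behalf.

```python
import xml.etree.ElementTree as ET

# Invented miniature RSS feed standing in for a regulator's announcements.
feed = """<rss><channel>
  <item><title>Margin rules for OTC derivatives</title>
        <description>Final rules on initial and variation margin</description></item>
  <item><title>Press conference photo opportunity</title>
        <description>Media invite</description></item>
</channel></rss>"""

# A feed reader pulls every item and lets the subscriber decide what to read;
# here we keep items whose description mentions "rules".
relevant = [item.findtext("title")
            for item in ET.fromstring(feed).iter("item")
            if "rules" in (item.findtext("description") or "")]
print(relevant)  # ['Margin rules for OTC derivatives']
```

No gatekeeper sits between the publisher and the reader: the selection happens at the subscriber's end, which is exactly why a well-written description field matters.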

In this context, the entire practice of pre-briefing is anachronistic. Worse, it is inimical to the modern democratic ideals of equal and fair access for all. The question then is why it survives at all. I am convinced that what might have had some legitimate function decades ago has now been corrupted into something more nefarious. Regulators now use privileged access to suborn the mass media and to get favourable coverage of their decisions. Journalists have to think twice before they write something critical about a regulator who may simply cut off their privileged access.

It is high time we put an end to this diabolical practice. What I would like to see is the following:

  1. A regulator could meet a journalist one-on-one, but the entire transcript of the interview must then be published on the regulator’s website and the interview must be embargoed until such publication.
  2. A regulator could hold press conferences or grant live interviews to the visual media, but such events must be web cast live on the regulator’s website and transcripts must be published soon after.
  3. The regulators should not differentiate between (a) journalists from the mainstream media and (b) representatives of alternate media (including bloggers).
  4. Regulator web sites and feeds must be more friendly to the general public. For example, the item description field in an RSS feed or the item content field in an Atom feed should contain enough information for a casual reader to decide whether it is worth reading in full. Regulatory announcements must provide enough background to enable the general public to understand them.

Any breach of (1) or (2) above should be regarded as a selective disclosure that attracts the same penalties as selective disclosure by an officer of a listed company.

What I also find very disturbing is the practice of the regulator holding briefing sessions with a select group of regulated entities or their associations or lobby groups. In my view, while the regulator does need to hold confidential discussions with regulated entities on a one-on-one basis, any meeting attended by more than one entity cannot by definition be about confidential supervisory concerns. The requirement of publication of transcripts or live web casts should apply in these cases as well. In the FCA case, it seems to be taken for granted by all (including the Clifford Chance report) that the FCA needs to have confidential discussions with the Association of British Insurers (ABI). I think this view is mistaken, particularly when it is not considered necessary to hold a similar discussion with the affected policy holders.

Regulatory capture is a bigger issue than botched communications

I just finished reading the 226 page report that the non-independent directors of the UK FCA (Financial Conduct Authority) commissioned from the law firm Clifford Chance on the FCA’s botched communications regarding its proposed review of how insurance companies treat customers trapped in legacy pension plans. The report, published earlier this month, deals with the selective disclosure of market moving price sensitive information by the FCA itself to one journalist, and the failure of the FCA to issue corrective statements in a timely manner after large price movements in the affected insurance companies on March 28, 2014.

I will have a separate blog post on this whole issue of selective disclosure to journalists and to industry lobby groups. But in this post, I want to write about what I think is the bigger issue in the whole episode: what appears to me to be a regulatory capture of the Board of the FCA and of HM Treasury. It appears to me that the commissioning of the Clifford Chance review serves to divert attention from this vital issue and allows the regulatory capture to pass unnoticed.

The rest of this blog post is based on reading between the lines in the Clifford Chance report and is thus largely speculative. The evidence of regulatory capture is quite stark, but most of the rest of the picture that I present could be totally wrong.

The sense that I get is that there were two schools of thought within the FCA. One group of people thought that the FCA needed to do something about the 30 million policy holders who were trapped in exploitative pension plans that they could not exit because of huge exit fees. Since the plans were contracted prior to 2000 (in some cases they dated back to the 1970s), they did not enjoy the consumer protections of the current regulatory regime. This group within the FCA wanted to use the regulator’s powers to prevent these policy holders from being treated unfairly. The simplest solution of course was to abolish the exit fees, and let these 30 million policy holders choose new policies.

The other group within the FCA wanted to conduct a cosmetic review so that the FCA would be seen to be doing something, but did not want to do anything that would really hurt the insurance companies who made tons of money off these bad policies. Much of the confusion and lack of coordination between different officials of the FCA brought out in the Clifford Chance report appears to me to be only a manifestation of the tension between these two views within the FCA. It was critical for the second group’s strategy to work that the cosmetic review receive wide publicity that would fool the public into thinking that something was being done. Hence the idea of doing a selective pre-briefing to a journalist known to be sympathetic to the plight of the poor policy holders. The telephonic briefing with this journalist was not recorded, and was probably ambiguous enough to maintain plausible deniability.

The journalist drew the reasonable inference that the first group in the FCA had won and that the FCA was serious about giving a fair deal to the legacy policy holders and reported accordingly. What was intended to fool only the general public ended up fooling the investors as well, and the stock prices of the affected insurance companies crashed after the news report came out. The big insurance companies were now scared that the review might be a serious affair after all and pulled out all their resources to protect their profits. They reached out to the highest levels of the FCA and HM Treasury and ensured that their voice was heard. Regulatory capture is evident in the way in which the FCA abandoned even the pretence of serious action, and became content with cosmetic measures. Before the end of the day, a corrective statement came out of the FCA which made all the right noises about fairness, but made it clear that exit fees would not be touched.

The journalist in question (Dan Hyde of the Telegraph) nailed this contradiction in an email quoted in the Clifford Chance report (para 16.8):

But might I suggest that by any standard an exit fee that prevents a customer from getting a fairer deal later in life is in itself an unfair term on a policy.

On March 28, 2014, the top brass of the FCA and HM Treasury could see the billions of pounds wiped out on the stock exchange from the market value of the insurance companies, and they could of course hear the complaints from the chairmen of those powerful insurance companies. There was no stock exchange showing the corresponding improvement in the net worth of millions of policy holders savouring the prospect of escape from unfair policies, and their voice was not being heard at all. Out of sight, out of mind.

Unwarranted complacency about regulated financial entities

Two days back, the Securities and Exchange Board of India (SEBI) issued a public Caution to Investors about entities that make false promises and assure high returns. This is quite sensible and also well intentioned. But the first paragraph of the press release is completely wrong in asking investors to focus on whether the investment is being offered by a regulated or by an unregulated entity:

It has come to the notice of Securities and Exchange Board of India (SEBI) that certain companies / entities unauthorisedly, without obtaining registration and illegally are collecting / mobilising money from the general investors by making false promises, assuring high return, etc. Investors are advised to be careful if the returns offered by the person/ entity is very much higher than the return offered by the regulated entities like banks, deposits accepted by Companies, registered NBFCs, mutual funds etc.

This is all wrong because the most important red flag is the very high return itself, and not the absence of registration and regulation. That is the key lesson from the Efficient Markets Hypothesis:

If something appears too good to be true, it is not true.

For the purposes of this proposition, it does not matter whether the entity is regulated. To take just one example, Bernard L. Madoff Investment Securities LLC was regulated by the US SEC as a broker dealer and as an investment advisor. Fairfield Greenwich Advisors LLC (through whose Sentry Fund, many investors invested in Madoff’s Ponzi scheme) was also an SEC regulated investment advisor.

Regulated entities are always very keen to advertise their regulated status as a sign of safety and soundness. (Most financial entities usually prefer light touch regulation to no regulation at all.) But regulators are usually at pains to avoid giving the impression that regulation amounts to a seal of approval. For example, every public issue prospectus in India contains the disclaimer:

The Equity Shares offered in the Issue have not been recommended or approved by the Securities and Exchange Board of India

In this week’s press release however, SEBI seems to have inadvertently lowered its guard, and has come dangerously close to implying that regulation is a seal of approval and respectability. Many investors would misinterpret the press release as saying that it is quite safe to put money in a bank deposit or in a mutual fund. No, that is not true at all: the bank could fail, and market risks could produce large losses in a mutual fund.
