If you are trying to sell $200 million of nearly flawless counterfeit $20 currency notes, there is only one real buyer – the US government itself. That seems to be the moral of a story in GQ Magazine about Frank Bourassa.

The story is based largely on Bourassa’s version of events and is possibly distorted in many details. However, the story makes it pretty clear that the main challenge in counterfeiting lies not in manufacture but in distribution. Yes, there is a minimum scale in the production process – Bourassa claims that a high end printing press costing only $300,000 was able to produce high quality fakes. The challenge that he faced was in buying paper of the right quality. The story does not say why he did not think of vertical integration by buying a mini paper mill, but I guess that is because it is difficult to operate a paper mill secretly, unlike a printing press, which can be run in a garage without anybody knowing about it. Bourassa was able to proceed because some paper mill somewhere in the world was willing to sell him the paper that he needed.

The whole point of anti-counterfeiting technology is to increase the fixed cost of producing a note without increasing the variable cost too much. So high quality counterfeiting is not viable unless it is done at scale. But the distribution of fake notes suffers from huge diseconomies of scale – while it is pretty easy to pass off a few fake notes (especially small denomination notes), Bourassa found it difficult to sell a large number of notes even at a 70% discount to face value. He ended up selling his stockpile to the US government itself. The price was his own freedom.

To prevent counterfeiting, the government needs to ensure that at every possible scale of operations, the combined cost of production and distribution exceeds the face value of the note. At low scale, the high fixed production cost makes counterfeiting uneconomical, while at large scale, the high distribution cost is the counterfeiter’s undoing. That is why the only truly successful counterfeiters have been other sovereigns, who have two decisive advantages: first, for them the fixed costs are actually sunk costs, and second, they have access to distribution networks that ordinary counterfeiters cannot dream of.
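A toy calculation makes the scale argument concrete. The $300,000 press and the 70% discount come from the story; every other number below is an assumption chosen only to illustrate the shape of the problem – fixed production cost dominating at small scale, distribution discounts dominating at large scale. Whether the combined cost curve stays above the $20 face value at every scale is exactly what anti-counterfeiting design has to ensure.

```python
# Toy model of a counterfeiter's all-in cost per note (illustrative numbers only)

FACE_VALUE = 20.0          # $20 note
FIXED_COST = 300_000.0     # high end printing press (from the story)
VARIABLE_COST = 0.50       # assumed paper, ink and labour per note

def distribution_cost(n_notes):
    """Assumed diseconomies of scale in distribution: the more notes you must
    move, the deeper the discount (and the higher the risk) per note."""
    if n_notes <= 10_000:
        return 0.10 * FACE_VALUE      # a few notes pass at close to face value
    elif n_notes <= 1_000_000:
        return 0.50 * FACE_VALUE
    return 0.80 * FACE_VALUE          # beyond the 70% discount Bourassa faced

for n in (1_000, 100_000, 10_000_000):
    all_in = FIXED_COST / n + VARIABLE_COST + distribution_cost(n)
    verdict = "uneconomical" if all_in > FACE_VALUE else "still profitable"
    print(f"{n:>10,} notes: ${all_in:7.2f} per note vs ${FACE_VALUE:.2f} face value -> {verdict}")
```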

A few days back, the IMF made a change in its rule for setting interest rates on SDRs (Special Drawing Rights) and set a floor of 5 basis points (0.05%) on this rate. The usual zero lower bound on interest rates does not apply to the SDR as there are no SDR currency notes floating around. The SDR is only a unit of account and to some extent a book entry currency. There is no technical problem with setting the interest rate on the SDR to a substantially negative number like -20%.

In finance theory, there is no conceptual problem with a large negative interest rate. Though we often describe the interest rate (r) as a price, it is actually 1+r and not r itself that is a price: the price of one unit of money today, in terms of money a year later, is 1+r. Prices have to be non negative, but this only requires that r cannot drop below -100%. With bearer currency in circulation, a zero lower bound (ZLB) comes about because savers have the choice of saving in the form of currency and earning a zero nominal interest rate. Actually, the return on cash is slightly negative (perhaps -0.25% to -0.50%) because of storage (and insurance) costs. As such, the ZLB is not really at zero, but somewhere between -0.25% and -0.50%.
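In symbols (just restating the paragraph above, with s denoting the annual cost of storing and insuring currency):

\[
1 + r \;\ge\; 0 \;\Longleftrightarrow\; r \ge -100\%,
\qquad
r_{\text{floor}} \approx -s \in [-0.50\%,\; -0.25\%]
\]

The first condition is the only constraint that price non-negativity imposes; the second is the practical floor created by the option of holding cash, and it simply does not apply to a pure book-entry unit like the SDR.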

It has long been understood that a book entry (or mere unit of account) currency like the SDR is not subject to the ZLB at all. Buiter, for example, proposed the use of a parallel electronic currency as a way around the ZLB.

In this context, it is unfortunate that the IMF has succumbed to the fetishism of positive interest rates. At the very least, it has surrendered its potential for thought leadership. At worst, the IMF has shown that it is run by creditor nations seeking to earn a positive return on their savings when the fundamentals do not justify such a return.

ICE Benchmark Administration (IBA), the new administrator of Libor, has published a position paper on the future evolution of Libor. The core of the paper is a shift to “a more transaction-based approach for determining LIBOR submissions” and a “more prescriptive calculation methodology”. In this post, I discuss the following IBA proposals regarding interpolation and extrapolation:

Interpolation and extrapolation techniques are currently used where appropriate by benchmark submitters according to formulas they have adopted individually.

We propose that inter/extrapolation should be used:

  1. When a benchmark submitter has no available transactions on which to base its submission for a particular tenor but it does have transaction-derived anchor points for other tenors of that currency, and
  2. If the submitter’s aggregate volume of eligible transactions is less than a minimum level specified by IBA.

To ensure consistency, IBA will issue interpolation formula guidelines

Para 5.7.8

In my view, it does not make sense for submitters to perform interpolations in situations that are sufficiently standardized for the administrator to provide interpolation formulas. It is econometrically much more efficient for the administrator to perform the interpolation. For example, the administrator can compute a weighted average with lower weights on interpolated submissions – ideally the weights would be a declining function of the width of the interpolation interval. Thus, where many non-interpolated submissions are available, the data from other tenors would be virtually ignored (because of low weights). But where there are no non-interpolated submissions, the data from other tenors would drive the computed value. The administrator can also use non-linear (spline) interpolation across the full range of tenors. If submitters are allowed to interpolate, perverse outcomes are possible. For example, where the yield curve has strong curvature but only a few submitters have transactions in the relevant tenor, their (correct) submissions will differ sharply from the (incorrect) interpolated submissions of the majority. The standard procedure of ignoring extreme submissions would then discard all the correct data and average all the incorrect submissions!
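A minimal sketch of the kind of width-weighted averaging I have in mind (the weighting function and the numbers are purely illustrative and are my own, not anything IBA has proposed):

```python
# Illustrative width-weighted averaging of submissions for one tenor.
# Direct, transaction-based submissions get full weight; interpolated
# submissions get a weight that declines with the width of the
# interpolation interval.  All numbers are made up.

def weight(interp_width_days):
    """Weight 1 for a direct submission (width 0); decays as the
    interpolation interval widens.  The decay rate is an assumption."""
    return 1.0 / (1.0 + interp_width_days / 30.0)

# (submitted rate in %, width in days of the interpolation interval; 0 = direct)
submissions = [
    (0.5520, 0),     # direct, transaction-based
    (0.5535, 0),     # direct
    (0.5600, 150),   # interpolated between 3m and 6m anchor points
    (0.5610, 150),
    (0.5590, 150),
]

num = sum(rate * weight(w) for rate, w in submissions)
den = sum(weight(w) for rate, w in submissions)
print(f"width-weighted benchmark: {num / den:.4f}%")
```

With many direct submissions, the interpolated ones barely move the average; with none, they determine it.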

Many people tend to forget that even the computation of an average is an econometric problem that can benefit from the full panoply of econometric techniques. For example, an econometrician might suggest interpolating across submission dates using a Kalman filter. Similarly, covered interest parity considerations would suggest that submissions for Libor in other currencies should be allowed to influence the estimation of Libor in each currency (simultaneous equation rather than single equation estimation). So long as the entire estimation process is defined in open source computer code, I do not see why Libor estimates should not be based on a complex econometric procedure – a Bayesian Vector Auto Regression (VAR) with GARCH errors, for example.
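To show what interpolating across submission dates with a Kalman filter could look like, here is a sketch using a local-level state-space model in statsmodels; the data are simulated and the model choice is mine, not a proposal anyone has made:

```python
# Illustrative only: Kalman smoothing of a daily series of noisy, occasionally
# missing benchmark estimates using a local-level model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
true_level = 0.55 + np.cumsum(rng.normal(0, 0.002, 60))   # slowly moving "true" rate
observed = true_level + rng.normal(0, 0.01, 60)           # noisy daily estimates
observed[[10, 11, 25, 40]] = np.nan                       # days with no usable data

model = sm.tsa.UnobservedComponents(observed, level='local level')
result = model.fit(disp=False)
smoothed = result.smoothed_state[0]   # smoothed estimate for every date,
                                      # including the dates with missing data
print(np.round(smoothed[:5], 4))
```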

For quite some time now, I have been concerned that the SIM card in the mobile phone is becoming the most vulnerable single point of failure in online security. The threat model that I worry about is that somebody steals your mobile, transfers the SIM card to another phone, and quickly goes about resetting the passwords to your email accounts and other sites where you have provided your mobile number as your recovery option. Using these email accounts, the thief then proceeds to reset passwords on various other accounts. This threat cannot be blocked by having a strong PIN or pattern lock on the phone or by remotely wiping the device, because the thief is using your SIM and not your phone.

If the thief knows enough of your personal details (name, date of birth and other identifying information), then with a little bit of social engineering, he could do a lot of damage during the couple of hours that it would take to block the SIM card. Remember that during this period, he can send text messages and Whatsapp messages in your name to facilitate his social engineering. The security issues are made worse by the fact that telecom companies simply do not have the incentives and expertise to perform the kind of authentication that financial entities would do. There have been reports of smart thieves getting duplicate SIM cards issued on the basis of fake police reports and forged identity documents (see my blog post of three years ago).

Modern mobile phones are more secure than the SIM cards that we put inside them. They can be secured not only with PIN and pattern locks but also with fingerprint scanners and face recognition software. Moreover, they support encryption and remote wiping. It is true that SIM cards can be locked with a PIN which has to be entered whenever the phone is switched off and on or the SIM is put into a different mobile. But I am not sure how useful this would be if telecom companies are not very careful while providing the PUK code which allows the PIN to be reset.

If we assume that the modern mobile phone can be made reasonably secure, then it should be possible to make SIM cards more secure without the inconvenience of entering a SIM card PIN. In the computer world, for example, it is pretty common (in fact recommended) to do remote (SSH) login using only authentication keys without any user entered passwords. This works with a pair of cryptographic keys – the public key sits on the target machine and the private key on the source machine. A similar system should be possible with SIM cards as well, with the private key sitting on the mobile and backed up on other devices. Moving the SIM to another phone would not work unless the thief can also transfer the private key. Moreover, you would be required to use the backed up private key to make a request for a SIM replacement. This would keep SIM security completely in your hands and not in the hands of a telecom company that has no incentive to protect your SIM.
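A sketch of the challenge-response that such key-pair authentication would involve, using Python's cryptography package purely for illustration (how keys would actually be provisioned on SIMs and in telecom back-ends is a separate question):

```python
# Illustrative challenge-response with an Ed25519 key pair: the network holds
# the public key, the handset holds the private key.  A SIM moved to another
# handset cannot answer the challenge without the private key.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrolment: the handset generates the key pair and registers the public key
# with the network (the private key is backed up on the user's other devices).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Authentication: the network sends a random challenge, the handset signs it.
challenge = os.urandom(32)
signature = private_key.sign(challenge)

try:
    public_key.verify(signature, challenge)   # raises if the signature is bad
    print("SIM accepted: correct private key present on this handset")
except InvalidSignature:
    print("SIM rejected: private key missing or wrong")
```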

This system could be too complex for many users who use a phone only for voice and non-critical communications. It could therefore be an opt-in system for those who use online banking and other services heavily and require a higher degree of security. Financial services firms should also insist on this higher degree of security for high value transactions.

I am convinced that encryption is our best friend: it protects us against thieves who are adept at social engineering, against greedy corporations who are too careless about our security, and against overreaching governments. The only thing that you are counting on is that hopefully P ≠ NP.

Much has been written since the Global Financial Crisis about how the modern banking system has become less and less about financing productive investments and more and more about shuffling pieces of paper in speculative trading. Last month, Jordà, Schularick and Taylor wrote an NBER Working Paper “The Great Mortgaging: Housing Finance, Crises, and Business Cycles” describing an even more fundamental change in banking during the 20th century. They construct a database of bank credit in advanced economies from 1870 to 2011 and document “an explosion of mortgage lending to households in the last quarter of the 20th century”. They conclude that:

To a large extent the core business model of banks in advanced economies today resembles that of real estate funds: banks are borrowing (short) from the public and capital markets to invest (long) into assets linked to real estate.

Of course, it can be argued that mortgage lending is an economically useful activity to the extent that it allows people early in their career to buy houses. But it is also possible that much of this lending only boosts house prices and does not improve the affordability of houses to any significant extent.

The more important question is why banks have become less important in lending to businesses. One possible answer is that in this traditional function, they have been disintermediated by capital markets. On the mortgage side, however, banks are perhaps dominant only because, with their Too-Big-To-Fail (TBTF) subsidies, they can afford to take the tail risks that capital markets refuse to take.

I think the Jordà, Schularick and Taylor paper raises the fundamental question of whether advanced economies need banks at all. If regulators imposed the kind of massive capital requirements that Admati and her coauthors have been advocating, and banks were forced to contract, capital markets might well step in to fill the void in the advanced economies. The situation might well be different in emerging economies.

The CME futures contracts on the S&P 500 index come in two flavours – the big or full-size (SP) contract is five times the E-Mini (ES) contract. For clearing purposes, SP and ES contracts are fungible in a five to one ratio. The daily settlement price of both contracts is obtained by taking a volume weighted average price of the two contracts taken together, weighted in the same ratio.

Yet, according to a recent SEC order against Latour Trading LLC and Nicolas Niquet, a broker-dealer is required to maintain net capital on the two contracts separately. In Para 28 of its order, the SEC says that in February 2010, Latour held 333,251 long ES contracts and 66,421 short SP contracts, and netted these out to a long position of 1,146 ES contracts requiring net capital of $14,325. According to the SEC, these positions should not have been netted, and Latour should have held net capital of $8.32 million ($4.17 million for the ES and $4.15 million for the SP). This is surely absurd.
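The arithmetic is easy to check. The $12.50 capital charge per ES-equivalent contract below is simply what the order's own figures imply ($14,325 for 1,146 contracts); it is a derived number, not one stated by the SEC:

```python
# Reproducing the figures in Para 28 of the order.
ES_LONG = 333_251                 # long E-Mini contracts
SP_SHORT = 66_421                 # short full-size contracts
RATIO = 5                         # one SP contract = five ES contracts
CHARGE_PER_ES = 14_325 / 1_146    # implied capital charge per ES-equivalent ($12.50)

# Netting the exact hedge, as Latour did:
net_es_equivalent = ES_LONG - RATIO * SP_SHORT         # 1,146 contracts
netted_capital = net_es_equivalent * CHARGE_PER_ES     # about $14,325

# The SEC's view: charge each leg gross, ignoring the exact hedge.
gross_capital = (ES_LONG + RATIO * SP_SHORT) * CHARGE_PER_ES   # about $8.32 million

print(f"netted: {net_es_equivalent:,} contracts -> ${netted_capital:,.0f}")
print(f"gross:  ${gross_capital:,.0f}")
```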

It is not as if the SEC does not allow netting anywhere. It allows index products to be offset by qualified stock baskets (para 10). In other words, an approximate hedge (index versus an approximate basket) can be netted but an exact hedge (ES versus SP) cannot be netted.

PS: I am not defending Latour at all. The rest of the order makes clear that there was a great deal of incompetence and deliberate under-estimation of net capital going on. It is only on the ES/SP netting claim that I think the SEC regulations are unreasonable.

It is well known that financial repression more or less disappeared in advanced economies during the 1980s and 1990s, but has been making a comeback recently. Is it possible that financial repression did not actually disappear, but was simply outsourced to China? And that the comeback we are seeing after the Global Financial Crisis is simply a case of insourcing the repression back?

This thought occurred to me after reading an IMF Working Paper on “Sovereign Debt Composition in Advanced Economies: A Historical Perspective”. What this paper shows is that many of the nice things that happened to sovereign debt in advanced economies prior to the Global Financial Crisis were facilitated by the robust demand for this debt from foreign central banks. In fact, the authors refer to this period not as the Great Moderation, but as the Great Accumulation. Though they do not mention China specifically, it is clear that the Great Accumulation is driven to a great extent by China. It is also clear that much of the Chinese reserve accumulation is made possible by the enormous financial repression within that country.

This leads me to my hypothesis that just as the advanced economies outsourced their manufacturing to more efficient manufacturers in China, they outsourced their financial repression to the most efficient manufacturer of financial repression – China. Now that China is becoming a less efficient and less willing provider of financial repression, advanced economies are insourcing this job back to their own central banks.

In this view of things, we overestimated the global reduction of financial repression in the 1990s and are overestimating the rise in financial repression since the crisis.
