Friday, September 19, 2008

Too complex to fail

[Image: connectedness chart, via Economist's View]
Economist's View pointed to this picture above. The authors of Too Big To Fail have recognized this for a while now (as well as the risks posed by GSEs). It's an interesting thought - coming up with an index of connectedness - and I think the tools are already somewhat available in the form of social network analysis; as one commenter noted, elements of graph theory can be used.

The main problem is access to this information and whether the Fed or anyone else has the authority to require all firms to report it. Recall that it is not just financial institutions that enter into derivatives contracts. AIG was an insurance company, and firms like the Boston Beer Company enter into futures contracts for hops in order to lock in prices. Should they be required to report their positions as well, even though they may have a "small" notional amount of derivatives (compared to financial institutions)?

From an information perspective, it seems like having a central clearing house for all derivatives might be the best way to attack the problem. This way all the information is centrally located.
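
As a rough illustration of the information argument (a minimal sketch, not anyone's actual proposal), a central clearing house could be as simple as a registry that records every derivative position by counterparty, so that the exposure involving any firm can be looked up in one place. The names and numbers below are invented:

```python
from collections import defaultdict

class ClearingHouse:
    """Toy central registry of derivative positions (illustrative only)."""

    def __init__(self):
        # (party, counterparty) -> total notional outstanding between them
        self.positions = defaultdict(float)

    def report(self, party, counterparty, notional):
        """Every contract, large or small, gets reported here."""
        self.positions[(party, counterparty)] += notional

    def total_notional(self, firm):
        """Total notional of all positions involving `firm`."""
        return sum(n for pair, n in self.positions.items() if firm in pair)

# Hypothetical positions - the names and numbers are made up.
ch = ClearingHouse()
ch.report("AIG", "Bank A", 5_000_000_000)
ch.report("Boston Beer Company", "Hops Broker", 2_000_000)  # small notional
print(ch.total_notional("AIG"))  # 5000000000.0
```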

To take this a little further, let's try to operationalize it. Consider two institutions, A and B. If the failure of A causes the failure of B and the failure of B causes the failure of A, then the "index of systemic risk" is 100 (or 1). If neither fails, then the index is 0. What if the failure of A causes the failure of B but not the reverse, so that if B fails, A is still safe? Would this index be 50?

If there are 3 institutions, A, B, and C, and the failure of A causes the failure of B and the failure of B causes the failure of C, then again the index is 100. If A fails and neither B nor C fails, then the index is 33, while if A and B fail but not C, the index is 66.
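
One way to read these numbers consistently is as the share of all institutions that end up failing once an initial failure propagates along the "failure of X causes failure of Y" links. Here is a minimal sketch of that reading in Python; the deterministic propagation rule and the toy networks are my assumptions, not an established measure:

```python
def systemic_risk_index(causes_failure, institutions, initially_failed):
    """Share (0-100) of institutions that end up failing once the initial
    failures propagate along directed 'failure of X causes failure of Y' links.

    causes_failure: dict mapping an institution to the institutions whose
                    failure it directly triggers.
    """
    failed = set(initially_failed)
    frontier = list(initially_failed)
    while frontier:  # breadth-first cascade
        nxt = []
        for inst in frontier:
            for victim in causes_failure.get(inst, []):
                if victim not in failed:
                    failed.add(victim)
                    nxt.append(victim)
        frontier = nxt
    return 100.0 * len(failed) / len(institutions)

# Two institutions, one-way link A -> B.
links = {"A": ["B"]}
print(systemic_risk_index(links, ["A", "B"], ["A"]))  # 100.0: A drags B down
print(systemic_risk_index(links, ["A", "B"], ["B"]))  # 50.0: B fails, A is safe

# Three institutions in a chain: A -> B -> C.
chain = {"A": ["B"], "B": ["C"]}
print(systemic_risk_index(chain, ["A", "B", "C"], ["A"]))  # 100.0
print(systemic_risk_index({}, ["A", "B", "C"], ["A"]))     # ~33.3: A fails alone
```

Under this reading, the answer to the question above is yes: if B fails and A stays safe, 1 of 2 institutions has failed, so the index is 50.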

The main issue, then, is not just connectedness (where connectedness is defined by whether "lines" - edges, in graph theory terms - can be drawn between two vertices) but also how "degrees of separation" can be reduced so that, in a sense, we are all 100 percent connected. In the above example, A could be connected to B (e.g. have derivatives with B) and B could be connected to C, but C need not be connected to A (or vice versa). The main issue has always been how to trace the probability of failure of one institution through to the rest of the institutions in the system. Here C is 2 degrees away from A, but the failure of A also implies C's demise, and thus the degree of separation is effectively reduced to 1.
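
Put differently, what matters for risk is reachability rather than hop count: the set of institutions a given failure can eventually reach. A small sketch, reusing the same toy propagation rule as above (the function name and network are mine):

```python
def doomed_by(causes_failure, source):
    """Institutions that eventually fail if `source` fails, however many
    'degrees of separation' lie between them in the graph."""
    failed, frontier = {source}, [source]
    while frontier:
        inst = frontier.pop()
        for victim in causes_failure.get(inst, []):
            if victim not in failed:
                failed.add(victim)
                frontier.append(victim)
    return failed - {source}

chain = {"A": ["B"], "B": ["C"]}   # no direct A -> C link
print(doomed_by(chain, "A"))       # {'B', 'C'}: C falls even at 2 degrees away
```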

Just rambling thoughts here. A "working" example of using social network analysis has been in epidemiology. EPISIMS is an attempt to model an outbreak of smallpox in Portland, OR. Some slides are also available. In epidemiology, when two people meet there is a probability that one will contract smallpox from the other. (The analogy to contagion or financial panic is that just because one bank is related to another does not automatically imply that if one collapses the other will as well.) This probability varies with, among other things, the length of exposure. The simulation (which is agent-based) tries to find the most effective way of preventing the epidemic by graph shattering, i.e. removing nodes. Weakness? Like many agent-based systems, it assumes that behavior is invariant to knowledge of the epidemic (contagion/financial panic).
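
Carrying the analogy back to finance: replace the deterministic links above with a probability that each "failure causes failure" link actually fires, and graph shattering becomes cutting a node's links (the financial equivalent of vaccination - perhaps an orderly wind-down) to see how much the expected damage drops. A rough Monte Carlo sketch; the network, the probability, and the removal choice are all invented for illustration:

```python
import random

def expected_index(edges, institutions, source, p, trials=10_000):
    """Monte Carlo estimate of the systemic risk index when each
    'failure causes failure' link fires only with probability p."""
    total = 0.0
    for _ in range(trials):
        failed, frontier = {source}, [source]
        while frontier:
            inst = frontier.pop()
            for victim in edges.get(inst, []):
                if victim not in failed and random.random() < p:
                    failed.add(victim)
                    frontier.append(victim)
        total += 100.0 * len(failed) / len(institutions)
    return total / trials

institutions = ["A", "B", "C", "D"]
edges = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(expected_index(edges, institutions, "A", p=0.5))

# "Graph shattering": cut B's links (the vaccination analogue) and see
# how much the expected damage from A's failure drops.
shattered = {"A": ["C"], "C": ["D"]}
print(expected_index(shattered, institutions, "A", p=0.5))
```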
