NTSB, Coop Bank

My two interests, air crash investigations and financial systems, are coinciding today as I read through the Coop Bank annual results. Unlike RBS’s decline in 2008, this isn’t a dramatic story of poorly understood risk lurking behind complex financial instruments; it’s a bit more straightforward. But since I spent some time picking through the numbers, I thought I’d capture it for posterity.

A traditional high-street bank makes money from loans because customers have to pay interest on their mortgages and car loans, hence banks count loans as assets. The money which you or I hold in our current or instant-access savings accounts (“demand deposits”) is a liability of the bank, which pays interest out to savers. Unsurprisingly, the interest charged on loans is higher than the interest paid to savers, and the difference between interest received and interest paid (called “net interest income”) is income for the bank, which ideally helps increase the bank’s equity (ie. money owned by the bank itself, which shareholders have claims on).

At first glance, Coop Bank are doing fine here. They have £15.3bn of loans to people (£14.8bn) and businesses (£0.4bn). They have £22.1bn of customer deposits [page 16], spread fairly evenly between current accounts, instant savings accounts, term savings accounts and ISAs, and coming from a mixture of individuals (£19.4bn) and companies (£2.7bn). A quick check of their website shows they pay savers around 0.25%, with mortgage rates at around 3%, which gets you pretty directly to their “net interest income” of £394m from their high-street (aka “retail”) operations. So that’s a big bunch of money coming in the door, good news!
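
Here’s that back-of-envelope calculation as a quick R sketch – note the rates are my rough guesses from their website, not figures from the accounts:


# Back-of-envelope check on retail net interest income (rates are guesses, not from the accounts)
loans    <- 15.3e9     # customer loans
deposits <- 22.1e9     # customer deposits
loan_rate    <- 0.03   # ~3% on mortgages (rough guess)
deposit_rate <- 0.0025 # ~0.25% paid to savers (rough guess)

(loans * loan_rate - deposits * deposit_rate) / 1e6   # ~£404m, same ballpark as the reported £394m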

(They used to be big in commercial property loans, but by 2014 about £900m of their £1650m book was defaulting, so they sold off the rest and got out of that business.)

But every business has day-to-day costs, like rent and staff salaries. Staff costs were £187m, which sounds like a lot of money, but a UK-wide bank has a lot of staff – 4266 [page 33], of which 3748 were full-time and 1018 part-time. That’s an average of £43k each, but it’s not spread evenly – the four executive directors got £4172k between them [page 92], and the eleven non-exec directors got £1052k between them [page 95]. In addition, they spent £121m on contractors [page 178]. So, total staff costs were around £300m. Hmm, now that £394m of income isn’t looking so peachy. We’ve only got £94m left – let’s hope there’s nothing else we have to pay for.

Oops, forgot about the creaking IT infrastructure! The old IT setup was pretty bad, it seems. The bank themselves warned investors in 2015 that “the Bank does not currently have a proven end-to-end disaster recovery capability, especially in the case of a mainframe failure or a significant data centre outage” [page 75]. The FCA (Financial Conduct Authority), who regulate banks and check that they meet at least some basic minimum standards, told Coop Bank in 2015 that it was in breach of those standards. So, they came up with a cunning plan to get off their clunky mainframes and onto a whizzy IBM “managed service platform” which, one would hope, is much shinier and has a working and tested backup solution. All of this “remediation” work wasn’t cheap though, clocking in at £141m for the year. The good news is that the FCA are happy again and it should be a one-off cost, but it takes our running total for the year to a £47m loss.

But we’re not done yet! We also have some “strategic” projects on the go, which managed to burn £134m [page 19]. A while back, Coop decided to “outsource” its retail mortgage business to Capita, then spent a lot of time bickering with them before finally making up this year. Nonetheless, a planned “transformation” of IT systems is getting canned, the demise of which is somehow costing the bank £82m! At the more sensible end, £10m went into “digital” projects, which I assume includes their shiny new mobile app [page 12]. All in all, those “strategic” projects mean we’re now up to a £181m loss.

Only one more big thing to go. Back in 2009, Coop Bank merged with/acquired Britannia Building Society, gaining about £4bn of assets in the form of risky commercial property loans, along with some liabilities. Those liabilities included IOUs known as Leek Notes which Britannia had issued to raise money in the short term. When Coop acquired Britannia, some accountancy sleight of hand was done to make the liability look smaller [page 26 of the Kelly Review], but a £100 IOU still ultimately has to be paid back with £100, and so Coop Bank is now trudging through the reality of paying back (aka “unwinding”, gotta love the euphemisms) a larger-than-expected liability. In 2016, that was to the tune of £180m.

So now we’re up to a £361m loss. Chuck in a few more items – EU Payments Directive work, £20m of “organizational design changes”, and so on – and you get to a final overall loss for the year of £477m.
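
Here’s the running tally as a quick R sketch, using the rounded figures quoted above (the final “everything else” line is back-solved so that the total matches the reported £477m):


# Running tally for the year, in £m, using the rounded figures quoted above
items <- c(net_interest_income   =  394,
           staff_and_contractors = -300,
           it_remediation        = -141,
           strategic_projects    = -134,
           britannia_unwind      = -180,
           everything_else       = -116)  # back-solved to hit the reported total
cumsum(items)
#  394   94  -47 -181 -361 -477   -- the running position after each item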

Now, in the same way that I (as a person) can have money that I own in my pocket, a bank can have money that it (as a company) owns – its equity. In good times, some of that equity gets paid out to shareholders as a dividend, and some is retained within the company to fund future growth. But in bad times, that equity is eroded by losses. Coop Bank started the year with about £1100m of (tier 1) equity, and this year’s losses have chopped that down to about £700m. If you’re losing £400m-odd in a year, £700m doesn’t look like a lot of runway, and that’s why they’re trying to sell the business (or bits of it), or to shore up their capital by converting bonds to shares or issuing new bonds.

Like any business, you’ve got to have more assets than liabilities, otherwise your creditors can have you declared insolvent. And Coop Bank certainly has more assets than liabilities. But the loans which make up part of the bank’s assets are fairly illiquid, meaning they can’t readily be turned into cash. Furthermore, they’re somewhat risky, since a borrower might run away and default on their loan. So, in order to be able to soak up defaulting loans and still have enough money around for people to withdraw their deposits on demand, banks need to hold a certain level of equity in proportion to their assets. You can either look at straight equity/assets, aka the leverage ratio, which is 2.6% for Coop Bank (down from 3.8% last year). Or you can do some risk-weighting of the assets and get the Tier 1 capital ratio of 11% (down from 15%). The Bank of England says that “the appropriate Tier 1 equity requirement …be 11% of risk-weighted assets”, so Coop Bank is skirting the edge of that.
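
To make the two ratios concrete, here’s the arithmetic – the implied totals are back-solved from the reported percentages, so they’re my inference rather than figures taken from the accounts:


# leverage ratio = tier 1 equity / total assets
# tier 1 ratio   = tier 1 equity / risk-weighted assets
tier1_equity   <- 700e6    # roughly £700m after this year's losses
leverage_ratio <- 0.026    # reported 2.6%
tier1_ratio    <- 0.11     # reported 11%

tier1_equity / leverage_ratio   # implies total assets of roughly £27bn
tier1_equity / tier1_ratio      # implies risk-weighted assets of roughly £6.4bn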

All in all, if interest rates stay unchanged and Coop Bank’s loans and deposits stay where they are, you could imagine a small profit from net interest income minus staff and related costs. But the burden of bad acquisitions, failed integration projects and massive IT overhauls is overshadowing all of that, and that’s what has put Coop Bank where it is today.

The cost of generality?

The nice thing about BUGS/JAGS/Stan/etc is that they can operate on arbitrarily complex Bayesian networks. You can take my running ‘coin toss’ example and add extra layers. Imagine that we believe the mint which made the coin produces coins whose bias ranges uniformly between theta=0.7 and theta=0.9. Now we can take data about coin tosses and use it to infer not only the bias of one particular coin, but also something about the coins made by the mint in general.
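
To make the hierarchy concrete, here’s the generative story as a small R simulation – entirely made-up numbers, just to show the two levels (the mint’s behaviour, and each coin’s bias):


# Simulate the hierarchy: mint -> per-coin bias -> coin tosses
set.seed(1)
n_coins  <- 5
n_tosses <- 20

theta  <- runif(n_coins, min = 0.7, max = 0.9)  # each coin's bias, drawn from the mint
tosses <- sapply(theta, function(th) rbinom(n_tosses, size = 1, prob = th))

colMeans(tosses)  # observed head-rate per coin
theta             # the underlying biases we'd like to infer, along with the mint's behaviour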

But this kind of generality comes at a cost. Let’s look at a simpler model: we have ten datapoints, drawn from a normal distribution with mean mu and standard deviation sigma, and we start with uniform priors over mu and sigma.

For particular values of mu and sigma, the posterior density is proportional to the likelihood, which is a product of Gaussians. However, with a bit of algebra we can avoid naively evaluating N exponentials, and instead evaluate a single exponential of a summation. So, as we add more data points, the runtime cost of evaluating the posterior (or at least something proportional to it) still rises, but only by a few extra subtractions, squares and divisions rather than by extra exponentials.
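
Here’s a sketch in R of what that algebra buys you: the naive version evaluates a density per datapoint, while the simplified version only needs the precomputed sums sum(x) and sum(x^2), so each evaluation does no per-point work at all:


# Log-likelihood of N points under Normal(mu, sigma), two ways
x   <- rnorm(10, mean = 3, sd = 2)   # ten datapoints
n   <- length(x)
sx  <- sum(x)      # sufficient statistics, computed once up front
sx2 <- sum(x^2)

loglik_naive <- function(mu, sigma)
  sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))   # one density evaluation per point

loglik_fast <- function(mu, sigma)
  -n * log(sigma) - n/2 * log(2 * pi) -
    (sx2 - 2 * mu * sx + n * mu^2) / (2 * sigma^2)   # just a handful of arithmetic ops

loglik_naive(3, 2)
loglik_fast(3, 2)   # same value, up to floating-point noise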

In contrast, when I use JAGS to evaluate 20 datapoints, it does twice as many log() calls as it does for 10 datapoints, so it seems not to be exploiting any such algebraic simplifications.

Next step: write a proof of concept MCMC sampler which runs faster than JAGS for the non-hierarchical cases which are most useful to me.
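
For a flavour of what that proof of concept might look like, here’s a minimal random-walk Metropolis sampler in R for the mu/sigma model above, reusing the sufficient-statistics trick – very much a sketch (flat priors, fixed proposal widths, no tuning), not a drop-in JAGS replacement:


# Minimal random-walk Metropolis for (mu, sigma) with flat priors and sufficient statistics
mcmc_normal <- function(x, n_steps = 10000, prop_sd = c(0.5, 0.5)) {
  n <- length(x); sx <- sum(x); sx2 <- sum(x^2)
  logpost <- function(mu, sigma) {
    if (sigma <= 0) return(-Inf)                 # flat priors, but sigma must be positive
    -n * log(sigma) - (sx2 - 2 * mu * sx + n * mu^2) / (2 * sigma^2)
  }
  chain <- matrix(NA_real_, nrow = n_steps, ncol = 2,
                  dimnames = list(NULL, c("mu", "sigma")))
  cur <- c(mean(x), sd(x))
  cur_lp <- logpost(cur[1], cur[2])
  for (i in 1:n_steps) {
    prop    <- cur + rnorm(2, sd = prop_sd)      # symmetric random-walk proposal
    prop_lp <- logpost(prop[1], prop[2])
    if (log(runif(1)) < prop_lp - cur_lp) {      # Metropolis accept/reject
      cur <- prop; cur_lp <- prop_lp
    }
    chain[i, ] <- cur
  }
  chain
}

chain <- mcmc_normal(rnorm(10, mean = 3, sd = 2))
colMeans(chain)   # should land somewhere in the vicinity of mu=3, sigma=2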

JAGS: normal and not-normal performance

Previously, we’ve walked through a coin-tossing example in JAGS and looked at the runtime performance. In this episode, we’ll look at the cost of different distributions.

Previously, we’ve used a uniform prior distribution. Let’s baseline on a 100,000-step chain with 100 datapoints. With a uniform prior, JAGS takes a mere 0.2 seconds. But change to a normal prior, such as dnorm(0.5,0.1), and JAGS takes 3.3 seconds – with __ieee754_log_avx, called from DBern::logDensity, taking up 80% of the CPU time according to perf:


  70.90%  jags-terminal  libm-2.19.so         [.] __ieee754_log_avx
   9.27%  jags-terminal  libjags.so.4.0.2     [.] _ZNK4jags20ScalarStochasticNode10logDensityEjNS_7PDFTypeE
   5.09%  jags-terminal  bugs.so              [.] _ZNK4jags4bugs5DBern10logDensityEdNS_7PDFTypeERKSt6vectorIPKdSaIS5_EES5_S5_

If we go from Bernoulli data with a normal prior to normal data with normal priors on the mean/sd, it gets more expensive again – 4.8 seconds instead of 3.3 – as the conditional posterior gets more complex. But it’s still all about the logarithms.

Logarithms aren’t straightforward to calculate. Implementations usually try a fast table lookup, falling back to a series-expansion approach if that doesn’t work. On Linux, the implementation comes as part of the GNU libc, with the name suggesting that it uses AVX instructions if your CPU is modern enough (there’s no “logarithm” machine-code instruction, but you can use AVX/SSE/etc to speed up your logarithm implementation).

Notably, JAGS only uses a single core throughout all of this. If we wanted to compute multiple chains (eg. to check convergence) then the simple approach of running each chain in a separate JAGS process works fine – which is what the jags.parfit() R function does. But could you leverage the SIMD nature of SSE/AVX instructions to run several chains in parallel? To be honest, the last time I was at this low a level, SSE was just replacing MMX! But since AVX has registers which hold eight 32-bit floats, perhaps there’s the potential to run 8 chains in blazing data-parallel fashion?

(Or alternatively, how many people in the world care both about bayesian statistics and assembly-level optimization?!)

Just for reference, on my laptop a simple “double x=0; for (int i=0; i<1e8; i++) x = log(x);” loop takes 6.5 seconds, with 75% being in __ieee754_log_avx – meaning each log() is taking 48ns.

To complete the cycle, let’s go back to JAGS with a simple uniform prior and Bernoulli likelihood, do only ten updates with a single datapoint, and see how many times ‘log’ is called. For this, we can use ‘ltrace’ to trace calls into shared objects like log():


$ ltrace -xlog   $JAGS/jags/libexec/jags-terminal  example.jags  2>&1 | grep -c log@libm.so

Rather surprisingly, the answer is not stable! I’ve seen anything from 20 to 32 calls to log(), even though the model/data isn’t changing (though the random seed presumably is). Does that line up with the 3.4 seconds to do 10 million steps @ 10 data points, if log() takes 48ns? If we assume 2 calls to log() per datapoint per step, then 10e6 * 10 * 2 * 48e-9 = 9.6 secs. That’s about 2.8x more than the measured 3.4 seconds, but in the right ballpark.

Next step is to read through the JAGS code to understand the Gibbs sampler in detail. I’ve already read through the two parsers and some of the Graph stuff, but want to complete my understanding of the performance characteristics.

Performance impact of JAGS

In the previous two posts (here and here) I walked through an example of using JAGS directly to analyse a coin-toss experiment.

I’m interested to learn how the runtime of JAGS is affected by model choice and dataset size, and where the time is spent during evaluation. JAGS is open-source, and written in C++, so it’s quite easy to poke around the innards.

First, let’s do some high-level black-box tests. We’ll take the coin-flip example from the previous post, and see how the wall-clock time on my rather old laptop increases as a) we increase the chain length, and b) we increase the dataset size. My expectation is that both will be linear, since JAGS only uses a single core.

For ten coin flips, and 10/20/30 million steps it takes 3.4/6.9/10.4 seconds without monitors, and 4.5/8.8/12.9 seconds with a monitor. Plugging that into R shows a nice linear relationship, and we can get R to build a linear model for us to stop us having to think too hard:


> t <- read.table(stdin(), header=T)
0: steps time
1: 10000000 3.4
2: 20000000 6.9
3: 30000000 10.4

> lm( time ~ steps, t)
Coefficients:
(Intercept)        steps  
   -1.0e-01      3.5e-07  

In other words, each step takes about 0.35 microseconds.

Similarly, if we stick to 10 million steps and no monitors, but increase the dataset size across 10/20/50/100, it takes 3.4/4.2/5.8/8.6 seconds, which R also shows is linear, albeit with a 3-second intercept.


> t <- read.table(stdin(), header=T)
0: datapoints time
1: 10 3.5
2: 20 4.2
3: 50 5.8
4: 100 8.6
5: 
> lm(time ~ datapoints, t)
Coefficients:
(Intercept)   datapoints  
    3.00408      0.05602  

So this means that it takes about 3 seconds to do a 10-million-step walk, and although adding more datapoints makes each step more expensive, it’s only a little bit more expensive – 10 datapoints being about 0.5 seconds more than 1 datapoint. However, if we desired to go to “big data” with, say, 10 million data points, we’d be talking about 560,000 seconds – ie. roughly six and a half days. So let’s hope we don’t need 10 million steps on a 10-million-point dataset!
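
Just to sanity-check that extrapolation, we can ask the fitted model directly (re-using the t data frame from above; the exact numbers depend on the fitted coefficients):


fit <- lm(time ~ datapoints, t)   # same model as above, saved to a variable this time
predict(fit, newdata = data.frame(datapoints = 1e7))          # ~560,000 seconds for a 10M-step chain
predict(fit, newdata = data.frame(datapoints = 1e7)) / 86400  # ...ie. roughly 6.5 days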

Next thing on the list is to understand where all that time is going. For this we can use the lovely perf tools, which were added to Linux in the 2.6 kernel series:


$ perf record jags example.jags 
Welcome to JAGS 4.2.0 on Sun Feb 26 16:00:26 2017
...
Initializing model
Updating 10000000
[ perf record: Woken up 2 times to write data ]
[ perf record: Captured and wrote 0.578 MB perf.data (14737 samples) ]

$ perf report
Overhead  Command        Shared Object        Symbol                                                                                                                           
  33.32%  jags-terminal  libm-2.19.so         [.] __ieee754_log_avx
  19.87%  jags-terminal  basemod.so           [.] _ZN4jags4base15WichmannHillRNG7uniformEv
  11.90%  jags-terminal  libjrmath.so.0.0.0   [.] jags_rbeta
  10.08%  jags-terminal  libm-2.19.so         [.] __ieee754_exp_avx

So this shows that a large chunk of the time is being spent calculating logarithms and exponentials. Wichmann–Hill is a pseudo-random uniform number generator. But why would you need exp/log to do Gibbs sampling for a Bernoulli likelihood with a uniform prior?

Let’s use a debugger to see why it’s calling the log function ..


$ jags -d gdb
(gdb) b __ieee754_log_avx
Breakpoint 1 (__ieee754_log_avx) pending.
(gdb) r example.jags
Starting program: /home/adb/tmp/jags/libexec/jags-terminal example.jags
Welcome to JAGS 4.2.0 on Sun Feb 26 16:14:24 2017
...
Initializing model
Updating 10000000

Breakpoint 1, __ieee754_log_avx (x=16.608779218128113) at ../sysdeps/ieee754/dbl-64/e_log.c:57
57	../sysdeps/ieee754/dbl-64/e_log.c: No such file or directory.
(gdb) bt
#0  __ieee754_log_avx (x=16.608779218128113) at ../sysdeps/ieee754/dbl-64/e_log.c:57
#1  0x00007ffff66abce9 in jags_rbeta (aa=1, bb=0.5, rng=0x63b700) at rbeta.c:102
#2  0x00007ffff690e42e in jags::bugs::ConjugateBeta::update (this=0x63c200, chain=0, rng=0x63b700) at ConjugateBeta.cc:157
#3  0x00007ffff7b8b464 in jags::ImmutableSampler::update (this=0x63c170, rngs=std::vector of length 1, capacity 1 = {...}) at ImmutableSampler.cc:28

Our uniform prior is equivalent to a beta(1,1) prior, and since the beta and Bernoulli distributions are conjugate, our posterior will be a beta distribution. For Gibbs sampling, each “jump” is a draw from the conditional distribution of a single parameter given everything else – and since we only have one parameter, theta, each “jump” is simply a draw from that beta distribution.

Of course, we could’ve used this fact to calculate the posterior distribution algebraically and avoided all of this monkeying about with MCMC. But the purpose was to explore the performance of the JAGS implementation rather than to solve a coin-toss problem per se.
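
For the ten-flip dataset from the previous post (five heads, five tails), that algebraic shortcut is a one-liner – the posterior is Beta(1+5, 1+5), and its quantiles should roughly match whatever the sampler reports:


# Analytic posterior: Beta(1 + heads, 1 + tails) for a uniform (Beta(1,1)) prior
heads <- 5; tails <- 5
qbeta(c(0.025, 0.25, 0.5, 0.75, 0.975), 1 + heads, 1 + tails)
# approximately 0.23, 0.40, 0.50, 0.60, 0.77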

In the next article, I’ll look at the performance cost of switching to other distributions, such as normal and lognormal.

JAGS, and a bayesian coin toss

In the previous post, I talked about Bayesian stats and MCMC methods in general. In this post, I’ll work through an example where we try to infer how fair a coin-toss is, based on the results of ten coin flips. Most people use JAGS via an R interface, but I’m going to use JAGS directly to avoid obfuscation.

(Note: a coin-toss is a physical event determined by physics, so the “randomness” arises only through uncertainty of how hard it’s tossed, how fast it spins, where it lands etc, and therefore is open to all sorts of evil)

Firstly, we have to tell JAGS about our problem – eg. how many coin tosses we’ll do, and that we believe each coin toss is effectively a draw from a Bernoulli distribution with unknown proportion theta, and what our prior beliefs about theta are.

To do this, we create “example.model” containing:


model {
  for (i in 1:N){
    x[i] ~ dbern(theta)
  }
  theta ~ dunif(0,1)
}

This says that we’ll have N coin-flips, and each coin flip is assumed to be drawn from the same Bernoulli distribution with unknown proportion theta. We also express our prior belief that all values of theta from zero to one are equally likely.

We can now launch “jags” in interactive mode:


$ jags
Welcome to JAGS 4.2.0 on Sun Feb 26 14:31:57 2017
JAGS is free software and comes with ABSOLUTELY NO WARRANTY
Loading module: basemod: ok
Loading module: bugs: ok

.. and tell it to load our example.model file ..


. model in example.model

If the file doesn’t exist, or the model is syntactically invalid, you’ll get an error – silence means everything has gone fine.

Next, we need the data about the coin flip, which corresponds to the x[1] .. x[N] in our model. We create a file called “example.data” containing:


N <- 10
x <- c(0,1,0,1,1,1,0,1,0,0)

The format for this file matches what R’s dump() function spits out. Here we’re saying that we have flipped ten coins (N is 10) and the results were tails/heads/tails/heads/heads etc. I’ve chosen the data so that we have the same number of heads and tails, suggesting a fair coin.

We tell JAGS to load this file as data:


. data in example.data
Reading data file example.data

Again, it’ll complain about syntax errors (in an old-school bison-parser kinda way) or if you have duplicate bindings. But it won’t complain yet if you set N to 11 but only provide 10 data points.

Next, we tell JAGS to compile everything. This combines your model and your data into an internal graph structure, ready for evaluating. It’s also where JAGS will notice if you’ve got too few data points or any unbound names in your model.


. compile
Compiling model graph
   Resolving undeclared variables
   Allocating nodes
Graph information:
   Observed stochastic nodes: 10
   Unobserved stochastic nodes: 1
   Total graph size: 14

The graph consists of ten “observed” nodes (one per coin flip) and one unobserved stochastic node (the unknown value of theta). The other nodes presumably include the bernoulli distribution and the uniform prior distribution.

At this stage, we can tell JAGS where it should start its random walk by providing an initial value for theta. To do this, we create a file “example.inits” containing:


theta <- 0.5

.. and tell JAGS about it ..


. parameters in example.inits
Reading parameter file example.inits

Finally, we tell JAGS to initialize everything so we’re ready for our MCMC walk:


. initialize
Initializing model

Now we’re ready to start walking. We need to be a bit careful at first, because we have to choose a starting point for our random walk (we chose theta=0.5) and if that’s not a good choice (ie. it corresponds to a low posterior probability) then it will take a while for the random walk to dig itself out of the metaphorical hole we dropped it in. So, we do a few thousand steps of our random walk, give it a fancy name like “burn-in period” and cross our fingers that our burn-in period was long enough:


. update 4000
Updating 4000
-------------------------------------------------| 4000
************************************************** 100%

(JAGS gives some enterprise-level progress bars when in interactive mode, but not in batch mode.)

JAGS has happily done 4000 steps of our random walk, but it hasn’t been keeping track of anything. We want to know what values of theta it has been jumping between, since that sequence (aka “chain”) of values is what we want as output.

To tell JAGS to start tracking where it’s been, we create a monitor for our ‘theta’ variable, then proceed for another 4000 steps, and finally write the results out to a file:


. monitor theta
. update 4000
-------------------------------------------------| 4000
************************************************** 100%
. coda *

The last command causes two files to be written out – CODAindex.txt and CODAchain1.txt. CODA is a hilariously simple file format, coming originally from the “Convergence Diagnostic and Output Analysis” package in R/S-plus. Each line of the chain file contains a step number (eg. 4000) and the value of theta at that step (eg. 0.65), while the index file says which lines belong to which monitored variable.
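
To give a flavour (from memory, so the exact layout may differ slightly), the index file has one line per monitored variable, naming it and giving the range of lines it occupies in the chain file, something like


theta 1 4000

…while the chain file is just one “step value” pair per line, eg. “4000 0.65”.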

Here’s an interesting thing – why would we need a “Convergence Diagnostic” tool? When we did our “burn-in” phase we crossed our fingers and hoped we’d run it for long enough. Similarly, when we did the random walk proper we also used 4000 steps. Is 4000 enough? Too many? We can answer these questions by looking at the results of the random walk – both to get the answer to our original question, and to gain confidence that our Monte Carlo approximation has thrown enough darts to be accurate.

At this point, we’ll take our coda files and load them into R to visualize the results.

$ R
R version 3.0.2 (2013-09-25) -- "Frisbee Sailing"
Copyright (C) 2013 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

> require(coda)
Loading required package: coda

> c <- read.coda(index.file="CODAindex.txt",output.file="CODAchain1.txt")
Abstracting theta ... 5000 valid values

> summary(c)
Iterations = 4001:9000
Thinning interval = 1 
Number of chains = 1 
Sample size per chain = 5000 

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

          Mean             SD       Naive SE Time-series SE 
      0.501658       0.139819       0.001977       0.001977 

2. Quantiles for each variable:

  2.5%    25%    50%    75%  97.5% 
0.2436 0.4000 0.5022 0.6017 0.7675 

This is telling us that, given ten coin flips, our uniform prior belief and our Bernoulli assumption, the most probable value for theta (the proportion of coin-flips yielding heads) is close to 0.5. Half of the probability mass lies between theta=0.4 and theta=0.6, and 95% of the probability mass lies between theta=0.25 and theta=0.75.

So it’s highly unlikely that the coin flip is extremely biased – ie. theta<0.25 or theta>0.75. Pleasantly, “highly unlikely” means “the probability is less than 5%”. That’s a real common-or-garden probability, not any kind of frequentist null-hypothesis p-value. We can make lots of other statements too – for example, from the quartiles above, the probability that the bias is greater than 0.6 is about 25%. If we had a second coin (or coin flipper) we could make statements like “the probability that coin2 has a higher bias than coin1 is xx%”.

Let’s briefly revisit the question of convergence. There are a few ways to determine how well your random walk represents (or “has converged to”) the true posterior distribution. One way, due to Gelman and Rubin, is to run several random walks and look at the variance between them. The coda package in R comes with a function gelman.diag() for this purpose. However, in our simple example we only ran one chain, so we can’t use it on our coda files. (Incidentally, Gelman writes a great blog about stats.)
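
For reference, if we had run a second chain (eg. from another JAGS process writing its own coda files – the second set of file names below is hypothetical), the recipe with coda would look something like this:


# Hypothetical two-chain Gelman-Rubin check; our run above only produced one chain
require(coda)
c1 <- read.coda(index.file = "CODAindex.txt", output.file = "CODAchain1.txt")
c2 <- read.coda(index.file = "run2/CODAindex.txt", output.file = "run2/CODAchain1.txt")  # second run (hypothetical)
gelman.diag(mcmc.list(c1, c2))   # potential scale reduction factor; values near 1 suggest convergence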

In the next post, I’ll look at the performance characteristics of JAGS – how it scales with the number of data points, and what tools you can use to track this.