The Wrong Balance?

In my previous post on IT spend in large banks, I suggested that too much of that spend was focused on the new capital adequacy measures, in particular Basel 2.5 and Basel 3, at the expense of improving risk infrastructure at a more basic level, such as simplifying the dizzying number of duplicate systems and addressing data quality issues. So how did we get into this mess?

Of course, with hindsight it is a lot easier to see, and the issue is covered well in another good Basel paper, BCBS 258, which examines the balancing act required between:

  • risk sensitivity - the ability of risk models to capture all of the real-world risks
  • simplicity - ensuring the risk models do not become too complex, and
  • comparability - ensuring the results of risk models can be compared across the industry

The paper describes the evolution of the Basel process from the initial 1988 Basel Accord through to the present-day Basel 2.5 and Basel 3. From this it is easy to see how regulators (and banks) took well-intentioned decisions at the time; it is only when you stand back and look at the current state that you see how we have ended up with completely the wrong balance. The progression from the simple risk weights of the 1988 Accord, through the internal models approach of Basel 2, to the raft of new measures under Basel 2.5 and 3 has left the industry too focused on risk sensitivity. Too much attention is given to calculating capital with models of spurious accuracy, at the expense of something simple enough to understand and able to be compared across institutions. Just reading the text of the latest Basel documents would take a week (and a damp towel around the head), let alone actually implementing it across the web of systems found in these large banks.

If this advanced modelling were actually getting to the right answer I could live with the complexity. However, having first-hand knowledge of how these calculations work in practice, my view is that most of this complexity has no real business value. That view is based on the "use test", one of the key principles banks must satisfy to gain advanced model status: model approval depends on the bank actually using the measures as part of its management of risk. The reality, unfortunately, is that these measures are used purely for the calculation of capital adequacy numbers, and their use for the management of risk is paid little more than lip service.

How do we redress this balance? The Basel paper raises the possibility of going back to standardised measures for everything, and in practice there is already some movement among regulators towards at least showing advanced model results alongside standardised results. In my view this is not the right solution. Whilst I don't agree with the current implementation of the "use test", I do agree with the principles on which it is based. The risk departments of banks are genuinely trying to manage the risk of the bank in the best way possible, and they have a host of very good tools which allow them to do this (just not the ones currently prescribed by Basel). For the purposes of capital adequacy, the key tool would be stress testing, which all banks already perform as part of everyday risk management and whose results could be used to estimate required capital charges. For the most part these stress tests are pretty simple, being straightforward shocks (based on some historical view of worst-case moves), and so would pass the simplicity test reasonably well. Of course I can see the quants wanting to start creating all sorts of fancy correlation models to aggregate the results of these stress tests, but I would suggest they be ignored in favour of a simple sum (call it my "multiplier" effect, for want of a better term). These are exactly the type of stresses that risk managers use in practice and discuss regularly with heads of business, and therefore they would pass the "use test" with flying colours. In addition, they are for the most part simple shocks to existing risk sensitivities and therefore redress the balance back towards simplicity.
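
To give a flavour of how simple this could be, here is a minimal sketch (in Python) of the approach described above: simple worst-case shocks applied to a bank's existing risk sensitivities, with the stressed losses aggregated by a plain sum rather than a correlation model. Every risk factor name, sensitivity and shock size below is an illustrative assumption, not a real figure or regulatory parameter.

```python
# Minimal sketch: estimate a capital charge by applying simple worst-case
# shocks to existing risk sensitivities and taking a plain sum of the losses.
# All names and numbers are illustrative assumptions, not real figures.

# Sensitivities the bank already uses day to day:
# loss (in GBP) per 1% adverse move in each risk factor.
sensitivities = {
    "equity_index": 2_000_000,
    "credit_spreads": 1_500_000,
    "gbp_usd_fx": 800_000,
}

# Simple shocks based on a historical view of worst-case moves (% adverse move).
worst_case_shocks = {
    "equity_index": 30.0,
    "credit_spreads": 12.0,
    "gbp_usd_fx": 15.0,
}

def stressed_losses(sens, shocks):
    """Apply each shock to the matching sensitivity; no correlation modelling."""
    return {factor: sens[factor] * shocks[factor] for factor in sens}

def capital_estimate(losses):
    """The 'simple sum' aggregation: just add the stressed losses together."""
    return sum(losses.values())

losses = stressed_losses(sensitivities, worst_case_shocks)
print(f"Estimated capital charge: GBP {capital_estimate(losses):,.0f}")
```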

What about comparability, I hear you asking? There could be a solution here too. Regulators are already asking banks to run regular stress tests, and as part of that they specify the types of scenario banks should use. With a little more co-ordination and effort, these scenarios could evolve into a more detailed set of shocks applied consistently across banks; because these are simple shocks to existing sensitivities, there is significantly less model risk and the results would be more comparable.
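
To make the comparability point concrete, here is a hypothetical sketch in which one regulator-specified shock set is applied to the reported sensitivities of two banks; the bank names, risk factors and figures are all made up for the example.

```python
# Hypothetical sketch: a single regulator-specified shock set applied to each
# bank's reported sensitivities, so the resulting figures are directly
# comparable. All names and numbers are illustrative assumptions.

regulator_shocks = {  # % adverse move per risk factor, common to all banks
    "equity_index": 30.0,
    "credit_spreads": 12.0,
    "gbp_usd_fx": 15.0,
}

reported_sensitivities = {  # loss (GBP) per 1% adverse move, per bank
    "Bank A": {"equity_index": 2_000_000, "credit_spreads": 1_500_000, "gbp_usd_fx": 800_000},
    "Bank B": {"equity_index": 3_500_000, "credit_spreads": 600_000, "gbp_usd_fx": 1_200_000},
}

for bank, sens in reported_sensitivities.items():
    total = sum(sens[factor] * shock for factor, shock in regulator_shocks.items())
    print(f"{bank}: stressed loss under the common scenario = GBP {total:,.0f}")
```

Because the shocks and the aggregation rule are identical for every bank, differences in the output reflect differences in the underlying positions rather than differences in modelling choices.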

Are we likely to end up here? In my view, probably not, given the momentum behind the current trajectory. What are your thoughts?

About the Author

Gavin Slater

Gavin has 22 years' experience in consulting and investment banking. In his previous roles he held positions including Global Head of Risk Infrastructure, with a focus on market and credit risk. Gavin has managed large and complex delivery organisations and was the primary contact for Risk's relationships with key regulators, in particular the FSA in the UK and the Fed and SEC in the US.


About Stream Financial

We enjoy the intractable business problems you hate. We apply our experience and leading-edge software solutions to help your organisation realise its full potential. Our approach is effective for business processes across all organisations and business sectors.

