BCBS239: A Bad Case of Deja Vu

 

On 23rd January 2015 the BIS published its progress report on BCBS239, the Principles for Effective Risk Data Aggregation and Risk Reporting. The report was striking in a number of ways. First, despite significant investment, there was only marginal improvement in the banks' assessment of their ability to meet the principles, and worryingly, in some cases banks reported a downgrade of their abilities. Second, banks had failed to recognise the fundamental importance of governance and architecture in ensuring overall compliance with all the principles, leaving significant reliance on manual processes and workarounds.

 

Digging a little deeper into the report, the principles that received the lowest ratings were data architecture/IT infrastructure (principle 2), adaptability (principle 6) and accuracy/integrity (principle 3). The banks largely attributed their low ratings to delays in initiating or implementing complex, large-scale strategic IT infrastructure projects.


Another interesting nugget buried in the report, and one I return to later in this post, is that one of the challenges reported by the banks was that IT infrastructure, while adequate in normal times, is not adequate in stress or crisis situations.

 

The original BCBS239 draft consultation paper was published in June 2012, and banks have since ploughed millions of pounds into remediation programmes, yet here we are, two and a half years later, reporting little progress. So what is the problem?

 

The problem?

The typical problem, according to the banks, is that their current IT infrastructure is too expensive to build and operate, and too complex and difficult to change, producing data that is of poor quality, not timely, and opaque due to the large numbers of adjustments and manual workarounds.

 

These problems are exacerbated by a business environment in which revenues are declining rapidly, requiring fundamental changes to the banks' business models and associated infrastructure. Simultaneously, the regulatory environment is changing dramatically, with many new requirements creating additional demands on this already expensive and inefficient infrastructure.

 

What is the current solution being implemented by banks?

The banks typically have two distinct approaches to this type of reporting:

  1. Regular reporting using centralised back office systems. These reporting processes are largely automated but inflexible, inaccurate and inconsistent, due to the number of adjustment processes that take place.

  2. In parallel, banks have developed ad hoc reporting solutions to cater for interim needs that are not supported on these existing reporting platforms, instead using manual queries on source systems, often combined with spreadsheet aggregation. These reporting processes are fragile, by which I mean manual, with poorly defined operational processes supporting them. However, they use the correct data sources.

The reaction from the banks as part of their BCBS239 initiatives has been to initiate large-scale IT and business change programmes to address the issues inherent in the regular reporting process, in particular by building consolidated IT platforms across multiple risk silos and in some cases across both risk and finance. These IT programmes are accompanied by business change programmes to enhance the operational reporting processes within these functions, including:

  • Automating existing manual processes

  • Identifying data owners and stewards

  • Documenting processes, including capturing terminologies in data dictionaries

  • Developing enhanced data quality processes

  • Enhancing governance mechanisms to ensure that data quality issues are addressed and remedied in a controlled manner

The ad hoc reporting processes are deemed to be interim measures, with longer-term plans to migrate them onto the remediated regular reporting process, eliminating the need for the current duplication.

 

What is wrong with the current solutions?

To the untrained eye the solution described above appears entirely logical, but as anyone who has spent time in the reporting function of a globally significant bank will know, this approach is the same one that has been tried for the past 20 or more years without success. A quote often attributed to Albert Einstein springs to mind: “The definition of insanity is doing the same thing over and over and expecting a different result.” There are those who believe that this time it will be different because BCBS239 has created more focus from senior management, which will ensure success. Once again, years of experience in this field tells me otherwise.

 

It is my belief that the current proposed solutions are flawed for two main reasons:

  • The world has changed significantly since banks implemented their existing reporting infrastructure and operating model

  • These programmes are treating the symptoms, not the cause of the problems

What is new?

The existing regular reporting infrastructure of the banks was developed during a time of rapid expansion in new businesses, geographies, legal entities and systems to support growth within the front office. Firm-wide reporting was supported by the back office through centralised systems within each back office function, usually in the form of data feeds passed through Extract-Transform-Load (ETL) processes into each back office system. This firm-wide reporting was used to support very specific functional views within risk, finance, compliance etc., rather than strategic business development.

 

Senior executives supported strategy decisions using judgement and macro-level views of the external business environment, which led to rapid expansion into new high-margin businesses (often in sophisticated structured products) and to simply adding new feeds to the reporting process without regard for how these could be handled downstream.

 

But things have changed: post the financial crisis the high-margin products have largely been eliminated, and the volumes (and profits) in the remaining “vanilla” products are significantly reduced. More importantly, there is now a raft of new regulatory rules mandated by the collective regulatory agencies throughout the G20. Many banks began addressing each of these regulatory rules separately, but most have now brought these initiatives under some common structure. Despite this, very few have identified the single thread that persists through all of these regulations: the need for near real-time global views across the entire organisation that can be aggregated across many different dimensions. The need for near real-time views is driven by the fact that some of the regulatory rules require business decisions to be made against these global views, for example in determining whether potential trades fall under the remit of the Volcker Rule or not. There is also an increasing need for near real-time global views for business purposes, in particular those driven by crisis events.

 

This need for near real-time global views, with the ability to aggregate across different dimensions, is a significant change for organisations that have been accustomed to less frequent and more static reporting, and it is the catalyst that will force banks to re-assess their current approach.

 

What is the root cause?

The problems of current IT infrastructure being too expensive to build and operate, too complex and difficult to change, with data that is of poor quality, opaque and not timely due to adjustments and manual workarounds, are symptoms and not the root cause.

 

The root cause is that data is copied into large centralised systems. It is these copies that create the need for adjustments to the data in downstream systems, adjustments supported by large numbers of operational staff. Often these copies are pre-aggregated, which results in portfolio-level adjustments that are inconsistent across systems that aggregate the data along different dimensions. These large centralised systems also have slow release cycles and cannot be changed without co-ordinating data feeds across all upstream systems, which makes them highly inflexible.
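To make the point concrete, the toy example below (hypothetical figures, Python used purely for illustration) shows how two pre-aggregated copies of the same source trades, cut along different dimensions, disagree as soon as a manual adjustment is booked in only one of them.

```python
# Minimal sketch (hypothetical figures) of how pre-aggregated copies drift apart.
# Two downstream systems each hold their own aggregate of the same source trades,
# cut across different dimensions; a manual adjustment lands in only one of them.

source_trades = [
    {"desk": "rates", "country": "UK", "exposure": 100.0},
    {"desk": "rates", "country": "US", "exposure": 250.0},
    {"desk": "credit", "country": "UK", "exposure": 175.0},
]

# Copy 1: a risk system aggregates by desk.
by_desk = {}
for t in source_trades:
    by_desk[t["desk"]] = by_desk.get(t["desk"], 0.0) + t["exposure"]

# Copy 2: a finance system aggregates by country.
by_country = {}
for t in source_trades:
    by_country[t["country"]] = by_country.get(t["country"], 0.0) + t["exposure"]

# A portfolio-level adjustment is booked manually in the risk copy only.
by_desk["rates"] -= 30.0  # e.g. correcting a mis-booked trade

print(sum(by_desk.values()))     # 495.0
print(sum(by_country.values()))  # 525.0 -- the two "global" totals no longer agree
```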

 

Historically, this feeds-based approach into centralised systems was acceptable because there was:

  • less need for global aggregated views close to real-time - most reporting tended to stay within businesses, and global views were focused on specific back office functions, e.g. risk

  • less need for reporting under crisis or stressed market conditions

The second and more important root cause is that the culture of the banks allowed an operating model in which the front office treated the back office as “second class citizens”. The requirements of the back office were met by data feeds that were more akin to “data dumps” than correct and accurate reflections of the trades actually booked in front office systems.

 

Are there any alternatives?

Given this assessment, are there alternatives that can meet the need for near real-time global views and aggregations without copying the data?

 

The answer is yes, and strangely enough, it is staring most banks in the face without them even noticing. The answer is to move the logic, not the data! This is exactly the approach already used in the ad hoc reporting processes that banks rely on for interim needs, including much of the stressed-market reporting that enabled them to survive the financial crisis. It addresses the root cause by querying the original source data. This process, however, is considered too fragile to be relied upon for regular reporting and hence is dismissed as not a feasible option. I would argue that there is a serious business case for revisiting this assessment and that, by “industrialising” the ad hoc reporting process, the challenges raised by the banks in their BCBS239 assessments can be overcome.
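As a minimal sketch of what "move the logic, not the data" means in practice, the illustrative Python below assumes two hypothetical source systems that each answer the same aggregation question locally; only the small aggregated results are combined, and nothing is copied into a central store.

```python
# Minimal sketch of "move the logic, not the data": each (hypothetical) source
# system answers the same aggregation question locally, and only the small
# aggregated results travel to the requester -- no bulk copy into a central store.

def query_rates_system(as_of):
    # In practice this aggregation would run inside the front-office rates system.
    return {"UK": 350.0, "US": 250.0}

def query_credit_system(as_of):
    # Likewise, executed locally against the credit system's own data.
    return {"UK": 175.0}

SOURCES = [query_rates_system, query_credit_system]

def global_exposure_by_country(as_of):
    """Fan the query out to every source and combine the partial aggregates."""
    combined = {}
    for source in SOURCES:
        for country, exposure in source(as_of).items():
            combined[country] = combined.get(country, 0.0) + exposure
    return combined

print(global_exposure_by_country("2015-01-23"))  # {'UK': 525.0, 'US': 250.0}
```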

 

Why has this not been done before?

I would argue that there are two reasons why this was not considered before: first, the environment was not suitable for the change in culture required; and second, the technology to enable such an approach was not sufficiently developed.

 

As already described, senior management in the banks allowed a culture to develop in which back office units were treated as second class citizens. In the post-financial crisis era, however, the regulatory regime now provides the environment in which this cultural change can take effect. Senior management must change the culture so that the back office is seen as a customer of the front office rather than a second class citizen. This means that data must be made available within front office systems in a way that allows it to be queried directly, facilitating the near real-time global views required for business as well as regulatory purposes. Not only will this allow for more accurate and timely reporting, it also keeps accountability with the true data owners in the front office.

 

This culture change is not restricted to the front office. Banks are notoriously siloed organisations, and there is deep mistrust not only between the front office and back office but also between different back office functions. As a result, data lying outside the direct control of a particular function is not trusted, which is another driver behind the continued copying and proliferation of data through the organisation. This culture of mistrust needs to change to allow data within the organisation to be stored once but made available to many.

 

The technology has also historically not been available, in particular where extremely large volumes of data are scattered across many disparate technologies, businesses, locations and legal entities, making it difficult to extract and use this data productively. Data virtualisation technology can now handle data volumes that allow a direct query approach across disparate data at an enterprise level, with performance similar to, or better than, local copies.

 

One of the biggest concerns around this direct query approach is that it may compromise the performance of operational systems. Here too, advances in technology allow local caches to be built that protect the operational systems from direct queries while also enhancing performance, in effect making a slow system faster.
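As an illustration of the idea, the sketch below shows a simple read-through cache with a time-to-live placed in front of a hypothetical operational system; the names and the cache policy are assumptions for illustration, not a description of any particular product.

```python
# Minimal sketch of a read-through cache in front of an operational system:
# repeated global-view queries are served from the cache, so the source system
# sees at most one query per time-to-live window. Names are illustrative only.
import time

CACHE_TTL_SECONDS = 60
_cache = {}  # query key -> (timestamp, result)

def query_source_system(key):
    # Placeholder for the (potentially expensive) query against the live system.
    return {"exposure": 525.0}

def cached_query(key):
    now = time.time()
    entry = _cache.get(key)
    if entry and now - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                    # served locally, source untouched
    result = query_source_system(key)      # only goes to the source on a miss
    _cache[key] = (now, result)
    return result
```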

 

Another major concern is that disparate data comes in different formats, nomenclatures and structures, making it difficult to present back in the uniform manner required for global views across the organisation. Being realistic, these “translations” must occur and there is no avoiding the need to analyse and implement them. A direct query approach, however, simply moves the responsibility for this translation from a central team and system to the place where the data is owned and understood best: at source. Using this approach, data can be queried according to a globally harmonised schema, but the translation from local data schemas, nomenclatures and formats is performed on the fly within the local data sources. This allows the same local data source to be accessed many times for different business contexts.
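The sketch below illustrates the principle: a local source exposes its data through a translation performed at query time onto a harmonised schema. The field names, currency codes and mapping shown are invented for illustration only.

```python
# Minimal sketch of translating local records into a harmonised schema at the
# point of query, rather than in a central team's ETL. Field names and the
# harmonised schema shown here are assumptions for illustration.

LOCAL_TO_HARMONISED_CCY = {"GBP": "GBP", "STG": "GBP", "USD": "USD"}

def to_harmonised(local_record):
    """Map one local trade record onto the globally agreed field names/codes."""
    return {
        "trade_id": local_record["TradeRef"],
        "notional": float(local_record["Nominal"]),
        "currency": LOCAL_TO_HARMONISED_CCY[local_record["Ccy"]],
    }

def query_local_source(local_rows):
    # The translation happens on the fly, next to the people who own the data.
    return [to_harmonised(row) for row in local_rows]

rows = [{"TradeRef": "T-001", "Nominal": "1000000", "Ccy": "STG"}]
print(query_local_source(rows))
# [{'trade_id': 'T-001', 'notional': 1000000.0, 'currency': 'GBP'}]
```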

 

Another challenge raised in the BCBS239 update report relates to legal restrictions in some regions/countries that hinder the ability to obtain granular risk data. In the traditional approach where data is copied, these data privacy issues often result in anonymised data being stored centrally, which adds to the opacity of the reporting process. Using a direct query approach, the original data can still be queried directly, with only the results anonymised on the fly when reported back to a central location.
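A minimal sketch of this idea follows, with invented names and a simple hashing scheme standing in for whatever anonymisation a given jurisdiction actually requires.

```python
# Minimal sketch of anonymising on the fly: the granular data is queried where
# it legally resides, and only aggregated, pseudonymised results leave the
# jurisdiction. Hashing the counterparty name here is purely illustrative.
import hashlib

local_exposures = [
    {"counterparty": "ACME Holdings", "exposure": 120.0},
    {"counterparty": "ACME Holdings", "exposure": 80.0},
    {"counterparty": "Globex Ltd", "exposure": 50.0},
]

def anonymised_exposure_report(rows):
    totals = {}
    for row in rows:
        token = hashlib.sha256(row["counterparty"].encode()).hexdigest()[:12]
        totals[token] = totals.get(token, 0.0) + row["exposure"]
    return totals  # only tokens and totals are reported centrally

print(anonymised_exposure_report(local_exposures))
```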

 

Finally, and most importantly given the nature of the banks' responses to the BCBS239 update report, this direct query approach can be implemented incrementally, leveraging the existing legacy source systems. The data is accessed on demand rather than copied into large centralised systems. The underlying issue is that a large centralised persistent data store simply cannot be agile enough given the amount of change in the original feeding systems. Every time a change occurs in one of the feeding systems, the logistical process of co-ordinating testing and release across all feeder systems results in the complex, large-scale IT projects which are at the heart of the challenge reported by banks. A direct query approach allows source systems to be changed independently, creating an agile infrastructure that can adapt to the level of change created by the current business and regulatory environment.

 

At a crossroads

This approach is a radical departure from the thinking that has prevailed for many years, and it is understandable that banks will be hesitant to change. The slow progress witnessed since the original BIS consultation paper is evidence of how we often overestimate the speed at which the status quo will be challenged and changed, while underestimating the impact when it finally does change.

 

I think the banks are at a crossroads. The question is: will they continue down the same path as before, in the hope that the large strategic back office programmes finally deliver, or will they invest in streamlining and industrialising the ad hoc reporting processes which allowed them to survive the financial crisis, an approach which:

  • Fixes the root causes rather than treating the symptoms

  • Leverages existing infrastructure

  • Avoids big strategic technology projects

  • Removes redundant data and associated operational processes

  • Removes a large percentage of the cost base tied up in operational support and expensive IT and business change programmes

  • Increases agility

  • Delivers incrementally

  • Increases controls by retaining accountability with original data/process owners

Given these advantages, is it not time for large banks to try a new approach?

About the Author

Gavin Slater

Gavin has 22 years' experience in consulting and investment banking. In previous roles he held positions including Global Head of Risk Infrastructure, with a focus on Market and Credit Risk. Gavin has managed large and complex delivery organisations, as well as being the primary contact managing the overall relationship for Risk with key regulators, in particular the FSA in the UK and the Fed and SEC in the US.


About Stream Financial

We enjoy those intractable business problems you hate. We employ our experience and leading-edge software solutions to help your organisation realise its full potential. Our approach is widely effective for business processes across all organisations and business sectors.

