Regulators are turning up the heat on data quality management in financial institutions. Some regulations require named senior management ownership of the data governance function. Others come at the same issue indirectly, demanding proof of quality processes used by financial institutions to make their regulatory filings. Either way, the objective is the same – to require a named person to bear responsibility for the correctness of data used to run the business and construct routine regulatory filings. The message from politicians is clear – in the next crisis, if the filings were wrong, somebody is going to jail.
Regulators hope that requiring C-level ownership of data management will cause quality to trickle down. Empirically, this is not delivering much success in the field. We have heard of a global bank that now has 200 employees across countless departments, all with the job title “Chief Data Officer”. When regulators demand the name of an executive who will potentially go to jail if data is badly managed, the main result seems to be that the Big Four sell a lot of data governance consulting packages. Entire rooms of cabinets are now jammed with ring binders stuffed with newly minted process documentation. The problem is that process manuals improve nothing unless they are accompanied by genuine organizational change. This top-down orthodoxy is failing. Regulators want a concise, coherent answer when they ask, “show me how you know the data is right”. They certainly don’t want to be pointed to a cupboard full of swim-lane diagrams! Meanwhile, legacy data management software vendors continue to push their tired, one-size-fits-all, monolithic database platforms. We think the approach needs to be reversed completely, from top-down to bottom-up.
Of course, this does not mean that senior management may wash their hands of data management and simply delegate it to the department level. A coherent strategy must still be agreed. However, empowering knowledge workers to control their own affairs and demonstrate progress is always superior to imposing grandiose one-size-fits-all schemes on businesses that must be agile and innovative to thrive.
What we have learned in the field
We routinely work on data management projects where the implementation of Misato Data Hub kicks off a push for improved data quality. This is particularly true for regulatory reporting projects. The push almost always comes from the operations team, the real business owners, rather than top-down from the C-suite. All that is needed to ignite this drive is a healthy dose of transparency: a simple presentation of data quality problems and who they impact, at a departmental scale, free of gobbledegook jargon and not lost in the politics of finger-pointing across the enterprise.
Let’s get it off our chests
If you’ve ever had to work from poor-quality data produced by somebody else, you’ll know how deeply frustrating it is. A little venting of long-held frustrations can be healthy if the departments have the right tools to quantify and visualize the problem, and to demonstrate that they are improving incrementally. No matter what legacy enterprise data management software vendors or Big Four consultancies say, there is no silver bullet for data quality. If you cannot measure the problem in language the business understands, you cannot manage it. It takes leaders in individual departments putting aside grievances and committing to improve the service they provide to each other.
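Measuring the problem in business terms can start very simply: a couple of per-department scores that anyone can track week over week. The sketch below is purely illustrative, and the record fields, business rule, and sample data are our own assumptions, not a real schema or anything specific to Misato Data Hub.

```python
# Illustrative data-quality scoring: completeness and validity for a batch
# of records. Field names and the business rule are hypothetical examples.

def completeness(records, required_fields):
    """Fraction of records in which every required field is populated."""
    if not records:
        return 1.0
    ok = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return ok / len(records)

def validity(records, field, predicate):
    """Fraction of records whose field value passes a business rule."""
    if not records:
        return 1.0
    ok = sum(1 for r in records if predicate(r.get(field)))
    return ok / len(records)

# A tiny sample batch with deliberate defects (missing counterparty,
# negative notional) to show how the scores surface them.
trades = [
    {"trade_id": "T1", "notional": 1_000_000, "counterparty": "ACME"},
    {"trade_id": "T2", "notional": -5, "counterparty": ""},
    {"trade_id": "T3", "notional": 250_000, "counterparty": "GLOBEX"},
]

comp = completeness(trades, ["trade_id", "notional", "counterparty"])
valid = validity(trades, "notional",
                 lambda v: isinstance(v, (int, float)) and v > 0)
print(f"completeness: {comp:.0%}, notional validity: {valid:.0%}")
```

Numbers like these are the "language the business understands": a department can say "counterparty completeness went from 67% to 95% this quarter" without any jargon at all.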
The truth is, once implemented, a modern data management solution is only the beginning of the journey to cracking enterprise-wide data quality. Only when departmental houses are in order can they be linked together and SLAs monitored across multiple departments. The right solution keeps quality and transparency foremost in practitioners’ minds long after the consultants have been paid their fees, packed up and gone.
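Once each department publishes its own quality scores, cross-department SLA monitoring reduces to a mechanical check. The sketch below is a minimal illustration of that idea; the department names, deadlines, and thresholds are invented for the example and do not describe any real deployment or product feature.

```python
# Illustrative cross-department SLA check. Department names, deadlines,
# and quality thresholds are hypothetical examples.
from datetime import time

SLAS = {
    # department: (delivery deadline, minimum quality score)
    "trading-ops": (time(17, 0), 0.98),
    "settlements": (time(18, 30), 0.95),
}

def check_sla(department, delivered_at, quality_score):
    """Return a list of human-readable SLA breaches (empty if compliant)."""
    deadline, min_quality = SLAS[department]
    breaches = []
    if delivered_at > deadline:
        breaches.append(
            f"{department}: delivered {delivered_at} after {deadline} deadline"
        )
    if quality_score < min_quality:
        breaches.append(
            f"{department}: quality {quality_score:.1%} "
            f"below {min_quality:.1%} target"
        )
    return breaches

print(check_sla("trading-ops", time(16, 45), 0.99))  # compliant
print(check_sla("settlements", time(19, 0), 0.93))   # late and below target
```

The point is not the code but the sequencing: a check like this is only meaningful after each department already trusts its own numbers.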