Why visualising Data Quality issues can overcome institutional inertia.

A recurring problem with resolving cross-domain data quality issues is the asymmetry of benefits. Essentially, the data producer (responsible for entering or uploading data at the point of collection) has little visibility of how the quality of that data will affect the data consumer (the person or people who use it). The utility of data is often scuppered at this collection point, as the producer – understandably – will apply only the business and quality rules relating to their own use cases.

This is not simple to fix. I used to believe that merely showing people the implications of these actions would begin to change behaviours. Of course, it often doesn't, because that behaviour is driven by prioritising the right results for a single group or department. 'If it's good enough for us, it's good enough' becomes the maxim. That's the asymmetry of benefits right there, with all its implications: multiple rounds of data cleansing, lack of trust, poor master data, and so on.

Any kind of sustainable Data Governance framework has to find a solution. There are many approaches, depending on mandate, culture and – often – how many people care about fixing the problem. This is where the UCISA HE Capability Model can really help. The example below focusses on issues around a non-mastered Course List. We've chosen this anonymised scenario as it's an issue which crops up at many universities.

Course List Capability Heat Map – please click on the link to view the whole model with the heat map applied.

We already had a well-managed Data Quality issue log, which has been themed to pull together multiple similar issues into something which feels like a business case. We're able to articulate a number of metrics around cost, time, risk and so on to make a strong argument for change. The issue is how to present that. I'm not a fan of writing stuff no one is going to read, and data issues tend to have many different and diverse audiences. Hence going down the visual route. This took the form of considering each capability in terms of where data was captured and used. Clearly this particular issue 'lights up' the model in every section, showing how important it has become to resolve it.
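The aggregation step can be sketched in code. This is a minimal, hypothetical illustration – the issue themes, capability names and cost figures below are invented for the example and are not drawn from the UCISA model or any real issue log – but it shows the idea: group themed issues by the capabilities where the data is captured or used, then count and cost them to see which capabilities 'light up'.

```python
from collections import Counter

# Hypothetical themed issue-log entries. Each record notes which
# capability the issue touches and an indicative remediation cost.
issues = [
    {"theme": "Course List not mastered", "capability": "Admissions", "cost_gbp": 4000},
    {"theme": "Course List not mastered", "capability": "Timetabling", "cost_gbp": 2500},
    {"theme": "Course List not mastered", "capability": "Student Records", "cost_gbp": 6000},
    {"theme": "Duplicate supplier records", "capability": "Finance", "cost_gbp": 1200},
]

def heat_map(issues, theme):
    """For one theme, count how often each capability is affected
    and total the indicative cost - the basis of the heat map."""
    counts = Counter()
    total_cost = 0
    for issue in issues:
        if issue["theme"] == theme:
            counts[issue["capability"]] += 1
            total_cost += issue["cost_gbp"]
    return counts, total_cost

counts, total_cost = heat_map(issues, "Course List not mastered")
for capability, n in counts.most_common():
    # A crude text 'heat' bar; in practice this shading is applied
    # to the capability model spreadsheet itself.
    print(f"{capability}: {'#' * n}")
print(f"Indicative cost: £{total_cost}")
```

In our case the shading was applied by hand to the capability model spreadsheet, but the logic is the same: the more capabilities a theme touches, the stronger the cross-institutional case for fixing it.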

I don't want to overstate the success of this approach, as it is still early days. But there are two repeatable benefits. Firstly, it cements the Capability Model as an artefact that can act as a 'base' for different types of business problems. Secondly, it powerfully demonstrates how many capabilities (which decompose into people – staff and students – processes, data management and technology) are adversely affected by a siloed approach. It really does summarise the axiom that the needs of the many outweigh the needs of the few!

So much of Data Governance in its early implementation is about fixing problems like this. It brings the owners, stewards, producers and consumers together. They solve the problem collaboratively and collectively. The outcome shows the value of doing things differently. The first one is the hardest because until there is publicised success on a resonating issue, treating data as an institution-wide asset still feels very theoretical.

The serendipity of improving data utility through the utility of another model is a fantastic thing.  From an initial post on LinkedIn explaining how we did this, I’ve been asked for a downloadable version. This is slightly complicated because of the limitations of the spreadsheet. If there’s sufficient interest – please let me know in the comments – I’ll try and hack something releasable. Although my current thinking is that time would be better spent creating a flexible application….

January 28th, 2019 | What I've done
