Fractional Data Governance

Working in the ‘New Normal’

By | March 31st, 2020 | What I've done |

Testing out the whiteboard. Optional Labrador available for all remote workshops! Let’s start by saying there are far more important things going on in the world right now. We’re in uncharted territory, with the concept of being on a ‘war footing’ not seeming too far-fetched. The efforts being made by our healthcare, retail, logistics and so many other sectors render what we do significantly less important. That said, how data is being used in this crisis is fascinating; we’ll come back to that in a later post. Our approach has been to control only what we can control and not [...]

Data Owners: first item on their agenda

By | November 25th, 2019 | What I've done |

Once we've identified, allocated and trained our Data Owners, then we're done, right? Wrong! This is exactly the time we need to be mentoring that group, as their role is not entirely intuitive. Why is that? Well, Data Owners should be members of your organisation's senior leadership team with functional accountability for a domain. That's a great start, but their data governance role extends beyond that and across the organisation. Inhabiting the owner role may even deliver a dis-benefit to their own functional area. That's a whole other post, but today I'm focussing on their other primary responsibility: empowering their stewards. The question [...]

The why, how and what of Data Governance

By | October 9th, 2019 | What I've done |

After my last post on ‘How to make the Data Governance business case’, I’ve had requests to share the full slide deck from the event that post originated from. The slides cover - briefly - the whole data journey plotted out in the schematic above. You can download them below. This was from my session at the excellent HESPA (Higher Education Strategic Planners Association) one-day Data Governance conference on September 28th, 2019. The next post will be on ‘how to value the data asset’. If you have questions or comments, please do get in touch.

Data Governance Business Case – what are the benefits?

By | October 4th, 2019 | What I've done |

As my colleague and fellow data professional Nicola Askham wrote in her last blog, it can be a ‘real struggle to get your data governance initiative approved in the first place’. She sets out the reasons why, and recommendations on how to overcome them. In this post, I’m going to dive into the detail of how to frame the benefits and risks within that business case. These have been developed over many years of creating signature-ready business cases as part of a wider Data Governance initiative or programme. What are the benefits and risks that make the case? Every organisation [...]

Incoming: three future blogs

By | September 27th, 2019 | What I've done |

I’ve been a little remiss in creating content and templates over the last six months. This is the joy and curse of being busy! However, after an excellent data governance conference arranged by HESPA, three recurring themes clearly need exploring, and a blog is a great place to do that. So over the next 4-6 weeks I’ll post entries to answer: What goes into a Data Governance business case, and how do I choose? What is the cost of bad data / the value of good data, and how can they be calculated? What skills are needed for Data Governance [...]

How do I assess change in terms of my data asset?

By | August 27th, 2019 | What I've done |

How many times have you been asked 'We have a new system being implemented next month, can you sort out the data?'. Okay, this might be a bit extreme, but the proposition holds. Data is rarely considered in the same way as the other asset classes - finance, staff and estates. This needs to change. Data is the fluid over which processes flow. If its impact is ignored or its quality is assumed, this fluid can quickly turn to grit. However, in many organisations, a lack of change governance is incompatible with a rigorous impact analysis approach, leaving a combination of pragmatism [...]

Data delays shouldn’t mean data disorganisation

By | July 11th, 2019 | What I've done |

I know it’s been a while since I’ve posted anything. This is not by any means due to a lack of content; it’s more a lack of time, or - to be more accurate - time management! I was, however, moved to write a short article on why Universities should persevere with their initiatives to improve the quality of the data asset, in spite of the news this week that Data Futures has been put back at least another year. WONKHE were kind enough to publish it on their website. Hopefully this will kick-start my approach to dealing with the [...]

Are Data Maturity assessments worth the effort?

By | February 12th, 2019 | What I've done |

The answer should be a firm yes, but first let me explain why it is often a definite no. Assessment scores are amongst the dirtiest data you can collect, with most methodologies being entirely qualitative. Completing the assessment may give you a grade or a level, but other than printing it out and sticking it on the wall, what are you going to do with it? This was the problem we faced five years ago when assessing the data capability of the UK Higher Education sector. The original proposal envisaged publishing a sector-wide maturity assessment backed by an accompanying set [...]

Restricted content

By | February 12th, 2019 | Concepts and Templates, Members Area |

This template is a version of the Stanford EDU data maturity assessment model. It is formatted to allow each question to be assessed via a drop-down box. When completed, a number of graphs representing the scores will be available for review. The original material is copyright of Stanford EDU, but free to use. Please find more details here.
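To illustrate the mechanics the template automates, here is a minimal sketch of turning drop-down answers into per-dimension scores. The dimensions, questions and 1-5 scale below are illustrative assumptions, not the Stanford EDU model itself.

```python
# Hypothetical maturity assessment scoring: each question is answered
# on a 1-5 scale via a drop-down; dimensions are averaged for charting.
from statistics import mean

# Illustrative dimensions and answers only - not the Stanford model.
responses = {
    "Data Governance": [3, 2, 4],
    "Data Quality":    [2, 2, 3],
    "Metadata":        [1, 2, 2],
}

def dimension_scores(responses):
    """Average the 1-5 answers within each dimension, to one decimal place."""
    return {dim: round(mean(scores), 1) for dim, scores in responses.items()}

print(dimension_scores(responses))
```

The per-dimension averages are the values a radar or bar chart in the template would plot.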

Why visualising Data Quality issues can overcome institutional inertia

By | January 28th, 2019 | What I've done |

A recurring problem with resolving cross-domain data quality issues is the asymmetry of benefits. Essentially, the data producer (responsible for entering or uploading data at the point of collection) has little visibility of how the quality of that data will affect the data consumer (the person or persons who use it). The utility of data is often scuppered at this collection point, as the producer - understandably - will apply only the business and quality rules relating to their own use cases. This is not simple to fix. I used to believe that merely showing people the implications of these actions [...]
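One common way to make a data quality issue visible to producers is a simple per-field completeness measure across a batch of records. The field names and records below are purely illustrative assumptions, not taken from any real dataset.

```python
# Hypothetical student records - None marks a missing value a downstream
# consumer would need but the producer may not notice.
records = [
    {"student_id": "S1", "email": "a@x.ac.uk", "term_address": None},
    {"student_id": "S2", "email": None,        "term_address": None},
    {"student_id": "S3", "email": "c@x.ac.uk", "term_address": "12 High St"},
]

def completeness(records):
    """Percentage of non-missing values per field, rounded to whole percents."""
    fields = records[0].keys()
    return {
        f: round(100 * sum(r[f] is not None for r in records) / len(records))
        for f in fields
    }

print(completeness(records))
```

Charting figures like these per collection point gives producers the consumer's-eye view of the data that the post argues they usually lack.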
