The answer should be a firm yes, but first let me explain why it is often a definite no.

  • Assessment scores are amongst the dirtiest data you can collect, with most methodologies being entirely qualitative
  • Completing the assessment may give you a grade or a level, but other than printing it out and sticking it on the wall, what do you do with it?

The HEDIIP programme originally envisaged publishing a data maturity assessment across the HE sector. My view was that without a framework for the assessment to operate in, the cost of collection was not commensurate with the value we would receive. This was the genesis of the Data Capability framework, with as-is and to-be assessments bookending a pragmatic activity plan to close the gaps. So, to take the second point first, assessments are valuable if:

  • They form part of a wider approach to improving the data asset
  • The to-be state demonstrates visible and measurable links to wider objectives
  • The as-is state is defensible and accepted.

That last bullet is really important. I actually don’t much care what the as-is state looks like. This may sound counter-intuitive, but there are so many views of data, personal and group agendas, and individual issues that creating a ‘perfect’ assessment is not practical. I’m more interested in a spirited debate, a sharing of ideas and a coalescence around the idea that the current state is not sustainable. We need to get everyone to a similar starting point by running workshops to gain the widest stakeholder input. Data is an institution-wide asset, so everyone needs to have their say. Keep it brief, though: the value curve drops off exponentially when you try to pin down an always-shifting as-is state.

I hope we’ve established that a maturity assessment has a place in your Data Governance initiative. I tend to place it up front as part of my discovery phase. So which one to use? There are quite a few out there. I’ve used the (subscription-based) CMMI Data Maturity kit a number of times; it’s pretty time-intensive, though, and covers a lot of areas that may not need assessing for a DG-type programme. I’ve written a couple myself, including the HEDIIP one mentioned earlier. My preference, though, is something open source with real-world experience of implementation behind it.

The Stanford EDU model fits perfectly here. It’s worth reading about how the work came about and how it was used. Although it was developed back in 2011, it stands up well in terms of getting to the heart of quality, accountability and sustainability. It was originally supplied as a PDF, so obviously I felt the urge to transform it into a spreadsheet, with some easy-to-print graphs produced once the assessment is complete. That’s the document linked to this post.

My approach is to create two versions, the as-is and the to-be, and use these to fix the scope of the activity and help to prioritise it. The gap analysis essentially forms most of the activity plan, including who needs to be involved. There’s far more to it than that, of course, but as a generic approach it’s worked well for me over a number of years and projects.
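The gap-analysis step is, at its simplest, a subtraction: score each dimension in the as-is and to-be assessments, take the difference, and order the activity plan by the size of the gap. A minimal sketch in Python, where the dimension names and scores are purely illustrative placeholders (not values from the Stanford model):

```python
# Illustrative as-is and to-be maturity scores per dimension (hypothetical data).
as_is = {"Quality": 2, "Accountability": 1, "Sustainability": 2, "Policy": 3}
to_be = {"Quality": 4, "Accountability": 4, "Sustainability": 3, "Policy": 3}

# Gap per dimension: positive values need activity; zero means already at target.
gaps = {dim: to_be[dim] - as_is[dim] for dim in as_is}

# Prioritise the plan by largest gap first, dropping dimensions with no gap.
plan = sorted((d for d, g in gaps.items() if g > 0),
              key=lambda d: gaps[d], reverse=True)
print(plan)  # → ['Accountability', 'Quality', 'Sustainability']
```

In practice the spreadsheet does this for you, and the ordering is only a starting point for prioritisation, since some small gaps are prerequisites for closing larger ones.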

Anyway, as ever, it’s in the community section and provided without warranty or recourse! I hope you find it useful. Drop me a line with questions, or leave a comment here.