The answer should be a firm yes, but first let me explain why it is often a definite no.
- Assessment scores are amongst the dirtiest data you can collect, with most methodologies being entirely qualitative, and
- Completing the assessment may give you a grade or a level, but other than printing it out and sticking it on the wall, what are you going to do with it?
This was the problem we faced five years ago when assessing the data capability of the UK Higher Education sector. The original proposal envisaged publishing a sector-wide maturity assessment backed by an accompanying set of recommendations. My view was that, without a framework for that assessment to operate in, the cost of collection was not commensurate with the value the sector would receive.
This, then, was the genesis of the HEDIIP Data Capability framework, with as-is and to-be assessments bookending a pragmatic activity plan to close the gaps. So, to take the second point first, assessments are valuable if:
- They form part of a wider approach to improving the quality and value of the data asset
- The to-be state demonstrates visible and measurable links to wider organisational objectives
- The as-is state is defensible and accepted.
That last bullet is particularly important. I really don’t care what the as-is state looks like. This may sound counter-intuitive, but so many disparate views of data quality, conflicting group agendas and alternative individual perspectives make creating a ‘perfect’ assessment impractical.
I’m more interested in a spirited debate, a sharing of ideas and a coalescence around the idea that the current state is not sustainable. We need to get everyone to a similar starting point through multiple workshops to gain the widest stakeholder input. Data is an organisation-wide asset, so everyone needs to have their say. Keep it brief, though: the value curve drops off exponentially when you try to perfect an always-shifting as-is state.
I hope we’ve established, then, that a maturity assessment has a place in your Data Governance initiative. I tend to place it up front as part of my discovery / business case phase. So which one to use? There are quite a few out there. I’ve used the (subscription-based) CMMI Data Maturity kit a number of times; it’s very time-intensive, though, and covers a lot of areas which may not need assessing for a DG-type programme.
There are a number of other frameworks I’m less familiar with, so I’ll not include them here. I’ve also written a couple, including the HEDIIP one highlighted earlier. My preference is for something pragmatic, easy to complete, simple to analyse and free to use!
The Stanford EDU model fits perfectly here. Although it was developed back in 2011, it still stands up in terms of getting to the heart of quality, accountability and sustainability. Originally supplied as a PDF, it has since been removed from the Stanford website, so I’ve created a spreadsheet-based template which supports quick scoring and simple visualisation.
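If you’d rather script the scoring than work in a spreadsheet, the idea behind the template is simple enough to sketch in a few lines of Python. The dimension names, the 1–5 maturity scale and the scores below are illustrative assumptions on my part, not the template’s actual contents:

```python
# A minimal sketch of "quick scoring": average workshop question scores into
# one maturity level per dimension, then draw a very simple text chart.
from statistics import mean

# Illustrative dimensions and question scores (1-5 scale assumed).
as_is_scores = {
    "Data Quality":   [2, 3, 2, 2],
    "Accountability": [1, 2, 2, 1],
    "Sustainability": [2, 2, 3, 2],
}

def dimension_levels(scores):
    """Average the question scores to one maturity level per dimension."""
    return {dim: round(mean(vals), 1) for dim, vals in scores.items()}

def text_chart(levels, max_level=5):
    """A very simple text 'visualisation': one bar per dimension."""
    for dim, level in levels.items():
        filled = int(round(level))
        print(f"{dim:<15} [{'#' * filled}{'.' * (max_level - filled)}] {level}")

text_chart(dimension_levels(as_is_scores))
```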
My approach is to create two versions: the as-is and the to-be. I then use the gap between them to fix the scope of the activity and prioritise implementation. This gap analysis essentially forms most of the activity plan, including who needs to be involved. There’s far more to it than that of course, but as a generic approach it’s worked well for me over a number of years and projects.
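As a rough illustration of that gap analysis step (the dimensions, target scores and ranking rule here are my own assumptions, not a prescribed method), the core of it boils down to scoring both versions and ranking the dimensions by the size of the gap:

```python
# Illustrative as-is and to-be scores; substitute the output of your own
# assessments.
as_is = {"Data Quality": 2.2, "Accountability": 1.5, "Sustainability": 2.2}
to_be = {"Data Quality": 4.0, "Accountability": 3.5, "Sustainability": 3.0}

# Gap per dimension: how far the to-be target sits above the current state.
gaps = {dim: to_be[dim] - as_is[dim] for dim in as_is}

# The largest gaps become the first candidates for the activity plan.
for dim, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{dim:<15} as-is {as_is[dim]:.1f} -> to-be {to_be[dim]:.1f}  gap {gap:.1f}")
```

In practice the ordering also needs to weigh effort, dependencies and organisational appetite, which is where the workshops earn their keep.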
If you’d like to read the ‘Call to Action’ report written on completion of our 2015 Higher Education assessment, you’ll find both summary and detailed versions hosted on the HESA website.
Finally, the template described here is in the community section and provided without warranty or recourse! I hope you find it useful. Drop me a line with any questions, or leave a comment here.