At Force2019 the other day, the one session I really wanted to see, but missed, was Dr. Elizabeth Gadd's talk on responsible metrics.
She has posted her slides here: Responsible metrics: what's the state of the art? It's a great deck, and I highly encourage reading through it.
My takeaways from reading through them are the following:
- Misapplication of metrics is dangerous: it leads to stress, and has led to some tragic incidents.
- Misapplication of metrics also leads to bad decisions.
- Consider the "advise-police-judge" spectrum.
- Get senior leadership to own this.
She also introduces the INORMS SCOPE model (https://inorms.net/activities/research-evaluation-working-group/), which starts from your values and moves from there towards a model of evaluation.
- Beware using quantitative indicators as a proxy for qualitative things.
- Citations do not equal quality.
- Ranking position does not equal excellence.
She advises probing the potential negative effects of measurement before implementing a measurement practice.
I also love her image of the research evaluation food chain.
I’ve recently been reading about outcomes, and about using outcomes in preference to outputs. In this webinar Josh Seiden discusses the topic at an approachable level of detail (Webinar: Outcomes over output with Josh Seiden - YouTube). Looking at the state we are in with research assessment, it seems that what is often measured is outputs rather than outcomes. Furthermore, the kinds of outputs that people look at are very one-sided. Josh talks about using two-sided outcomes to ensure that you don’t create incentives for building things that ultimately have negative impacts on the business. An example he gives in the Q&A is about driving visits to a page. You might spend a lot on driving traffic, but if you don’t check that the people landing on the page are having good interactions there, then you have created an outcome, though one that is ultimately futile.
In research we often have a system that rewards citation, publication, and the awarding of grants, but that is rarely self-aware enough to set up those rewards in a way that looks at two-sided outcomes. I don’t think this is a way of looking at the world for which a specific product can be created; it takes nuance, and buy-in from the communities involved. Many different people will have their own views on the ultimate outcomes they are shooting for. I guess a good place to start is simply by raising awareness, and by trying to make spaces where we can discuss these issues more.