
Key Performance Indicators (KPIs) are an important ingredient in how we design Wellcome Collection’s digital services. But being overly focused on metrics can be counterproductive and lead to unintended consequences, just as having no data can lead to wasted effort and poor decision-making. In this article I discuss our approach to setting and using KPIs and how they fit into our wider strategy and working practice.
Our digital platform is part of Wellcome Collection: it is part of the experience we offer our visitors, whether or not they can physically visit us in London. As a department we’re therefore here to help deliver Wellcome Collection’s vision:
To challenge how we think and feel about health, by connecting science, medicine, life and art.
We do that by:
- Designing outstanding user-centric digital experiences to engage a large, diverse audience;
- Producing innovative, challenging and thought-provoking content; and
- Supporting researchers of all kinds in using Wellcome’s collections and research spaces.
And by trying to follow a series of principles on which we hang our work and our working practice, including the principle that:
We embrace decisions based on an understanding of our audiences
- Understanding our users is at the heart of everything we do;
- We spend time understanding what they do, what they want to do, what they can’t do, and we test our ideas with them;
- We consider those who use Wellcome Collection today and those who might;
- Our users decide what is good.
Within this framework we collect data that helps inform us about whether we’re on the right path to achieving our goals. We use qualitative and quantitative data to provide insight and give us confidence that our ideas, our designs and our content are improving our users’ experience of Wellcome Collection. The data we collect helps inform us and serves as evidence, rather than being the primary or sole driver of decisions. That is, we aim to be ‘data informed’ rather than ‘data driven’.
When we think about KPIs we don’t start with a number or a target, but with the question ‘who will do what differently as a result of what we do?’, i.e. what change we hope to make in the world. Only then do we think about what indicators we need to help us understand whether or not that change has occurred. We think of indicators as:
- Statements or questions around which we can collect evidence on a regular basis to show a trend;
- Something that can be used to illuminate progress towards an objective;
- Strong surrogates for impact;
- Limited in number and used as a package to tell us whether we are making a difference and capture real change.
Indicators therefore aren’t targets, they aren’t even data points and they don’t determine what we should do or why we should do it. But they do inform us and give us confidence that we’re moving in the right direction (or not) and help us understand how fast we’re progressing — velocity and direction of travel matter more, much more, than the destination. Indicators tell us what evidence we need to collect to help us understand whether or not we’re making progress towards our objectives and whether or not we’re making a difference. In that sense indicators are more akin to ‘definition of done’ but applied to objectives.
What counts as good evidence?
For a given indicator we need to collect evidence and data to help us decide whether or not we’re making progress. When thinking about what evidence to collect, we try to consider whether it is systematic, rigorous and credible. We seek a good balance of different methods to collect our evidence, including:
- Ask: targeted surveys; gathering representative stories from those who may experience change;
- Observe: systematic gathering and analysis, e.g. ethnographic studies or A/B tests;
- Infer: analysis of data to infer changes in behaviour, practices or policies, e.g. analysis of Google Analytics data;
- Count: metrics to monitor a trend over time e.g. reading time or unique users.
We also try to balance precision with effort — we aren’t looking for certainty, we’re looking for enough evidence to give us confidence that we’re making a change. It is important to remember that the map is not the territory.
We pull all of this together into our planning framework like this:
- Goal — who will do what differently as a result of what we do? / what changes as a result of what we do? Goals can be longer term and span multiple quarters or years. They are the why;
- Objectives — the things we want to achieve in pursuit of our goal in a quarter or year. These are the what;
- Key results — the things we believe we need to do to achieve an objective. Key results are measurable; they are the how;
- Indicators — what we need to know to inform us that change has occurred; they are how we measure delivery against the key results (and, by extension, progress towards our objectives and goals);
- Evidence — the data we collect throughout the quarter/year to report on the indicators.
For example, we have a goal (from our Access, Diversity and Inclusion strategy) of ensuring “there are no barriers to accessing Wellcome Collection”.
We might therefore have a Quarterly Objective to: Ensure D/deaf, blind and neurodiverse visitors to 183 Euston Road have the necessary access information to plan their visit so that they can do so confidently and know that they can come to our building.
Key Results might include: (1) ‘Visit us’ provides the information necessary to help plan a visit and to feel like a welcoming venue; (2) Exhibitions and events information pages always contain contextually relevant access information.
Indicators might be: Do D/deaf, blind and neurodiverse users have the information they need when planning a visit? And do they have the information they need to understand what to expect when visiting?
Evidence: Ethnographic research results with D/deaf, blind and neurodiverse visitors; test results of designs and content with D/deaf, blind and neurodiverse users.
The same process is applied when we think about longer term objectives, key results and indicators for the department.
Delivering digital products and services that people want means being ready to do it on their terms. Our designs and our content are no more than a hypothesis: our best (informed) guess at how to help our users achieve their goals, meet their needs and challenge how they think and feel about health. Our users will judge whether we’ve got that right; our challenge is to devise a good ‘map’ and collect the evidence to help us understand whether or not we’re succeeding, so that our decisions are better informed.