Social mobile and the missing metrics

Scenario 1: Five hundred people gather for three days. They talk, they discuss, they share and they learn. And then they leave. Some stay in touch; others have picked up enough to start a project of their own. Others just leave with a satisfied curiosity, others with the odd new blog post behind them.

Scenario 2: A charitable foundation funds the creation of a new mobile tool. Over a one-year period there is software development, a new website, user testing and roll-out.

Scenario 3: A university professor embarks on a piece of field-based research to examine the impact of a mobile-based health initiative in Africa. He or she writes a paper, highlights what did and didn’t work, gets it published and presents it at a conference.

Question: What do these three scenarios have in common?
Answer: It’s unlikely we’ll ever know their full, or real, impact.

Let’s assume, for one moment, that everyone working in social mobile wants to see their work have real, tangible impact on the ground. That would equate to:

  • A patient receiving health information through their phone which can be directly linked to an improvement in their health, or to their likelihood of staying alive
  • A farmer receiving agricultural information which can be directly linked to better family nutrition, or to an increase in income or standard of living
  • A team of human rights activists reporting violations which can be directly linked to the fall of an evil regime, the passing of new legislation, or the saving of a specific person’s life
  • And so on…

Fine. But are things ever this clear-cut? Ever this black and white?

The social mobile world is full of anecdotes: qualitative data on how certain services in certain places have been used to apparently great effect by end-users. But what we so often lack is the quantitative data which donors and critics clamour for. You know – real numbers. Take the 2007 Nigerian Presidential elections, an event close to my own heart because of the role of FrontlineSMS. This year – 2010 – will witness another election in Nigeria. What was the lasting impact of the 2007 mobile election monitoring project? Will things be done any differently this year because of it? Did it have any long-term impact on behaviour, or on anti-corruption efforts?

Much of the data we have on FrontlineSMS falls into the anecdotal and qualitative categories. Like many – maybe most – mobile-based projects, we have a lot of work to do in determining the very real, on-the-ground impact of our technology on individuals. We regularly write and talk about these challenges. But it’s not just about having the funding or the time to do it; it’s also about figuring out how we measure it.

If a farmer increases his income through a FrontlineSMS-powered agriculture initiative, for example, but then spends that extra money on beer, that’s hardly a positive outcome. But it is if he passes it to his wife, who then uses it to send their third or fourth daughter to school. How on earth do we track this, make sense of it, monitor it, measure it, or even decide how we do all of these things? Do we even need to bother at all?

Of course, as my recent Tweet suggests, we shouldn’t get too obsessed with the data. But it’s important that we don’t forget it altogether, either. We need to recognise the scale of the challenge – not just us as software developers or innovators, but also the mobile conference or workshop organiser, and the professor, both of whom need to face up to exactly the same set of questions. The case of the missing metrics applies just as much to one as it does to the others, and we all need to be part of finding the answer.