Social mobile and the missing metrics

Scenario 1: Five hundred people gather for three days. They talk, they discuss, they share and they learn. And then they leave. Some stay in touch; others have picked up enough to start a project of their own. Others leave with nothing more than a satisfied curiosity, or the odd new blog post behind them.

Scenario 2: A charitable foundation funds the creation of a new mobile tool. Over a one-year period there is software development, a new website, user testing and roll-out.

Scenario 3: A university professor embarks on a piece of field-based research to examine the impact of a mobile-based health initiative in Africa. He or she writes a paper, highlights what did and didn’t work, gets it published and presents it at a conference.

Question: What do these three scenarios have in common?
Answer: It’s unlikely we’ll ever know their full, or real, impact.

Let’s assume, for one moment, that everyone working in social mobile wants to see their work have real, tangible impact on the ground. That would equate to:

  • A patient receiving health information through their phone which can be directly attributed to improving their health, or their likelihood of staying alive
  • A farmer receiving agricultural information which can be directly attributed to better family nutrition, or an increase in income or standard of living
  • A team of human rights activists reporting violations which can be directly attributed to the fall of an evil regime, or the passing of new legislation, or the saving of a specific person’s life
  • And so on…

Fine. But are things ever this clear cut? Ever this black or white?

The social mobile world is full of anecdotes: qualitative data on how certain services in certain places have apparently been used to great effect by end-users. But what we so often lack is the quantitative data which donors and critics clamour for. You know – real numbers. Take the 2007 Nigerian Presidential elections, an event close to my own heart because of the role of FrontlineSMS. This year – 2010 – will witness another election in Nigeria. What was the lasting impact of the 2007 mobile election monitoring project? Will things be done any differently this year because of it? Did it have any long-term impact on behaviour, or on anti-corruption efforts?

Much of the data we have on FrontlineSMS falls into the anecdotal and qualitative categories. Like many – maybe most – mobile-based projects, we have a lot of work to do in determining the very real, on-the-ground impact of our technology on individuals. We regularly write and talk about these challenges. But it’s not just about having the funding or the time to do it; it’s also about figuring out how we measure it.

If a farmer increases his income through a FrontlineSMS-powered agriculture initiative, for example, but then spends that extra money on beer, that’s hardly a positive outcome. But it is if he passes it to his wife, who then uses it to send their third or fourth daughter to school. How on earth do we track this, make sense of it, monitor it, measure it, or even decide how we do all of these things? Do we even need to bother at all?

Of course, as my recent Tweet suggests, we shouldn’t get too obsessed with the data. But it’s important that we don’t forget it altogether, either. We need to recognise the scale of the challenge – not just those of us who are software developers or innovators, but also the mobile conference or workshop organiser and the professor, both of whom face exactly the same set of questions. The case of the missing metrics applies just as much to one as it does to the others, and we all need to be part of finding the answer.

63 thoughts on “Social mobile and the missing metrics”

  1. Pingback: Josef Scarantino
  2. Pingback: Ory Okolloh
  3. Pingback: frog design
  4. Kenyanpundit says:

    My two cents: it is difficult to trace every possible outcome that is triggered by social/tech platforms and communities, and we shouldn’t get so obsessive about data that we lose focus on why we do what we do. That being said, there is value in continuing to figure out how to improve how we measure impact, not only because it is one of the best ways to tweak the tools, but also because getting a better picture of impact is re-energizing and fulfilling.

  5. Pingback: Heather LaGarde
  6. Pingback: patrick bosteels
  7. Pingback: Dave Allen
  8. Pingback: Philip Auerswald
  9. Pingback: steve wright
  10. Pingback: Laynara
  11. Pingback: Emily Cunningham
  12. Kelly says:

    Great post. Having done a lot more in broadcast technologies before mobile or other point-to-point media, I’d recommend looking at how change/impact is measured in programs using FM, television, or even paper circulars. In a sense these broadcast media have an even tougher time knowing who they reached and to what effect. At least with mobile you have an idea of the number of recipients and any resulting dialog/reporting — although this is not a measure of change. Of course there are no easy answers, and such evaluations are not cheap to undertake, but I bet there are lessons from the broadcast community that could be applied in mobile.

  13. Jenny Aker says:

    Ken,

    Thanks so much for raising this issue. It’s not an easy one, but it is an incredibly important one. We know that people in some of the world’s poorest countries are already adopting mobile phones at a rapid rate, which suggests that they are deriving some kind of economic and social benefit. But do we need to quantify these benefits? And do we need to show that these benefits are at least partly due to mobile phones?

    I would argue that yes, we do. Not because “Westerners want to measure everything”, as one commenter suggested, but because mobile phone-based services and products are increasingly being used as part of development projects. The reason for this is clear: a lot of people in Africa, Asia and Latin America are already adopting mobile phones, and mobile phones can provide a more efficient and effective way of sharing information and coordinating logistics and services.

    But just assuming that mobile phones are better isn’t good enough, especially when we’re talking about development projects. Why? Because: 1) donor resources are limited, and we want to invest them in something that works; 2) even if donor resources aren’t limited, asking poor households to adopt or use a new technology that might – or might not – be better than the traditional approach isn’t sustainable; and 3) in the worst-case scenario, the new technology could cause harm without our knowing it.

    Let’s take the example of mobile phones and cash transfer projects (which have already occurred in Kenya, and are being considered in Niger and Haiti). On the surface, this seems like a good idea – it could reduce the risk involved in transporting and distributing large sums of cash, especially in countries where there are few bank branches. It could also have important benefits, by allowing cash to be transferred more often (allowing households to smooth consumption) and more discreetly (therefore allowing women to better “control” the use of the cash). But what about the risks? Could the mobile phone-based approach actually increase risks for beneficiaries, because now they have to travel outside of their village to find an agent? Is it possible that there could be greater fraud, especially for illiterate populations who are unable to type in the necessary codes and PINs? And do the additional costs involved with the program (in some cases, purchasing mobile phones for beneficiaries) match the benefits, especially when compared with the traditional distribution approach?

    You can see why answering these questions might be important. And if we don’t do an evaluation of the program (or even compare the mobile-based cash transfer program with the traditional distribution approach), we won’t know the answer.

    The question is, how can we answer these questions? It isn’t easy, and it isn’t possible, feasible or even ethical in every context — but in many it is possible. And this is the idea behind impact evaluations. Impact evaluations are evaluations that: 1) try to determine whether an intervention has met its objectives; and 2) try to attribute any changes to the actual intervention (rather than to another project, natural trends, or dumb luck). To do this, they often involve comparing a group who participated in the program with one that didn’t, to see what would have happened if the program had not taken place (but contrary to popular belief, impact evaluations are not synonymous with randomized evaluations).

    Let’s take another concrete example. I am working on a mobile phone/literacy project in Niger with Catholic Relief Services, the Ministry of Non-Formal Education and the agricultural market information service (SIMA). The project – called Project ABC – combines traditional literacy training with mobile phones. Participants are taught how to use mobile phones and send and receive SMS, so that they can practice reading and writing in their local languages outside of the classroom. The idea seems as if it would be an improvement over normal literacy classes, but how do we know? To figure this out, the project is using an impact evaluation approach. During the two-year pilot period, across 142 villages, half the villages receive regular literacy training and half receive the mobile phone literacy training. All of the villages were equally worthy, so the ones that got the mobile phone literacy training were chosen out of a hat (randomly). Then we compare the literacy results before, during and after the program, to see if Project ABC participants have better literacy outcomes than the non-ABC participants. Since the villages were chosen at random, they should, on average, be pretty much the same — it was just chance that selected them. If Project ABC works – early results suggest that ABC participants are reading and writing at a higher level – then the project will scale up to the other villages, and perhaps nationally. If it doesn’t, then all of the project partners and beneficiaries try to figure out why it didn’t work, and whether we can do something better.

    This approach could be tried in other areas as well – within reason. Usually, impact evaluations are best suited to pilot projects, but they can be used in large-scale projects as well — especially if those projects are new to a country or region. And they can (and should) be combined with qualitative and quantitative approaches. Qualitative information is crucial, and can’t be captured in a survey form.

    No, we can’t get so obsessed with the data that we lose the forest for the trees. At the same time, we can’t forget about it either. Good intentions aren’t always good enough.

    (P.S. And there is a way to answer your Nigeria election question — especially with parallel vote tabulations, or PVTs.)
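
As a rough illustration of the randomised comparison Jenny Aker describes above (villages assigned to treatment and control by chance, then average changes in literacy compared), here is a minimal Python sketch. The village names, scores and helper functions (assign_villages, mean_gain) are hypothetical and for illustration only; a real evaluation would use actual baseline and endline survey data and proper statistical inference rather than a bare difference in means.

    # Randomly split villages into a treatment group (mobile phone literacy
    # training) and a control group (regular literacy training), then compare
    # the average change in literacy scores between the two groups.
    import random
    import statistics

    def assign_villages(villages, seed=1):
        """Split the villages into two halves, chosen at random ("out of a hat")."""
        shuffled = list(villages)
        random.Random(seed).shuffle(shuffled)
        half = len(shuffled) // 2
        return shuffled[:half], shuffled[half:]

    def mean_gain(before, after, group):
        """Average change in literacy score for one group of villages."""
        return statistics.mean(after[v] - before[v] for v in group)

    # Hypothetical baseline and endline literacy scores for 142 villages.
    villages = [f"village_{i:03d}" for i in range(142)]
    rng = random.Random(7)
    before = {v: rng.uniform(20, 40) for v in villages}
    after = {v: before[v] + rng.uniform(0, 15) for v in villages}

    treatment, control = assign_villages(villages)
    effect = mean_gain(before, after, treatment) - mean_gain(before, after, control)
    print(f"Estimated effect on literacy scores: {effect:+.2f} points")

Because assignment is random, any systematic difference in gains between the two groups can (on average) be attributed to the programme rather than to pre-existing differences between villages.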
