Scenario 1: Five hundred people gather for three days. They talk, they discuss, they share and they learn. And then they leave. Some stay in touch; others have picked up enough to start a project of their own. Others leave with a satisfied curiosity, or with the odd new blog post behind them.
Scenario 2: A charitable foundation funds the creation of a new mobile tool. Over a one-year period there is software development, a new website, user testing and roll-out.
Scenario 3: A university professor embarks on a piece of field-based research to examine the impact of a mobile-based health initiative in Africa. He or she writes a paper, highlights what did and didn’t work, gets it published and presents it at a conference.
Question: What do these three scenarios have in common?
Answer: It’s unlikely we’ll ever know their full, or real, impact.
Let’s assume, for one moment, that everyone working in social mobile wants to see their work have real, tangible impact on the ground. That would equate to:
- A patient receiving health information through their phone which can be directly attributed to improving their health, or their likelihood of staying alive
- A farmer receiving agricultural information which can be directly attributed to better family nutrition, or an increase in income or standard of living
- A team of human rights activists reporting violations which can be directly attributed to the fall of an evil regime, or the passing of new legislation, or the saving of a specific person’s life
- And so on…
Fine. But are things ever this clear cut? Ever this black or white?
The social mobile world is full of anecdotes – qualitative data on how certain services in certain places have been used to apparent great effect by end-users. But what we so often lack is the quantitative data which donors and critics clamour for. You know – real numbers. Take the 2007 Nigerian Presidential elections, an event close to my own heart because of the role of FrontlineSMS. This year – 2010 – will witness another election in Nigeria. What was the lasting impact of the 2007 mobile election monitoring project? Will things be done any differently this year because of it? Did it have any long-term impact on behaviour, or on anti-corruption efforts?
Much of the data we have on FrontlineSMS falls into the anecdotal and qualitative categories. Like many – maybe most – mobile-based projects, we have a lot of work to do in determining the very real, on-the-ground impact of our technology on individuals. We regularly write and talk about these challenges. But it’s not just about having the funding or the time to do it. It’s figuring out how we measure it.
If a farmer increases his income through a FrontlineSMS-powered agriculture initiative, for example, but then spends that extra money on beer, that’s hardly a positive outcome. But it is if he passes it to his wife, who then uses it to send their third or fourth daughter to school. How on earth do we track this, make sense of it, monitor it, measure it, or even decide how we do all of these things? Do we even need to bother at all?

Of course, as my recent Tweet suggests, we shouldn’t get too obsessed with the data. But it’s important that we don’t forget it altogether, either. We need to recognise the scale of the challenge – not just us as software developers or innovators, but also the mobile conference or workshop organiser, and the professor, both of whom need to face up to exactly the same set of questions. The case of the missing metrics applies just as much to one as it does to the others, and we all need to be part of finding the answer.


Sometimes I think Westerners tend to put too much value in weighing and analyzing everything. It’s nearly impossible to measure the secondary and tertiary benefits of some mobile social tool designed for “social good”. I mean, all we can do as creators of these things is make the best tool we can, listen to feedback from real people using it, then iterate and make it better.
If some guy wants to use the extra money he saved to buy more beer, I think that’s a little beyond what our goal is.
I have the same problem with people saying that they want to provide internet-enabled computers to a village, and then being upset when people don’t browse the things that the providers think they should. It’s both paternalistic and pedantic.
Ken,
Outputs are easy to measure (how many SMS sent, how many received, how many forwarded, how many calls into an IVR for how many minutes). Outcomes (better information received that was otherwise not available, better health or political decisions made by individuals, better services provided by CHWs) are much harder and more expensive, but not impossible, to measure. The relation of mobile projects to social impacts, on the other hand, is really hard and expensive to measure because there are so many variables (such as whether a better government was elected, or whether a group of people is healthier and thus more productive over the long run). Most projects are simply too small to ever really get to social impact beyond a small group of people.
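The output metrics listed here (SMS sent, received and forwarded; IVR calls and minutes) amount to simple counts over an activity log. A minimal sketch in Python, assuming a hypothetical log format – the event names and fields are illustrative, not taken from FrontlineSMS or any real platform:

```python
from collections import Counter

# Hypothetical activity log; in practice this would come from the
# SMS gateway or IVR platform's own records.
events = [
    {"type": "sms_sent"}, {"type": "sms_sent"},
    {"type": "sms_received"}, {"type": "sms_forwarded"},
    {"type": "ivr_call", "minutes": 4},
    {"type": "ivr_call", "minutes": 7},
]

def output_metrics(events):
    """Count basic outputs: messages by type, IVR calls and total minutes."""
    counts = Counter(e["type"] for e in events)
    ivr_minutes = sum(e.get("minutes", 0) for e in events if e["type"] == "ivr_call")
    return {
        "sms_sent": counts["sms_sent"],
        "sms_received": counts["sms_received"],
        "sms_forwarded": counts["sms_forwarded"],
        "ivr_calls": counts["ivr_call"],
        "ivr_minutes": ivr_minutes,
    }

metrics = output_metrics(events)
print(metrics)
```

The point is that outputs reduce to aggregation, which is why they are the cheapest tier to measure; outcomes and social impacts have no equivalent one-liner.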
I would venture to say that most projects are actually measuring outputs and, to a lesser extent, outcomes. We do. Many of the projects I am working on have to, because they get funding where the donor requires it. In fact, I am working on a PMEP right now – a “project monitoring and evaluation plan” – complete with very concrete metrics pre- and post-intervention. We have, for MobileActive08, measured the number of projects that emerged as a direct result of the event and the number of partnerships forged three and six months after the event, to report back to the donors that funded it.
I have seen dozens of these evaluations of various projects, but they are typically not published and often end up in (figurative) dusty drawers. I personally think we need, in this mobile field, a Poverty Action Lab approach. Maybe we should call it the “mobile action lab” – one that actually begins to demystify monitoring and evaluation and, at the very least, aggregates the available data in a more meaningful and accessible way than is done now. We should do that, come to think of it.
I am seeing more meetings focused on this very thing (and yes, they are more meetings in far-flung places) and that is a good thing in many ways.
Hi Ken.
The people who want the proof, in quantitative terms, are usually just trying to stall us all. I agree we need to crack this though.
Mark
Great post which really asks some tough questions! Never considered this as a problem faced across the board in mobile, but you’re right to question the impact and outputs from the many meetings, workshops and conferences. Who’s measuring the impact, and who’s accountable?
Can and should we care about measures and measuring? Yes, I think we can and should. Rather, some of the important questions are: what can and should measurement explore, and is this exploration meaningful? The focus is then on what meaningful explorations of mobile technologies and social change look like, enabling social change and its measurement to be defined in all contexts – by developers, users and recipients.
Ok, I’ll have a crack at some thoughts. I read your post and went “yeah!” then I read Hash’s comment and went “yeah!”. I think the issue of measuring depends on the angle that you are coming from. And a HUGE disclaimer here – I’m nothing close to a monitoring and evaluation person or an ICT innovator, so I am probably totally wrong :-). However, I like discussions and taking risks so here goes….
There is the “development” angle, where I would say ideally you are looking at existing types of programs (health, education, youth engagement, emergency, disaster preparedness, HIV/AIDS, gender, etc.) that have certain desired impacts. These programs regularly undergo either program impact evaluations or long-term impact evaluations in most serious organizations. When you incorporate ICTs into the programs, I’d say you’re often looking at how new technologies can enhance your methodology or your activities and/or enable you to be more effective at reaching specific goals or a broader population, engaging or educating a public, aiding in decentralizing your activities, helping to manage information better, communicating more effectively or more widely, saving community members or volunteers or staff time and money, reaching people you wouldn’t normally reach, and so on. I think in this case the ICTs could be anything from a small to a huge part of achieving a larger program impact. It would seem that you’d want to measure your larger impact and somehow also pinpoint how your methodologies (including ICTs) made a difference. So you’d probably have outcome indicators (e.g., did you reduce infant mortality) as well as process indicators (what happened along the way) and your case studies (what could be replicated).
Then there is the angle of conferences, meetings, events, networks, and websites for networking and educating on ICTs. I think those are notoriously difficult to evaluate, whether they are about ICTs or mobile, or about improving agriculture, stopping climate change or empowering women. The personal contacts made there are invaluable. The public opinion generated or awareness raised before donors, colleagues and the public, and the best practices and case studies shared, are all valuable learning experiences. The agreements and follow-up often come to nothing, but maybe they are not the most important part anyway – it’s the process that matters. But in order to try to prove that these events and networks have value, we’re expected to come up with indicators and do evaluations, when in reality the takeaways are often intangible, long-lasting and unpredictable.
Then there is the angle, (which I’m not sure what to call since I come from an international development background), that looks more at local ICT innovation. Maybe this is the whole “Design with a capital D” thing…. So you come more from the angle of developing a particular technology, hopefully engaging the people that it’s designed to ‘help’ or support, or you support people to develop something they already had in mind. And then you just let that technology run on its own, and let people decide how to use it however they want. And it’s then a product of sorts that’s in the market and it does any number of things, as it has rather a life of its own and the measure of success is whether it’s purchased or not.
And then I’m sure there are any number of hybrids in between these, or different ways of looking at this. It probably has to do with whether you are a .org or a .com or a .edu. Anyway, it seems to me that the whole issue of evaluation has something to do with how you’re looking at it.
Dear Colleagues
In my humble opinion the progress of technology has been amazing over the past 50 years … I would argue technology has power that is one million times more than when I started my career … and in some areas of endeavor this power is being used (e.g. the human genome project, a modern aircraft, derivative profit (not risk) optimization, etc) … but in the area of socio-economic progress and performance metrics … technology is almost totally missing. I am trying to put Community Analytics (CA) in this space to bring socio-economic metrics out of the stone age and into the 21st century. Part of the trick is to aggregate data in a meaningful way and have less data with a lot more value. Another piece of the trick is to think in terms of value consumption, value creation and value change rather than only money cost, money revenue and money profit. Another piece of the trick is to have a focus on place or community rather than only on activity and the organization. With these elements there can be much needed paradigm change in the metrics that are used for the analysis of economics.
Peter Burgess
Community Analytics
Thanks for your comments, everyone.
@HASH – I’m with you on this, Erik. And although I agree we have no control, ultimately would we want to continue with our work if it consistently produced outcomes we didn’t want? The beer example was a little extreme, but used to make a point. 🙂
@Katrin – I don’t agree that outputs are easy to measure. If you run and control the project, sure. But for tools like ours, which users can take and implement independently, it’s a totally different ballgame collecting this data (though not impossible). In terms of conferences and workshops, I’m not sure if even counting projects/partnerships gives you a true indication of final impact.
@Pauline – I guess the organisers are accountable. I’m not 100% sure if many donors strictly set or expect these kinds of stats from conferences or workshops they fund. It’s probably more a case of bums on seats?
@Tess – Totally with you on this. But the problem is ‘what’ we measure and ‘how’. I don’t know if we’ve really figured this out yet.
@Linda – From my experience working in the field on and off over the past 15 years or so, I’ve seen many, many projects which failed, yet the donor reports glow with success stories. The challenges of measuring impact apply just as much to non-ICT projects, as you imply. There’s no “one size fits all” for measuring impact.
@Peter – Nice to hear from you again! I’m very much looking forward to continuing our discussions, and I agree that a shift in what, how and who might be a crucial step forward. Perhaps CA will be the answer.
Ken,
Thanks for the important post reminding us of the limits (and limitations) of anecdotes and qualitative data.
I actually think some of the analysis you’re touching on wouldn’t be all that hard to implement. It would require smart scaling up post-pilot (i.e. randomizing roll-outs at some level, versus what are probably the current criteria of opportunism, capabilities, partners and so on, which would bias any data). One could then use census or other national survey-level data to understand a lot about the impacts, which wouldn’t be terribly expensive (versus doing surveys ourselves). There would need to be decisions about trade-offs between efficiency and getting this data, and a big question would be who would do the analysis. To Katrin’s point, it would seem to make sense for one org to consult with projects on this scale-up and do the analysis, versus diverting resources and expertise for every one. This would at least give us a good idea of impact at the micro level.
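The randomised roll-out idea can be sketched in code: rather than selecting pilot sites by opportunity or partner capability, assign early and late adopters at random, so that later comparisons against census or survey data reflect the intervention rather than site selection. A toy illustration – the site names and split are invented for the sketch:

```python
import random

sites = ["site-%02d" % i for i in range(1, 21)]

def randomize_rollout(sites, treated_fraction=0.5, seed=42):
    """Randomly split sites into an early (treatment) and late (comparison) group.

    Randomising which sites get the tool first, rather than picking the
    most capable partners, is what lets later survey comparisons be
    attributed to the intervention instead of to site selection.
    A fixed seed keeps the assignment reproducible and auditable.
    """
    rng = random.Random(seed)
    shuffled = sites[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * treated_fraction)
    return shuffled[:k], shuffled[k:]

treatment, comparison = randomize_rollout(sites)
print(len(treatment), "treatment sites,", len(comparison), "comparison sites")
```

In a real evaluation the randomisation would likely be stratified by region or site size, but the core idea is just this unbiased split.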
At the macro level however analysis is much, much harder and most implementations would have to come with methodology caveats.
I’ll think this through some more and post on my blog soon.
BTW, that comment about consumption shifting (i.e. buying the beer vs. sending another kid to school) is a big criticism of mine of the microfinance studies. We often look at poverty as measured by absolute consumption level, versus understanding whether consumption shifts.
Thanks for getting the wheels spinning!
The question is “social mobile and the missing metrics”. The problem is how to measure in a situation where the norm is some degree of chaos. In chaos, the variables are many … and most typical survey approaches used in the academic analysis of socio-economic development are horribly expensive, horribly slow and horribly inconclusive. There has to be a better way.
I concluded a long time ago that “a single silver bullet” was never going to be much help in solving the problem of poverty and feeble socio economic progress. Rather, my observations suggested that progress was remarkably fast when a mix of interventions took place in a community … and, not surprisingly, that this mix of interventions was actually what the community needed and wanted!
Community Analytics (CA) therefore starts with the premise that there should be some recording of what is important in the community … starting off with simply what these few important things are. A set of say five important things is likely to be quite different depending on the situation in the community.
A next step is to learn about how these important things have changed over time in the past … and how they need to change in the future.
And then to know what caused them to change in the past and what is needed for them to change in the future.
If these datapoints are obtained for a small community they have some tangible understandable meaning … for a whole nation they do not have much reality at all.
My guess is that an issue that is emerging in Haiti at this time is the problem of “profiteering”. A database of prices would help to understand the scale of this problem (if it is a problem at all). Every datapoint in the CA system has a time and place characteristic … then what, unit of measure and quantity … then if possible an identified person or organization. There is also an ID for the origination of the data.
With these data elements it is possible to get a time series of prices. This basic, simple plot of data will show changes, but will not directly show why the changes occurred. If the price change is because costs have increased, that is one thing … if it is because some person or organization has monopoly control over some area of the economy, that is another, and that is profiteering.
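The datapoint structure described above (time, place, what, unit of measure, quantity, data origin) is enough to build and scan a price time series. A minimal sketch under those assumptions – the records, commodity and spike threshold are invented for illustration, not real CA data:

```python
from datetime import date

# Each datapoint follows the structure described above:
# when, where, what, unit of measure, quantity (price), and data origin.
datapoints = [
    {"when": date(2010, 1, 5),  "where": "Port-au-Prince", "what": "rice",
     "unit": "USD/kg", "quantity": 0.80, "source": "survey-01"},
    {"when": date(2010, 1, 20), "where": "Port-au-Prince", "what": "rice",
     "unit": "USD/kg", "quantity": 1.10, "source": "survey-02"},
    {"when": date(2010, 2, 4),  "where": "Port-au-Prince", "what": "rice",
     "unit": "USD/kg", "quantity": 1.60, "source": "survey-03"},
]

def price_series(datapoints, what, where):
    """Time-ordered (date, price) pairs for one commodity in one place."""
    rows = [d for d in datapoints if d["what"] == what and d["where"] == where]
    return sorted((d["when"], d["quantity"]) for d in rows)

def flag_spikes(series, threshold=0.25):
    """Flag period-on-period price rises above the threshold fraction."""
    flags = []
    for (t0, p0), (t1, p1) in zip(series, series[1:]):
        rise = (p1 - p0) / p0
        if rise > threshold:
            flags.append((t1, rise))
    return flags

series = price_series(datapoints, "rice", "Port-au-Prince")
flags = flag_spikes(series)
print(flags)
```

A flagged rise is only a prompt for investigation – the series shows that prices moved, not whether rising costs or monopoly control moved them, which is exactly the distinction drawn above.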
The UN and the official emergency and relief assistance organizations, including the military, are a source of amazing help and of huge economic distortion. The damage to the sustainable economy arising from this distortion is usually ignored … but it is a part of the CA framework … starting with prices.
@Jenny – Thanks for your thoughts! Glad so many people have taken an interest in this topic, and I look forward to reading your post when you get round to it. 🙂