Best practice begins in the classroom

In The Rise of the Reluctant Innovator and my more recent book, Social Entrepreneurship and Innovation, I dedicate more than a few pages to emerging best practice in technology-for-development projects. While we certainly need as many bright minds as possible turning their skills, energy and attention to solving many of the problems in the world, their efforts should be respectful to the communities they seek to help, and properly guided in order for those efforts to have the greatest possible impact and chance of success.

But if you step back for a moment, it defies logic that someone should try to solve a problem they’ve never seen, or don’t fully understand, from tens of thousands of miles away. It’s hard to argue that they have the knowledge or qualifications – even the right – to attempt such an audacious feat. Yet that’s precisely what’s happening in many universities across much of the developed world multiple times each academic year. Students are being ‘skilled up’ in design thinking and global development issues, pointed to a few exciting new and emerging technologies, and told to fix something. In most cases their primary purpose is simply to pass a course, which almost makes it worse.

Speaking at schools, colleges and universities around the world has been a big part of my work over recent years, and I always make a point of sharing emerging best practice when I do. My inbox is always open to students wanting to share their ideas, or talk about how they might contribute to making the world a better place. A highlight was almost certainly a discussion in front of several hundred students with Archbishop Desmond Tutu a few years ago. I’m happy to connect, guide and mentor anyone with a good idea and even better intentions, and have even gone to the effort of editing two books to help share the stories of others who have gone about innovating in impactful and respectful ways.

At a time when we know we need to be building capacity among local innovators to start solving their own problems, it’s tough to see so many outsiders continuing to take charge – students and tech-focused international development organisations among them. The developing world becomes a sandpit where outsiders play out their ideas. It rarely turns out well, for any number of reasons.

To help students think through what they’re doing before they reach out for help, I’ve added a Students page to the kiwanja website. There they can download a PDF checklist – made up of the same questions as my Donors Charter – to help them think through what they’re doing and, more importantly, why it’s them doing it. I hope teachers and lecturers make use of it, too. After all, in many cases it’s them encouraging and supporting these students with their project ideas.

You can check out the new Student page here. And feel free to print, share, re-post and distribute the checklist PDF anywhere you think it might be helpful.

Let’s start to put this right, one classroom at a time.

Social mobile and the missing metrics

Scenario 1: Five hundred people gather together for three days. They talk, they discuss, they share and they learn. And then they leave. Some stay in touch, others have picked up enough to start a project of their own. Others leave with their curiosity satisfied, or with the odd new blog post behind them.

Scenario 2: A charitable foundation funds the creation of a new mobile tool. Over a one-year period there is software development, a new website, user testing and roll-out.

Scenario 3: A university professor embarks on a piece of field-based research to examine the impact of a mobile-based health initiative in Africa. He or she writes a paper, highlights what did and didn’t work, gets it published and presents it at a conference.

Question: What do these three scenarios have in common?
Answer: It’s unlikely we’ll ever know their full, or real, impact.

Let’s assume, for one moment, that everyone working in social mobile wants to see their work have real, tangible impact on the ground. That would equate to:

  • A patient receiving health information through their phone which can be directly attributed to improving their health, or their likelihood of staying alive
  • A farmer receiving agricultural information which can be directly attributed to better family nutrition, or an increase in income or standard of living
  • A team of human rights activists reporting violations which can be directly attributed to the fall of an evil regime, or the passing of new legislation, or the saving of a specific person’s life
  • And so on…

Fine. But are things ever this clear cut? Ever this black or white?

The social mobile world is full of anecdotes. Qualitative data on how certain services in certain places have been used to apparent great effect by end-users. But what we so often lack is the quantitative data which donors and critics clamour for. You know – real numbers. Take the 2007 Nigerian Presidential elections, an event close to my own heart because of the role of FrontlineSMS. This year – 2010 – will witness another election in Nigeria. What was the lasting impact of the 2007 mobile election monitoring project? Will things be done any differently this year because of it? Did it have any long-term impact on behaviour, or anti-corruption efforts?

Much of the data we have on FrontlineSMS falls into the anecdotal and qualitative categories. Like many – maybe most – mobile-based projects, we have a lot of work to do in determining the very real, on-the-ground impact of our technology on individuals. We regularly write and talk about these challenges. But it’s not just about having the funding or the time to do it. It’s figuring out how we measure it.

If a farmer increases his income through a FrontlineSMS-powered agriculture initiative, for example, but then spends that extra money on beer, that’s hardly a positive outcome. But it is if he passes it to his wife who then uses it to send their third or fourth daughter to school. How on earth do we track this, make sense of it, monitor it, measure it, or even decide how we do all of these things? Do we even need to bother at all?

Of course, as my recent Tweet suggests, we shouldn’t get too obsessed with the data. But it’s important that we don’t forget it altogether, either. We need to recognise the scale of the challenge – not just us as software developers or innovators, but also the mobile conference or workshop organiser, and the professor, both of whom need to face up to exactly the same set of questions. The case of the missing metrics applies just as much to one as it does to the others, and we all need to be part of finding the answer.