By Lois Aryee and Sara Flanagan

Rigorous impact evaluations are essential for determining program effectiveness. Yet they are often time-intensive and costly, and they may fail to provide the rapid feedback needed to inform real-time decision-making and the course corrections that maximize programmatic impact. Capturing feedback that is both quick and valuable requires a delicate balance.

In Ghana, where smoking rates among adolescent girls are rising with alarming health implications, we have been conducting an ongoing impact evaluation, supported by the Bill & Melinda Gates Foundation, of a social marketing campaign’s effectiveness at changing girls’ behavior and reducing smoking prevalence. Although we’ve taken a traditional approach to this evaluation, using a year-long, in-person panel survey, we were interested in using digital feedback to collect more timely data on the program’s reach and impact. To do this, we explored several rapid digital feedback approaches, including social media, text message, and interactive voice response (IVR) surveys, to determine whether they could provide quicker, more actionable insights into girls’ awareness of, engagement with, and feelings about the campaign.

Digital channels seemed promising given our young, urban population of interest; however, collecting feedback this way comes with considerable trade-offs. Digital feedback poses risks to both equity and quality, potentially shrinking the population we’re able to reach and diminishing the value of the information we’re able to gather. The truth is that context matters, and tailored approaches are as critical when collecting feedback as they are when designing programs. Below are three lessons to consider when incorporating digital feedback mechanisms into your impact evaluation design.

Lesson 1: A high number of mobile connections does not mean the target population has access to mobile phones. 

Ghana had an exceedingly high number of mobile connections in 2021, equivalent to roughly 133% of its population, yet most urban adolescent girls don’t have consistent mobile access; because many people hold multiple SIM cards, connection counts overstate how many individuals actually own phones. In fact, only 44% of the teenage girls we interviewed had regular access to a mobile phone, and even fewer had internet access. This gap was especially pronounced among younger girls, as well as girls who were not enrolled in school because of extreme poverty, living in a parentless household, or recent migration. Relying on mobile- and internet-based surveys would therefore have excluded their critical perspectives from program feedback.

Lesson 2: High literacy rates and “official” languages do not mean most people are able to read and write easily in a particular language.

English may be Ghana’s official language, but it’s not the most widely spoken language in the country. Even though Ghana’s adult literacy rates have been steadily rising, we found gaps in English literacy among urban teenage girls. Reading and writing were especially difficult for girls from lower socioeconomic backgrounds who face barriers to formal education. But reading comprehension remained difficult for some girls enrolled in school as well, because the language they’re taught to read in at school, English, differs from their primary spoken language. This made text message and IVR surveys, which would require participants to read questions and enter responses on their phones, difficult to complete.

Lesson 3: Gathering data on taboo topics may benefit from a personal touch. 

Automated digital media offer an exciting way to reach more people faster than traditional, in-person surveys can. However, digital surveys limit the depth of information that can be communicated and collected: text message surveys are capped at 160 characters, and IVR surveys must be kept short to hold a respondent’s attention. Text-based questions are also more likely to be misunderstood, with little to no opportunity for clarification. Finally, digital surveys don’t build the same trust with participants that in-person surveys do, which may leave sensitive groups like teenage girls less willing to respond freely about taboo topics such as smoking and intimate relationships.

Ultimately, digital feedback mechanisms are a promising option for generating rapid feedback at scale, but they carry important equity and quality trade-offs. Before implementing them, program evaluators must keep context top of mind: who the target population is, how much access its members have to mobile phones, and how sensitive or complex the information needed will be. For feedback that requires a wide range of perspectives, explores taboo topics, or demands deep comprehension, in-person surveys or mixed-method approaches will be more effective at producing the high-quality data necessary for maximizing impact.