Nonprofits are in trouble. Could more sensitive chatbots be the answer?

In today’s attention economy, impact-driven organizations are arguably at a disadvantage. Since they have no tangible product to sell, the core of their appeal is emotional rather than practical—the “warm glow” of contributing to a cause you care about. But emotional appeals call for more delicacy and precision than standardized marketing tools, such as mass email campaigns, can sustain. Emotional states vary from person to person—even from moment to moment within the same person. 

Siddharth Bhattacharya and Pallab Sanyal, professors of information systems and operations management at the Donald G. Costello College of Business at George Mason University, believe that artificial intelligence (AI) can help solve this problem. A well-designed chatbot could be programmed to calibrate persuasive appeals in real time, delivering messaging more likely to motivate someone to take a desired next step, whether that’s donating money, volunteering time or simply pledging support. Automated solutions, such as chatbots, can be especially rewarding for nonprofits, which tend to be cash-conscious and resource-constrained.  

“We completed a project in Minneapolis and are working with other organizations, in Boston, New Jersey and elsewhere, but the focus is always the same,” Sanyal says. “How can we leverage AI to enhance efficiency, reduce costs, and improve service quality in nonprofit organizations?” 

Sanyal and Bhattacharya's working paper (coauthored by Scott Schanke of the University of Wisconsin-Milwaukee) describes their recent randomized field experiment with a Minneapolis-based women's health organization. The researchers designed a custom chatbot to interact with prospective patrons through the organization's Facebook Messenger app. The bot was programmed to adjust its responses at random, making them more or less emotional, as well as more or less anthropomorphic (human-like).

"For the anthropomorphic condition, we introduced visual cues such as typing bubbles and slightly delayed responses to mimic the experience of messaging with another human," Sanyal says.
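The paper's implementation details are not public, but cues like these are mechanically simple. Below is a minimal Python sketch; send_message and send_typing_indicator are hypothetical stand-ins for whatever the messaging platform provides (Messenger's Send API, for instance, exposes a typing indicator as a sender action), and the pacing constants are purely illustrative:

```python
import random
import time

def send_with_human_cues(send_message, send_typing_indicator, text,
                         chars_per_second=30.0, jitter=0.5):
    # Show the typing bubble, then pause roughly as long as a person
    # would take to type the reply before delivering it.
    send_typing_indicator()
    delay = len(text) / chars_per_second           # scale the pause to message length
    time.sleep(delay + random.uniform(0, jitter))  # add jitter to avoid robotic timing
    send_message(text)

# Toy usage: print() stands in for real platform calls.
send_with_human_cues(print, lambda: print("[typing...]"),
                     "Thanks for reaching out! Here is how you can help.")
```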

The chatbot's "emotional" mode featured more subjective, generalizing statements with liberal use of provocative words such as "unfair," "discrimination" and "unjust." The "informational" mode leaned more heavily on facts and statistics.
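To make the design concrete, here is a hedged Python sketch of how random assignment and tone selection might work. The level names and message wording are illustrative assumptions, not the study's actual scripts:

```python
import random

# Illustrative message variants for one conversational step; the real
# scripts used in the experiment have not been published.
VARIANTS = {
    "informational": "1 in 4 women in our area lacks access to basic "
                     "preventive screenings.",
    "moderate": "Too many women here go without basic screenings. "
                "Together we can change that.",
    "extreme": "It is unjust and discriminatory that women are denied "
               "basic care. This unfairness has to end.",
}

def assign_condition():
    # Randomize each user along the study's two manipulated dimensions:
    # emotional intensity and anthropomorphism.
    return {
        "emotion": random.choice(list(VARIANTS)),
        "anthropomorphic": random.choice([True, False]),
    }

condition = assign_condition()
print(condition, "->", VARIANTS[condition["emotion"]])
```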

Over the course of hundreds of real Facebook interactions, the moderately emotional chatbot achieved the deepest user engagement, defined as a completed conversation. (Completion rate was critical because, after the last interaction, users were redirected to a contact/donation form.) But when the emotional level went from moderate to extreme, more users abandoned the interaction.
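The engagement metric itself is straightforward to compute: the share of conversations in each condition that ran to the final step. A minimal sketch, assuming a simple per-conversation log with hypothetical field names:

```python
from collections import defaultdict

def completion_rates(conversations):
    # conversations: iterable of dicts such as
    # {"emotion": "moderate", "completed": True}
    totals = defaultdict(int)
    done = defaultdict(int)
    for convo in conversations:
        totals[convo["emotion"]] += 1
        done[convo["emotion"]] += convo["completed"]  # True counts as 1
    return {cell: done[cell] / totals[cell] for cell in totals}

# Toy data; the study's finding would appear as "moderate" showing the
# highest completion rate, with "extreme" trailing it.
log = [
    {"emotion": "informational", "completed": True},
    {"emotion": "moderate", "completed": True},
    {"emotion": "moderate", "completed": True},
    {"emotion": "extreme", "completed": False},
]
print(completion_rates(log))
```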

The takeaway may be that “there is a sweet spot where some emotion is important, but beyond that emotions can be bad,” as Bhattacharya explains. 

When human-like features were layered on top of emotional messaging, that sweet spot shrank further. Anthropomorphism lowered completion rates and reduced the organization's ability to use emotional engagement as a motivational tool.

“In the retail space, studies have shown anthropomorphism to be useful,” Bhattacharya says. “But in a nonprofit context, it’s totally empathy-driven and less transactional. If that is the case, maybe these human cues coming from a bot make people feel creepy, and they back off.” 

Sanyal and Bhattacharya say that more customized-chatbot experiments with other nonprofits are in the works. They are taking into careful consideration the success metrics and unique needs of each partner organization.  

“Most of the time, we researchers sit in our offices and work on these problems,” Sanyal says. “But one aspect of these projects that I really like is that we are learning so much from talking to these people.”  

In collaboration with the organizations concerned, they are designing chatbots that can tailor their persuasive appeals more closely to each context and individual interlocutor. If successful, this approach would show that chatbots can be more than a second-best substitute for a salaried human being. They could serve as interactive workshops for crafting and refining an organization's messaging at a much more granular level than previously possible.

And this would improve the effectiveness of organizational outreach across the board—a consummate example of AI enhancing, rather than displacing, human labor. “This AI is augmenting human functions,” says Sanyal. “It’s not replacing. Sometimes it’s complementing, sometimes it’s supplementing. But at the end of the day, it is just augmenting.”