The human component in emergency population warning and protection systems

My talk @ Technology saving lives: modern emergency warning and protection systems in Europe

European Internet Forum, European Parliament, Brussels, 20.2.2019

 

This morning I’d like to focus on the human component in emergency population warning and protection systems.

 

As I see it, the situation looks like this: it’s a pity that these systems have to deal with citizens, because citizens constitute a major problem in the process. And it’s a pity that these systems are handled by people in governmental institutions, because those people are only human.

 

Let me explain. I’ll start with the citizens. Why are they a major problem?

  • Citizens might not trust official messages, might not take them sufficiently seriously, or might not even recognize them as warnings;
  • Citizens don’t react at once: there is a delay between receiving an alert and taking protective action, the so-called “protective action initiation” (PAI) time;
  • General messages work less efficiently than personalized messages, but privacy rules hinder personalization;
  • Non-official sources (read: citizens) can spread rumors or falsehoods or even obstruct warnings (consciously, e.g. by means of hacking, or unconsciously, e.g. by forwarding official warnings to those who are not involved);
  • Citizens are slow to adopt new technologies, slow to understand them, and passive: only a fraction of citizens actively downloads relevant new emergency apps;
  • Citizens expect interaction as well as top-down information;
  • Citizens suffer from optimism bias.

 

A first answer to the problem of citizens is the personalization of emergency messages.

  • Personalized messages can provide personalized guidance and personalized time until impact – the two most important elements in warning messages. This can help shorten PAIs and overcome citizens’ optimism bias.
  • Personalized messages can also take into account different PAIs for citizens with different status and role characteristics.
  • For instance: being younger, having higher education, being employed and being a woman all shorten PAIs.
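To make this concrete, here is a minimal sketch of how a warning might combine the two key elements named above, personalized guidance and personalized time until impact. The profile fields, scenario, and guidance texts are entirely my own illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CitizenProfile:
    name: str
    location: str           # current location of the citizen
    mobility_impaired: bool

def personalized_warning(profile: CitizenProfile,
                         impact_time: datetime,
                         now: datetime) -> str:
    """Combine the two key elements of a warning message:
    personalized guidance and personalized time until impact."""
    minutes_left = int((impact_time - now).total_seconds() // 60)
    if profile.mobility_impaired:
        guidance = "Stay where you are and signal responders; assistance is on its way."
    else:
        guidance = "Move to higher ground immediately."
    return (f"{profile.name}: flood expected at {profile.location} "
            f"in {minutes_left} minutes. {guidance}")

msg = personalized_warning(
    CitizenProfile("A. Janssens", "Brussels-Center", mobility_impaired=False),
    impact_time=datetime(2019, 2, 20, 10, 30),
    now=datetime(2019, 2, 20, 10, 0),
)
print(msg)  # ... in 30 minutes. Move to higher ground immediately.
```

A real system would of course draw the guidance from vetted emergency plans rather than hard-coded strings; the point is that guidance and time-to-impact are computed per citizen, not broadcast generically.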

Personalization of messages can be based on:

  • Crowd-sourced input data, e.g.:
    • Internet of Things input;
    • Vehicle input;
  • Citizen input data, e.g.:
    • Location data (citizen location and other locations of interest: user residence, frequently visited locations; locations of loved ones);
    • Medical data including disability data and allergy data;
    • Household data;
    • PAI relevant data (age, gender, education level, employment status);
    • Language data.
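As an illustration only, the citizen input categories above could be gathered into a single record. Every field name here is a hypothetical assumption of mine, not a proposed standard:

```python
# Hypothetical sketch of one citizen's personalization record,
# mirroring the input categories listed above.
citizen_record = {
    "locations": {
        "current": "Brussels-Center",
        "residence": "Ixelles",
        "frequently_visited": ["Schuman", "Gare du Midi"],
        "loved_ones": ["Antwerp"],
    },
    "medical": {"disabilities": [], "allergies": ["penicillin"]},
    "household": {"members": 3, "children": 1},
    "pai_relevant": {"age": 34, "gender": "f",
                     "education": "higher", "employed": True},
    "languages": ["nl", "fr"],
}

# A warning service would combine such records with crowd-sourced
# data (IoT sensors, vehicles) to target and phrase each message.
print(sorted(citizen_record.keys()))
```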

But:

  • Citizen and crowd-sourced input data could be used by authorities in other contexts;
  • It’s not clear how to keep citizen input data automatically up to date while respecting their privacy.

 

Before I turn to a possible way out of this, I’d like to introduce a second answer to the problem of citizens: a community or group approach, in addition to sending general or personalized messages to individuals. In other words: using social media.

Social media:

  • Are widely used channels during disasters;
  • They help to shorten PAI:
    • They support milling: seeking confirmation from others;
    • They support reunification with intimates;
  • They reach individuals and groups with low trust in official messages;
  • They offer real-time interaction tools for one-off or repeat communication;
  • They might help to overcome optimism bias.

But:

  • Given the enormous volume, it is difficult to separate the signal from the noise;
  • There is a fear of mass-scale spread of misinformation – both rumors and intentionally spread disinformation;
  • There is a risk that misinformation gains credibility;
  • They support de-contextualization of information (loss of the connection to the original source);
  • There is a risk of a digital divide – the poor might not be reached;
  • Receiving messages over multiple channels might lead citizens to opt out;
  • People expect reactions.

 

The problem posed by citizens, and the trade-offs in the answers to it, are thus profound. Is there a way out? Sure, there is.

 

Regarding personalization, a way out is to create a database containing all the personalization input data needed at the time of an emergency, managed under very special conditions:

  • The database needs to be stand-alone, separated from all other databases by Chinese walls.
  • The database needs to be managed under a very restricted regime, like the harshest GDPR privacy regime for companies – and with no exceptions to the rule. This means, e.g., that citizen data in this database may be used ONLY within the context of population emergency warnings, are subject to transparency (citizens are informed about what is stored on them and who accesses or has accessed it) and are subject to citizen review.
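As a sketch only – the class and method names are my own assumptions, not an existing system – such a restricted regime could be enforced in code: any read outside the warning context is refused, every access is logged, and citizens can review the accesses to their own record:

```python
from datetime import datetime, timezone

class EmergencyWarningDB:
    """Hypothetical sketch of a stand-alone personalization store:
    records may be read only in the emergency-warning context, every
    access is logged, and citizens can review the log of accesses to
    their own record."""

    def __init__(self):
        self._records = {}     # citizen_id -> personalization data
        self._access_log = []  # (timestamp, accessor, citizen_id)

    def store(self, citizen_id, data):
        self._records[citizen_id] = data

    def read(self, citizen_id, accessor, context):
        # No exceptions to the rule: any other context is refused.
        if context != "emergency-warning":
            raise PermissionError(
                "data may only be used for population emergency warnings")
        self._access_log.append(
            (datetime.now(timezone.utc), accessor, citizen_id))
        return self._records[citizen_id]

    def review(self, citizen_id):
        """Transparency: who accessed this citizen's data, and when."""
        return [(ts, who) for ts, who, cid in self._access_log
                if cid == citizen_id]
```

The design choice is that the access log is append-only and citizen-readable, so the transparency requirement is a property of the store itself rather than a policy promise.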

 

Regarding social media, a way out is to create specialized, channel-specific social media teams for emergency population warnings that function like company customer service centers. These teams actively monitor, debunk, and communicate.

If we focus on one crucial and fashionable element, debunking, the social media teams could use a mixture of technologies:

  • Real-time search for concrete emergency-related content that needs to be verified or debunked;
  • Artificial Intelligence-based network analysis capable of recognizing the viral spread of fake content, because fake content spreads in inherently different patterns than real content [MIT study, published in Science, March 9, 2018]. But such an analysis only starts working after a few hours, which is too late. So maybe the teams should act preventively.
  • Since it can already be analyzed which accounts have a history of spreading fake content, those accounts could, in case of an emergency, be temporarily blocked from spreading any content for the duration of the emergency. Since accounts with many followers act as hubs in content spreading, this temporary disabling should above all target those accounts.
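A minimal sketch of that last idea, with made-up thresholds and field names: flag accounts with a history of fake content, and prioritize the most-followed ones for temporary disabling:

```python
def accounts_to_disable(accounts, fake_history_threshold=3, top_n=100):
    """Hypothetical sketch: during an emergency, select accounts with a
    history of spreading fake content, prioritizing those with the most
    followers, since they act as hubs in content spreading."""
    flagged = [a for a in accounts if a["fake_posts"] >= fake_history_threshold]
    flagged.sort(key=lambda a: a["followers"], reverse=True)
    return [a["id"] for a in flagged[:top_n]]

accounts = [
    {"id": "a", "followers": 1_000_000, "fake_posts": 5},
    {"id": "b", "followers": 200, "fake_posts": 10},
    {"id": "c", "followers": 50_000, "fake_posts": 0},
]
print(accounts_to_disable(accounts, top_n=2))  # ['a', 'b']
```

Note that the history threshold and the ranking are exactly the kind of discretionary parameters that raise the governance question I turn to next.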

 

OK, so now we have these potential ways out. This is where the second problem kicks in: people in governmental institutions. In essence, the question is this: can these people be trusted to apply these and other ways out – which constitute major deviations from normal life – only in emergency situations? And not to cut corners in non-emergency situations, or to broaden the concept of an emergency to such an extent that it encompasses virtually any situation, thus making the exception the rule?

By entrusting special powers to people in governmental institutions, we might end up with a centralized database that renders all citizens’ rights to privacy obsolete while enabling a Chinese-style social engineering system capable of forcing citizens into obedience.

If we look at the track record of people in governmental institutions dealing with the threat of terrorism, we should be in grave doubt about whether to grant those people broadened powers.

 

So here we have it. A major problem for emergency messages during emergencies is constituted by the very mass of citizens they try to warn. There are ways out of this, but they trigger the next problem: people in governmental institutions who are probably too human to deal with superhuman powers.

Which leads me to the following conclusion. Technology is like a superpower for protecting a vulnerable and complicated population. Every upcoming national framework for EU Member States concerning emergency population warning and protection systems should therefore include a mechanism to check and evaluate the mere humans who may use this superpower.