“BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” How would you respond to such a message?
It sounds like the plot of a disaster movie. However, people living in and visiting the Aloha State had to face the real possibility of impending doom when they received exactly that message on January 13, 2018, at 8:07 AM local time.
The alert was sent in error by an employee of the Hawaii Emergency Management Agency (EMA) during a routine drill and was retracted 38 minutes later. Of course, there were no incoming missiles. Yet, for those long 38 minutes, the people of Hawaii genuinely thought they were under attack – and, judging by their tweets, it was terrifying. The Centers for Disease Control and Prevention (CDC) wanted to understand the impact of the warning on the general public and, this being 2019, turned to Twitter as their data source.
Using search terms such as “missile Hawaii”, “ballistic”, “shelter”, “drill”, “threat”, “warning”, and “alarm”, the researchers first collected tweets posted during the panic. Retweets and quote tweets were excluded, leaving a total of 14,530 tweets, which were then divided into two sets – those posted before the retraction (8:07-8:45 am) and those posted after (8:46-9:24 am).
In the pre-retraction tweets, the team identified four main themes. The first is what they call “information processing”, which covers anything related to mentally processing the warning – for example, one read, “What’s going on… is there a warning for a ballistic missile coming to Hawaii? [expletive deleted]”. The second is what they refer to as “information sharing”, that is, any attempt to pass the warning along. These tweets often include other people’s Twitter handles – for example, “iPhone received a warning about a ballistic [sic] missile inbound to Hawaii. Did not say a drill.”
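The researchers don’t describe their tooling, but the collection and splitting step described above might look roughly like the following Python sketch. The tweet fields (“text”, “created_at”, “is_retweet”, “is_quote”) and the function names are assumptions made for illustration, not the study’s actual pipeline.

```python
from datetime import datetime, timezone, timedelta

# Hawaii-Aleutian Standard Time (UTC-10); the study's time windows are local.
HST = timezone(timedelta(hours=-10))

# Search terms listed in the article (lower-cased for matching).
SEARCH_TERMS = ["missile hawaii", "ballistic", "shelter", "drill",
                "threat", "warning", "alarm"]

ALERT_SENT = datetime(2018, 1, 13, 8, 7, tzinfo=HST)    # false alert issued
RETRACTION = datetime(2018, 1, 13, 8, 46, tzinfo=HST)   # correction sent
WINDOW_END = datetime(2018, 1, 13, 9, 25, tzinfo=HST)   # end of post-retraction window

def matches_terms(text: str) -> bool:
    """True if the tweet text contains any of the search terms."""
    lowered = text.lower()
    return any(term in lowered for term in SEARCH_TERMS)

def split_tweets(tweets):
    """Drop retweets/quote tweets and split the rest into
    pre-retraction (8:07-8:45 am) and post-retraction (8:46-9:24 am) sets.

    Each tweet is assumed to be a dict with hypothetical keys
    'text', 'created_at' (timezone-aware datetime), 'is_retweet',
    and 'is_quote'; the study does not specify its data format.
    """
    pre, post = [], []
    for tw in tweets:
        if tw["is_retweet"] or tw["is_quote"]:
            continue  # the study excluded retweets and quote tweets
        if not matches_terms(tw["text"]):
            continue
        ts = tw["created_at"].astimezone(HST)
        if ALERT_SENT <= ts < RETRACTION:
            pre.append(tw)
        elif RETRACTION <= ts < WINDOW_END:
            post.append(tw)
    return pre, post
```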
Another key theme was “authentication”, which the researchers describe as attempts to verify whether the warning was legitimate, for example: “Is this missile threat real?” The final theme is what the researchers call “emotional response”. This includes expressions of shock, fear, terror, or panic – for example, “Guys, there’s a missile threat here at the moment. I love you all and I’m scared as [expletive deleted].”
Once the alert was retracted, the researchers observed three different themes: “condemnation”, “insufficient knowledge to act”, and “distrust of authority”. These were (unsurprisingly) “fundamentally different” from the themes in the tweets posted while people believed they were under attack, reflecting a response to the false information rather than to the threat itself. “Condemnation” involves criticizing and blaming the emergency warning response. In particular, people were upset about how long it took to correct the mistake. One read, “How do you ‘accidentally’ send an entire [expletive deleted] emergency alert saying a missile is inbound to Hawaii and to take cover. And 30 minutes to correct?!?”
“Insufficient knowledge to act” refers to posts revealing that people did not know how to respond. For example, “Can you imagine waking up to a warning that says there’s a missile on the way and to take shelter, like bruh. What even is shelter from a missile? It might as well have said [expletive deleted] ‘missile on the way. Good luck.’”
Finally, “distrust of authority” includes posts that (you guessed it) expressed distrust of the authorities following the spectacular error. A perfect example: “And now, if there should be a threat of another ballistic missile, how can we believe it, knowing that the last one was a serious mistake??”
The team does acknowledge certain limitations. For example, it can be difficult to accurately assess the sincerity and tone of written statements. But they are hopeful that understanding the public’s response to the alert will help improve the way crisis warnings are developed and disseminated in the future.