By Alex Alhadeff & Alex Blau

As more and more tasks in our lives move online—from work to shopping to paying bills to streaming sports games or TV—online security becomes that much more critical. Yet when an all-too-familiar red and yellow warning pops up (“Warning! Visiting this site may harm your computer!”), do you close the site or take the risk and carry on?

Unfortunately, security warnings like these are disregarded every day, exposing vital technology to cyber-attacks such as viruses and other malware. But this isn’t merely a technology problem: the knee-jerk dismissal of digital warnings designed to protect computers and data is inherently behavioral. That’s why we’re applying a behavioral lens to the problem when we ask: why do so many people ignore security warnings?

We found that a major reason people ignore online security warnings is a common mental model that warnings like these aren’t worth paying attention to. It’s no surprise people arrived at this conclusion: 81% of the time, the warning is a false alarm, and the website is perfectly safe. But the belief that all warnings are invalid can lead people to click through even when proceeding carries a legitimate risk. Compounding the issue, cybersecurity is simply not a salient problem for most people. Everyone faces competing priorities, from work to family to social life, so it’s difficult to keep the security of our home or office computers top-of-mind, even if the alerts were always accurate. And we know from behavioral science that when something is not salient, it’s easy to overlook.

How do we address behavioral challenges like these to help people better ward off security threats?

First, there should be fewer, more accurate warnings. It’s up to engineers to build algorithms that correctly and consistently identify risk, so that people’s mental models of alerts can shift. Next, warnings should be not only more salient but also clearer and more actionable for users who are not technology experts. Security software designers should develop more effective warnings that identify what the problem is, what the implications are, and what the user should do.
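To make that guidance concrete, here is a minimal sketch, in TypeScript, of how a warning could be structured around those three elements. Every name here is a hypothetical illustration, not any browser’s or vendor’s actual API:

```typescript
// Hypothetical sketch: structuring a warning around problem,
// implication, and action, so non-experts know what to do.
interface WarningAction {
  label: string;
  safe: boolean; // lets the UI visually emphasize the protective choice
}

interface SecurityWarning {
  problem: string;          // what was detected, in plain language
  implication: string;      // what could happen if the user proceeds
  actions: WarningAction[]; // concrete next steps, safest first
}

const exampleWarning: SecurityWarning = {
  problem: "This site tried to install software without your permission.",
  implication: "Your files and passwords could be stolen.",
  actions: [
    { label: "Go back to safety", safe: true },
    { label: "Proceed anyway (not recommended)", safe: false },
  ],
};
```

Separating the three elements in the data model makes it harder to ship a warning that scares users without telling them what to do next.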

But even if those solutions were implemented, we face another barrier: habituation. The more people encounter something, be it a threat, a warning, or a tragedy, the less they respond to it. Though therapists have used habituation to treat phobias for decades, researchers have only recently begun exploring its role in cybersecurity and threat warnings. As expected, they discovered that neural responses to a warning drop dramatically after the second exposure and continue to decline with each subsequent one. Even more concerning, because warnings and notifications across the internet look so similar, a user may already be habituated to a warning he or she has never actually seen before. That means you may close a window without a second thought, assuming you’ve seen it a million times, when it could in fact be something new and important.

Once the alerts’ accuracy and salience are addressed, we can take steps to combat habituation. One potential solution is polymorphic warnings, which jiggle, zoom, and twirl on the screen. Unsurprisingly, they are far more resistant to habituation than static warnings, at least in the short term. A second noteworthy tactic comes from Laura Brandimarte, a professor at the University of Arizona, who suggests that eliciting “visceral reactions” from users is key to getting them to respond. For example, polymorphic warnings could vary their intensity, cadence, or form, or even span different senses. Imagine browsing the Internet when a strange odor seeps from your computer. Though futuristic, this idea may not be far off.
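To illustrate the mechanism, rather than any product’s actual implementation, here is a minimal sketch of a polymorphic warning in TypeScript: each time it is shown, the dialog draws a random animation and accent color so that no two appearances are quite alike. The animation class names and colors are hypothetical, and the CSS animations are assumed to be defined elsewhere:

```typescript
// Hypothetical sketch of a polymorphic warning: vary the dialog's
// presentation on every display so users cannot tune it out.
const ANIMATIONS = ["jiggle", "zoom", "twirl"]; // CSS animation classes, assumed defined elsewhere
const ACCENT_COLORS = ["#c0392b", "#d35400", "#8e44ad"];

function pickRandom<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

function showPolymorphicWarning(message: string): void {
  const dialog = document.createElement("div");
  dialog.setAttribute("role", "alertdialog");
  dialog.textContent = message;
  dialog.className = pickRandom(ANIMATIONS);                      // random motion
  dialog.style.border = `3px solid ${pickRandom(ACCENT_COLORS)}`; // random accent
  document.body.appendChild(dialog);
}

showPolymorphicWarning("Warning! Visiting this site may harm your computer!");
```

Because the variation is drawn at display time, even a user who has dismissed the warning before encounters it in a slightly unfamiliar form, which is exactly what slows habituation.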

As the digital world expands, we need online security that actually helps people. That’s where behavioral science can make a difference. During the development of Windows 95, a facetious placeholder notification captured the all-too-typical exchange between people and their computers. It read: “In order to demonstrate our superior intellect, we will now ask you a question you cannot answer.” To make cybersecurity effective, we need to design systems that, instead of mystifying people, work with their needs, skill levels, and behaviors.