By Alex Blau & Michael Stern

Imagine you’re a software developer, building an application that secures sensitive data for a company with 100,000 employees. You build a piece of secure technology that’s nearly impossible to hack remotely and pat yourself on the back. The software is uploaded and installed on each computer, and now the organization is safe from internet intruders!

However, following the release and installation of your top-notch security software, an employee steps away from his desk and slips out a back door to take a private phone call. He leaves the door ajar to avoid locking himself out, and another person outside thanks him for saving her a trip around the building and enters through the open back door. Inside, she walks up to an unattended computer, inserts a malicious USB drive and easily steals the sensitive company records. What happened? You built good technology, but by focusing solely on digital threats from unknown locations, you forgot about the literal back door.

This (quite) simplified hypothetical example helps illustrate people’s mental models of “security.” We all use mental models to categorize complex, abstract things and make them easier to comprehend and work with in our minds. They are very useful, but as the scenario above makes clear, mental models can also be limiting.

When it comes to cybersecurity, most people think of widely covered stories: leaked celebrity photos, the 2014 cyber attack on Sony, and anything to do with WikiLeaks. In short, we think of remote digital threats, anonymous villains, and vulnerabilities in the technology, not human error. Yet cybersecurity vulnerabilities extend into the physical world and are very much connected to human action: unlocked doors and cabinets, misplaced computers, and neglected work areas are just a few of the ways people expose their technology to a breach. In fact, research from Verizon suggests that 39% of physical thefts of laptops, hard drives, and other data storage devices occur in the victims’ own work areas. This is all preventable, but not if it doesn’t fit into people’s mental models of cybersecurity. If no one is thinking about the human element of cybersecurity, no one will try to solve it.

We want to shine a light on these blind spots by reframing mental models so they better represent the security realities the public faces. For instance, if cybersecurity “fixes” are understood to include physical interventions and the human role in technology, we will have more ways to solve costly problems. Consider one example.

When defense personnel were about to convene to discuss classified information, many were concerned about the security risks posed by the cameras on their government-issued phones. Ahead of the convening, security personnel discussed a number of costly and complicated ways they might remotely disable the cameras during the event. Then they realized a simpler, cheaper alternative existed: government officials have little use for the cameras on their phones, so why not just break them? The security team simply drilled holes through the cameras. By reframing what first seemed to be a complex technological problem and applying a physical solution, the security team was able to remove the risk more easily and avoid a costly alternative.

The importance of reframing mental models in cybersecurity has become clear from our research into human behavior, but this is just one of the myriad behavioral challenges making our information systems less secure. As part of our ongoing work in cybersecurity, generously supported by the William and Flora Hewlett Foundation, we’ll be exploring this and other behavioral insights in a multi-part blog series, where we’ll attempt to answer questions such as: What makes updating my computer so hard? How did I become a phish? When are cybersecurity defaults good and bad for me? And why are stunt hacks good for society?

Tune in next time, and, as always, remember to lock the back door.