A data breach at financial company Capital One last week exposed the personal information of about 106 million of its customers and applicants. The company is the latest addition to a long and fast-growing list of cyberattack victims.
Cyber attackers are relentless in breaching defences despite the billions of dollars spent on protecting organisations’ networks and crown jewels. The truth is that organisations find it tough to detect when large amounts of information leave their networks.
Security perimeters have expanded and attack surfaces have become amorphous with the emergence of cloud services, mobile devices and growing inter-connectivity. Traditional security measures that block anything resembling data theft no longer cut it.
A new approach is needed to understand the context behind a user’s actions. For example, an organisation may restrict the use of USB flash drives, but some staff may routinely rely on portable storage devices to do their jobs.
Such users will look for a workaround, creating more headaches for security administrators and making it difficult to tell real threats apart from staff simply trying to get work done.
Hence, understanding the context behind a user’s actions is becoming an important layer of security. This concept lies at the intersection of people and data, said Alvin Rodrigues, Forcepoint’s senior director and security strategist for Asia-Pacific and Japan.
Describing this as a human-centric approach to better manage risk, he said it focuses on how, when and why people use and access information, placing users’ actions into a larger context based on their normal activities.
One way to do this is via risk scores, which establish baselines for normal and anomalous behaviour, Rodrigues said in an interview with Techgoondu last week.
Depending on their access levels, different people have different risk scores, which can be tracked automatically. When an event occurs, such as data being downloaded from a server to a USB drive, the security system automatically notes the user’s risk score and a proportionate response kicks in.
Based on preset variables, this response ranges from making the downloaded data read-only on an office computer to triggering encryption.
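To make the idea concrete, here is a minimal sketch of how such a graduated, score-based response might work. All names, thresholds and actions here are illustrative assumptions for this article, not Forcepoint’s actual implementation.

```python
# Hypothetical sketch of a risk-score-based proportionate response.
# Names, thresholds and actions are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"                  # low risk: let the copy proceed
    READ_ONLY = "read_only"          # medium risk: make the copy read-only
    ENCRYPT = "encrypt"              # higher risk: force encryption of the copy
    BLOCK_AND_ALERT = "block_alert"  # highest risk: block and notify security


@dataclass
class UsbDownloadEvent:
    user: str
    bytes_copied: int
    baseline_daily_bytes: int  # the user's established "normal" daily volume


def risk_score(event: UsbDownloadEvent) -> float:
    """Score how far this event deviates from the user's baseline.

    A simple ratio against the baseline, capped at 1.0; a real system
    would combine many behavioural signals (time of day, data
    sensitivity, access level) into the score.
    """
    if event.baseline_daily_bytes == 0:
        return 1.0  # no baseline yet: treat the event as fully anomalous
    return min(event.bytes_copied / event.baseline_daily_bytes, 1.0)


def proportionate_response(score: float) -> Action:
    """Map the risk score to a graduated, preset response."""
    if score < 0.25:
        return Action.ALLOW
    if score < 0.5:
        return Action.READ_ONLY
    if score < 0.8:
        return Action.ENCRYPT
    return Action.BLOCK_AND_ALERT


# Example: a user copying far more than their normal daily volume.
event = UsbDownloadEvent(user="alice", bytes_copied=8_000_000_000,
                         baseline_daily_bytes=500_000_000)
score = risk_score(event)
print(score, proportionate_response(score))  # 1.0 Action.BLOCK_AND_ALERT
```

The point of the sketch is the graduated mapping: rather than blocking every USB copy outright, the response escalates only as behaviour drifts further from the user’s own baseline.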
One issue to note in the human-centric approach, however, is cognitive bias. David Coffey, Forcepoint’s senior vice-president of engineering, explained: “People’s decisions, behaviours, and experiences are influenced by the experiences of the past and the present.”
“This top-of-mind recall is a common human decision-making tool which can lead to faulty conclusions, resulting in even greater threat to enterprise data security,” he added.
There are different types of cognitive biases that can colour decision making, said Coffey, who was recently in Singapore.
These biases can colour security practitioners’ understanding of the cyber landscape, their perception of risks and even their perceptions of each other, he added.
One such bias is aggregate bias, or stereotyping. Older users, for example, may be unfairly considered riskier because they are assumed to be less tech-savvy.
However, Rodrigues said various studies have shown that young users, including digital natives, tend to share passwords.
“In aggregate bias, a security practitioner can focus on the wrong behaviour as they look for answers to support their assumptions. The danger is that this bias delays identification of the true source of security issues,” he pointed out.
Another is anchor bias, which refers to a lasting first impression, or latching on to a specific set of features, such as numbers, early in the decision-making process. A security analyst may fixate on one feature and miss or discount other influential information related to the threat.
Availability bias relates to memory: the more frequently a person encounters specific types of information, the faster the recall and the more readily accessible that information is in their memory.
So a CEO or security practitioner who frequently comes across news articles on ransomware may develop a skewed sense of what is risky, which in turn shapes his approach towards security.
Yet another bias is fundamental attribution error, where blame is placed on the person rather than the circumstances: a mistake is attributed to other people’s failings. A person who trips, for instance, is seen as clumsy, even though there may be other causes in the wider context.
Rodrigues said biases affect resource allocation and delay threat analysis, both of which endanger enterprise security. “No one is immune to these biases, but we must be aware of them, admit they play a part in our thinking process. With this awareness, we can make better informed decisions.”
Overcoming these biases requires creative thinking and studying the data closely to more accurately represent the state of the threat, he added.