Keeping secrets from insiders likely to turn on you
Paranoia aside, is it possible to tell when an employee, contractor or business partner is brewing an attack? Photo: Louise Kennerley
Defending against outside hackers is challenging enough, but predicting when trusted insiders will turn against their employers is tougher still. Even so, experts say insider threats can be contained, and doing so does not require a paranoid approach to data security.
Organisations need to trust employees, yet any number of events affecting their lives could impact their trustworthiness. At worst, an event could trigger a decision to abuse privileged access to IT systems in order to steal trade secrets, damage IT systems or manipulate data.
The problem for most victims, according to Nick Klein, chief executive of Australian computer forensics firm Klein & Co, is that they don’t know about the attack until well after it happens.
“Most incidents of insider threat are not discovered proactively,” Klein tells IT Pro. The most common threat, according to Klein, is theft of commercial information upon an employee’s departure and most victims only discover the loss after the insider joins a rival or starts their own company.
So is it possible to tell when an employee, contractor or business partner is brewing an attack? Are there tell-tale signs of imminent betrayal that can be used to profile an inside threat actor and root them out before any damage is done?
In the wake of WikiLeaks’ exposure of classified US documents in 2010, a US government memo directed agencies to assess staff “trustworthiness”. It asked whether agencies used a psychiatrist or sociologist to measure the “relative happiness” of staff as a gauge of trustworthiness, or “despondence and grumpiness” as an indicator that trustworthiness was waning.
The US military’s Defense Advanced Research Projects Agency (DARPA) also had a crack at pre-emptive insider threat detection after WikiLeaks, under a program called Anomaly Detection at Multiple Scales (ADAMS), which sought to pick up the early trails of evidence that often go unnoticed until after the event.
And the FBI has explored dozens of “psychosocial indicators” in its search for early detection methods, including whether staff display signs of being disgruntled, or are suffering emotional vulnerabilities, ego problems, or relationship and financial problems. While the FBI’s research revealed stronger correlations between some indicators and inside threat actors, by its own account the science behind psychosocial detection is still immature.
Given the recent exposure of the National Security Agency’s (NSA) PRISM surveillance program by its former IT contractor Edward Snowden, the answer would seem to be no. If the NSA can’t stop insiders, who can?
Not every attack can be prevented, whether from the inside or outside, but organisations can minimise insider threats by becoming familiar with them, according to Randy Trzeciak, a senior member of the computer emergency response team (CERT) Program at Carnegie Mellon University’s Software Engineering Institute (SEI).
Trzeciak and fellow CERT members have collected more than 850 known insider threat cases since 2001 for the US government-funded program, which is tasked with describing insider threats and devising strategies to detect and prevent suspicious insider network activity.
“Even after 12 years of research, we don’t have one indicator that someone is moving on to commit a harmful act to an organisation,” Trzeciak tells IT Pro.
A 2012 study by the CERT program of 80 prosecuted insider fraud cases in the US banking sector found that personal and financial struggles led some subjects to commit fraud. Yet there was no single common event, such as a divorce, personal bankruptcy or a change to work assignment, that triggered the fraud cases. Had there been one, it might have been possible to prevent fraud that otherwise took, on average, 32 months to discover.
Identifying insider threats as they emerge, however, is still possible, although it is not an exact science and usually requires some old-fashioned detective work, such as checking physical security systems for access to areas that house critical systems, and building relationships with the human resources department. Combined with access logs, events that affect staff, such as downsizing or being passed over for a promotion, can provide invaluable insights.
“All we are saying is that if organisations can narrow down the search space a little bit (give them a window of opportunity of who and when might be more likely in a theft of IP incident), then you ask the operators who are looking at your data to look at these people a little more closely because they might be more likely to harm your assets,” says Trzeciak.
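Trzeciak’s idea of narrowing the search space could, in principle, be sketched as a simple correlation between HR events and access-log baselines. The sketch below is a hypothetical illustration only — the function name, data shapes, event types and threshold are assumptions, not CERT’s actual method.

```python
from datetime import date

def flag_elevated_access(daily_counts, hr_events, ratio=3.0):
    """Flag users whose average daily access count after an HR event
    (e.g. a missed promotion) is at least `ratio` times their baseline.

    daily_counts: {user: [(date, count), ...]}
    hr_events:    {user: event_date}
    """
    flagged = []
    for user, event_day in hr_events.items():
        counts = daily_counts.get(user, [])
        before = [c for d, c in counts if d < event_day]
        after = [c for d, c in counts if d >= event_day]
        if not before or not after:
            continue  # not enough history on either side of the event
        baseline = sum(before) / len(before)
        recent = sum(after) / len(after)
        if baseline > 0 and recent / baseline >= ratio:
            flagged.append(user)
    return flagged

# Toy data: "alice" averages 10 accesses a day, then jumps to 80 after
# her HR event; "bob" stays flat at 12 throughout.
logs = {
    "alice": [(date(2013, 7, d), 10) for d in range(1, 8)]
             + [(date(2013, 7, d), 80) for d in range(8, 12)],
    "bob":   [(date(2013, 7, d), 12) for d in range(1, 12)],
}
events = {"alice": date(2013, 7, 8), "bob": date(2013, 7, 8)}
print(flag_elevated_access(logs, events))  # → ['alice']
```

The point of such a filter is not to accuse anyone, but to shorten the list of accounts that human analysts look at more closely, which is exactly the “window of opportunity” Trzeciak describes.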
Insider threat programs should also avoid assuming that everyone is a threat. Besides potentially creating an environment of fear, implementing “complete security” is impractical and would slow the organisation down.
“Not everyone is a threat to everything,” says Trzeciak. “If you can prioritise the most critical assets and apply the most protection at those asset levels, then a risk benefit cost analysis can be done for the rest of the assets.”
Identifying critical assets can be a tricky task, according to Klein. “Don't just assume you know where your critical data is. Look for it across your environment, as it usually turns up in unexpected places.”