
Your Bosses Could Have a File on You, and They May Misinterpret It

Are you an “insider threat”?

The company you work for may want to know. Some corporate employers fear that employees could leak information, allow access to confidential files, contact clients inappropriately or, in the extreme, bring a gun to the office.

To address these fears, some companies subject employees to semi-automated, near-constant assessments of perceived trustworthiness, at times drawing on tools from psychology and other behavioral sciences. Many employers are now concerned about retaining workers in the face of what has been called the Great Resignation. But despite worries that workers might reasonably be put off by the feeling that technology and surveillance are invading yet another sphere of their lives, employers want to know which clock-punchers may harm their organizations.

The language around this sort of worker-watching often mirrors the language used within the government, where public agencies assess workers who receive security clearances to handle sensitive information related to intelligence collection or national security. Organizations that produce monitoring software and behavioral analysis for the feds may also offer conceptually similar tools to private companies, either independently or packaged with broader cybersecurity products.

“When you think about insider risk in general, it probably emerges out of the government, and then makes its way into the private sector and commercial industry,” said Tom Miller, chief executive of Clearforce, which sells insider threat services to private clients.

Some private enterprises may be attracted to scrutinizing employees much as an intelligence agency might keep tabs on analysts and spies, although employers don’t have access to the same data sources. Spokespeople for some of the companies that provide these services say their clients do not wish to be named, but they include Fortune 500 companies and employers in sectors such as critical infrastructure, financial services, transportation, health care and entertainment. It’s possible you might be working for one now.

Software can watch for suspicious computer behavior or it can dig into an employee’s credit reports, arrest records and marital-status updates. It can check to see if Cheryl is downloading bulk cloud data or run a sentiment analysis on Tom’s emails to see if he’s getting testier over time. Analysis of this data, say the companies that monitor insider risk, can point to potential problems in the workplace.
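
Stripped of vendor specifics, the email-trend idea amounts to scoring each message and watching the score drift over time. The sketch below is a minimal illustration, not any vendor’s product: the word list, the threshold and the function names are invented for the example, and a real system would use a trained language model rather than keyword counting.

```python
from datetime import date
from statistics import linear_regression  # Python 3.10+

# Toy negative-word list for illustration; a real system would use a trained
# sentiment model, not keyword counting.
NEGATIVE_WORDS = {"unfair", "angry", "ignored", "blame", "useless", "furious"}

def negativity(message: str) -> float:
    """Fraction of a message's words that appear on the negative list."""
    words = message.lower().split()
    return sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / max(len(words), 1)

def is_getting_testier(dated_messages: list[tuple[date, str]],
                       slope_threshold: float = 0.01) -> bool:
    """Flag a sender whose negativity score trends upward over time."""
    dated_messages = sorted(dated_messages)
    days = [d.toordinal() for d, _ in dated_messages]
    scores = [negativity(text) for _, text in dated_messages]
    slope, _ = linear_regression(days, scores)
    return slope > slope_threshold  # rising negativity per day; illustrative cutoff
```

Even this toy version makes the privacy trade-off concrete: to compute the trend at all, every message a person sends has to be scored.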

“There is so much technology out there that employers are experimenting with or investing in,” said Edgar Ndjatou, the executive director of Workplace Fairness, a nonprofit organization. He anticipates that, at some point, there will be a reckoning regarding that technology’s unintended consequences. “Certainly what we’re hoping for, in terms of the workers’ rights community, are more checks in terms of what employers can and can’t do — not only in the workplace but at home.”

But the interest in anticipating insider threats in the private sector raises ethical questions about what level of monitoring nongovernmental employees should be subject to. And there’s another, related issue with insider vetting: It’s not always based on settled science.


For decades, much of the federal government’s security-clearance-granting process has relied on techniques that emerged in the mid-twentieth century.

“It’s very manual,” said Evan Lesser, president of ClearanceJobs, a website posting jobs, news and advice for positions that involve security clearances. “Driving around in cars to meet people. It’s very antiquated and takes up a lot of time.”

A federal initiative that started in 2018 called Trusted Workforce 2.0 formally introduced semi-automated analysis of federal employees that occurs in close to real time. This program will let the government use artificial intelligence to subject employees who are seeking or already have security clearances to “continuous vetting and evaluation” — basically, rolling evaluation that takes in information constantly, throws up red flags and includes self-reporting and human analysis.

“Can we build a system that checks on somebody and keeps checking on them and is aware of that person’s disposition as they exist in the legal systems and the public record systems on a continuous basis?” said Chris Grijalva, senior technical director at Peraton, a company that focuses on the government side of insider analysis. “And out of that idea was born the notion of continuous evaluations.”

Such efforts had been used in government in more ad hoc ways since the 1980s. But the 2018 announcement aimed to modernize government policies, which typically re-evaluated employees every five or 10 years. The motivation for the adjustment in policy and practice was, in part, the backlog of required investigations and the idea that circumstances, and people, change.

“That’s why it’s so compelling to keep people under some kind of a constant, ever-evolving surveillance process,” said Martha Louise Deutscher, author of the book “Screening the System: Exposing Security Clearance Dangers.” She added, “Every day you’ll run the credit check, and every day you’ll run the criminal check — and the banking accounts, the marital status — and make sure that people don’t run into those circumstances where they will become a risk if they weren’t yesterday.”
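
In software terms, continuous evaluation is less exotic than it sounds: pull each record source on a schedule, compare it against a stored baseline and route anything that changed to a human. The sketch below is a generic illustration of that loop under those assumptions; the data sources, the RecordCheck type and route_to_analyst are hypothetical stand-ins, not any agency’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class RecordCheck:
    source: str      # e.g., "credit", "criminal", "public records"
    triggered: bool  # did today's pull differ from the stored baseline?
    detail: str

def route_to_analyst(employee_id: str, flags: list[RecordCheck]) -> None:
    # Stand-in for handing flags to a human reviewer and, in many programs,
    # prompting the employee to self-report.
    print(f"{employee_id}: review {[f.source for f in flags]}")

def daily_evaluation(employee_id: str, checks: list[RecordCheck]) -> list[RecordCheck]:
    """Run once per day: return the checks that changed, and escalate them.

    A triggered check is a prompt for human follow-up, not an automatic
    adverse action.
    """
    flags = [c for c in checks if c.triggered]
    if flags:
        route_to_analyst(employee_id, flags)
    return flags
```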

The program’s first phase, a transition period before full implementation, finished in fall 2021. In December, the U.S. Government Accountability Office recommended that the automation’s effectiveness be evaluated (though not, you know, continuously).

But corporations are moving forward with their own software-enhanced surveillance. While private-sector workers may not be subjected to the rigors of a 136-page clearance form, private companies help build these “continuous vetting” technologies for the federal government, said Lindy Kyzer of ClearanceJobs. Then, she added, “Any solution would have private-sector applications.”

A 2019 RAND Corporation study on governmental continuous evaluation highlighted three large corporations that provided information that assisted with identifying potential government-insider threats: Thomson Reuters Special Services, LexisNexis and TransUnion. But companies that are not as well-known, like Forcepoint, Clearforce, Peraton and Endera, also offer semi-automated insider threat analysis services, and some seek private companies as customers.


“People are starting to understand that the insider threat is a business problem and should be handled accordingly,” said Mr. Grijalva.

And there may be few limits on a company’s ability to monitor its employees.

“The law gives employers a level of freedom — a pretty high level of freedom — to do surveillance, not just in the workplace but outside of the workplace,” said Mr. Ndjatou. The forthrightness with which employees are informed of such monitoring varies.

There are a number of behavioral frameworks for assessing insider threats, but one of the most well-known among experts is called “the critical pathway.” It lays out how “personal predispositions, stressors, concerning behaviors and problematic organizational responses” can cumulatively lead to a “hostile act,” according to a paper published in the journal Studies in Intelligence in 2015 that widely publicized the concept.

Eric Shaw, a clinical psychologist and co-author of the 2015 study, acknowledged that the model has weaknesses, including the lack of an ideal, full control group. It also describes risk factors but does not perfectly predict who will go on to present a threat.

“What about cases where they have all these risk indicators, but they never go on to become an insider risk?” Dr. Shaw said.

Dr. Shaw’s colleague Edward Stroz, a former FBI special agent, has helped apply critical-path principles to the analysis of text communications like emails and messages between employees. Mr. Stroz founded the cyber forensics firm Stroz Friedberg, where Dr. Shaw was once a consulting psychologist. Both are now part of the Insider Risk Group, of which Dr. Shaw is chief executive. The linguistic software package they offer, called SCOUT, uses psycholinguistic analysis to seek flags that, among other things, indicate feelings of disgruntlement, like victimization, anger and blame.
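
SCOUT’s models are proprietary, but the general shape of category-based psycholinguistic screening can be sketched with plain keyword lexicons. Everything below, including the category word lists and the escalation rule, is invented for illustration and is far cruder than what Dr. Shaw and Mr. Stroz describe.

```python
# Illustrative lexicons only; real psycholinguistic models are statistical,
# not simple word lists.
CATEGORIES = {
    "victimization": {"unfair", "singled out", "targeted", "overlooked"},
    "anger": {"furious", "fed up", "outraged", "sick of"},
    "blame": {"their fault", "because of them", "management ruined"},
}

def disgruntlement_flags(message: str) -> set[str]:
    """Return the disgruntlement categories a message touches."""
    text = message.lower()
    return {name for name, terms in CATEGORIES.items()
            if any(term in text for term in terms)}

def needs_human_review(message: str, min_categories: int = 2) -> bool:
    """Escalate only messages that hit several categories, so a human
    reviewer sees a small fraction of all mail."""
    return len(disgruntlement_flags(message)) >= min_categories
```

An escalation rule of this kind is what makes the privacy argument possible: only the small set of messages that clear it would ever reach a human reviewer.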

“The language changes in subtle ways that you’re not aware of,” Mr. Stroz said.

In the latest published test of the software, run on 50 million messages from around 69,000 senders, filtering surfaced 383 messages from 137 senders for review by a trained clinician. Mr. Stroz said the small number indicates that this system could protect individual privacy because only the concerning messages would ever be seen by a human being.

“The fractional amount of email identified for this should give people a lot of comfort,” he said.

In the experiment, the software showed about a one-third false-positive rate: tagging someone as a threat when they didn’t seem to be one.
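
To put those figures in rough perspective: 383 messages out of 50 million is about one escalated message per 130,000 sent, and if the roughly one-third false-positive rate held at the sender level, something on the order of 45 of the 137 flagged senders would have been reviewed without turning out to be a concern.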

Mr. Stroz said there are ways to implement such monitoring ethically — like being transparent, introducing the idea in stages and keeping the analysis locked away unless a problem pops up.

“We need to ask better questions about what we do to protect our society, our institutions,” he said, “not just say, ‘It’s Big Brother; get out of here.’”

David Luckey, one of the authors of the RAND report on continuous evaluation from 2019, supports the idea of insider-threat programs, in government or the private sector, and of taking behavioral psychology into account, even if such indicators are not foolproof.

“Just because it’s difficult doesn’t mean we shouldn’t consider it,” Mr. Luckey said. “We just need to figure out how to consider it while still protecting individuals’ privacy and those sorts of things.”


That’s a work very much in progress. Mr. Luckey’s report found that “there are limited behavioral or technical data available to develop and deploy an effective and predictive” continuous evaluation tool.

There’s not enough information, in other words, to construct algorithms about trustworthiness from the ground up. And that would hold in either the private or the public sector. Part of the reason is a good one: privacy protection. “We’re not living — not yet at least and hopefully not ever — in a ‘Minority Report’ movie,” Mr. Luckey said. Contrary to that film’s plot, the aim of many monitoring and behavioral analytics programs is to offer interventions before something bad happens — not punish people in advance.

“In an ideal world, any flag would be followed up with tools and resources to help an employee, whether it’s alcohol counseling or an employee-resource group for family issues,” said Ms. Kyzer of ClearanceJobs.

Even if all that dystopian data did exist, it would still be tricky to draw individual — rather than simply aggregate — conclusions about which behavioral indicators potentially presaged ill actions. “The behavioral sciences aren’t nearly as clear-cut as the physical sciences,” Mr. Luckey said.

On top of that squishiness, even if one could collect all data on so-called bad actors, it wouldn’t amount to much. “The numbers of insider threats are really, really small,” Mr. Luckey said. “And to try and model and understand these very small occurrences or incidents is very challenging.” Once you begin tagging behavioral traits to small-number statistics, you wade into scientific hot water.

“You’re starting to get into some very, very iffy math,” he said.

Then there’s the iffiness of personal factors, something that concerns Margaret Cunningham, a behavioral scientist who, until recently, worked at Forcepoint, one of the firms that provides insider threat analysis to private companies.

Incorporating factors such as chronic health conditions, mental health issues and family history into insider threat behavioral analytics can, if used improperly, lead to models that call up that old phrase: “garbage in, garbage out.”

“Depending too heavily on personal factors identified using software solutions is a mistake, as we are unable to determine how much they influence future likelihood of engaging in malicious behaviors,” Dr. Cunningham said.

Implementing such systems the wrong way can also degrade the employee-employer relationship. Part of skirting such Big-Brother territory is avoiding injudicious surveillance: not simply ingesting all data that’s available and legal, regardless of its proven utility.

As an example of this, Raj Ananthanpillai, chief executive at Endera, imagines running a trucking company. “I could care less about some of these financial stress indicators, because that’s part of the blue-collar work force sometimes,” he said.

“But I would want to know if they had a DUI,” he said. “Absolutely.”

If workers know, or find out, that surveillance beyond what’s necessary or useful is happening, it can turn an employer into an antagonist.

“Frankly, it builds a lot of resentment,” Dr. Cunningham said. “By doing that, you’re not actually helping your insider-threat case. You’re making it worse.”

Private companies could take a tip from the federal government about transparency, according to Ms. Kyzer. “All cleared employees sign several authorizations for release of information when they apply for a security clearance,” she said. “Standard employee onboarding today should also include some kind of similar acknowledgment of how their devices, communication or financial and criminal records are tracked.”

Dr. Cunningham said that Forcepoint tried to take a minimal, focused approach with private companies. Its work for clients that have “the need and justification for that level of analysis” considers factors like employee financial distress or disgruntlement, but in many cases it prefers to focus on deviations from normal work behavior.

That could include rule-breaking behavior that appears unusual in context: for instance, an uptick in screenshotting confidential documents during internal Zoom meetings or setting up a work email to auto-forward everything to a private Yahoo account.
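
Deviation-from-baseline checks of this kind can be surprisingly simple. The sketch below is a generic illustration, not Forcepoint’s method: it compares today’s count of some monitored action against the same employee’s recent history and flags a sharp spike; the threshold and the example numbers are invented.

```python
from statistics import mean, stdev

def is_unusual(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's count of a monitored action (screenshots of confidential
    files, auto-forwarded messages, bulk downloads) if it deviates sharply
    from the employee's own recent baseline."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    baseline_mean = mean(history)
    baseline_sd = stdev(history)
    if baseline_sd == 0:
        return today > baseline_mean  # any increase over a perfectly flat baseline
    return (today - baseline_mean) / baseline_sd > z_threshold

# A sudden spike after weeks of near-zero activity gets flagged.
print(is_unusual([0, 1, 0, 2, 0, 1, 0], today=14))  # True
```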

The emphasis, she said, is on what the employee is doing on the job with work-owned equipment, not on building up an understanding of an employee’s personal life or using behavioral indicators that are difficult to measure.

“I have focused very heavily on identifying indicators that you can actually measure, versus those that require a lot of interpretation,” Dr. Cunningham said. “Especially those indicators that require interpretation by expert psychologists or expert so-and-sos. Because I find that it’s a little bit too dangerous, and I don’t know that it’s always ethical.”
