For decades, much of the federal government’s security clearance process has been based on techniques that emerged in the mid-twentieth century.
“It’s very manual,” said Evan Lesser, president of ClearanceJobs, a website that posts job opportunities, news, and advice for positions that require security clearances. “Driving in cars to meet people. It is very outdated and takes a lot of time.”
A federal initiative launched in 2018 called Trusted Workforce 2.0 formally introduced semi-automated analysis of federal employees that takes place in near real time. The program allows the government to use artificial intelligence to subject employees who are seeking or already hold a security clearance to “continuous monitoring and evaluation” – essentially a rolling evaluation that constantly takes in information, raises red flags, and incorporates both self-reporting and human analysis.
“Can we build a system that monitors and continues to monitor someone and is aware of that person’s disposition as it exists in the legal systems and the public records?” said Chris Grijalva, senior technical director at Peraton, a company that focuses on the government side of insider-threat analysis. “And from that idea was born the idea of continuous evaluations.”
Such efforts have been used in more ad hoc ways in government since the 1980s. But the 2018 announcement was intended to modernize government policies, which typically re-evaluate workers only every five or ten years. The shift in policy and practice was driven by, among other things, a backlog of required investigations and the recognition that people and their circumstances change.
“That’s why it’s so compelling to keep people under some kind of constant, ever-evolving surveillance process,” said Martha Louise Deutscher, author of the book “Screening the System: Exposing Security Clearance Dangers.” She added: “Every day you do the credit check, and every day you do the criminal check – and the bank accounts, the marital status – and make sure people don’t get into those circumstances where they become a risk that they were not yesterday.”
The first phase of the program, a transition period toward full implementation, finished in the fall of 2021. In December, the US Government Accountability Office recommended that the effectiveness of the automation be evaluated (though not, you know, continuously).