The Warren County Sheriff’s Office says it has “contracted” with a Burlington-based technology company to use advanced artificial intelligence to monitor social media accounts and alert law enforcement to possible threats toward local schools.
We like that schools and law enforcement are looking at alternatives to armed resource officers when it comes to school safety, but we’d like to know more details before going forward. We believe a broader community discussion is needed.
We wonder if there is still a general expectation that social media posts are private. How many of us really understand our Facebook privacy settings?
The company, Social Sentinel, was founded by Gary Margolis, a 30-year veteran of law enforcement and security who had previously served as the chief of police at the University of Vermont.
The company describes itself this way on its website:
- We study and understand the language of harm and violence. We deploy advanced technology and work with subject matter experts to constantly evolve Social Sentinel’s proprietary machine learning methodologies.
- Machine learning (AI) algorithms classify data into alerts for immediate attention and insights for a broader context, leading to greater understanding.
- We built technology that searches worldwide to make local connections beyond a geo-fence.
- We create context by highlighting discussions and trends relevant to your communities right now.
It is a compelling response not only to school violence, but also to the danger posed when someone is contemplating harming themselves.
It’s fighting technology with technology. Social Sentinel says it scans through “a billion posts daily to identify ones related to a school’s safety, security and wellness.” It scans a dozen social media platforms using an ever-evolving “language of harm” library as a baseline of what words to be concerned about.
Naturally, even the most advanced computer systems can’t be 100 percent accurate about intent. A CBS News report last fall pointed to a case in which the post “I gotta kill you all” was flagged. It turned out to be a rap lyric. One flaw of the system is that it produces many false positives, so there is an absolute need for a person to review any post that is flagged.
Here is how it works: When Social Sentinel has a concern about a post, it alerts the Sheriff’s Office, whose officers decide how to proceed and whether they should try to track down the person who posted it.
Because law enforcement is involved, there is a concern that the system could be abused.
Deciding whether a post is harmless, a bad joke or a real threat can be subjective. It is unclear how law enforcement officials might respond as they investigate.
How would a hacked account be handled?
Will officers be given advanced training in how to evaluate social media threats? What will the standards be? And once it becomes widely known that social media posts are regularly reviewed, won’t the system become less effective?

Four years ago, the Lowell Police Department sought to purchase a similar monitoring system and drew objections from the American Civil Liberties Union of Massachusetts’ Technology for Liberty Project.
“People should be able to criticize government in a free society without some cop somewhere writing down everything they say,” director Kade Crockford told the Lowell Sun at the time.

We’re concerned this is a slippery slope, but it is an option that should be explored.