The internet is a confusing place. It’s not always clear if that weird Twitter account spamming your mentions is a well-intentioned idiot or a malicious actor – unless you work for the social media giant itself, that is.
Speaking to a group of security researchers and journalists at the annual RSA conference in San Francisco, Twitter vice president of trust and safety Del Harvey explained that the company is remarkably good at distinguishing between accounts intentionally spreading misinformation and those just plain ignorant of basic facts. Essentially, Harvey explained, Twitter has a pretty good idea whether you believe the bullshit you’re spreading.
Harvey was speaking in response to a question from moderator Ted Schlein. Schlein, a general partner at Kleiner Perkins, had asked the assembled panel — which included Facebook head of cybersecurity Nathaniel Gleicher — about the difficulties of moderating content on the guests’ respective platforms.
“Is there a difference between someone who intentionally uses false information for manipulative purposes versus someone who truly believes [what] they’re propagating, but what they’re propagating is just false?” asked Schlein. “Do you look at that as two different scenarios? Those are behaviors that actually kind of look the same.”
Harvey jumped in to let Schlein know that he’d be surprised to learn what things actually look like from Twitter’s perspective.
“I would actually say that if you really go into the behavior they don’t actually look that similar,” she explained.
Essentially, she continued, there are various tells that Twitter’s security team knows to look for to determine — at least in its mind — the motivations behind certain posts.
“Because, if you’re talking about disinformation, you are deliberately and knowingly spreading information that is untrue,” Harvey continued. “Then, in order to do that, there are certain things that you’re likely going to be doing. You’re going to be targeting certain networks — you may not actually have a natural home within that network — so you’re going to have to try to either work your way into it through some sort of social engineering, or you’re going to try to amplify your content through the use of other accounts.”
This, according to Harvey, is a far cry from some random chumps posting anti-vaccination content in their spare time. She described our theoretical guileless disseminator of harmful garbage as “somebody who is perhaps more a native of that network, who comes across and believes that information and then circulates it out to their network.”
Other warning signs include how other people on the platform initially engage with the tweeted falsehood. “One of them looks very different in terms of even how others initially respond to it, and how it sort of enters into the conversation,” Harvey concluded.
Twitter execs, of course, have long spoken about identifying and removing “coordinated inauthentic behavior,” and this panel discussion did not make clear how much of an exact science the process really is (or isn’t). But, at least according to Harvey, the company is pretty good at distinguishing between accounts intentionally screwing with us and the ones that just don’t know any better.
If only the larger American public were equally savvy.