
This blog shares its title with a project I completed last year for my Information Policy course. It remains one of the most interesting projects I’ve ever done, though its subject matter is anything but bright and cheery. This project asked us to examine an instance of an information technology gone wrong, and with our current week’s session on ethical use of social media, I see no better opportunity to discuss it.
The issue of privacy on social media looms perennially large. Is it even possible to remain truly private once we enter the digital world, full of innumerable and rapidly-evolving connections? All it takes is one post or image shared out of context for a misplaced narrative to form. Couple that with the way social media often prioritizes clicks and views over truth, and you've got a recipe for disaster. This is precisely what happened on Reddit following the Boston Marathon Bombings in 2013.
A quick overview: On April 15, 2013, the 117th annual Boston Marathon was tragically interrupted when a pair of pressure cooker bombs detonated near the finish line. The explosions killed three people and injured hundreds more. A firestorm soon erupted across social media: a mass crowdsourcing attempt in which thousands of people gathered to analyze footage surrounding the bombing in an effort to identify the culprits. This was perhaps most prominent on Reddit, where users created an entire subreddit, /r/findbostonbombers, dedicated to locating the bombers.
Certainly, many of these users had the best intentions in mind. I’m sure many genuinely wanted to contribute whatever they could to bringing the culprits to justice. And the subreddit did have rules against leaking personal information and making decisions based on personal bias. It couldn’t account for unconscious bias, however–and that’s where everything fell apart.
It began with the "Bag Men" controversy, in which Reddit identified two young men, Salaheddin Barhoum and Yassine Zaimi, as "potential culprits" based on a supposedly suspicious shape in Barhoum's backpack. The media was quick to run with this: the New York Post soon ran a front page entitled "Bag Men", claiming that the authorities were searching for the two. This was, in fact, not the case, and it only served to subject both men to undue emotional distress, for which they eventually sued the Post. (They were, fortunately, cleared of any wrongdoing.)
But things didn't stop there. /r/findbostonbombers soon ran rampant with the "identification" of another potential suspect: 22-year-old Brown University student Sunil Tripathi, who had gone missing almost a month before the bombings. This speculation was based entirely on his supposed resemblance to pictures of the suspected culprits that the FBI had released. Sunil's family, who had set up a Facebook page dedicated to helping find him, was suddenly beset by a torrent of messages asking whether their son was a terrorist. In fact, Sunil had absolutely nothing to do with the bombings; his body was sadly found just a few days later, and he was determined to have died by suicide. The messages stopped after the FBI released the identities of the true culprits, the Tsarnaev brothers, but his already grieving family now had to live with the fact that his memory would forever be connected to the bombings.
There's so much that went wrong here that I'd need at least an hour to discuss it. The crux of the issue lies, in my opinion, in both Reddit's structure and context collapse. Reddit is built upon an upvote/downvote system, similar to a like/dislike system. The more upvoted a comment is, the higher on the page it appears, making it more visible. And it's all too easy to assume that upvotes equal accuracy. This meant that information was being circulated based on popularity rather than truth. Couple that with context collapse: thousands of people, from wildly different backgrounds and with different motives, were all working in this one subreddit to identify the culprit(s). While many did have the best intentions in mind, can we safely assume that their judgments were entirely unbiased? I mean, let's be real: would any of these men have been falsely accused had they not been young and brown?
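To make that structural point concrete, here's a minimal sketch (in Python, with invented example comments; Reddit's actual ranking algorithm is far more sophisticated than this) of why pure score-based sorting surfaces whatever is popular, with no regard for whether it's accurate:

```python
# A toy model of score-based comment ranking. Each comment carries only
# vote counts -- nothing about the ranking measures accuracy.
comments = [
    {"text": "Unverified speculation", "upvotes": 950, "downvotes": 40},
    {"text": "Careful correction",     "upvotes": 120, "downvotes": 5},
    {"text": "Official statement",     "upvotes": 60,  "downvotes": 2},
]

def rank_by_score(comments):
    """Sort comments by net votes, highest first -- popularity is the
    only signal, so the most-upvoted claim gets the most visibility."""
    return sorted(comments,
                  key=lambda c: c["upvotes"] - c["downvotes"],
                  reverse=True)

for c in rank_by_score(comments):
    print(c["upvotes"] - c["downvotes"], c["text"])
```

Run this and the speculation sits at the top of the page simply because it drew the most votes; the correction and the official statement sink below it.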
This is only my take on it, of course. It does go to show, however, how fragile the notion of privacy on social media is. Two men’s lives were nearly destroyed simply because they had the misfortune of being caught on surveillance footage; a family’s grief was intruded upon in the worst way possible simply due to digital rumors. It’s difficult to establish a code of ethics for a site frequented by thousands, though the subreddit’s moderators, to their credit, did at least try. In any case, a situation like this highlights just how imperative it is to pay mind to the ways in which our information can–wittingly or unwittingly–be thrust out into the open on sites that often prioritize clicks over conscience.
Thank you so much for this post! It's absolutely horrifying that these men's lives were upended and they were targeted because of their race and because of misinformation. I couldn't agree more that the "upvote/downvote" structure of Reddit can be dangerous. Many understand that the system operates on popularity and is the equivalent of a like, but I agree that some people might equate it with accuracy and think a top post = fact. It's a tricky balance when social media is used for serious issues, and if people take justice into their own hands and jump on a bandwagon, it can all turn into a witch hunt. People could have the best intentions, but their interference can cause more harm than good. Your point about unconscious bias rings true. I remember in one of my other classes we talked about how even AI is full of bias, so it's hard for not only humans but even computers/technology to truly be neutral and impartial.
I agree on the danger of "vigilante" justice, especially when conducted in an online capacity! It's all too easy for any random comment to be picked up by the algorithm and wind up gaining far more traction than it deserves, regardless of accuracy. You also raise a good point about AI being full of bias. It's trained by humans, after all! That's why it's hard to argue that AI could have provided a more "neutral" perspective in a situation like this. It can certainly be a useful tool for sifting through massive amounts of information, but we have to be aware that it, too, reflects our own unconscious bias. One potential solution (which I in fact discussed in my project) is tighter cooperation between law enforcement and online communities in crowdsourcing efforts like this. Faulty and delayed communication between the two parties is part of what let things spiral out of control the way they did!