When the world essentially shut down in 2020, many of us instinctively flocked to social media, places like Facebook and YouTube, looking for answers, connection, and even a little distraction. Unfortunately, the same apps we used for comfort quickly turned into perfect breeding grounds for misinformation. A review by Ferreira Caceres et al. (2022) found that COVID-19 misinformation directly contributed to vaccine avoidance, mask refusal, and the use of unproven treatments, ultimately increasing illness and death worldwide. The researchers described this as an “infodemic,” or an overabundance of false or misleading information that complicated public health efforts and deepened mistrust in science and government. When millions of people rely on social media as their primary source of information, this kind of distortion becomes not just a communications issue, but a public health crisis.
While my personal feed didn't showcase the worst of it, I definitely remember seeing countless comments under news articles pushing dangerous ideas: that the vaccine was a threat, that you could "detox" from the vaccine if you were required to get it, that home remedies could cure COVID, and so on. False content like that spreads like wildfire, often moving much faster than public health agencies can respond.
A large-scale Twitter study by Saha et al. (2023) found that people who shared COVID-19 misinformation experienced roughly twice the increase in anxiety as those who didn’t. The researchers describe a cycle where fear fuels the need for information—any information—and that urgency can make people more likely to believe or share false content. In moments of crisis, uncertainty feels unbearable, and misinformation can temporarily fill that void with the illusion of clarity. Much of this misinformation didn't come from bad actors; it came from ordinary people simply sharing what they believed would help. In that sense, the problem was as much social as it was technological. People often share falsehoods not because they want to deceive, but because they are trying to make sense of the world around them.
That raises an important question: what responsibility do platforms like Facebook and YouTube actually have for moderating dangerous misinformation? On one hand, these companies aren’t public institutions; they’re private businesses built on engagement, which means the more people react, comment, and share, the more profitable they become. Unfortunately, outrage and fear drive clicks far better than calm, factual updates. But when false information about vaccines, treatments, or public health spreads unchecked, it can cause real harm. At that point, it’s not just a matter of free speech—it’s a matter of public safety. I think these platforms do have a moral responsibility, if not a legal one, to step in when content has the potential to put lives at risk. Removing or labeling false claims isn’t censorship; it’s protecting people from misinformation that can literally be deadly.
For librarians and information professionals, this period was a real turning point. Libraries became vital hubs for reliable health information, fighting the "infodemic" with curated online resources, virtual programs, and media-literacy workshops. The biggest lesson here isn't just about fact-checking; it's about empathy. Working at Sayville Library, I witnessed firsthand just how much our patrons relied on us for clarity and reassurance. When we first reopened in July 2020, the world was a changed place from the one we had left in March 2020, and there was a lot of uncertainty and anxiety in the air. Honestly, it felt like coming back to a post-apocalyptic world. The safety guidelines were constantly shifting, and our policies for how long to quarantine returned items kept changing as new information came in. What I am trying to say is that COVID made it clear that access to information isn't enough; people also need guidance and compassion in navigating it.
Your point about how COVID misinformation spread reminded me of our readings this week, specifically the idea that the social media context itself, rather than a user's own leanings (political or otherwise), affects how misinformation spreads. Context is a huge factor here: regardless of your personal opinions, it becomes tempting to share information that later proves faulty if everyone on your feed is doing it. A situation in which people are physically isolated, such as COVID, only inflames this; your social networks shift primarily to online sources, and echoing their sentiments becomes a way of seeking comfort and belonging in uncertain times.
The need for comfort is understandable, of course. I think you hit a great point about how libraries can offer both comfort and reliable information. As information professionals, it's our duty to ensure that what we're telling people is, to the best of our knowledge, accurate. Digital literacy workshops hosted at the library can also equip individuals with the tools to dissect inaccurate or inflammatory news headlines and parse out potentially incorrect information.
Hi Rachel, thanks for this informative post! My next post takes a look at YouTube's misinformation policy and how it functioned (and malfunctioned) during COVID times. The term infodemic really fits; I can see how it all snowballed so fast and the misinformation piled up. I think people who were on the Internet and social media more were more anxious than those who stayed off them: even when the information was correct, there was such an abundance of it that it became overwhelming and hard to tell what was fact and what was fiction. I also think Kristen's point about there being comfort in an echo chamber is true, and it can be very dangerous to rely on other people instead of coming to conclusions yourself.
Hello Rachel,
The COVID-19 misinformation was rampant – I remember that very clearly. Remember when Trump touted bleach as a cure for COVID? People were so desperate that they believed it to be a valid treatment. What struck me from your blog is the study showing that people who shared COVID misinformation experienced more anxiety than those who didn't. I can relate to that: when I start doomscrolling information about how the government shutdown is affecting so many people, I just get more and more upset and anxious.
I absolutely agree that social media platforms have a responsibility to do some fact-checking. But I often find that it falls upon the user to report spam or scams. I would need six sets of hands to count the number of times I have reported someone on Facebook, only for my complaint to be rejected. At the library where I work, we hold digital literacy classes to help patrons identify misinformation, disinformation, and malinformation. I hope these classes will help raise awareness of all the ridiculous stuff out there!