Misinformation and disinformation may be just as old as our modern ways of storytelling. In recent years, concern has grown among government officials, private and non-profit entities, and other institutions, which have attempted to hold the individuals and organizations that perpetuate misinformation and disinformation accountable.
Unfortunately, this is easier said than done. With institutions largely distributed and no unifying voice, value system, or policies in place to govern our digital spaces, we are at a loss.
We've become increasingly reliant on these online networks and systems and have started to understand their impact, but we have failed to prioritize solutions.
Our failure to prioritize solutions, I hypothesize, is a symptom of a few known conditions:
- The web's distributed nature: There is no singular owner of the internet. Instead, many large stakeholders, from private companies to internet providers, drive research and shape larger interest groups online.
- Competing interests of stakeholders: Among those with significant stakes in our online activities, competing interests are often at play. From government regulation (or the lack thereof) to profit-driven companies fueling false narratives, our stakeholders compete not only with one another but also with the average internet user.
- It's human: Throughout history, we have continued to perpetuate misinformation and disinformation, sometimes unknowingly. Half-truths, fake news, exaggeration, and satire have existed as long as storytelling has. To police some of these is to police human behavior itself, and one might wonder whether that is even possible.
As we stand on a precipice of change, with the recent news of the Twitter acquisition and repeated calls to revise Section 230, I'm anxiously awaiting what comes next in terms of policy, protections, and platform decisions to ensure the safety of not just individuals but also our democratic process.