Misinformation and disinformation may be just as old as storytelling itself. In recent years, government officials, private and non-profit entities, and other institutions have grown increasingly concerned and have attempted to hold accountable the individuals and organizations that perpetuate misinformation and disinformation.

Unfortunately, this is easier said than done. With institutions largely distributed and no unifying voice, value system, or policies in place to govern our digital spaces, we are at a loss.

We've become increasingly reliant on these online networks and systems and have started to understand their impact, but we have failed to prioritize solutions.

Our failure to prioritize solutions, I hypothesize, is a symptom of a few known conditions:

  1. The web's distributed nature: There is no singular owner of the internet. Instead, many large stakeholders, from private companies to internet providers, drive research and larger interest groups online.
  2. Competing interests of stakeholders: Among those who have significant stakes in our online activities, there are often competing interests at play. From government regulation (or the lack thereof) to capitalist-driven companies fueling false narratives, these stakeholders aren't only competing with one another but also with the average internet user.
  3. It's human: Throughout history, we have continued to perpetuate misinformation and disinformation, sometimes unknowingly. Half-truths, fake news, exaggeration, and satire have existed as long as storytelling has. To police some of these is to police human behavior itself, and one has to wonder whether that's even possible.

As we sit on a precipice of change, with the recent news of the Twitter acquisition and repeated calls to revise Section 230, I'm anxiously awaiting what comes next in terms of policy, protections, and platform decisions to ensure the safety not just of individuals but also of our democratic process.