Private online platforms (X, Meta, TikTok) moderate billions of content items daily. Their terms of service often include clauses allowing suspension or removal "at our sole discretion." In practice, automated systems flag content based on statistical risk scores. A user is not presumed innocent; rather, a post is presumed violative if it matches a pattern (e.g., certain keywords, account age, report frequency).
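The mechanics of this presumption can be made concrete with a toy sketch. The code below is a hypothetical illustration only: the signal names, weights, and threshold are invented, and real platform systems are far more complex and proprietary. What it shows is the structural point — a post crossing a statistical threshold is treated as violative before any human review.

```python
# Hypothetical sketch of pattern-based content flagging. All keywords,
# weights, and thresholds here are invented for illustration; they do not
# describe any real platform's system.

FLAGGED_KEYWORDS = {"scam", "giveaway", "crypto"}

def risk_score(text: str, account_age_days: int, report_count: int) -> float:
    """Combine simple signals into a single risk score in [0, 1]."""
    keyword_hits = sum(1 for w in text.lower().split() if w in FLAGGED_KEYWORDS)
    score = 0.0
    score += min(keyword_hits * 0.3, 0.6)           # keyword matches
    score += 0.2 if account_age_days < 30 else 0.0  # new accounts score higher
    score += min(report_count * 0.1, 0.2)           # user reports
    return min(score, 1.0)

def is_presumed_violative(text: str, account_age_days: int,
                          report_count: int, threshold: float = 0.5) -> bool:
    # The post is treated as violative the moment the score crosses the
    # threshold; the burden then falls on the user to appeal.
    return risk_score(text, account_age_days, report_count) >= threshold
```

Note that nothing in this logic asks whether the post actually violates a rule: the decision rests entirely on correlates of violation (keywords, account age, reports), which is precisely the inversion the section describes.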
This is the purest inversion of the presumption: the burden shifts to the accused to prove their innocence in real-time, before an unbounded audience, with no rules of evidence, no right to silence, and no neutral arbiter. As noted by Citron (2014), "digital vigilantism operationalizes guilt until proven innocent." The speed and scale of social networks mean that even a later exoneration rarely restores the prior status quo.
Even within state-led criminal justice, the presumption erodes online. Consider digital evidence: chat logs, location data, browsing history. Law enforcement increasingly obtains this data before arrest via third-party records (e.g., under the Stored Communications Act in the U.S.). By the time of trial, the accused faces a "digital shadow"—a reconstructed profile that may be incomplete or misleading.
Moreover, forensic tools (e.g., cell-site simulators, hacking warrants) operate opaquely. The presumption of innocence requires that the accused can challenge the integrity of evidence. But when the evidence is an algorithm’s output or a proprietary tool’s analysis, meaningful challenge is often impossible. This creates a de facto reversal: the accused must prove the technology erred, rather than the state proving its reliability.