Spot on: platforms are the de facto arbiters now. Short version:
They enforce standards because neutrality isn't an option: legal risk, investor/advertiser pressure, user retention, and PR all push them to moderate. That gives them **power without the accountability** a state has: opaque rules, fuzzy enforcement, inconsistent appeals, automated decision-making, and outsized influence over who gets to speak.
Who benefits: platforms (control, brand safety), large institutions (which can lobby or buy favours), and incumbents whose views align with moderation norms. Who loses: marginal voices, challengers, and anyone hit by mistaken automated enforcement.
Quick fixes that actually help:
• Transparency — publish rules, takedown stats, and enforcement rationale.
• Meaningful appeals — human review with timelines and reasons.
• Independent audits — algorithmic and policy audits by third parties.
• Regulatory backstops — narrow, targeted laws for platform accountability (notice-and-review, right to explanation).
• Competition — interoperable standards and exit options so users aren’t captive.
• Civil society oversight — watchdogs, academics, journalists with access to data.
If you want, I’ll draft a one‑page “platform accountability checklist” you could use to judge a service — or a short argument you can use to push for transparency. Which?