Bluesky Tightens Moderation With Smarter Reporting System
Bluesky, the decentralized social platform, is rolling out new moderation features designed to improve transparency, accountability, and safety as it scales. The changes, introduced in version 1.110 of its app, include more detailed reporting options, a revamped strike system, and clearer feedback for users who break the rules.
One of the biggest updates is the expansion of in-app reporting categories: where there were previously six broad options, users now have nine more granular choices, including youth harassment or bullying, eating disorders, election misinformation, and human trafficking. These additions reflect both community demand and regulatory requirements, such as obligations under the UK’s Online Safety Act.
Behind the scenes, Bluesky has rebuilt its moderation infrastructure to consolidate all violations in a single, unified system. Rather than tracking infractions individually, the platform now logs them consistently, making enforcement more reliable.
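To make the idea of a unified violation log concrete, here is a minimal TypeScript sketch. The type names and fields (ViolationRecord, ViolationLog, severity, and so on) are illustrative assumptions, not Bluesky's actual schema, which the company has not published.

```typescript
// Illustrative sketch only: these types are assumptions, not Bluesky's schema.

type Severity = "low" | "moderate" | "high" | "critical";

interface ViolationRecord {
  userDid: string;   // the account's decentralized identifier (hypothetical field)
  guideline: string; // which Community Guideline was violated
  severity: Severity; // assessed risk level of the content
  createdAt: Date;   // when the violation was logged
}

// A single consolidated log means one query answers "how many times has this
// user broken the rules?" instead of stitching together per-policy trackers.
class ViolationLog {
  private records: ViolationRecord[] = [];

  add(record: ViolationRecord): void {
    this.records.push(record);
  }

  countFor(userDid: string): number {
    return this.records.filter((r) => r.userDid === userDid).length;
  }
}
```

The benefit of this shape is exactly the consistency the article describes: every infraction, whatever the policy, lands in one place with the same fields, so enforcement decisions can be made from a single, reliable history.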
When a piece of content is judged to be in violation, it’s assigned a “severity rating” ranging from low risk to critical risk. Content at the top of that scale, such as posts that may incite real-world harm, can lead to a permanent ban. Users who are subject to enforcement will get clearer notifications: which Community Guideline was violated, the severity level, how many times they’ve broken the rules, and how close they are to further action. Suspensions will include expected end dates, and appeals will be possible.
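An escalation policy like the one described, where severity and repeat offenses together determine the penalty, might look something like the sketch below. The thresholds and penalty names are assumptions chosen for illustration; Bluesky has not published its exact rules.

```typescript
// Hypothetical escalation rules; the thresholds here are illustrative guesses,
// not Bluesky's actual enforcement logic.

type Severity = "low" | "moderate" | "high" | "critical";
type Penalty = "warning" | "temporary_suspension" | "permanent_ban";

function decidePenalty(severity: Severity, priorViolations: number): Penalty {
  // Critical-risk content (e.g. posts that may incite real-world harm) can
  // mean an immediate permanent ban, regardless of history.
  if (severity === "critical") return "permanent_ban";

  // High-severity content and repeat offenders escalate toward suspension,
  // and eventually a ban.
  if (severity === "high" || priorViolations >= 3) {
    return priorViolations >= 5 ? "permanent_ban" : "temporary_suspension";
  }

  // Lower-risk, first-time violations draw a warning.
  return "warning";
}
```

A function of this shape also makes the promised transparency straightforward: because the penalty is derived from the severity level and the violation count, those same inputs can be surfaced to the user in the notification, including how close they are to the next threshold.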
These moderation updates build on Bluesky’s earlier changes to its Community Guidelines, which were reworked around four principles (Safety First, Respect Others, Be Authentic, and Follow the Rules) to provide stronger protections and more clarity. The company says the new tools aim to balance accountability with fairness: lower-risk violations may receive warnings, while repeat or severe offenses face escalating penalties.
Bluesky has attributed this push to its rapid growth. According to its own moderation report, the volume of user reports increased 17× during 2024, prompting the need for updated policies and more robust enforcement.

