
Bluesky's Safety Boost: New Tools and User Reporting

  • Bluesky introduces advanced automated tools for content moderation and plans mislabeled content reporting.
  • Users seek private accounts, follower removal, and stricter Community Guidelines enforcement.
  • Bluesky, still in private beta, faces criticism for moderation lapses, including failing to ban threatening accounts.

Bluesky, the startup building a decentralized social network to compete with Twitter/X, has announced that it is rolling out new safety measures to help moderate content on the platform through automation.

Although still in private beta, the company has faced criticism for its content moderation in recent months, including failing to ban a member who made death threats and not detecting accounts with racial slurs in their usernames.

Bluesky has now revealed that it is launching more advanced automated tools to flag content that violates its Community Guidelines; flagged content will then go to its moderation team for a final decision.

The company also announced that it will reintroduce a feature that lets users report mislabeled content on their own posts, helping the moderation team correct any inaccurate labels.

In addition to moderation, Bluesky is working on a new feature that X already has: the option to manage who can reply to your posts.

Despite the changes, some Bluesky users still want the option to make their accounts private. The demand grew after Bluesky announced a public web interface that lets anyone browse posts without an invitation.

As a result, users are asking for a private account type, similar to X's, where only approved followers can see posts. Some also want the ability to remove followers and are urging Bluesky to ban accounts that violate the company's guidelines.


Edited by Shruti Thapa
