Bluesky addresses trust and safety concerns around abuse, spam, and more
Social networking startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), gave an update on Wednesday about how it approaches various trust and safety concerns on its platform. The company is in various stages of developing and testing a range of programs focused on dealing with bad actors, abuse, spam, fake accounts, video safety, and more.
To deal with malicious users or those who harass others, Bluesky says it is developing new tooling that can detect when multiple new accounts are created and managed by the same person. This can help cut down on harassment, where a bad actor creates several different personas to target their victims.
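As a rough illustration of how that kind of clustering might work (the signal name and threshold below are hypothetical stand-ins, not Bluesky's actual detection logic), newly created accounts could be grouped by a shared signup signal and flagged for review once a cluster grows too large:

```typescript
// Illustrative sketch only: group new accounts by a shared signup signal
// (e.g. a device or network fingerprint) and flag large clusters for review.
// "signupFingerprint" and the threshold are hypothetical, not Bluesky's.

interface NewAccount {
  did: string;               // account identifier
  signupFingerprint: string; // hypothetical shared-origin signal
}

function findSuspiciousClusters(accounts: NewAccount[], threshold = 3): string[][] {
  const byFingerprint = new Map<string, string[]>();
  for (const a of accounts) {
    const group = byFingerprint.get(a.signupFingerprint) ?? [];
    group.push(a.did);
    byFingerprint.set(a.signupFingerprint, group);
  }
  // Only clusters with several accounts from one origin are surfaced for review.
  return Array.from(byFingerprint.values()).filter((g) => g.length >= threshold);
}
```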
Another new experiment will help detect “rude” replies and surface them to server moderators. Similar to Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect to Bluesky’s server and others on the network. This federation capability is still in its early stages, but further down the road, server moderators will be able to decide how they want to take action against those who post rude replies. Bluesky, meanwhile, will eventually reduce the visibility of these replies in its app. Repeated rude labels on content will also lead to account-level labels and suspensions, it said.
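To sketch that escalation path in code (the label name and threshold are assumptions made for illustration, not Bluesky's moderation rules), content-level labels could be tallied per account, with repeat offenders surfaced for account-level review:

```typescript
// Illustrative sketch: count content-level labels applied to an account's
// replies and escalate once a hypothetical threshold is crossed.
// The "rude-reply" label value and the limit are assumptions.

type Label = { subjectDid: string; value: string };

function accountsToEscalate(labels: Label[], limit = 5): string[] {
  const rudeCounts = new Map<string, number>();
  for (const l of labels) {
    if (l.value === "rude-reply") {
      rudeCounts.set(l.subjectDid, (rudeCounts.get(l.subjectDid) ?? 0) + 1);
    }
  }
  // Accounts with repeated content-level labels become candidates for
  // an account-level label or suspension review.
  return Array.from(rudeCounts.entries())
    .filter(([, count]) => count >= limit)
    .map(([did]) => did);
}
```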
To limit the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator. Similar functionality was also recently introduced for Starter Packs, a type of shareable list that can help new users find people to follow on the platform (check out the TechCrunch Starter Pack).
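The list rule itself is simple to illustrate. A minimal sketch, using hypothetical data shapes rather than the actual AT Protocol schema, might look like this:

```typescript
// Illustrative sketch of the rule described above: when a user blocks a
// list's creator, that user is removed from the creator's lists.
// Types here are hypothetical stand-ins, not the AT Protocol record format.

interface UserList {
  creatorDid: string;
  memberDids: Set<string>;
}

function onBlock(blockerDid: string, blockedDid: string, lists: UserList[]): void {
  for (const list of lists) {
    if (list.creatorDid === blockedDid) {
      // Blocking the creator removes the blocker from that creator's lists.
      list.memberDids.delete(blockerDid);
    }
  }
}
```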
Bluesky will also scan for lists with abusive names or descriptions, to cut down on people harassing others by adding them to a public list with a toxic or offensive name or description. Lists that violate Bluesky’s Community Guidelines will be hidden in the app until the list owner makes changes that comply with Bluesky’s rules. Users who continue to create abusive lists will also be subject to further action, though the company did not provide details, adding that lists remain an area of active discussion and development.
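A highly simplified sketch of that kind of screening follows; the term list, field names, and “hidden” flag are placeholders, and a production system would rely on richer classification plus human review rather than simple string matching:

```typescript
// Illustrative sketch: screen a list's name and description against a small
// term set and hide the list until its owner edits it. Placeholder data only.

interface ListMetadata {
  name: string;
  description: string;
  hidden: boolean;
}

const flaggedTerms = ["slur-example"]; // placeholder terms only

function reviewListMetadata(list: ListMetadata): ListMetadata {
  const text = `${list.name} ${list.description}`.toLowerCase();
  const violates = flaggedTerms.some((term) => text.includes(term));
  // A violating list stays hidden in the app until its owner edits it.
  return { ...list, hidden: violates };
}
```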
In the coming months, Bluesky will also shift to handling moderation reports through its app via notifications, instead of relying on email.
To combat spam and other fake accounts, Bluesky is introducing a pilot that will try to automatically detect when an account is fake, fraudulent, or spam. Coupled with moderation, the goal is to be able to take action on accounts within “seconds of receiving a report,” the company said.
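One way to picture a report-triggered pipeline like this (the scoring function and threshold below are hypothetical, not Bluesky's detection system) is an automated check that actions high-confidence cases immediately and routes everything else to human moderators:

```typescript
// Illustrative sketch of a report-triggered pipeline: an incoming report runs
// an automated check; confident results are actioned right away, while
// uncertain cases are queued for human review. Scoring here is a placeholder.

interface Report {
  subjectDid: string;
  reason: string;
}

function spamScore(did: string): number {
  // Placeholder: a real system would combine many behavioral signals.
  return did.endsWith("spam") ? 0.99 : 0.1;
}

function handleReport(report: Report): "actioned" | "queued-for-review" {
  const score = spamScore(report.subjectDid);
  // High-confidence automated results can be actioned within seconds;
  // everything else goes to a human moderator.
  return score > 0.95 ? "actioned" : "queued-for-review";
}
```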
One of the more interesting developments involves how Bluesky will comply with local laws while still allowing free speech. It will use geography-specific labels that allow it to hide a piece of content from users in a particular area in order to comply with local law.
“This allows Bluesky’s moderation service to maintain flexibility in creating a space for free speech, while also ensuring compliance with the law so that Bluesky can continue to operate as a service in those areas,” the company shared in a blog post. “This feature will be rolled out on a country-by-country basis, and we will aim to inform users about the source of legal requests whenever legally possible.”
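Conceptually, a geography-scoped label pairs a piece of content with the regions where it must be hidden, and the app checks the viewer’s location before rendering it. The field names in this sketch are hypothetical, not the AT Protocol label schema:

```typescript
// Illustrative sketch of geography-scoped labeling: a label lists the regions
// where legal compliance requires the content to be hidden, and the app
// checks the viewer's region before showing it. Field names are hypothetical.

interface GeoLabel {
  subjectUri: string;        // the labeled post
  hiddenInRegions: string[]; // e.g. ISO country codes
}

function isVisible(postUri: string, viewerRegion: string, labels: GeoLabel[]): boolean {
  return !labels.some(
    (label) => label.subjectUri === postUri && label.hiddenInRegions.includes(viewerRegion)
  );
}
```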
To address potential trust and safety issues with video, which was recently added, the team is adding features such as the ability to turn off video autoplay, making sure videos are labeled, and ensuring that videos can be reported. The team is still evaluating what else might need to be added, which will be prioritized based on user feedback.
When it comes to abuse, the company says its overall framework asks “how often something happens vs. how harmful it is.” The company focuses on addressing high-harm and high-frequency issues while also “tracking edge cases that could result in serious harm to a few users.” The latter, while only affecting a small number of people, causes enough “continual harm” that Bluesky will take action to prevent the abuse, it said.
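As a toy illustration of that frequency-versus-harm triage (the numeric scales and cutoffs are invented purely for illustration), issues could be scored on both axes, with rare but severe cases kept on a separate tracking list:

```typescript
// Illustrative sketch of frequency-versus-harm triage. The 0-1 scales and
// cutoffs are invented; they are not Bluesky's internal metrics.

interface Issue {
  name: string;
  frequency: number; // 0-1, how often it occurs
  harm: number;      // 0-1, how harmful it is
}

function triage(issues: Issue[]) {
  // High-harm, high-frequency issues are prioritized for immediate work.
  const prioritized = issues
    .filter((i) => i.frequency > 0.5 && i.harm > 0.5)
    .sort((a, b) => b.harm * b.frequency - a.harm * a.frequency);
  // Rare but severe edge cases are tracked separately.
  const tracked = issues.filter((i) => i.frequency <= 0.5 && i.harm > 0.8);
  return { prioritized, tracked };
}
```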
Users can raise concerns through reports, emails, and mentions of the @safety.bsky.app account.