I was thinking about moderation in PieFed after reading @rimu@piefed.social mention he doesn’t want NSFW content because it creates more work to moderate. But if done right, moderation shouldn’t fall heavily on admins at all.
One of the biggest flaws of Reddit is the imbalance between users and moderators: it leads to endless reliance on automods, AI filters, and the usual complaints about power-mods. Most federated platforms just copy that model instead of adopting proven alternatives like Discourse's trust-level system.
On Discourse, moderation power gets distributed across active, trusted users. You don’t see the same tension between “users vs. mods,” and it scales much better without requiring admins to constantly police content. That sort of system feels like a much healthier direction for PieFed.
Implementing this system could involve establishing trust levels based on user engagement within each community, earned mostly by spending time reading discussions. Trust could be community-specific, so users build it separately in each community, or instance-wide, recognising overall activity across the instance. If not executed carefully, though, this could lead to over-moderation in the style of Stack Overflow, where genuine contributions get stifled, or it might encourage Reddit-style karma farming, where users game the system with bots that repost popular content.
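For a rough idea of what that could look like, here is a minimal sketch in Python. The engagement fields and the thresholds (loosely modelled on Discourse's TL0–TL3 defaults) are assumptions for illustration, not anything PieFed tracks today:

```python
from dataclasses import dataclass

# Hypothetical engagement counters; the field names are assumptions,
# not existing PieFed data.
@dataclass
class Engagement:
    days_visited: int
    posts_read: int
    topics_entered: int

# Thresholds loosely modelled on Discourse's default trust levels (TL0-TL3).
THRESHOLDS = [
    (0, Engagement(0, 0, 0)),        # new user
    (1, Engagement(1, 30, 5)),       # basic
    (2, Engagement(15, 100, 20)),    # member
    (3, Engagement(50, 500, 100)),   # regular
]

def trust_level(e: Engagement) -> int:
    """Return the highest trust level whose thresholds are all met."""
    level = 0
    for lvl, t in THRESHOLDS:
        if (e.days_visited >= t.days_visited
                and e.posts_read >= t.posts_read
                and e.topics_entered >= t.topics_entered):
            level = lvl
    return level

# Community-specific vs. instance-wide trust is then just a question of
# which counters you feed in: per-community counters, or sums across
# the whole instance.
```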
Worth checking out this related discussion:
Rethinking Moderation: A Call for Trust Level Systems in the Fediverse.
I haven't used Discourse, but what you describe sounds like the way Slashdot has been doing moderation since the late '90s: randomly selecting users with positive karma and granting them a limited number of moderation actions, plus meta-moderation, where users rate other users' moderation decisions.
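Something like Slashdot's mod-point allocation could be sketched as below; the field names (karma, mod_points) and the numbers are made up for illustration, not how Slashdot actually implements it:

```python
import random

def grant_mod_points(users, n_moderators=10, points_each=5, rng=random):
    """Pick n random users with positive karma and give each a small
    budget of moderation actions (spent one per action, then expiring)."""
    eligible = [u for u in users if u["karma"] > 0 and not u.get("is_new")]
    chosen = rng.sample(eligible, k=min(n_moderators, len(eligible)))
    for u in chosen:
        u["mod_points"] = points_each
    return chosen

# Example: 50 accounts with assorted karma values.
users = [{"name": f"user{i}", "karma": i % 7 - 1} for i in range(50)]
for u in grant_mod_points(users):
    print(u["name"], "received mod points")
```

Meta-moderation would then be a second pass where other randomly selected users rate how those points were spent, feeding back into karma.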
I always thought this was the ideal way to do moderation and avoid the power-mod problem that Reddit and Lemmy have. I take the point in the other comments here that random sampling of the userbase can neglect minorities, but that likely happens with self-selected moderation teams too.
Within minority communities, though, a plurality of members will belong to that minority, so moderating their own community should produce fair selections. Another way to mitigate the exclusion of minorities might be a weighted sortition process: users declare their minority statuses, and the selection method weights the draw to boost representation of minority users.
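A weighted sortition could be as simple as the following sketch; the declared_minority flag and the boost factor are hypothetical, just to show the selection mechanics:

```python
import random

def weighted_sortition(users, k, boost=2.0, rng=random):
    """Sample k distinct moderators, giving users who declared an
    under-represented status a higher selection weight."""
    pool = [(u, boost if u.get("declared_minority") else 1.0) for u in users]
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(w for _, w in pool)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, (u, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(u)
                del pool[i]   # sample without replacement
                break
    return chosen
```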
A larger problem is that people wanting strong influence over community moderation could create sock-puppet accounts to increase their chance of selection. This no doubt already happens with up/downvotes, but for moderation the incentive to cheat this way is perhaps even higher.
I think a successful system based on this idea needs strong backend support for detecting sock-puppetry, and that is a constant cat-and-mouse game. Doing it well requires intrusive fingerprinting of the user's browser and behaviour, and that kind of tracking probably isn't welcome in the fediverse, which limits the tools available for tracking bad actors. It's also difficult for an open-source project to keep these systems secret enough that bad actors can't work around them.