Stack Exchange moderator strike
Yesterday, Stack Exchange Inc released a "network policy regarding AI Generated content". I encourage you to read it in full. A quick summary:
- According to internal analysis, moderators have misdiagnosed cases of users contributing AI-generated content.
- Internal evidence also leads the company to believe "there have been biases for or against residents of specific countries".
- Moderators have been given a strict standard (not spelled out publicly) for determining when a post has been "AI-authored".
- Most of the suspensions handed out by moderators would not have happened if this standard had already been in place.
- GPT detectors (algorithms designed to detect these posts) are not considered reliable because of their high false positive rate.
- Moderators may continue to act on users with a pattern of low-quality posts.
- This policy did not follow the process for policy changes laid out in the new moderator agreement.
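The point about false positive rates is worth pausing on, because the arithmetic is unforgiving at Stack Overflow's scale. Here's a rough back-of-the-envelope sketch; the numbers are invented for illustration and aren't taken from the policy or any real detector:

```python
# Back-of-the-envelope: even a "mostly accurate" AI detector produces a
# flood of false positives when the vast majority of posts are human-written.
# Every number here is hypothetical and chosen purely for illustration.

def flag_counts(total_posts, ai_share, true_positive_rate, false_positive_rate):
    """Return (correct_flags, wrong_flags) for a batch of posts."""
    ai_posts = total_posts * ai_share
    human_posts = total_posts - ai_posts
    correct_flags = ai_posts * true_positive_rate    # AI posts the detector catches
    wrong_flags = human_posts * false_positive_rate  # human posts wrongly flagged
    return correct_flags, wrong_flags

# Suppose 5% of new answers are AI-generated, the detector catches 90% of
# them, and it wrongly flags 10% of human-written answers.
correct, wrong = flag_counts(10_000, ai_share=0.05,
                             true_positive_rate=0.90,
                             false_positive_rate=0.10)
print(f"correct flags: {correct:.0f}")  # 450
print(f"wrong flags:   {wrong:.0f}")    # 950
print(f"share of flags pointing at innocent users: {wrong / (correct + wrong):.0%}")  # 68%
```

Under those assumptions, roughly two thirds of the flags land on people who did nothing wrong. That's presumably why the policy rules automated detectors out, and why so much weight falls on human judgment instead.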
It's looking like at least some of the moderators on the Stack Exchange network are discussing a possible strike. While it's been relatively quiet since a number of moderators resigned three years ago, the company has once again shown it does not trust its volunteer moderators. This time moderators don't intend to resign, but rather to stop engaging with the network.
It also looks as if the volunteer-operated spam-protection systems are going on strike. This would have a rather large and immediate impact on the network.
A key paragraph in the policy:
> We recently performed a set of analyses on the current approach to AI-generated content moderation. The conclusions of these analyses strongly indicate to us that AI-generated content is not being properly identified across the network, and that the potential for false-positives is very high.
Having consulted on an investigation of a user who (probably) used GPT to generate a good deal of content, not to mention many sock puppet investigations, I have a hard time imagining an analysis that would produce this level of confidence in that conclusion. I can easily imagine looking at an individual suspension and having low confidence that the moderator correctly diagnosed the situation.
We know that huge numbers of people are currently experimenting with ChatGPT and similar tools. It would be really surprising if there weren't a large number of people applying these tools to answer questions on Stack Overflow. After the CEO of the company pumped AI on the blog, well, I'd be shocked, shocked to find that ChatGPTing is going on in here.
But individual cases are... messy. Maybe that user who suddenly writes so well has discovered Grammarly. Perhaps the bizarre logic errors were just the result of answering too quickly. That made-up "fact"? Didn't they say they read it somewhere but couldn't recall the source? Even people who are using ChatGPT edit their answers to fix errors, and isn't that a sign they're using it as a tool to provide faster responses? Unless you are sitting right next to them as they type out the answers, it's really hard to be certain.
And this is where human (and dare I say expert) judgment comes into play. It's not just moderator judgment either. Regular users flag suspicious posts and there's a whole review system. In my experience, multiple moderators are involved before suspensions are handed out. It's not as simple as it looks on the surface, and it's troubling that upper leadership at Stack Overflow still fails to understand what they have.
AOL (of disc-in-the-mail fame) had a Community Leader program that was investigated by the Department of Labor. The company eventually settled a class-action lawsuit. The underlying legal question was never resolved, but there is a chance that companies that treat volunteer moderators as employees could be required to pay them. It's a concern I have with College Confidential moderators too. I've had moderators resign because they disagreed with top-down policies, but that's never my preference. Communities only have value when their members are given agency.
There are legitimate concerns about moderation surrounding ChatGPT and similar technologies. This particular policy seems heavy-handed and ill-conceived. It's always better to talk with moderators directly about these problems rather than impose a policy on short notice. There's perhaps time for the company to correct this, but not much time. Things are moving quickly and I somehow suspect the company won't be ready for what comes next.
If you want to talk with me about community management, schedule a meeting!