Abstract

I'm signing an open letter to Stack Overflow because the volunteers who contribute, curate and moderate online communities provide value whether or not it can be monetized. Stack Exchange Inc. has fallen short of its obligations to the communities it hosts. While I'm no longer active on the network, I support those who are going on strike.

History

If you want an extensive timeline, I highly recommend The Stack Exchange Timeline maintained by Samuel Liew. From my perspective, the key events are:

So there never was a moderator strike, and I give a lot of credit to Teresa Dietrich for that. By this point I'd left Stack Overflow. Over the next three years, it seems there was a slow but persistent return to a state of trust.

And then, somewhat abruptly, Teresa Dietrich left the company. That was April 21; on May 10, the company laid off 10% of its workforce. Only it wasn't 10% across the board: engineering alone lost 30% of its staff. The latest blog post from the CEO says:

Approximately 10% of our company is working on features and applications leveraging GenAI that have the potential to increase engagement within our public community and add value to customers of our SaaS product, Stack Overflow for Teams.

So the CEO has placed a sizable bet on AI. I think it's misguided and won't save the company. But that's not why I'm signing. That's just the backdrop.

My reasons for signing

One of the concerns moderators had three and a half years ago was that they were suddenly being required to enforce rules without being given an opportunity to provide feedback. As part of the process of regaining trust, the moderators and the company agreed to a series of policy changes designed to prevent mass resignations. The new policy regarding AI-generated content violated the spirit, if not the letter, of those agreements.

I gotta take a giant detour to compare two models of moderation. Most social media companies use paid moderators. Companies set rules about what's allowed and what isn't,¹ then hire moderators to enforce those standards. Some companies outsource those jobs to places like the Philippines and India, where English-speaking labor is inexpensive. The job is frequently horrifying.

Other companies (notably Reddit) rely on volunteer moderators. Invariably, these people begin as members of a narrow community (such as a subreddit) and moderate during time they already spend on the site. Companies do make rules they expect moderators to enforce, but moderators usually concur with them and may even create their own rules specific to their communities. These moderators aren't compensated, but they are free to leave at any time.

Stack Overflow and Stack Exchange moderators are volunteers. There is a paid Community Team, but they do very little moderation. Instead they support the volunteer moderators of the 180 communities on the network. Generally speaking, this is a win for the company (which doesn't have to pay moderators), the community (which gets high-quality moderation from people who understand it) and moderators themselves. The one catch is that the company must build trust with its volunteers.

So here's where things get weird: the official help center contains a page about the answer rate limit that mentions GPT:

This waiting period is currently set at 30 minutes because of the influx of GPT-generated answers that have caused information that is objectively wrong to make its way onto the site.

An article about deleted answers says:

Finally, answers may also be removed if they are copied in whole or in part from another unattributed source, including other answers on Stack Overflow. Answers copied from language learning models such as ChatGPT may also be removed even if they are attributed correctly.

Both link to a GPT policy which states:

Moderators are empowered (at their discretion) to issue immediate suspensions of up to 30 days to users who are copying and pasting GPT content onto the site, with or without prior notice or warning.

It's a remarkably well-crafted policy. Given that it's likely to change soon, I'm quoting it in full at the end of this post. Compare that to the newer policy:

In order to help mitigate the issue, we've asked moderators to apply a very strict standard of evidence to determining whether a post is AI-authored when deciding to suspend a user. This standard of evidence excludes the use of moderators' best guesses based on users' writing styles and behavioral indicators, because we could not validate that these indicators are actually successfully identifying AI-generated posts when they are written. This standard would exclude most suspensions issued to date.

The actual standard is not public, but according to moderators who have seen it, it makes taking action against AI-generated content nearly impossible. In addition, the policy publicly calls out moderators for bias:

Through no fault of moderators' own, we also suspect that there have been biases for or against residents of specific countries as a potential result of the heuristics being applied to these posts.

Maybe moderators are overreacting to the potential problem, and maybe they are biased. There's still no reason to take this public. It doesn't rise to the level of calling out an individual moderator to a reporter, but it's unnecessary all the same. If you have evidence that moderators are biased, it's best to resolve the problem in private rather than violate their trust.

My unchecked notifications

I'm not really active on Stack Exchange these days. Going on strike will mean exactly nothing for me. I don't imagine I have a lot of pull with management either. But I do want to see companies treat volunteer moderators with the respect they are due. More than that, I want to see volunteer moderators thrive so that online communities can thrive. Therefore, I'm adding my name to the letter.

A note to the Community Team

In the fall of 2019, I got a taste of what I expect you are going through now. We were stuck between our duties as employees and our duty to manage our communities with integrity. Being unable to navigate those waters led me to leave Stack Overflow. I had the luxury of a job offer, but it was still hard to leave the job I loved. You are in a tough spot, and that's underselling it.

One encouragement comes from Machavity, a Stack Overflow moderator, who wrote:

SE Staff has come a long way from the great Monica debacle of three years ago. In fact, they've been doing some very good things in this vein (I have repeatedly sung the praises of the Staging Ground project, which has had excellent community-to-staff engagement). It's greatly disappointing that we have, in the space of a week, regressed to a point where Staff is firmly at odds with the community that represents their product. None of the moderation team likes having to threaten a moderation strike to get answers, but here we are.

The current situation has all the hallmarks of an overconfident CEO pushing a policy without bothering to understand the downstream consequences. (Add in a dash of desperation from a rapidly declining business to make things especially spicy.) A huge part of community management consists in preparing the ground for change, so this level of urgency is counterproductive. It also leads to regrettable and entirely understandable mistakes. I know you all are doing your best with a bad lot.

Over and over again I see employees struggle when they lack executive sponsorship. If you aren't represented where decisions are made, you get stuck with bad decisions uninformed by reality. From what I can tell, Teresa Dietrich was that person, and it doesn't seem as if her role has been filled. I'm pulling for you all, but it looks like it's another hard road ahead. :-(


Stack Overflow's GPT Policy as of June 4, 2023:

Why posting GPT and ChatGPT generated answers is not currently acceptable

This Help Center article provides insight and rationale on our policy regarding the usage of GPT and ChatGPT on Stack Overflow. While this is the position of Stack Overflow staff, it’s meant to support the prior work done by moderators (namely, the temporary policy issued to ban contributions by ChatGPT).

Stack Overflow is a community built upon trust. The community trusts that users are submitting answers that reflect what they actually know to be accurate and that they and their peers have the knowledge and skill set to verify and validate those answers. The system relies on users to verify and validate contributions by other users with the tools we offer, including responsible use of upvotes and downvotes. Currently, contributions generated by GPT most often do not meet these standards and therefore are not contributing to a trustworthy environment. This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (a service GPT does not provide), and verifying that the answer provided by GPT clearly and concisely answers the question asked.

The objective nature of the content on Stack Overflow means that if any part of an answer is wrong, then the answer is objectively wrong. In order for Stack Overflow to maintain a strong standard as a reliable source for correct and verified information, such answers must be edited or replaced. However, because GPT is good enough to convince users of the site that the answer holds merit, signals the community typically use to determine the legitimacy of their peers’ contributions frequently fail to detect severe issues with GPT-generated answers. As a result, information that is objectively wrong makes its way onto the site. In its current state, GPT risks breaking readers’ trust that our site provides answers written by subject-matter experts.

Moderators are empowered (at their discretion) to issue immediate suspensions of up to 30 days to users who are copying and pasting GPT content onto the site, with or without prior notice or warning.

Note: According to moderators, the final paragraph was invalidated by the internal policy change on May 29, 2023. Suspending an account for 30 days without warning is outside the network norm and sounds very much like the product of not knowing how to handle a flood of ChatGPT content. I don't think most moderators are fighting for that provision, and it's not clear to me how many people were suspended under it.


  1. For a rather fascinating look into those rules, I recommend two Radiolab episodes: Post No Evil and Facebook's Supreme Court.