This week, two open letters addressing Substack’s growing Nazi problem began circulating. I first saw Elle Griffin’s piece, “Substack shouldn’t decide what we read,” and signed on. Then I found the “Substackers Against Nazis” letter and signed that one too. The second now has over 200 signatories and counting, and has made the news in multiple outlets. The first has over 80 initial signatories and many more in Notes, and has been supported by Substack staff.
I’m interested in the contrast between the two letters, because I think in the space between them we find an area ripe for growth as we work to build new worlds in the shell of the old.
Preamble (tl;dr)
I’m guessing a lot of folks won’t read this entire essay, so I want to start by being clear about what I want to see happen.
Substack is currently making profit off of fascist rhetoric. This is unconscionable to me. My hope is that, at the very least, the largest Nazi publications are demonetized. This is not what either letter explicitly calls for, but I wonder what might be possible moving forward. I don’t expect Substack leadership to act responsibly on their own, but I do anticipate they might respond to precisely applied pressure from their writers.
In an ideal world, there would be no space for Nazis to organize. Not online, not in real life. Their political movement would be relegated to the history books, because we simply wouldn’t permit it. In the real world, Nazis are organizing on an increasingly large scale all over the United States and Europe. They are a current and growing threat, not a relic of the past. We need to respond creatively and collectively to the problems they present.
I value Substack as a platform and have no plans to leave. I think their design benefits from a lack of a centralized feed, and ample moderation tools for writers to curate their own experience. These beginnings are not enough to ensure a safe and healthy online environment, but they show a promising start. We should not overlook things that can help us imagine building the kind of world we want to live in.
Lastly, it did not feel good to see both these letters bandied around as tools to signal virtue. I don’t fault the authors — I think they’ve been soundly misrepresented by a surprising number of people. But the discourse that erupted over this discussion was the first time I felt like I was back on Twitter instead of on Substack.
I did not sign either letter to broadcast my position on Nazis or free speech. I don’t care whether people think I’m virtuous or not. I try not to let morality affect my thinking, or make my decisions based on principles.
I value freedom of speech because I know what it’s like not to have it. I oppose Nazis because they want me and the people I love dead.
That’s why I signed both letters: because I want to see what we can do together to make this platform safer and healthier for all of us.
Aren’t the two letters in contradiction? How can you sign both with integrity?
They aren’t in contradiction; they address entirely different things. There is space to hold multiple truths at once.
Elle Griffin’s essay is above all a reflection on the Substack platform as a piece of technology, and the kinds of social interactions it inherently encourages:
Let the writers and readers moderate, not the social media platforms. And don’t have one big town square we all have to be exposed to, have a bunch of smaller ones that we can choose to be part of.
Her focus is on what makes Substack different from other platforms, namely the absence of a centralized feed that every user is plugged into, and the moderation controls each of us have on our content. On Substack you read what you subscribe to, and on Notes you interact with those who orbit those same publications.
This is already much better than other major platforms, whose centralized feeds and engagement-driven algorithms quickly pull content down to the lowest common denominator (and make it difficult to curate your experience to the level Substack allows).
These elements are baby versions of how things work on the fediverse, a “federated universe” of independent platforms running on open source software (the most well-known example is probably Mastodon). Each server, or “instance,” is run separately and has its own terms of service, and users sign up for individual instances, not the platform as a whole.
By default every instance can interact with each other, but often instances will “de-federate,” or block wholesale, other instances they find offensive. There are Nazi instances on the fediverse, but every other instance is free to de-federate from them, thus automatically blocking all of their users and content.
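The de-federation mechanism described above can be sketched in a few lines of code. This is a minimal illustration of the idea only, not how any real fediverse software is implemented (Mastodon and its peers do this over the ActivityPub protocol with far more nuance), and the instance domains here are hypothetical:

```python
# A minimal sketch of fediverse-style defederation: each instance keeps its
# own blocklist, and content from blocked instances never reaches its users.
# Domain names are made up for illustration.

class Instance:
    def __init__(self, domain):
        self.domain = domain
        self.blocked = set()  # domains this instance has de-federated from

    def defederate(self, other_domain):
        """Block another instance wholesale: all of its users and content."""
        self.blocked.add(other_domain)

    def accept(self, post):
        """Show an incoming post only if its origin instance isn't blocked."""
        return post["origin"] not in self.blocked


home = Instance("example.social")
home.defederate("nazi.example")  # one decision blocks the whole instance

posts = [
    {"origin": "friendly.example", "text": "hello fediverse"},
    {"origin": "nazi.example", "text": "hate speech"},
]

# Only posts from non-blocked instances make it into the local feed.
visible = [p["text"] for p in posts if home.accept(p)]
print(visible)  # → ['hello fediverse']
```

The key design point is that the decision is local and wholesale: no central authority has to rule on each account, because each community cuts off an offending instance at the root.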
I think it’s valuable to expand on templates like these to establish a freer, healthier internet. Elle’s piece is about technology and social interaction, not a blind defense of freedom of speech regardless of consequence. She isn’t saying Nazis are welcome, she’s saying we don’t need the Substack team to get rid of them — we can keep them out of our publications, Notes, comments, and feeds ourselves.
That’s why, alongside the incredible writers who have signed this letter below, I am not advocating for a lack of moderation, I’m advocating for community moderation. I’m advocating for decentralized moderation.
Now, reasonable people can and do disagree on this point. The “Nazi bar” problem suggests that passive or piecemeal approaches are inadequate when faced with fascist creep. It’s also worth noting that several who signed Elle’s letter are not so much interested in free speech as they are in shielding their own bigotry from criticism.
But for those of us interested in building a world without masters of any kind, isn’t the prospect of doing things ourselves instead of appealing to authority precisely the kind of empowering change we want to be seeing in our digital landscape?
The question then becomes: can we really solve the Nazi problem on our own, or do we need some help from the staff behind the software?
To ban or not to ban — fascism is the exception
Most of my readers probably already understand this, but fascist rhetoric is not like other speech. Fascists don’t value freedom of speech. They don’t value freedom of any kind (even their own). They value power and authority, and they use deception and violence to get what they want.
They don’t engage in good faith, they don’t approach arguments with sincerity, they don’t think rationally or reasonably, they don’t care about civility or decency, they don’t care about changing hearts and minds, they’re not here to make the world a better place.
They will turn around and yank away our freedom of speech the moment they are able to.
This is why it’s not enough to simply curate our own individual spaces. Every platform operating on the same servers is vulnerable to infiltration and proliferation (which is why the fediverse model offers a way out, but Substack can’t exactly follow their lead without folding as a business).
So to me, it would make sense to ban Nazi accounts and say they violate the platform’s terms of service by incitement to violence, since that is what fascist rhetoric does. I’d probably do that if I were running the platform myself.
People would say I have a double standard — I’d say yes, that’s true. People would point out content moderation is expensive and unprofitable. They’d say it’s impossible to root out absolutely every far-right publication. I’d say that’s true, and we should try anyway. (I would not make a good business leader.)
I’m wary of opening that door, however, because I don’t trust Substack not to then start banning anarchist accounts like mine, also for incitement to violence (although they’d be right to do so for Nazis, and wrong to do so for me). I trust my own judgement, but I can’t expect other people to, and I certainly don’t trust the politics or principles of current Substack leadership.
It’s always anti-capitalists, queer folk, people of color, and anti-authoritarians who are the first to get censored. I’m in that group and I don’t want to face the same uphill battle against censorship that I do on all other platforms.
So is that what we want Substack to do?
As it turns out, not even the original writers of the “Substackers Against Nazis” letter are clear on that point.
They say “it is unfathomable that someone with a swastika avatar… could be given the tools to succeed on your platform” (which I completely agree with), not that it’s unfathomable that someone with a swastika could have an account at all.
On the other hand, the writer behind Platformer states in no uncertain terms: “The correct number of newsletters using Nazi symbols that you host and profit from on your platform is zero.”
He includes “host” here alongside “profit from,” suggesting that simply demonetizing these accounts would not be enough.
It’s very hard to reach literally zero fascists on any platform. In fact, there’s an excellent piece to that effect, using the “mouse-shit in cereal boxes” analogy to make the point that striving for a near-zero Nazi presence is good, but pushing for absolute zero is not worth the effort.

Furthermore, in Jonathan Katz’s original Atlantic article, he says simply banning accounts or removing posts runs the risk of feeding the narrative about liberal censorship of conservative voices:
Experts… caution that simply banning hate groups from a platform—even if sometimes necessary from a business standpoint—can end up redounding to the extremists’ benefit by making them seem like victims of an overweening censorship regime.
This may or may not matter to folks. It doesn’t really matter to me. Those pushing the narrative that the intolerant left is oppressing the poor right are being disingenuous — they’re full of hot air. The real censorship happening in the States right now is precisely the opposite: legislation against “critical race theory” and any and all LGBT topics, including book bans, is an attempt to rewrite history and erase us from public life. Doesn’t get more government censorship than that.
Again though, my hesitation in demanding Substack ban Nazi accounts is that I don’t trust them to stop there. If once they let up on their (currently contradictory) stance on content moderation, my publication might soon be on the chopping block.
Right now our letter is only asking for a conversation. Will Substack leadership respond to our concerns? Will they address this growing problem? If not, many who helped pen the missive are prepared to leave the platform. (For the record, I’m staying.)
There are no calls for sweeping censorship, or mass banning of accounts. There is enough wiggle room in the letter itself and the context around it for multiple interpretations of what our methods should be.
The promotion problem
The one thing everyone co-signing the “Substackers Against Nazis” letter is clear on is that Substack should not actively promote far-right publications.
Elle asks, about her subscribers who read only via email and not the app, “How exactly would they come across hateful content on Substack?” To which Jonathan Katz replied in Notes: “Because Substack corporate is promoting it!”
Which is true. Those of us who signed the “Against Nazis” letter emphasize that “there’s a difference between a hands-off approach and putting your thumb on the scale,” pointing to Substack’s alarming record of actively promoting far-right pundits on their official pages.
Substack leadership no longer deserves the benefit of the doubt: they are clearly comfortable promoting the worst kinds of speech. This is evident in The Atlantic article, but it was clear to some of us already, who noticed things like who was given a shoutout in the On Substack piece about “leaning into politics,” and the episode of The Active Voice that featured Richard Hanania, who used that opportunity to push even more extreme far-right commentators.
We ask why Substack chooses to “promote and allow the monetization of sites that traffic in white nationalism,” when at the same time they enforce their content moderation policy against “spam sites and newsletters written by sex workers.” Priorities could not be clearer.
Others have also shown how Substack leadership is broadly ignorant about the history, purpose, and effectiveness of content moderation (either that, or they’re being willfully obtuse). Spotty application of existing rules does not bode well: platforms like Facebook and Twitter got worse precisely when they let up on moderation practices.

Why this inconsistency? If you follow the money, as Jonathan Katz did in a follow-up essay to his Atlantic piece, the picture is pretty bleak. Substack is making at least thousands, if not tens of thousands, of dollars a year on explicitly Nazi content. They are also pushing racists and reactionaries who, while not card-carrying Nazis, help shift political discourse in Nazism’s favor, making Substack money while they’re at it.
This is completely unjustifiable. Staff should immediately demonetize known Nazi accounts, especially those carrying a “Bestseller” badge. They should not host far-right figures on The Active Voice, give them shoutouts on official pages, promote them on “Featured this week,” or display them on leaderboards or listings.
As Parker Molloy suggested, another way to counteract their apparent bias is by amplifying more progressive, liberal, and leftist voices on those same channels. To be clear, this does not mean “never promote conservatives and always promote progressives.” It means “since you’ve already promoted racists and bigots who support Nazis, it might help to also promote more liberals and leftists, so that people don’t think you support Nazis.”
All these are things Substack could do without banning a single account or removing any content.
Transparency and the big picture
I’m honest with my readers about what I want. I want the government and capitalism to collapse. I want a social and cultural revolution that will completely rewrite our relationships. I’m upfront about these things because I think they’re good for everyone. I want us to flourish as individuals, as communities, as a species, and as a world. I think anarchy is the way there.
This transparency is more than you’ll get from fascists (and, apparently, from Substack leadership). They have something to hide — namely, not so good intentions. My ability to express these convictions, which could be considered extremist by a wide range of people, is dependent on platforms provided by those committed to freedom of speech.
These open letters and their attendant discourse are themselves examples of collective efforts towards decentralized moderation. They’re also, both of them, not enough. Our ability to keep ourselves and our communities safe, and our ability to express our beliefs without suppression, should be squarely in our hands. Right now that is only partially the case.
I think we should imagine together how we can better organize our digital life towards that end.
Thank you for reading Anarchy Unfolds! This publication is entirely supported by my readers. If you liked this post, if you value my work, consider becoming a free or paid subscriber, so I can devote more of my time to crafting a toolbox of rest and resistance.