Twitter's health metrics

It looks like Twitter is now starting to do what I predicted they would a year or two ago: use technocratic methods to try to regulate discussions on their site. Creating conversational health metrics is the first step towards more automated censorship based on those indicators.

I suggested something a little like this for the fediverse over a year ago. The idea was to do sentiment analysis and maybe use colours to indicate how hostile a timeline was becoming in an attempt to allow for self-regulation based on visual feedback.
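
To give a sense of what I had in mind, here's a minimal sketch in which a toy keyword lexicon stands in for real sentiment analysis; the words, weights and colour thresholds are invented purely for illustration, not taken from any actual implementation:

```python
# Toy sketch: score recent posts for hostility and map the average
# to a traffic-light colour shown alongside the timeline.
# The lexicon and thresholds below are made up for illustration;
# a real version would use a proper sentiment analysis model.

HOSTILE_WORDS = {"idiot": 1.0, "stupid": 0.8, "hate": 0.9, "shut up": 0.7}

def hostility_score(post: str) -> float:
    """Crude per-post score: sum of weights of hostile terms found."""
    text = post.lower()
    return sum(weight for term, weight in HOSTILE_WORDS.items() if term in text)

def timeline_colour(posts: list[str]) -> str:
    """Map the average hostility of recent posts to a colour band."""
    if not posts:
        return "green"
    avg = sum(hostility_score(p) for p in posts) / len(posts)
    if avg < 0.2:
        return "green"   # calm
    if avg < 0.6:
        return "amber"   # getting heated
    return "red"         # hostile

# Example: colour the last few posts of a timeline
recent = ["That's a fair point", "You're an idiot", "I hate this take, shut up"]
print(timeline_colour(recent))  # "red" with this toy lexicon
```

The visual feedback would then be left to the users themselves, rather than triggering any automated action.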

Somewhat appropriately, I got some hostile replies to that, and on reflection I'm inclined to agree that such attempts at automated conversational regulation could introduce more problems than they solve.

Especially for things like debate on contemporary issues, algorithms aren't going to understand the context and will be trying to regulate in a context-free manner that might be entirely inappropriate. Even if the algorithms were, in principle, AI-complete, they still might not share the same human situatedness, and so wouldn't understand the discussion well enough to provide meaningful feedback. You can see the same kinds of fundamental misunderstanding between people from differing affinity groups, and I expect exactly the same will apply to arbitrarily sophisticated AI systems.

So what's the solution?

If I were to condense Twitter's problems into a paragraph, it would be this:

Twitter is a pseudo-democratic space, but it lacks the necessary precursor for real democracy: control of communities by communities (self-rule). The centralized nature of the site, with potentially the whole world in one database judged by a single universal standard, creates an irreconcilable contradiction that leaves no room for cultural or historical context.

Having been on the interwebs for a long time and observed forums, IRC, XMPP, mailing lists, blogs when they still had lively comment sections, and so on, my conclusion is that the only good way to regulate a community is manually: humans regulating other humans, establishing trust and accountability the old skool way. If you try to do this in an automated manner it never works out well, and usually ends with bad or weird outcomes and desertion by the original inhabitants.

This is a fundamental problem for the centralized social network sites. They are ramping up their human censorship effort, but they won't be able to keep hiring more and more censors without breaking their business model. Even if they amass a vast army of censors, they will still be applying a single policy to all of the world's cultures, and that's always going to be a bad fit. It's why I think the federated model is better. A federation isn't one-size-fits-all: it can accommodate different cultures without forcing them into a space where they can't avoid clashing.