If the fediverse gets large enough, what strategies will the incumbents use against it? We can form a pretty good idea of what this might look like, because similar things have happened in the past.
Fear, Uncertainty and Doubt (FUD)
They'll claim that the fediverse is an evil place packed with evildoers, deviants, crooks, baby-killing terrorists and other Bad Cybers, generating fear as a way of deterring people from even trying the system. Microsoft used this strategy in the 2000s against GNU/Linux. It was also used by various governments against the Tor Project. The music industry used the same methods against "home taping" in the 1980s and later against peer-to-peer file sharing systems after Napster.
Embrace, Extend, Extinguish
A method used by both Microsoft and Google. They would be enthusiastic about the fediverse and make a closed source ActivityPub server. It would be hyped as much as possible to attract the maximum number of users and create a single giant instance. A few fediverse stalwarts would probably be hired as a way of gaining community confidence. Once they had enough users they'd begin going beyond ActivityPub by adding new features "for greater convenience" or "for better integration with proprietary system XYZ". These new features would begin breaking federation with other instances. After a while you're back to one big silo which is closed and incompatible with anything else.
Google did this with Gmail, and to some extent XMPP (they abandoned it and transitioned users to a system which they fully control). Microsoft did it with their non-standard C++ extensions and have been trying to do it with GNU/Linux more generally.
Legal attacks
They might try to take legal action against developers or instance admins. Companies with monopoly status can afford to buy legislation, so they might try to get something onto the books which criminalizes running a fediverse instance.
Perhaps they would say that running social networks "must be regulated to prevent abuse or ensure cooperation with authorities".
Maybe "running an unregulated social network" becomes a crime.
Perhaps they might try to introduce a licensing scheme with prohibitively high costs such that only large companies can afford them.
Net non-neutrality would also be a possible counter-strategy if they can ensure that ISPs block fediverse traffic.
Sponsor instance admins
It's generally true that instance admins are not rich, and by the standards of large tech companies tiny amounts of money could be used to bribe them. This strategy would be like the cordyceps fungus which takes over the brain of its host and makes it act against its usual behavior pattern.
The deal would be like this: if we sponsor you then you have to meet our targets for ads inserted into the local timeline and agree to allow us to algorithmically adjust the local timeline. Maybe they make it as simple as adding some script to the software which enables remote control over content.
This would be a very low investment strategy which still brings in similar levels of advertising revenue. Why fight the opposition when you can just coopt them?
The only downside here would be the lack of "real names", but perhaps enforcing that would be part of the sponsorship deal.
If you can't beat 'em, join 'em
This would be similar to what Pixiv did with Mastodon, and would be the best case scenario. Maybe some new features are added, but they're under AGPL and federation continues.
However, this would mean that they won't have exclusive control over timelines and delivery of ads. If it's the best deal they can manage though then they might do this.
There are signs that centralized silos are socially unsustainable in the long run, so the incumbents might simply realize that the game is over and try to salvage as much of their position as possible, rather than trying to maintain a failing monopoly.
Diversity of tactics
Most likely they would do a combination of all of the above, hoping that at least one of them succeeds.
I think this is something which ought to be obvious, but hasn't yet become so to a lot of "people in tech". We ought to be designing systems which make it easy for online communities to manage themselves, with a minimum of algorithmic follies.
For silo systems like Twitter and Facebook there are two modes of governance being followed:
The old way: centralized moderation
You hire some censors, put them in an office and get them to spend all of their time going through flagged content and removing things. It's a high stress job with rapid staff turnover, and the censorship policies are all made by a central committee which governs for the whole planet. This is obviously unworkable, because it can never understand local context, but it has been the Facebook way for at least a decade. In the last few years the limitations of this have become clearer and the cracks in the edifice are now showing.
The new way: algorithmic governance
This is what Facebook is now pursuing. They know that they can't hire enough censors to implement more comprehensive human content moderation, so AI is their go-to solution. There's a magical belief that AI is going to solve the governance problem. Of course it isn't, and it may make matters worse, because ultimately algorithms don't understand the context of social situations. Without wisdom it's extremely hard to screen out algorithmic bias, and no ethics committee or big data mining solution is going to be able to make appropriate decisions on behalf of all the world's communities.
The future of the internet isn't going to be either of these things. It's going to be human community governance at a human scale. Not one committee per planet. One committee per community. Systems need to facilitate assignment of roles, setting of governance rules and ways to enforce the rules. They may also need to allow for ways to transact between communities. This is what self-governance means.
Another of the features I'd wanted to add to Freedombone for a long time was server notifications via XMPP, and now that has been added. This is for things like notification that an upgrade or security test has failed or that the tripwire has been triggered. Previously those notifications were only via email, but I'm not very obsessive about email and rarely check it, whereas instant messages are much more likely to get my attention.
The security policy for XMPP chat was previously set such that end-to-end security was required, but it was difficult to automatically send out an OMEMO encrypted message from the server and so I've had to downgrade end-to-end security to being optional. This is not ideal, but the tradeoff between having to deal with folks trying to send me plaintext messages and being promptly alerted if something has failed on the server is probably worth it. Longer term I'd like to figure out if I can automatically generate OMEMO messages and then I can return to a better security policy.
The main factor which delayed implementing this was the need to generate a separate XMPP account on the server to push out notifications. I didn't want a permanent account with a password lingering around somewhere, which could become a security vulnerability. The solution was to generate an ephemeral account purely for the purpose of sending a single message. A new notification XMPP account gets created with a random password, sends the message, and about one second later the account is deleted. Even if the account credentials were to leak during the sending of a plaintext message, they wouldn't subsequently be useful to an adversary.
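The create-send-delete sequence can be sketched like this. This is only an illustration of the idea, not Freedombone's actual code: the prosodyctl and sendxmpp invocations, account names and domain are all assumptions.

```python
import secrets

def ephemeral_notify_commands(domain, recipient):
    """Build the command sequence for a one-shot XMPP notification account.
    Sketch only: tool names and flags are assumptions, not Freedombone's code."""
    user = "notify" + secrets.token_hex(4)    # throwaway account name
    password = secrets.token_urlsafe(24)      # random password, never stored anywhere
    return [
        ["prosodyctl", "register", user, domain, password],               # create account
        ["sendxmpp", "-u", user, "-j", domain, "-p", password, recipient], # send message
        ["prosodyctl", "deluser", f"{user}@{domain}"],                     # delete ~1s later
    ]

cmds = ephemeral_notify_commands("example.org", "admin@example.org")
```

Because the password is freshly generated per message and the account is deleted immediately after sending, there is no long-lived credential for an attacker to find.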
Another addition to the notifications system is being able to send a webcam photo if the USB canary is triggered. The purpose of that is to answer the paranoid question "Is anyone trying to mess with the server while I'm not at home?" if you're out shopping or at work. The particular threat model is known as evil maid. If you're running Freedombone on an old laptop and have a secondary webcam plugged in, it will preferentially use that, so that you can set up the field of view appropriately. Not many people will need this level of physical device security, but it's nice to have the option. Also if you have the Syncthing app installed then any USB canary photo will be synced to the admin account.
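The "prefer the secondary webcam" logic is simple enough to sketch. Again this is illustrative only: the fswebcam tool, device paths and resolution are assumptions, not the actual implementation.

```python
import os

def canary_photo_command(outfile="/tmp/canary.jpg"):
    """Build a webcam capture command for the USB canary (sketch only).
    Prefers a plugged-in secondary webcam (/dev/video1) over the laptop's
    built-in one (/dev/video0), so the field of view can be chosen freely."""
    device = "/dev/video1" if os.path.exists("/dev/video1") else "/dev/video0"
    return ["fswebcam", "-d", device, "--no-banner", "-r", "640x480", outfile]
```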
Recently the keyboard I use most of the time, a full sized Unicomp, began developing dead keys. Sometimes they would make contact and sometimes not. This rapidly became an untenable situation, so I pulled off the relevant keys to see if anything was obviously amiss. The springs themselves looked ok, so I assumed that the rocker which they're mounted on had broken. With the passage of enough time plastic becomes brittle and can break, especially when there's a lot of vibration going on, as will happen during typing.
Opening up the casing with a 5.5mm socket I noticed a lot of small round bits of plastic falling out. At first I thought they might be some vital components, but on close inspection they were all irregularly shaped and didn't look like anything machine manufactured. I'd never deconstructed this type of keyboard previously, and searching for more information it turned out that these were the plastic heads of the rivets which hold the metal backplane on, many of which had fallen off. So what had happened was that the plastic had become old and brittle and the summer heat had probably caused the backplane to warp and break them off. With the backplane no longer properly held on there was nothing other than some plastic and rubber for the buckling springs to hit against, causing the keyboard to "go mushy".
So this was going to be a bigger job than I had thought. Fortunately there are quite detailed howtos online for how to remedy this type of calamity.
Since they're fairly expensive, you might expect the manufacturing quality of Model M type keyboards to be top of the line. But actually it's not. The Unicomp keyboards I use are closely based on the original IBM keyboards from the first generation of personal computers in the early 1980s. They were built to be mass market items, mostly sold to businesses. As such the build quality is not all that different from the Commodore Amiga which I was using at the end of that decade. Although it's quite thick, the casing isn't all that solid and makes a lot of creaky noises if you carry the keyboard around (just like the Amiga did), and using plastic rivets is also decidedly cheapskate.
The way to fix my problem was to completely deconstruct the keyboard, drill out the plastic rivets and replace them with 8mm M2 bolts. Known in the trade as "a bolt job".
Content Warning: Explicit photos of keyboards follow.
With the casing removed the keyboard looked like this. I took photos at each stage mainly as a reference so that I could hopefully put things back together in the same order.
Pulling off the keys is straightforward and the metal backplane could then be removed by using a soldering iron to melt away the few remaining rivet heads. Also the USB cable was unplugged and its ground lead unsoldered. After that the small control board can be unscrewed and pulled out. The plastic matrix and its rubber covering can then be easily removed. I also carefully removed all the key springs. Those are ultra delicate.
So then you have the plastic key holder - for want of a better term - which is the thing which needs drilling. Ideally I would have used a small handheld drill, but I didn't have one of those and instead used my usual large and heavy industrial grade one. This makes the drilling unwieldy, but with some amount of patience it works.
Reassembling the Unicomp keyboard with 8mm bolts is a very fiddly operation at first. The key springs are exceptionally easy to disturb, and if any of them are misaligned then the corresponding key won't work and the repairs would have been in vain. For this you need a very steady hand, so avoid drinking a lot of coffee before you do it.
The result then looks like this. For reference there's another Unicomp below. It's the smaller "space saver" type.
And the nuts on the backplane look like this:
I didn't drill out the rivets on the bottom row, because the plastic lip along the bottom was no thicker than the 1.6mm drill bit, so it was pointless trying to drill into it. Hopefully there should be enough bolts to secure the keyboard though.
When adding bolts to the backplane I rocked it back and forth; if the key switches are working normally the springs should rock up and down too. If there were any springs which weren't rockin' they could be twiddled (that's a technical term) with "the chopstick of death" (in my case the whittled end of a joss stick) until they snapped into position.
Then it's a matter of laboriously pushing on the keys again, reconnecting the control board and resoldering the USB cable ground lead.
And amazingly it all worked. No more duff keys.
These days it's unusual for any consumer electronics to be repairable. This is one of those rare examples where it's still possible to mend it yourself in a quite straightforward way, if you know how and are prepared to handle some fiddliness.
Before you go all Stallman on me, I actually do mean Linux the kernel, not the whole operating system. I've mentioned problems with Linux on a few occasions in the fediverse, but for sake of posterity (or whatever) I'll summarise them here.
Also I'm not really an active kernel developer at this point, merely a user, observer, and maintainer of a book. So things might look different from other perspectives.
The development process is cumbersome
It probably isn't if you've been doing it for 20 years and your neomutt configuration is tweaked to perfection to handle all of the threading and patches, but for anyone who started kernel hacking in the last few years sending patch series to mailing lists is archaic and totally out of step with current software development practices.
Those Linux Foundation megacorps with their giant piles of cash should just set up a Gitlab (independently, not on gitlab.com) and start running the mainline development from that. Then all of that git format-patch stuff can go away.
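For readers who haven't used it, the patch-mail workflow in question looks roughly like this, using a throwaway demo repo (the commit, output directory and mailing list address here are placeholders, not the actual kernel process):

```shell
# Each commit becomes an emailable patch file, which is then sent to a
# mailing list rather than opened as a merge request.
cd "$(mktemp -d)"
git init -q .
echo "obj-m += hello.o" > Makefile
git add Makefile
git -c user.name=Dev -c user.email=dev@example.org \
    commit -q -m "demo: add module Makefile"
git format-patch -1 -o outgoing/   # writes outgoing/0001-demo-....patch
# git send-email outgoing/*.patch --to=maintainers@example.org  # then mail it
```

Compared with pushing a branch to a forge and opening a merge request, every step here (threading, review comments, reroll tracking) happens manually over email.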
The governance model is inadequate
The governance model makes the sort of toxicity for which LKML is legendary inevitable. When you're a small project, like the ones I maintain, then there isn't any alternative to BDFL. Linux is not a small project. In terms of numbers of developers and rate of development it's one of the biggest software projects there is.
It's time to admit that Linux has a governance problem and to move to something other than BDFL. I don't know exactly what the model should be, but that should be up for debate. Try to avoid having the same maintainers in the same positions for long periods of time.
People say "but it has worked for 20+ years, so it must be ok". This is just an old man argument. In reality, I think BDFL has held back innovation and helped to maintain poor working practices. The project continues despite these factors, not because of them, due to its overall usefulness.
Lack of up to date documentation
The kernel source ships with its own documentation, but the documentation is not always very helpful and is often a long way out of date. There doesn't seem to be a lot of maintenance effort on the documentation. I maintain a book called The Linux Kernel Module Programmer's Guide and as far as I can tell this is the most up to date documentation on how to make kernel modules. Other books out there are a decade or more behind. With all of the megabucks of the Linux Foundation's sponsors you'd think that they could do a top notch job of maintaining high quality and relevant documentation for practical engineering. But apparently not. This poses potential problems for training the next generation of hackers, and it might be that Linux continues for only as long as the old guard remain.
I had been hoping to begin distributing image files for the Freedombone project via the Dat protocol earlier in the year, before the 3.1 release, but at the time other things were more of a priority. Recently I returned to investigating how to do that, and there is now a Dat version of the website and of the downloadable images. If you have the dat command installed then downloading an image is just a matter of doing a "dat clone [link]", similar to cloning a git repo.
The peer-to-peer nature of dat means that this method of distributing large files (typically about 3GB each in compressed format) is a lot more scalable than just directly downloading from one not very powerful server. Like git, the data is content addressable and can be seeded by arbitrary numbers of peers, so it doesn't have to reside in any one place. The more peers there are the faster downloads can happen, and being distributed provides some amount of censorship resistance.
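Content addressing is the property that makes this work: a chunk of data is named by the hash of its bytes, so it doesn't matter which peer serves it, because the downloader can always verify what it received. A minimal illustration of the principle (Dat's real format uses signed append-only logs and is considerably more involved):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Name a chunk by the SHA-256 hash of its bytes (toy version of
    content addressing; Dat's actual scheme is more sophisticated)."""
    return hashlib.sha256(data).hexdigest()

chunk = b"freedombone-image-part-0001"
addr = content_address(chunk)
# A downloader who fetched `chunk` from an untrusted peer re-hashes it:
assert content_address(chunk) == addr  # integrity verified, source irrelevant
```

Since the address commits to the content, any number of mirrors can seed the same file without the downloader having to trust any of them individually.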
So dat will be the preferred distribution method for image files in future. The previous method worked, but my server did struggle sometimes and I had to stop downloads of images happening via the onion address because it was saturating Tor's bandwidth limits.
There are other interesting possibilities for dat which I'll also try. I don't know if git repos can be hosted via dat, but doing remote backups using it would be quite feasible and perhaps better than the existing implementation in Freedombone.