Another of the features I'd wanted to add to Freedombone for a long time was server notifications via XMPP, and now that has been added. This is for things like notification that an upgrade or security test has failed or that the tripwire has been triggered. Previously those notifications were only via email, but I'm not very obsessive about email and rarely check it, whereas instant messages are much more likely to get my attention.
The security policy for XMPP chat was previously set such that end-to-end security was required, but it was difficult to automatically send out an OMEMO encrypted message from the server and so I've had to downgrade end-to-end security to being optional. This is not ideal, but the tradeoff between having to deal with folks trying to send me plaintext messages and being promptly alerted if something has failed on the server is probably worth it. Longer term I'd like to figure out if I can automatically generate OMEMO messages and then I can return to a better security policy.
The main factor which delayed the implementation of this was the question of needing to generate a separate XMPP account on the server to push out notifications. I didn't really want there to be a permanent separate account with a password lingering around somewhere which could become a possible security vulnerability. The solution to this was to generate an ephemeral account purely for the purpose of sending a single message. A new notification XMPP account gets created with a random password, sends the message and then about one second later the account is deleted. Even if the account credentials were to leak during the sending of a plaintext message they can't subsequently be useful to a potential adversary.
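The lifecycle can be sketched in a few lines of Python. The account-management and message-sending functions here are stand-ins (on the real server this would shell out to the XMPP server's own tooling, e.g. something like prosodyctl, and an XMPP client); the point is the ordering: random credentials, one message, immediate deletion.

```python
import secrets

accounts = {}   # stand-in for the XMPP server's account database
sent = []       # stand-in for delivered messages

def create_account(user, password):
    accounts[user] = password

def send_message(user, password, to_jid, text):
    assert accounts.get(user) == password   # must authenticate as the ephemeral user
    sent.append((to_jid, text))

def delete_account(user):
    del accounts[user]

def notify(admin_jid, text):
    """Create a throwaway account, send one message, then delete the account."""
    user = "notify-" + secrets.token_hex(4)   # random account name
    password = secrets.token_urlsafe(32)      # random password, never stored on disk
    create_account(user, password)
    try:
        send_message(user, password, admin_jid, text)
    finally:
        delete_account(user)                  # credentials are useless from here on
```

After notify() returns the account no longer exists, so even credentials leaked in transit can't be replayed.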
Another addition to the notifications system is being able to send a webcam photo if the USB canary is triggered. The purpose of that is to answer the paranoid question "Is anyone trying to mess with the server while I'm not at home?" when you're out shopping or at work. The particular threat model is known as evil maid. If you're running Freedombone on an old laptop and have a secondary webcam plugged in, it will preferentially use that, so that you can set up the field of view appropriately. Not many people will need this level of physical device security, but it's nice to have the option. Also if you have the Syncthing app installed then any USB canary photo will be synced to the admin account.
Recently the keyboard I use most of the time, a full sized Unicomp, began developing dead keys. Sometimes they would make contact and sometimes not. This rapidly became an untenable situation and so I pulled off the relevant keys to see if anything was obviously amiss. The springs themselves looked ok, so I assumed that the rocker which they're mounted on had broken. With the passage of enough time plastic becomes brittle and can break, especially when there's a lot of vibration going on as will happen during typing.
Opening up the casing with a 5.5mm socket I noticed a lot of small round bits of plastic falling out. At first I thought they might be some vital components, but on close inspection they were all irregularly shaped and didn't look like anything machine manufactured. I'd never deconstructed this type of keyboard previously, and searching for more information it turned out that these were the plastic heads of the rivets which hold the metal backplane on, many of which had fallen off. So what had happened was that the plastic had become old and brittle and the summer heat had probably caused the backplane to warp and break them off. With the backplane no longer properly held on there was nothing other than some plastic and rubber for the buckling springs to hit against, causing the keyboard to "go mushy".
So this was going to be a bigger job than I had thought. Fortunately there are quite detailed howtos online for how to remedy this type of calamity.
Given that they're fairly expensive, you might think that the manufacturing quality of the model M type keyboards is top of the line. Actually it's not. The Unicomp keyboards I use are closely based on the original IBM keyboards from the first generation of personal computers in the early 1980s. They were built to be mass market items, mostly sold to businesses. As such the build quality is not all that different from the Commodore Amiga which I was using at the end of that decade. Although it's quite thick, the casing isn't all that solid and makes a lot of creaky noises if you carry the keyboard around (just like the Amiga did), and using plastic rivets is also decidedly cheapskate.
The way to fix my problem was to completely deconstruct the keyboard, drill out the plastic rivets and replace them with 8mm M2 bolts. Known in the trade as "a bolt job".
Content Warning: Explicit photos of keyboards follow.
With the casing removed the keyboard looked like this. I took photos at each stage mainly as a reference so that I could hopefully put things back together in the same order.
Pulling off the keys is straightforward and the metal backplane could then be removed by using a soldering iron to melt away the few remaining rivet heads. Also the USB cable was unplugged and its ground lead unsoldered. After that the small control board can be unscrewed and pulled out. The plastic matrix and its rubber covering can then be easily removed. I also carefully removed all the key springs. Those are ultra delicate.
So then you have the plastic key holder - for want of a better term - which is the thing which needs drilling. Ideally I would have used a small handheld drill, but I didn't have one of those and instead used my usual large and heavy industrial grade one. This makes the drilling unwieldy, but with some amount of patience it works.
Reassembling the Unicomp keyboard with 8mm bolts is a very fiddly operation at first. The key springs are exceptionally easy to disturb, and if any of them are misaligned then the corresponding key won't work and the repairs would have been in vain. For this you need a very steady hand, so avoid drinking a lot of coffee before you do it.
The result then looks like this. For reference there's another Unicomp below. It's the smaller "space saver" type.
And the nuts on the backplane look like this:
I didn't drill out the rivets on the bottom row, because the plastic lip along the bottom was no thicker than the 1.6mm drill bit, so it was pointless trying to drill into it. Hopefully there should be enough bolts to secure the keyboard though.
When adding bolts to the backplane I rocked it back and forth; if the key switches are working normally then the springs should also rock up and down. Any springs which weren't rockin' could be twiddled (that's a technical term) with "the chopstick of death" (in my case the whittled end of a joss stick) until they snapped into position.
Then it's a matter of laboriously pushing on the keys again, reconnecting the control board and resoldering the USB cable ground lead.
And amazingly it all worked. No more duff keys.
These days it's unusual for any consumer electronics to be repairable. This is one of those rare examples where it's still possible to mend it yourself in a quite straightforward way if you know how and are prepared to handle some fiddliness.
Before you go all Stallman on me, I actually do mean Linux the kernel, not the whole operating system. I've mentioned problems with Linux on a few occasions in the fediverse, but for sake of posterity (or whatever) I'll summarise them here.
Also I'm not really an active kernel developer at this point, merely a user, observer, and maintainer of a book. So things might look different from other perspectives.
The development process is cumbersome
It probably isn't if you've been doing it for 20 years and your neomutt configuration is tweaked to perfection to handle all of the threading and patches, but for anyone who started kernel hacking in the last few years sending patch series to mailing lists is archaic and totally out of step with current software development practices.
Those Linux Foundation megacorps with their giant piles of cash should just set up a Gitlab (independently, not on gitlab.com) and start running the mainline development from that. Then all of that git format-patch stuff can go away.
The governance model is inadequate
The bad governance model makes the sort of toxicity for which LKML is legendary into an inevitability. When you're a small project, like the ones I maintain, then there isn't any alternative to BDFL. Linux is not a small project. In terms of numbers of developers and rate of development it's one of the biggest software projects there is.
It's time to admit that Linux has a governance problem and to move to something other than BDFL. I don't know exactly what the model should be, but that should be up for debate. Try to avoid having the same maintainers in the same positions for long periods of time.
People say "but it has worked for 20+ years, so it must be ok". This is just an old man argument. In reality, I think BDFL has held back innovation and helped to maintain poor working practices. The project continues despite these factors, not because of them, due to its overall usefulness.
Lack of up to date documentation
The kernel source ships with its own documentation, but the documentation is not always very helpful and is often a long way out of date. There doesn't seem to be a lot of maintenance effort on the documentation. I maintain a book called The Linux Kernel Module Programmer's Guide and as far as I can tell this is the most up to date documentation on how to make kernel modules. Other books out there are a decade or more behind. With all of the megabucks of the Linux Foundation's sponsors you'd think that they could do a top notch job of maintaining high quality and relevant documentation for practical engineering. But apparently not. This poses potential problems for training the next generation of hackers, and it might be that Linux continues for only as long as the old guard remain.
I had been hoping to begin distributing image files for the Freedombone project via the Dat protocol earlier in the year, before the 3.1 release, but at the time other things were more of a priority. Recently I returned to investigating how to do that, and there is now a Dat version of the website and downloadable images. If you have the dat command installed then downloading an image is just a matter of doing a "dat clone [link]", similar to cloning a git repo.
The peer-to-peer nature of dat means that this method of distributing large files (typically about 3GB each in compressed format) is a lot more scalable than just directly downloading from one not very powerful server. Like git, the data is content addressable and can be seeded by arbitrary numbers of peers so it doesn't have to reside in any one place. The more peers there are the faster downloads can happen, and being distributed provides some amount of censorship resistance.
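The content-addressing idea is simple enough to sketch as a toy in-memory store in Python. This illustrates the principle only; the actual dat protocol uses its own hashing scheme and wire format.

```python
import hashlib

store = {}   # in a real network, many peers each hold some of these entries

def address(data: bytes) -> str:
    # The data's hash is its address, so it doesn't matter which peer serves it.
    return hashlib.sha256(data).hexdigest()

def publish(data: bytes) -> str:
    addr = address(data)
    store[addr] = data
    return addr

def fetch(addr: str) -> bytes:
    data = store[addr]
    # Integrity comes for free: fetched content must hash back to its address.
    assert address(data) == addr
    return data
```

Because any peer holding the bytes can answer a fetch, adding peers adds bandwidth, which is what makes this approach scale.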
So dat will be the preferred distribution method for image files in future. The previous method worked, but my server did struggle sometimes and I had to stop downloads of images happening via the onion address because it was saturating Tor's bandwidth limits.
There are other interesting possibilities for dat which I'll also try. I don't know if git repos can be hosted via dat, but doing remote backups using it would be quite feasible and perhaps better than the existing implementation in Freedombone.
An article on Medium puts forth the proposition that FOSS is really just capitalism by another name. I agree with some parts, but mostly disagree with this idea.
FOSS is neither pro nor anti-capitalism. Although it can be used to help marginalised people, there's nothing in the four freedoms which says that's its primary goal. Really it's just a development methodology which encourages sharing and collaboration rather than competition and secrecy. The sharealike nature of copyleft licenses does bias this type of development away from the exclusionary nature of market competition within a capitalist economy, but there are still plenty of very capitalistic companies using software with copyleft licenses and even sometimes developing new software under those licenses.
work is not acknowledged under capitalism unless it is measurably productive and benefiting someone who is already wealthy
This I agree with. At any point in time there's plenty of work to be done and things to be fixed, but unless doing those things personally benefits some rich person then typically there are zero funds available. Notice the badly maintained roads and railway infrastructure or city parks, for example.
But the rest is mostly wrong in my estimation. Even the first line:
Free and Open Source Software (FOSS) positions itself as being apart from capitalism
isn't really true. The original GNU manifesto mentions capitalism, but implies that it's good when the market competition is fair and that being non-proprietary helps to ensure fairness.
FOSS is exactly the same as capitalism in this way [that non-coders can't fork and continue a project], but with no greater governing body to create and enforce anti-discrimination laws. It is therefore safer for marginalised people to use centralised software under large companies that are accountable to the law.
Marginalised people are often marginalised precisely because the law doesn't work in their favour, or because laws are selectively applied, and if you're marginalised then you often have no chance of legal representation. As a practical example, in recent years access to legal aid has been greatly reduced in the UK, and this means that even if the law is on their side many people have no access to it.
Also the centralised systems have usually been the worst offenders when it comes to the rights of marginalised people. Over the last five years a lot of immigration into the fediverse has been precisely due to people being blacklisted by centralised systems for purely arbitrary or discriminatory reasons.
In companies the law ensures that marginalised people are treated appropriately, and progress is slow but we’re getting there. In FOSS the only tool we have is user pressure, and it’s not working. All the power is with the developers, who have the time and/or money to be able to code because they’re in a privileged group. In FOSS as in capitalism, power begets power, and those at the top don’t share.
This is a characterisation of companies which I don't recognise. Unless you're in a cooperative or self-employed, the company you work for is structured in a feudal manner. The law provides very little effective protection for employees, again because few people have the money to make use of legal services. If the boss breaks employment law there are usually no repercussions. You only stand a chance if you have the equivalent of "user pressure", i.e. something like a union or trade organisation independent from the company. In the end the law is not a substitute for real solidarity.
Free Software developers have the power to make or fix software, but usually they don't have much other power or privileged access to resources. Like raising children and other kinds of domestic work, FOSS is often not recognised as being "work", is usually unpaid and mostly doesn't appear in the GDP figures. It might seem that FOSS developers are incredibly privileged if you take the employees of Google or Facebook as an example, but those people are really just a tiny number compared to the set of all active FOSS developers, and they're not even the most productive ones.
The complaint that FOSS developers don't share power is really a conflation between two different things. FOSS is about sharing software. It's not really about sharing software making skills and it doesn't imply any particular governance model. Individual engineers aren't obligated to design their software by vote, although in some projects that may happen. There certainly are problems with the "benevolent dictator for life" (BDFL) governance model, particularly when projects become large like the Linux kernel. Mastodon currently is also suffering from the limitations of BDFL.
The problem with BDFL is that nobody is really all that smart and that no matter how empathetic you are it's always difficult to know what other people's software requirements are unless you have those requirements yourself and in the same sort of context. Trying to design things in the high-minded belief that you know what's best for other people is how a lot of activism ends up being ineffective. It's why having a diverse software development team and some non-BDFL governance model is useful.
One important thing to keep in mind though is that most FOSS projects have only one or two developers and so never encounter the problem of governance at scale. Also production really is key. You can debate things long and hard, but in the end it's action which matters.
What is the current state of self-hosting? What are the problems with it? Who does it, and why? But first of all what is it?
Self-hosting means running software which is designed for use over a network, usually the internet. The term only really applies to the client/server paradigm, since it makes little sense for peer-to-peer software. The types of software which run over a network are typically things like email, blogs, wikis and social networks.
Hosting network software yourself on a server which you own or control typically takes a couple of forms.
Firstly, the more traditional form is where you are renting a computer, or shared space on one, which is owned by some company and exists in a warehouse. In that manner you can run your own website and maybe some other systems, depending upon what the commercial service permits. For the first couple of decades of the history of the web, personal websites run in this way made up most of the web.
The second type of self-hosting is where you own some computer hardware and it runs in your own home. This is the old laptop in a closet or behind the sofa scenario. To do this you typically needed to have more technical knowledge.
What are the problems with this, and why did users begin moving to web 2.0 systems from the mid 2000s onwards? Maintaining your own server software was, and remains to some extent, quite tricky and requires some non-consumer level of technical knowledge. As the web grew it needed to become more accessible to a wider range of people, including those without detailed knowledge of how the technical side of things works.
A minimum knowledge requirement for self-hosting would be something like: registering and configuring domain names, obtaining TLS certificates, setting up port forwarding on a router, using ssh, installing and configuring server software, and keeping up with security updates and backups.
This is really systems administrator level knowledge which can take quite a lot of effort and time to obtain.
In the past it was mostly computer hobbyists, some software developers interested in devops, and people doing systems administration as a job or trying to get into that line of business. In future hopefully this demographic can be expanded, but it depends upon the extent to which administration can be turned into a consumer-type user experience with minimal "friction".
In the last ten years it has usually been easier to use web 2.0 "software as a service" type silo systems. Those are centralized and usually supported by advertising. But over time it has become increasingly clear that this is a bad model which can have some very bad outcomes. A single silo trying to govern the whole world is obviously not going to work out well. Throw advertisers into that mix and things get even worse. People need to govern themselves, so it would be better if individuals or communities controlled their own network systems and services, then they get to decide what the rules should be and manage their own affairs democratically.
The most likely future in the next five years is something like a small box which plugs into your internet router, which can then be administered from a laptop or mobile phone. It would also be possible to have the internet router be a home server, but people only usually replace their router if it breaks so we should probably assume that a strategy based upon new types of routers is not likely to see much adoption.
In the past owning and running a server was fairly expensive. This isn't the case any longer. Many people have old unused computer hardware which would be good enough to run a network system. Even many cheap single board computers are capable of doing that, and they don't consume much electrical power, so having them run continuously is not much of a problem. So the cost barrier is going away.
Having a nice administration app which is simple to use is something that's needed for the future. There is currently a FreedomBox app for Android, but its functionality only provides one part of what's required. A realistic assumption in the next five years is that many people will only have mobile phones and that they may not own or have access to a laptop or desktop machine.
A limitation of single board computers in the past has been their relatively slow eMMC or microSD memory. Single board computers are beginning to emerge which have USB3, and with the rootfs on a USB3 drive I/O performance increases by an order of magnitude. So once USB3 becomes standard on single board computers that could be a game changer.
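A crude way to check the difference yourself is to time a synced sequential write on whichever filesystem sits on the medium being tested. This is a minimal sketch, not a proper benchmark (block size, caching and wear levelling all matter):

```python
import os
import tempfile
import time

def write_throughput_mb_s(path_dir, size_mb=32):
    """Time an fsync'd sequential write into path_dir and return MB/s."""
    block = b"\0" * (1024 * 1024)
    fd, path = tempfile.mkstemp(dir=path_dir)
    try:
        start = time.monotonic()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())   # force data to the device, not just the page cache
        elapsed = time.monotonic() - start
    finally:
        os.remove(path)
    return size_mb / max(elapsed, 1e-9)
```

Running it once against a directory on microSD and once against a USB3 drive makes the order-of-magnitude gap visible.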
Using domain name systems other than the conventional one will also make self-hosting dramatically easier. If you host services on onion or I2P or SSB addresses then that gets around a lot of the cost and complexity of obtaining domain names or certificates, and also may help with NAT traversal issues. What's needed here is a slick way of giving your domain to other people if it's not human readable. Possibilities are some kind of pet names system, QR codes, NFC wearables or other short range signalling systems available on mobile phones (bluetooth, etc).