Distributing on Dat

June 17, 2018 - Reading time: 1 minute

I had been hoping to begin distributing image files for the Freedombone project via the Dat protocol earlier in the year, before the 3.1 release, but at the time other things were more of a priority. Recently I returned to investigating how to do that, and there is now a Dat version of the website and downloadable images. If you have the dat command installed then downloading an image is just a matter of doing a "dat clone [link]", similar to cloning a git repo.

The peer-to-peer nature of dat means that this method of distributing large files (typically about 3GB each in compressed format) is a lot more scalable than just directly downloading from one not very powerful server. Like git, the data is content addressable and can be seeded by arbitrary numbers of peers so it doesn't have to reside in any one place. The more peers there are the faster downloads can happen, and being distributed provides some amount of censorship resistance.
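The content-addressing idea can be illustrated in a few lines. Note that dat's real on-disk format (Hypercore) is considerably more involved; this sketch only shows the general principle that an address derived from the content itself is independent of who serves it:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself, as git and dat do.
    Both actually use their own hashing schemes; SHA-256 here is
    illustrative only."""
    return hashlib.sha256(data).hexdigest()

# The same bytes yield the same address no matter which peer serves them,
# so any number of peers can seed a file without coordinating with each
# other or with the original server.
image = b"...compressed disk image bytes..."
addr_from_peer_a = content_address(image)
addr_from_peer_b = content_address(image)
assert addr_from_peer_a == addr_from_peer_b
```

Because the address is a function of the content, a downloader can also verify that whatever any peer sent them is the genuine file.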

So dat will be the preferred distribution method for image files in future. The previous method worked, but my server did struggle sometimes and I had to stop downloads of images happening via the onion address because it was saturating Tor's bandwidth limits.

There are other interesting possibilities for dat which I'll also try. I don't know if git repos can be hosted via dat, but doing remote backups using it would be quite feasible and perhaps better than the existing implementation in Freedombone.

Is FOSS capitalist?

June 12, 2018 - Reading time: 5 minutes

An article on Medium puts forth the proposition that FOSS is really just capitalism by another name. I agree with some parts, but mostly disagree with this idea.

FOSS is neither pro nor anti-capitalism. Although it can be used to help marginalised people there's nothing in the four freedoms which says that's its primary goal. Really it's just a development methodology which encourages sharing and collaboration rather than competition and secrecy. The sharealike nature of copyleft licenses does bias this type of development away from the exclusionary nature of market competition within a capitalist economy, but there are still plenty of very capitalistic companies using software with copyleft licenses and even sometimes developing new software under those licenses.

work is not acknowledged under capitalism unless it is measurably productive and benefiting someone who is already wealthy

This I agree with. At any point in time there's plenty of work to be done and things to be fixed, but unless doing those things personally benefits some rich person then typically there are zero funds available. Notice the badly maintained roads and railway infrastructure or city parks, for example.

But the rest is mostly wrong in my estimation. Even the first line:

Free and Open Source Software (FOSS) positions itself as being apart from capitalism

isn't really true. The original GNU manifesto mentions capitalism, but implies that it's good when the market competition is fair and that being non-proprietary helps to ensure fairness.

FOSS is exactly the same as capitalism in this way [that non-coders can't fork and continue a project], but with no greater governing body to create and enforce anti-discrimination laws. It is therefore safer for marginalised people to use centralised software under large companies that are accountable to the law.

Marginalised people are often marginalised precisely because the law doesn't work in their favor, or because laws are selectively applied and, if you're marginalised, you get little chance of legal representation. As a practical example, access to legal aid has been greatly reduced in the UK in recent years, which means that even if the law is on their side many people have no access to it.

Also the centralised systems have usually been the worst offenders when it comes to the rights of marginalised people. Over the last five years a lot of migration into the fediverse has been precisely due to people being blacklisted by centralised systems for purely arbitrary or discriminatory reasons.

In companies the law ensures that marginalised people are treated appropriately, and progress is slow but we’re getting there. In FOSS the only tool we have is user pressure, and it’s not working. All the power is with the developers, who have the time and/or money to be able to code because they’re in a privileged group. In FOSS as in capitalism, power begets power, and those at the top don’t share.

This is a characterisation of companies which I don't recognise. Unless you're working within a cooperative or are self-employed, companies are structured in a feudal manner. The law provides very little effective protection of employees, again also because few people have the money to be able to make use of legal services. If the boss breaks employment law usually there are no repercussions. You only stand a chance if you have the equivalent of "user pressure", i.e. something like a union or trade organisation independent from the company. In the end the law is not a substitute for real solidarity.

Free Software developers have the power to make or fix software, but usually they don't have much other power or privileged access to resources. Like raising children and other kinds of domestic work, FOSS is often not recognised as being "work", is usually unpaid and mostly doesn't appear in the GDP figures. It might seem that FOSS developers are incredibly privileged if you take the employees of Google or Facebook as an example, but those people are really just a tiny number compared to the set of all active FOSS developers, and they're not even the most productive ones.

The complaint that FOSS developers don't share power is really a conflation of two different things. FOSS is about sharing software. It's not really about sharing software-making skills and it doesn't imply any particular governance model. Individual engineers aren't obligated to design their software by vote, although in some projects that may happen. There certainly are problems with the "benevolent dictator for life" (BDFL) governance model, particularly when projects become large like the Linux kernel. Mastodon currently is also suffering from the limitations of BDFL.

The problem with BDFL is that nobody is really all that smart, and no matter how empathetic you are it's always difficult to know what other people's software requirements are unless you have those requirements yourself, in the same sort of context. Trying to design things in the high-minded belief that you know what's best for other people is how a lot of activism ends up being ineffective. It's why having a diverse software development team and some non-BDFL governance model is useful.

One important thing to keep in mind though is that most FOSS projects have only one or two developers and so never encounter the problem of governance at scale. Also production really is key. You can debate things long and hard, but in the end it's action which matters.


The state of self-hosting in 2018

June 12, 2018 - Reading time: 5 minutes

What is the current state of self-hosting? What are the problems with it? Who does it, and why? But first of all what is it?

A Definition

Self-hosting means running software which is designed for use over a network, usually the internet, on a machine which you own or control. The term really only applies to the client/server paradigm; if you're using peer-to-peer software then there is no server to host. The types of software which run over a network are typically things like email, blogs, wikis and social networks.

The Status Quo

Hosting network software yourself on a server which you own or control typically takes a couple of forms.

Firstly, the more traditional form is where you are renting a computer, or shared space on one, which is owned by some company and exists in a warehouse. In that manner you can run your own website and maybe some other systems, depending upon what the commercial service permits. For the first couple of decades of the web's history, personal websites run in this manner made up most of the web.

The second type of self-hosting is where you own some computer hardware and it runs in your own home. This is the old laptop in a closet or behind the sofa scenario. To do this you typically needed to have more technical knowledge.


What are the problems with this, and why did users begin moving to web 2.0 systems from the mid 2000s onwards? Maintaining your own server software was, and remains to some extent, quite tricky and requires some non-consumer level of technical knowledge. As the web grew it needed to become more accessible to a wider range of people, including those without detailed knowledge of how the technical side of things works.

A minimum knowledge requirement for self-hosting would be something like:

  • Knowing the basics of what IP addresses are
  • Knowing what domain names are and how to obtain one
  • Knowing how to configure a domain name to point to your network software, either with a fixed IP address or via dynamic DNS
  • Knowing how to obtain and install TLS certificates
  • Being aware of NAT traversal issues and port forwarding

This is really systems administrator level knowledge which can take quite a lot of effort and time to obtain.
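As an example of the kind of knowledge involved, many dynamic DNS services are updated by fetching a simple authenticated URL whenever your public IP changes. A minimal sketch of constructing such a request follows; the endpoint and parameter names are hypothetical, not any particular provider's API:

```python
from urllib.parse import urlencode

def build_ddns_update_url(endpoint: str, hostname: str, ip: str, token: str) -> str:
    """Build a dynamic DNS update request URL.

    The endpoint and parameter names here are hypothetical; check your
    provider's documentation for the real ones."""
    query = urlencode({"hostname": hostname, "myip": ip, "token": token})
    return f"{endpoint}?{query}"

url = build_ddns_update_url(
    "https://ddns.example.net/update",  # hypothetical provider endpoint
    "myserver.example.com",
    "203.0.113.7",
    "s3cret-token",
)
# A cron job on the home server would fetch this URL whenever the
# router's public IP address changes.
```

Even in this simplified form you can see why the knowledge barrier exists: the user has to understand what each of those parameters means before anything works.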

Who does it?

In the past mostly computer hobbyists, some software developers interested in devops and people doing systems administration as a job or who were trying to get into that line of business. In future hopefully this demographic can be expanded, but it depends upon the extent to which administration can be turned into a consumer-type user experience with minimal "friction".

Why self-host?

In the last ten years it has usually been easier to use web 2.0 "software as a service" type silo systems. Those are centralized and usually supported by advertising. But over time it has become increasingly clear that this is a bad model which can have some very bad outcomes. A single silo trying to govern the whole world is obviously not going to work out well. Throw advertisers into that mix and things get even worse. People need to govern themselves, so it would be better if individuals or communities controlled their own network systems and services, then they get to decide what the rules should be and manage their own affairs democratically.

What's the future for self-hosting?

The most likely future in the next five years is something like a small box which plugs into your internet router, which can then be administered from a laptop or mobile phone. It would also be possible to have the internet router be a home server, but people only usually replace their router if it breaks so we should probably assume that a strategy based upon new types of routers is not likely to see much adoption.

In the past owning and running a server was fairly expensive. This isn't the case any longer. Many people have old unused computer hardware which would be good enough to run a network system. Even many cheap single board computers are capable of doing that, and they consume little electrical power, so having them run continuously isn't much of a problem. So the cost barrier is going away.

Having a nice administration app which is simple to use is something that's needed for the future. There is currently a FreedomBox app for Android, but its functionality only provides one part of what's required. A realistic assumption in the next five years is that many people will only have mobile phones and that they may not own or have access to a laptop or desktop machine.

A limitation of single board computers in the past has been their relatively slow eMMC or microSD memory. Single board computers are beginning to emerge which have USB3, and with the rootfs on a USB3 drive I/O performance increases by an order of magnitude. So once USB3 becomes the standard on single board computers then that could be a game changer.

Using domain name systems other than the conventional one will also make self-hosting dramatically easier. If you host services on onion or I2P or SSB addresses then that gets around a lot of the cost and complexity of obtaining domain names or certificates, and also may help with NAT traversal issues. What's needed here is a slick way of giving your domain to other people if it's not human readable. Possibilities are some kind of pet names system, QR codes, NFC wearables or other short range signalling systems available on mobile phones (bluetooth, etc).

The not so rose tinted past

June 8, 2018 - Reading time: 3 minutes

In this podcast they mention that people over 40 get nostalgic about the home computing of the 1980s but forget the bad parts.

I'm old enough to remember it. So what were the bad parts? There were plenty.

Software on cassette tapes

Cassette tapes are almost forgotten as a technology now, but were commonly used with the home computers in the first half of the 1980s. Cassettes were awful for storing programs in a similar way that they're also awful for storing music. The ferrous coating would wear off over time. Tapes would sometimes stretch. You would need to fast forward or rewind to the correct point if there were multiple programs, and that was a very hit-or-miss affair. It was easy to accidentally overwrite part of the tape that you didn't mean to. Also tapes would occasionally break.

Obtaining information useful for programming was hard

Sure you could learn the elementary parts of BASIC programming from magazines, but progressing your knowledge beyond "hello world" or other simple programs into more advanced areas was difficult. Even large libraries usually only had a few books on computers and they were always the "coffee table" type with nice photos but not of much practical use. Adults knew nothing, so it wasn't as if you could ask your parents or teachers. Chances are that you'd learn a lot more from other children who also had home computers, and in practice this is how I learned most things in the early days. Access to information in general was far more restricted since this was before the web existed.

Home computers themselves had hardware reliability issues

The quality of manufacture was not nearly as good as it is now. In the late 1980s I had an Amiga 500. These machines frequently had dodgy motherboards which would cause inexplicable crashes. The standard advice was to raise the computer and then physically hit the underside with some amount of force, bending the motherboard and hopefully jolting whatever components back into place.

Using televisions as monitors was a bad user experience

Of course if you're a kid you don't get to use the TV that your parents watch with your home computer. You get an old or second-hand, typically "portable", one. Maybe out of a caravan or salvaged from a dumpster. Old TVs had all sorts of problems. In TV soap operas of that period or earlier you would sometimes see the characters hitting their TVs to make them work, and that wasn't just fiction. Not only were TVs often unreliable but the coax connection between the TV and your computer was often also rather dodgy. You might have to spend some amount of time twiddling with it to get a clear picture. And then there's tuning. Old TVs usually didn't have a remote. More typically they had a physical (analog) tuning dial and you would need to turn this to exactly the right frequency.

Software delivery took much longer than a couple of hours

In the podcast they say that in the 1980s obtaining software might take a couple of hours visiting the local computer shop. Actually it was much worse than that. Computer shops usually only stocked a few games or other software. The number of titles might be less than your number of fingers. Typically the software you wanted was not available in your local shop, so you would have to mail order it, and that typically took at least a couple of weeks.

Keyboard quality varied a lot

Many home computers had good keyboards - perhaps even better than most today - but some like the Sinclair Spectrum had keyboards which were almost unusable for typing. In the 1980s word processing was a huge and smoking hot technology, so having a good keyboard really mattered.


Goodbye Github

June 4, 2018 - Reading time: 3 minutes

I think I've done most of the necessary preparation now in readiness for getting out of Github. I had been claiming that I was going to do that for the last three years and its recent purchase by Microsoft provided me with a slam-dunk reason to finally do it. I've slated closure of my account for the 11th of June 2018, added as a TODO item in org-agenda. That will give enough time for any Freedombone systems out there to automatically update the repos to the new location.

It has been fun being on Github. I stayed there because it mostly seemed to not be evil, despite being another proprietary silo. People sometimes had a dig at me for the hypocrisy of promoting self-hosting while having the repos in a silo, and they were entirely justified in doing so. The main advantage of Github was just visibility and searchability. Apart from that other systems such as Gogs, Gitea and Gitlab have very similar features. It's the same kind of reason why video creators post to YouTube rather than just their own sites. You can self-host videos, but who is going to discover them? Similar applies with software projects.

Like many folks from the FOSS side of the world I could rant at length about Microsoft. In recent years during the post-Ballmer era they say they've become a lot more friendly towards open source. They fund the Linux Foundation and some of their software is released under copyleft licenses. But not much of it is. Their desktop and server/cloud operating systems, and their office productivity software remains, as far as I know, closed source. They also still extort people using software patents. This indicates to me that Microsoft's new found love of FOSS is just superficial and that fundamentally their interests or corporate ethos have not changed. Having some FOSSy things enables them to recruit bright young engineers and makes for some nice sounding publicity material at conferences. They will no doubt claim that buying Github proves - via putting their cash where their mouth is - that they are an honest player on the FOSS scene. But a couple of years down the line I expect that Github, if it still exists at all, will be pushing ads, might have some login which requires being on a Microsoft platform and will be very specific to Windows, Microsoft APIs and Visual Studio. Anything else which can't be locked in will be sidelined and then dropped like a coyote over a cliff edge.

But in a way Microsoft has done me a favor. In buying Github it turned a decision which was a fuzzy trade-off of social and technical factors into a clear line in the sand and a question of which side you're on. Their acquisition will cause some amount of disruption to FOSS development, and I'm sure they must have been aware of that as a desired outcome when they made the deal. Lots of installed things point to Github repos and if there are enough projects leaving Github then those things will start breaking.

So what's the way forward? We have quite good replacements for Github, but they lack the community element. Perhaps the fediverse protocols can help here, adding a single sign-on and a federated way of publishing changes and discussing features and bugs. Another possibility would be to create a software directory on a distributed hash table. I think the developer community has the capability and now the motivation to begin tackling these problems. Producing another giant silo like Github isn't the answer.


The FreedomBox app

May 28, 2018 - Reading time: 2 minutes

There has been an app on F-droid for the FreedomBox system for a while now and I was wondering whether I could do something similar for the Freedombone project. I didn't know anything about how the FreedomBox app worked, so I assumed it was an Android version of the Plinth user interface and was trying to figure out how the connection between the box and the app worked.

But it later turned out that wasn't what the FreedomBox app was doing at all. It's not a userops app but instead is more of a helper app to make it easy to install and run the client apps needed to work with the box. This is the type of thing which you might tell other members of your household/apartment/dorm/hackspace/commune to install so that they can use the services of the local server. It will guide them to the relevant sections of F-droid or the Play store, or open the relevant apps or web pages.

It was suggested that provided Freedombone can speak the right JSON protocol then the FreedomBox app would also work with it. This would avoid needing to re-invent the wheel. Communication between the box and the app is one way only, so there aren't any security implications and no credentials or other private information is involved. The most an adversary on your local network could learn is what systems are available on your server.
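A one-way protocol like this can be as simple as the server publishing a static JSON document listing its services, which the app fetches over the local network. The field names below are illustrative guesses only, not the actual FreedomBox app schema:

```python
import json

# What a home server might publish at some well-known URL. The schema
# here is hypothetical; the real FreedomBox app defines its own fields.
services = {
    "hostname": "freedombone.local",
    "apps": [
        # Each entry points the user either at a client app to install
        # or at a web page to open.
        {"name": "XMPP chat", "client": "org.example.xmppclient", "url": None},
        {"name": "Wiki", "client": None, "url": "https://wiki.example.com"},
    ],
}

manifest = json.dumps(services)

# The phone app only ever reads this document, so no credentials flow in
# either direction; the most an eavesdropper learns is which services exist.
parsed = json.loads(manifest)
assert parsed["apps"][0]["name"] == "XMPP chat"
```

Since the document is read-only and contains nothing secret, it can be served without authentication, which keeps the protocol trivially simple on both ends.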

Some amount of wrangling later I got it working. An additional patch to the app was needed so that web pages for particular Freedombone apps, typically installed onto subdomains, can be opened from the FreedomBox app and that has since been merged upstream.

This makes Freedombone a little more consumer friendly, but the main administration remains only via ssh and so unlike FreedomBox this isn't really a consumer grade system yet. Anyone who has used a Raspberry Pi without a screen should be able to use Freedombone though. I'm not yet decided on whether to go all the way and try to do something equivalent to Plinth. It might be a lot of work, and I'm really not much of a front end developer. If I can find a low maintenance and minimalistic interface which can also be secured then I might give it a go, but I don't want to get distracted into maintaining a giant django app or anything comparable.


The blog of Bob Mottram, a Free Software hacker and maintainer of the Freedombone project.

Web site

Email/XMPP: bob@freedombone.net

Matrix: @bob:matrix.freedombone.net