The not so rose tinted past

June 8, 2018 - Reading time: 4 minutes

In this podcast they mention that people over 40 get nostalgic about the home computing of the 1980s but forget the bad parts.

I'm old enough to remember it. So what were the bad parts? There were plenty.

Software on cassette tapes

Cassette tapes are almost forgotten as a technology now, but they were commonly used with home computers in the first half of the 1980s. Cassettes were awful for storing programs in much the same way that they're awful for storing music. The ferrous coating would wear off over time. Tapes would sometimes stretch. If there were multiple programs on a tape you would need to fast forward or rewind to the correct point, and that was a very hit-or-miss affair. It was easy to accidentally overwrite part of the tape that you didn't mean to. Tapes would also occasionally break.

Obtaining information useful for programming was hard

Sure, you could learn the elementary parts of BASIC programming from magazines, but progressing beyond "hello world" and other simple programs into more advanced areas was difficult. Even large libraries usually only had a few books on computers, and they were always the "coffee table" type with nice photos but not much practical use. Adults knew nothing, so it wasn't as if you could ask your parents or teachers. Chances are you'd learn a lot more from other children who also had home computers, and in practice that is how I learned most things in the early days. Access to information in general was far more restricted, since this was before the web existed.

Home computers themselves had hardware reliability issues

The quality of manufacture was not nearly as good as it is now. In the late 1980s I had an Amiga 500. These machines frequently had dodgy motherboards which would cause inexplicable crashes. The standard advice was to raise the computer and physically hit the underside with some amount of force, flexing the motherboard and hopefully jolting whatever component had worked loose back into place.

Using televisions as monitors was a bad user experience

Of course, if you were a kid you didn't get to plug your home computer into the TV that your parents watched. You got an old or second-hand, typically "portable" one, maybe out of a caravan or salvaged from a dumpster. Old TVs had all sorts of problems. In TV soap operas of that period or earlier you would sometimes see characters hitting their TVs to make them work, and that wasn't just fiction. Not only were TVs often unreliable, but the coax connection between the TV and your computer was often rather dodgy too. You might have to spend some amount of time twiddling with it to get a clear picture. And then there's tuning. Old TVs usually didn't have a remote. More typically they had a physical (analog) tuning dial and you would need to turn this to exactly the right frequency.

Software delivery took much longer than a couple of hours

In the podcast they say that in the 1980s obtaining software might take a couple of hours visiting the local computer shop. Actually it was much worse than that. Computer shops usually only stocked a few games or other titles - often fewer than you could count on your fingers. Typically the software you wanted was not available in your local shop, so you would have to mail order it, and that usually took at least a couple of weeks.

Keyboard quality varied a lot

Many home computers had good keyboards - perhaps even better than most today - but some, like the Sinclair Spectrum, had keyboards which were almost unusable for typing. In the 1980s word processing was a huge, smoking-hot technology, so having a good keyboard really mattered.



Goodbye Github

June 4, 2018 - Reading time: 3 minutes

I think I've now done most of the necessary preparation for getting out of Github. I had been claiming that I was going to do that for the last three years, and its recent purchase by Microsoft provided me with a slam-dunk reason to finally do it. I've slated closure of my account for the 11th of June 2018, added as a TODO item in org-agenda. That will give enough time for any Freedombone systems out there to automatically update their repos to the new location.

It has been fun being on Github. I stayed there because it mostly seemed not to be evil, despite being another proprietary silo. People sometimes had a dig at me for the hypocrisy of promoting self-hosting while keeping my repos in a silo, and they were entirely justified in doing so. The main advantage of Github was just visibility and searchability. Apart from that, other systems such as Gogs, Gitea and Gitlab have very similar features. It's the same kind of reason why video creators post to YouTube rather than just their own sites. You can self-host videos, but who is going to discover them? The same applies to software projects.

Like many folks from the FOSS side of the world I could rant at length about Microsoft. In recent years, during the post-Ballmer era, they say they've become a lot more friendly towards open source. They fund the Linux Foundation and some of their software is released under copyleft licenses. But not much of it is. Their desktop and server/cloud operating systems, and their office productivity software, remain, as far as I know, closed source. They also still extort people using software patents. This indicates to me that Microsoft's new-found love of FOSS is just superficial and that fundamentally their interests and corporate ethos have not changed. Having some FOSSy things enables them to recruit bright young engineers and makes for some nice-sounding publicity material at conferences. They will no doubt claim that buying Github proves - via putting their cash where their mouth is - that they are an honest player on the FOSS scene. But a couple of years down the line I expect that Github, if it still exists at all, will be pushing ads, might have a login which requires being on a Microsoft platform, and will be very specific to Windows, Microsoft APIs and Visual Studio. Anything else which can't be locked in will be sidelined and then dropped like a coyote over a cliff edge.

But in a way Microsoft has done me a favor. In buying Github it turned what had been a fuzzy trade-off of social and technical factors into a clear line in the sand and a question of which side you're on. The acquisition will cause some amount of disruption to FOSS development, and I'm sure they must have been aware of that as a desired outcome when they made the deal. Lots of installed things point to Github repos, and if enough projects leave Github then those things will start breaking.

So what's the way forward? We have quite good replacements for Github, but they lack the community element. Perhaps the fediverse protocols can help here, adding a single sign-on and a federated way of publishing changes and discussing features and bugs. Another possibility would be to create a software directory on a distributed hash table. I think the developer community has the capability and now the motivation to begin tackling these problems. Producing another giant silo like Github isn't the answer.



The FreedomBox app

May 28, 2018 - Reading time: 2 minutes

There has been an app on F-droid for the FreedomBox system for a while now and I was wondering whether I could do something similar for the Freedombone project. I didn't know anything about how the FreedomBox app worked, so I assumed it was an Android version of the Plinth user interface and was trying to figure out how the connection between the box and the app worked.

But it later turned out that wasn't what the FreedomBox app was doing at all. It's not a userops app but instead is more of a helper app to make it easy to install and run the client apps needed to work with the box. This is the type of thing which you might tell other members of your household/apartment/dorm/hackspace/commune to install so that they can use the services of the local server. It will guide them to the relevant sections of F-droid or the Play store, or open the relevant apps or web pages.

It was suggested that, provided Freedombone can talk the right JSON protocol, the FreedomBox app would also work with it. This would avoid needing to reinvent the wheel. Communication between the box and the app is one way only, so there aren't any security implications and no credentials or other private information is involved. The most an adversary on your local network could learn is what systems are available on your server.
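To make the idea concrete, here's a minimal sketch in Python of the kind of read-only endpoint involved. The field names and the /services.json path are hypothetical, for illustration only, and not the actual FreedomBox JSON schema.

    # Minimal sketch of a one-way, read-only JSON endpoint listing the apps
    # available on the box. Field names and the /services.json path are
    # hypothetical; the real FreedomBox schema may differ.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SERVICES = [
        {"name": "XMPP chat", "client": "Conversations", "url": "https://chat.example.org"},
        {"name": "RSS reader", "client": "web browser", "url": "https://rss.example.org"},
    ]

    class ServiceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/services.json":
                body = json.dumps(SERVICES).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)  # no credentials, nothing writable
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), ServiceHandler).serve_forever()

The client app only ever reads this list, which is why losing it to a snoop on the local network reveals nothing beyond which services exist.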

Some amount of wrangling later I got it working. An additional patch to the app was needed so that web pages for particular Freedombone apps, typically installed onto subdomains, can be opened from the FreedomBox app and that has since been merged upstream.

This makes Freedombone a little more consumer-friendly, but the main administration still happens only via ssh, so unlike FreedomBox this isn't really a consumer-grade system yet. Anyone who has used a Raspberry Pi without a screen should be able to use Freedombone though. I haven't yet decided whether to go all the way and try to do something equivalent to Plinth. It might be a lot of work, and I'm really not much of a front end developer. If I can find a low-maintenance, minimalistic interface which can also be secured then I might give it a go, but I don't want to get distracted into maintaining a giant django app or anything comparable.


RSS Adventures

May 21, 2018 - Reading time: 3 minutes

RSS is an old technology, but still a useful one. It's disappointing that it's not better supported by browsers, but that's understandable given that it's a system which makes it easy to view content without adverts or other distracting links/images. I'm pretty sure that's the main reason why Google stopped supporting it when they controversially dropped their own reader.

I've been running Tiny Tiny RSS for a long time. The server software is modified so that it grabs new feeds via Tor, which helps to prevent third parties gaining knowledge of what you're reading. The last I read, the maintainer of TT-RSS still has no interest in supporting proxying via Tor.
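The modification itself lives in TT-RSS's PHP fetching code, but the idea is easy to sketch: route the feed fetch through Tor's local SOCKS proxy. Here's a rough Python equivalent, assuming a Tor daemon listening on the usual port 9050 and the requests[socks] package - it is not the actual TT-RSS patch.

    # Sketch of fetching a feed through Tor's SOCKS proxy.
    # Assumes Tor is listening on 127.0.0.1:9050 and requests[socks] is installed.
    import requests

    TOR_PROXY = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved via Tor too
        "https": "socks5h://127.0.0.1:9050",
    }

    def fetch_feed(url):
        response = requests.get(url, proxies=TOR_PROXY, timeout=60)
        response.raise_for_status()
        return response.content

    if __name__ == "__main__":
        print(len(fetch_feed("https://example.com/feed.xml")), "bytes fetched")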

TT-RSS is great, but it's also quite complicated and the reading experience on mobile has been frustratingly flaky. As time passes I'm starting to read RSS feeds more on mobile devices and less on desktop operating systems with large screens. I've tried most (possibly all) of the various TT-RSS mobile readers for small screens, and didn't like any of them very much. So I was wondering if I could replace TT-RSS with something simpler which would work both on desktop and mobile. Searching around on Github I found an existing project with not much code and adapted it until it was sufficiently usable. The result is a new RSS reader called Smol RSS.

Smol RSS, as the name suggests, is a very small PHP system which allows you to select a feed from a drop-down list and then view it. The feed is fetched locally within your browser rather than by the server, so installing to an onion address means that when you're browsing through feeds there isn't an easy way for a third party to know what you're reading. Orbot now supports version 3 onion addresses, so this also gives a good level of confidentiality. Unless you give out the onion address it would be hard for anyone to know what it is or that you are associated with it.

Smol RSS is an extremely minimal reader, so there are no advanced features like in TT-RSS. But for my purposes this is good enough and a lot more convenient. It's now an app within Freedombone, and I added both light and dark themes which are selectable via app settings in the admin control panel.

Why not just use a native mobile app for RSS? I could do that, but having something on a server is more convenient when using multiple mobile devices. I can set up the feeds list in a single place and it then applies no matter where I access it from.



PGP NFC

May 15, 2018 - Reading time: 2 minutes

I think NFC (near field communication) is now quite a common feature on mobile phones. It's the thing where you need to be in physical contact, or within a couple of millimetres, to read the code. Similar to RFID, but with more data storage capacity.

I've been using an elliptic curve PGP/GPG key since last year, and the keys are much smaller than the RSA ones I used in the olden days. So I was wondering whether I could get the exported public key onto an NFC tag as another way of doing face-to-face key signing. The traditional PGP key signing protocol involves a ceremony including tea and sometimes cake, government ID documents, and key fingerprints printed out on tiny slivers of paper. You're then supposed to download the public key from a keyserver, check that the paper fingerprint matches and sign it. It's all a bit cumbersome and tedious (well, apart from the cake), and it's for those sorts of reasons that PGP never gained much popularity.

The largest type of NFC tag currently available can store 888 bytes of data. An elliptic curve public key exported from GPG is 640 bytes, so you can get the public key, plus some extra text such as a name and email address, onto a single tag. Even though these tags are high-end by NFC standards, if you buy them as stickers they're really cheap.
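As a quick sanity check, something like the following works out whether your own exported key leaves room on an 888 byte tag. The key ID here is a placeholder; substitute your own.

    # Check how much of an 888 byte NFC tag an exported public key would use.
    # Assumes gpg is installed; "alice@example.org" is a placeholder key ID.
    import subprocess

    TAG_CAPACITY = 888  # bytes

    key = subprocess.run(
        ["gpg", "--export", "alice@example.org"],
        check=True, capture_output=True,
    ).stdout

    print(f"key is {len(key)} bytes, "
          f"{TAG_CAPACITY - len(key)} bytes left for name and email")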

So I made a few of these and stuck them on my laptop and the back of my phone above the internal NFC sensor so that they don't conflict. If other people did the same then just tapping phones together would be enough to do the exchange. Far simpler than the established procedure.

If proper verification of encryption keys is to go mainstream then it needs to be something like this which is extremely simple and quick to do, and doesn't necessarily involve third parties like keyservers. The other obvious way to do it is with QR codes. I also experimented with doing it that way and it's entirely feasible to store an elliptic curve public key within a QR code and have it readable with a phone camera.
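A rough sketch of the QR route, using the Python qrcode library (an assumption on my part; any QR encoder that handles binary data would do):

    # Encode an exported public key into a QR code image.
    # Requires: pip install qrcode[pil]; the key ID is a placeholder.
    import subprocess
    import qrcode

    key = subprocess.run(
        ["gpg", "--export", "alice@example.org"],
        check=True, capture_output=True,
    ).stdout

    # Low error correction maximises capacity; a ~640 byte key fits easily.
    img = qrcode.make(key, error_correction=qrcode.constants.ERROR_CORRECT_L)
    img.save("pubkey-qr.png")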



efail and other failures

May 15, 2018 - Reading time: 3 minutes

Yesterday there was a security flap over possible PGP bugs. There's some legitimate concern behind it, but the most notable feature of this disclosure was that the way it was communicated was really poor and alarmist. There has been a series of high-profile software bugs over the last five years, and the trend seems to be to always create a bigger razzle than the previous one.

As far as I can ascertain the security implications for the Freedombone system are zero, even though GPG is used quite extensively within it. Freedombone uses the Mutt and Mailpile email clients, and neither of those is much affected. Mutt has S/MIME turned off and Mailpile doesn't display HTML in emails by default. There are a few issues with Mailpile which are described here. The TL;DR is that unless you are reading encrypted mail from 10+ years ago and are being actively targeted, there isn't likely to be a problem.

In reality efail is a low-impact security issue affecting some email clients and people running extremely old versions of GPG. These issues had also mostly been known about for a long time.

This wasn't how the problem was publicised though. Instead the advice from EFF was to immediately turn off PGP encryption or uninstall PGP software. Even if you are running an email client which has one of the efail problems, the EFF advice was not an appropriate or proportionate response. It suggested to me that the people at EFF were not thinking in terms of threat models. Out in the wild, how likely is the exploit to happen, and how does that risk weigh against the security provided by GPG/PGP email encryption? Most users of PGP are not fools, and it has been widely understood for a long time that including HTML within encrypted email is bad opsec which exposes you to a bigger range of threats.

"Recommendations to disable PGP plugins and stop encrypting emails are completely unwarranted and could put lives at risk. The correct response to vulnerable PGP implementations should not be to stop using PGP, but to use secure PGP implementations"

In the years after the Snowden revelations quite a lot of effort went into getting more people - especially those in "at risk" categories - to encrypt their email. To then encourage all that to be thrown away was reckless, and I think there is still a debate to be had about what constitutes "responsible disclosure". What information was EFF supplied with, and why did they report in the manner that they did? Was it just clickbait one-upmanship, or something else? Nothing on the internet is perfect, and security software which has some bugs under some conditions is probably still better than transmitting communications in the clear, where they can be trivially read by any intermediary.