Freedombone Blog

Freedom in the Cloud

The ethical technology of 2019

I'm reading this article and agree with the overall aim of trying to produce more ethical technology. If you are a Free Software person then you've always been interested in ethics.

But there are lots of cringe-inducing things in the article, especially when they describe Mozilla.

"When building a product, designers should make the default settings the ones that will be best for users. Firefox is a good example. After completing more usability tests, the browser will start blocking third-party trackers, which collect your data as you surf the internet, by default."

Well that's great, but by default Firefox also collects data about exactly how you use the browser as you surf the internet and sends it back to Mozilla. You're not informed about that at all. That data could easily be used to create tracking fingerprints, and Mozilla Corporation's business relationships with search engines - particularly Google - are sufficiently opaque that it's easy to suppose a scandal could be brewing.

Will sending software engineers on ethics courses fix the problems of the tech industry? No. But it would make the engineers more aware of bad ethics in business practices, and they'd be less happy while working.

Will having a "Trustable Technology Mark" certification improve things? Maybe, but probably not. The closest thing I can think of is the FSF's Respects Your Freedom certification, so it might not be an entirely worthless exercise, but much would depend on the details. The FSF has its list of four freedoms, which is a fairly concise set of criteria against which to check any given product, whereas "trustable technology" could be a lot harder to define.

Making users owners is the best advice from the article.

"make your users the owners of your platform–not venture capitalists, not shareholders"

This requires business models to change and for advertising to no longer be the primary revenue stream. Working against this though is the web 2.0 consensus, which concluded that nobody will pay for web services. The next billion internet users are not likely to have spare cash hanging around with which to invest in startups or pay for subscription services. They're going to be using low end smartphones and the margins will be wafer thin.

One possible way to go with ownership of technology is the Guifinet model, in which a foundation sets some rules and perhaps runs crowdfunding campaigns, but the resulting network is owned and run by the users. When that's the case the interests are aligned and you're not likely to see the kinds of large scale abuses that the tech silos currently impose.

Architects of AI

Martin Ford interviews Geoffrey Hinton about the wider problems of AI. It's all very well agreeing that the government should do something, maybe like Basic Income or more regulation, but since Hinton is a part-time Googler, what should Google do?

Well, it could:

Pay its taxes

Nobody likes paying taxes, and tax money is often squandered bunging backhanders to the rich, but it is one sort of redistribution.

Reverse the demonetization trend

Make sure that YouTubers without many viewers can still make a sustainable living. This is not the direction I'd prefer the internet to go in (I have a generally low opinion of advertising, and block ads), but again it's another type of redistribution which doesn't depend on the government.

Change its hiring policy

To 50% women or non-binary. Wouldn't solve all problems, but would be a start.

Those are things which the directors of Google/Alphabet could implement right now with a single meeting and a few keypresses, but of course they won't.

There's also a problem with the narrative about lack of talent.

"There’s an enormous talent shortage in AI and everyone’s hiring"

It's the old "unskilled workers" argument in disguise. There isn't really a talent problem, there's a gatekeeping problem. This includes things such as companies like Google only hiring men with computer science degrees from Ivy League universities. It shouldn't be surprising that there are only a small number of people in that category, which really amounts to "guys with rich parents", but this is not the same as a talent shortage.

Not only is there a gatekeeping problem but there's also an ethics problem with a lot of contemporary AI being deployed by companies like Google. The now abandoned Maven project is an easy example, and so it's probable that part of the shortage is just about people with AI knowledge not wanting to get recruited into ethically dubious projects.

Matrix on Python 3

Another development is that the Matrix app on Freedombone now runs on Python 3. This improves its performance and makes it more suitable for running on ARM single board computers with 1GB of RAM. In tests, while running a room with 20 users and subscribing to a few rooms on other homeservers, some of which are quite high volume, Synapse on Python 3 only uses 200MB of RAM. This makes it similar to an XMPP server in terms of resource use.
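If you want to check that figure on your own server, here's a minimal sketch (not part of Freedombone itself) which reports Synapse's resident memory. It assumes the psutil library is installed and that Synapse was started with something like "python3 -m synapse.app.homeserver", which may not match your setup:

    import psutil

    def synapse_rss_mb():
        # Walk the process table looking for a command line that
        # mentions the Synapse homeserver module.
        for proc in psutil.process_iter(['cmdline', 'memory_info']):
            cmdline = ' '.join(proc.info['cmdline'] or [])
            if 'synapse.app.homeserver' in cmdline:
                return proc.info['memory_info'].rss / (1024 * 1024)
        return None

    rss = synapse_rss_mb()
    if rss is None:
        print("No Synapse process found")
    else:
        print("Synapse resident memory: %.0f MB" % rss)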

XMPP still has advantages, such as the ability to proxy through Tor on mobile (the Android Riot app currently can't do that, and so exposes metadata), but the competition is getting closer. Really, though, the idea of competition is the wrong frame here, because bridging between Matrix and XMPP is improving, and so in the end the choice of chat software will just come down to personal preference.

Also on the topic of chat systems, I noticed that OTR version 4 was announced on day 3 of 35C3. It doesn't support multi-user encrypted chat though, so this is an encryption standard which is dead on arrival. Yes, a lot of private chat is one-to-one, but in the last few years private group chat has become a major phenomenon, and to ignore that in your security model is a gigantic oversight. So an easy prediction is that OTR will continue to decline in popularity in 2019.

The Dark Matrix

While listening to some 35C3 talks I've managed to get the Matrix and Riot apps for Freedombone working on onion addresses. I don't think there were any fundamental barriers preventing this from happening earlier, and so my previous statements about Matrix being tied to TLS and not compatible with Tor were probably just wrong. Since RiotWeb is composed of client-side JavaScript, if you're running it within a Tor-compatible browser it doesn't care whether the domains being used are clearnet or onion ones.

I expect that federated onion homeservers, forming a "dark Matrix", will work but that there will be issues with federating onion and clearnet homeservers. This isn't unusual, and the same applies to fediverse instances.

Running on onion addresses does provide some security advantages, but it also means that you don't need to buy a clearnet domain, you don't need to forward any ports (so you could be behind a hostile internet router), and you don't need to care about obtaining TLS certificates. There was a talk on the first day of 35C3 about TLS 1.3 which also described the many issues with TLS and what a dumpster fire it is. In a lot of ways using onion addresses is more convenient and has better security properties, so long as you don't mind the long random strings or QR codes.
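To illustrate the point that the client-server API doesn't care whether the homeserver lives on a clearnet or onion domain, here's a minimal sketch. The onion address is a placeholder, and it assumes Tor's SOCKS proxy is listening on 127.0.0.1:9050 and that the requests library is installed with its socks extra:

    import requests

    # Hypothetical onion address standing in for your homeserver.
    homeserver = "http://exampleonionaddressxyz.onion"

    # socks5h makes Tor do the name resolution, which is what
    # allows .onion addresses to resolve at all.
    proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    # Query the standard client-server API endpoint. No TLS certificate
    # is involved, because the onion layer provides the encryption.
    resp = requests.get(homeserver + "/_matrix/client/versions",
                        proxies=proxies, timeout=60)
    print(resp.json())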

2018: When giants stumble

It started to ramp up in 2017, but this year has been the first time that "big tech" has had some serious pushback, probably for the first time since the Microsoft antitrust case in the 1990s over the now abandoned Internet Explorer browser. In mainstream publications criticism of Facebook and Google has been relentless.

I think what will happen in 2019 is that they will launch a PR offensive, so expect to see heartwarming promotional videos about how Google has transformed lives for the better and more sponsorship of "good works" type charities (along the lines of Gates' "philanthropy"). They'll denounce critics as elitist and claim that criticisms of them are "unrealistic", perpetuated by "utopians" or "ivory tower" Free Software people unconcerned with delivering at scale. You'll probably see slogans like this being casually dropped into conference talks.

Both Google and Facebook are sure to continue with their satellite based plans for connecting the next billion people to the internet (i.e. to their walled garden systems). I'm pretty sure they don't want a guifinet style model, and new types of low Earth orbit satellites with higher communications bandwidth and fancy phased array coverage will be able to deliver internet which is not especially fast but maybe good enough for basic services in areas of the world which currently have zero telecoms infrastructure.

In 2019 I also expect that Microsoft will begin doing things to monetize Github and get some ROI going, and that this will cause at least one scandal in which a bunch of projects leave that platform.

This year "decentralization" has become a buzzword, though it has been somewhat muddied by blockchain companies trying to claim the term as their own. Possibly in 2019 FreedomBox might start shipping on hardware, or as an officially endorsed hardware kit, and hopefully this might begin to spread some real decentralization in places where internet coverage is unreliable or non-existant. Also I think growth of fediverse social network systems has now gotten beyond critical mass and so next year we might begin to see some pushback or cooption of that by Big Tech and maybe also by governments.

Why do social networks succeed?

One thing which has been obvious to me for a long time is that the social networks which dominate today didn't succeed because they were technically better than the opposition. For my sins I am still on Facebook, and using that system is quite a battle. The interface is one of the worst I've ever encountered. The Twitter user interface is not quite as bad, but it still lacks features which other comparable systems have.

So why did Facebook and Twitter become the main social network systems?

First mover advantage

I think this is by far the biggest reason. Being first and being the thing which people get habituated to conveys an enormous advantage. Once habituated, even if the user interface is full of foibles, anything else with a different workflow will appear to be weird, awkward, "not normal" and "hard to use".

Software is complicated and often there's a non-trivial amount of learning to get fully up to speed with how it works. That's a real cost in terms of time and effort, and not something that most people want to do often, or have the free time to do. Also, the steeper the learning curve, the more likely you are to become highly committed to using a particular system, due to the sunk cost.

Mean time to profile twiddling

There's a rule of thumb that I have for judging whether social network software is going to get mainstream adoption, and that's the amount of time between thinking "I want to join this system" and having an account and beginning to twiddle with your profile settings (uploading a photo, filling out the bio, etc). If that time interval is more than a couple of minutes then 99.9% of people are not going to bother.

If your system has a name which is hard to locate in a search engine, such as "Red", then this guarantees that the time to twiddling is going to be much longer on average.

If it's hard to find an instance to join or if you have to install an instance yourself then this also increases the time to twiddle by a big factor.

This means that the onboarding process is fairly critical and optimising that for searchability, minimum number of clicks, minimum cognitive workload and so on can have a big effect.

Network effect

This is the thing which everyone knows about. No matter how jazzy the features, you're not likely to join a system which your friends aren't on. The only time you are likely to do that is if you don't have other options. If you get purged in one of the many Facebook mass expulsion events, for example, then you're in a situation where you have to try out new things and find a new crowd to hang around with. But most of the time humans are herd animals and will stick with their familiar group.