March 19th, 2016

Protonmail, a secure mail system, is now up and running for public use. I’ve just opened an account and it looks just like any other webmail to the user. Assuming everything is correctly implemented as they describe, it will ensure your email contents are encrypted end-to-end. It will also make traffic analysis of metadata much more difficult. In particular, at least when they have enough users, it will be difficult for someone monitoring the external traffic to infer who is talking to whom and build social graphs from that.  Not impossible, mind you, but much more difficult.

If you want to really hide who you’re talking to, use the Tor Browser to sign up and don’t enter a real “recovery email” address (it’s optional), and then never connect to Protonmail except through Tor. Not even once. Also, never tell anyone what your Protonmail address is over any communication medium that can be linked to you, never even once. Which, of course, makes it really hard to tell others how to find you. So even though Protonmail solves the key distribution problem, you now have an address distribution problem in its place.

But even if you don’t go the whole way and meticulously hide your identity through Tor, it’s still a very large step forward in privacy.

And last, but certainly not least, it’s not a US or UK based business. It’s Swiss. 

John Oliver on the Apple/FBI thing

March 16th, 2016

If you for some reason missed John Oliver’s explanation of the Apple vs FBI thing, do watch it now.


March 12th, 2016

Developing the Metaverse: Kickstarter project. Yes, I’m a small part of it, and I’ve signed up, too. It would be pretty awesome if it could be done.

The FBI in full Honecker mode

March 12th, 2016

Consider this:

Obama: cryptographers who don’t believe in magic ponies are “fetishists,” “absolutists”

…and even worse, this:

Surprise! NSA data will soon routinely be used for domestic policing that has nothing to do with terrorism

Let’s consider this for a bit, in particular the “going dark” idea: the idea that cryptography makes the governments of the world lose access to a kind of information they always had access to. That idea is plain wrong for the most part, since they never had access to most of this information in the first place.

Yes, some private information used to be accessible with warrants, such as contents of landline phone calls and letters in the mail, the paper kind that took forever to get through. But there never was much private information in those. We didn’t confide much in letters and phone calls.

But the major part of the information we carry around on our phones was never within reach of the government. Most of what we have there didn’t even exist back then, like huge amounts of photographs, and in particular dick pics. We didn’t do those before. Most of what we write in SMS and other messaging, such as Twitter and even Facebook, was only communicated in person before. We didn’t write down stuff like that at all. Seriously, sit down and go through your phone right now, then think about how much of that you ever persisted anywhere only 10 or 15 years ago. Not much, right? It would have been completely unthinkable back then to have the government record all our private conversations, just to be able to plow through them in case they got a warrant at some future point in time.

So, what the government is trying to do under the guise of “not going dark” is shine a light where they never before had the opportunity or the right to shine a light: right into our most private conversations. The equivalent would be if they had simply put cameras and microphones into all our living spaces to monitor everything we do and say. That wasn’t acceptable then. And it shouldn’t be acceptable now, when the phone has become that private living space.

If they get to do this, there is only one space left for privacy: your thoughts. How long until the government claims that space as well? It may very well become technically feasible in the not too distant future.

Patterns: authentication

March 5th, 2016

The server generally authenticates to the client by its SSL certificate. One very popular way of doing this is to have the client trust a well-known certificate authority (CA), and then verify that the name on the server’s certificate matches expectations, that the certificate is signed by the CA, and that it has not expired. This is an excellent method if previously unknown clients do drive-by connections to the server, but it has no real value if all the clients are pre-approved for a particular service on a group of particular servers. In that case, it’s just as easy, much cheaper, and arguably more secure, to provide all the clients with the server’s public certificate ahead of time, during installation, and let them verify the server’s certificate against that known copy.
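A minimal sketch of that pre-distributed verification, often called certificate pinning. This is my illustration, not the author’s code: the fingerprint value, function names, and the choice of SHA-256 over the DER bytes are all assumptions.

```python
import hashlib
import socket
import ssl

# Pinned SHA-256 fingerprint of the server's DER-encoded certificate,
# handed to the client at installation time (hypothetical value).
PINNED_FINGERPRINT = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def matches_pin(der_cert: bytes, pinned_hex: str) -> bool:
    """Compare the SHA-256 fingerprint of a DER certificate to the pinned value."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    # Disable CA-based verification entirely; the pin is the only trust anchor.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if not matches_pin(der, PINNED_FINGERPRINT):
        sock.close()
        raise ssl.SSLError("server certificate does not match pinned fingerprint")
    return sock
```

No CA, no expiry dance, no money changing hands: the client accepts exactly one certificate, the one it was installed with.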

But how do we authenticate a client to the server? Well, we provide the server with the client’s public key during installation and then verify that we have the right client on the line during connections.

We could let the client authenticate against the server using the built-in mechanisms in HTTPS, but the disadvantages of that are numerous, not least of which is getting it to work, and maintaining it over any number of operating system updates. “Fun” is not the name of that game. Another major disadvantage is that the protection and authentication that comes from the HTTPS session are ephemeral; they don’t leave a permanent record. If you want to verify after the fact that a particular connection was encrypted and verified using HTTPS, all you have to go on are textual logs that say so. You can’t really prove it.

What I’m describing now assumes that you’ve set up a key pair for the server and a key pair for the client ahead of time, and that both parties have the other party’s public key. (Exactly how to get that done securely is the subject of a later post.)

Step 1

The client creates a block consisting of:

  • A sequence number, which is one greater than the last sequence number the client has ever used.
  • Current date and time
  • Client identifier
  • A digital signature on the above three elements together, using the client’s private key

The client sends this block to the server.

Step 2

The server (front-end) then uses the client identifier to retrieve the client’s public key, verifies the signature, and checks that the sequence number has never been used before. The server also checks that the date and time are not too far in the past or the future.

If everything checks out fine, the server records the session in the database and updates the high-water mark (last used sequence number from this client).

Step 3

The server creates a block consisting of:

  • The client’s sequence number
  • Current date and time
  • Client identifier
  • Server identifier
  • A digital signature on those elements together, using the server’s private key

This block is then sent to the client, allowing the client to verify the signature and save the record to its own local database. Since the block contains the client’s sequence number, which needs to match, it cannot be a replay.
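The three steps above can be sketched as follows. A real deployment would use asymmetric signatures with the client’s and server’s private keys; here HMAC over a shared secret stands in for the signature so the sketch runs on the standard library alone. Field names, the JSON encoding, and the five-minute clock-skew window are all my assumptions, not the actual wire format.

```python
import hashlib
import hmac
import json

CLIENT_KEY = b"client-signing-key"   # stand-in for the client's private key
SERVER_KEY = b"server-signing-key"   # stand-in for the server's private key
MAX_SKEW = 300                       # allowed clock drift in seconds (assumption)

def sign(key: bytes, payload: dict) -> str:
    # Canonical encoding so both sides sign exactly the same bytes.
    data = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, data, hashlib.sha256).hexdigest()

# Step 1: the client builds and signs its block.
def make_client_block(seq: int, client_id: str, now: float) -> dict:
    payload = {"seq": seq, "time": now, "client": client_id}
    return {**payload, "sig": sign(CLIENT_KEY, payload)}

# Step 2: the server checks the signature, the high-water mark, and the clock.
def verify_client_block(block: dict, high_water: int, now: float) -> bool:
    payload = {k: block[k] for k in ("seq", "time", "client")}
    if not hmac.compare_digest(block["sig"], sign(CLIENT_KEY, payload)):
        return False                  # bad signature
    if block["seq"] <= high_water:
        return False                  # sequence number already used: replay
    return abs(now - block["time"]) <= MAX_SKEW

# Step 3: the server signs a response echoing the client's sequence number.
def make_server_block(client_block: dict, server_id: str, now: float) -> dict:
    payload = {"seq": client_block["seq"], "time": now,
               "client": client_block["client"], "server": server_id}
    return {**payload, "sig": sign(SERVER_KEY, payload)}
```

Because the sequence number rides inside both signed blocks, either party can later prove which authentication a given record belongs to.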


Doing it this way creates a verifiable record in the database about the authentication. The signature is saved and can be verified again at any time. This allows secure non-repudiation.

Creating the authentication as a signed block also means that the client does not necessarily need to communicate directly with the server. If a client needs to deliver documents to another system, which in turn forwards them, it can deliver the authentication block the same way. The forwarding system does not need to hold any secrets for the actual client to be able to do that. This allows any number of intermediate message stores, even dynamically changing paths, with maintained authentication characteristics.

I should also note that doing the authentication this way decouples the mechanism from the medium. If you replace HTTPS connections with FTP, or even with media such as tape or floppy disks (remember those?), this system still works. You can’t say the same of certificate verification using HTTPS.

Patterns: sacrificial front-end

March 1st, 2016

Over the years, I’ve borrowed and invented a number of design patterns for projects of all kinds. Most, if not all, were doubtless already invented and used elsewhere, but I mostly didn’t know that at the time. Most of my uses of these patterns date back at least 15 years, often 20, but I’m seeing more and more of them appear in modern frameworks and methodologies. So this is my way of saying “I told you so”, which is vaguely satisfying. To me.

Forgive the names; I have a hard time coming up with suitable labels for them.

Blackboard architecture

The Blackboard architecture is well-known. Or should be, except it seems I always need to explain it when it comes up. It has a lot of great aspects and results in effective and extremely decoupled designs. I’ll most certainly come back to it several times.

A blackboard is a shared data source. Some processes write messages there, while other processes read them. The different processes never need to talk to each other directly or even know of each other’s existence.

From this flows a number of advantages, a few of which are:

  • Different cooperating services can be based on entirely different languages and platforms.
  • The interaction is usually one way (compare to Facebook’s Flux and React), greatly simplifying interactions.
  • If done right, the data structures are immutable, eliminating contention problems.
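A toy in-memory version of the idea, just to make the shape concrete; a real blackboard would be a shared database, as in the pattern below. The class and method names are mine.

```python
import threading
from collections import deque

class Blackboard:
    """A shared message board: writers post, readers take,
    and neither ever knows the other exists."""

    def __init__(self):
        self._lock = threading.Lock()
        self._topics = {}          # topic name -> deque of messages

    def post(self, topic: str, message) -> None:
        with self._lock:
            self._topics.setdefault(topic, deque()).append(message)

    def take(self, topic: str):
        """Remove and return the oldest message on a topic, or None if empty."""
        with self._lock:
            q = self._topics.get(topic)
            return q.popleft() if q else None

# A producer posts without knowing who will consume:
board = Blackboard()
board.post("incoming", {"id": 1, "body": "encrypted-bytes"})
# Some entirely separate process (here just another call) picks it up:
msg = board.take("incoming")
```

The producer and consumer could be written in different languages on different machines; all they share is the board and an agreed message format.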

Hop, skip, and jump

Let’s get to my first blackboard-based pattern, namely how to protect a front-end machine from compromise. In this design, the front-end machine is an Internet-facing computer receiving medical documents from a number of clients around the net. The documents arrive individually encrypted using the server’s public key. The front-end machine is assumed to be hacked sooner or later, and we don’t want such a hack to let the hacker get at decrypted documents or other secrets.

So, what we do is we let the front-end machine take each received and encrypted message and store it in an SQL database located in its own network segment. The message ends up in a table that only holds encrypted messages, nothing else.

Another machine on that protected network segment picks up the encrypted messages from the database, decrypts them, and stores the decrypted messages in another table.
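The two hops can be sketched like this. In-memory SQLite stands in for the real SQL Server instance, and base64 stands in for the real public-key encryption; table names, column names, and function names are all illustrative.

```python
import base64
import sqlite3

# One shared database, two tables: ciphertext on one side, plaintext on the other.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE encrypted_messages (id INTEGER PRIMARY KEY, blob TEXT)")
db.execute("CREATE TABLE decrypted_messages (id INTEGER PRIMARY KEY, body TEXT)")

def front_end_receive(ciphertext: str) -> None:
    # The front-end only ever writes opaque ciphertext; it holds no decryption key,
    # so a compromise here exposes nothing readable.
    db.execute("INSERT INTO encrypted_messages (blob) VALUES (?)", (ciphertext,))

def crypto_server_pass() -> None:
    # The decryptor on the protected segment drains the encrypted table and
    # writes plaintext to a table the front-end has no rights to touch.
    rows = db.execute("SELECT id, blob FROM encrypted_messages").fetchall()
    for row_id, blob in rows:
        body = base64.b64decode(blob).decode()   # stand-in for real decryption
        db.execute("INSERT INTO decrypted_messages (body) VALUES (?)", (body,))
        db.execute("DELETE FROM encrypted_messages WHERE id = ?", (row_id,))

front_end_receive(base64.b64encode(b"lab result for patient 123").decode())
crypto_server_pass()
```

The two processes never talk to each other; the database tables are the blackboard between them.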


The “front-end” machine is exposed to the internet, so let’s assume it is completely compromised. In that case, the hacker has access to all the secrets that are kept on that machine and has root. This would allow the hacker to do anything on that machine that any of my programs are allowed to do. 

The first role of the front-end machine is to authenticate clients that connect. We can safely assume that the hacker won’t need to authenticate anymore. 

The second role of the front-end machine is to receive messages from the client. These messages are then sent on to the database through a firewall that only allows port 1433 to connect to the database. The login to the database for the front-end machine is kept on the front-end machine so the hacker can use that authentication. However, the only thing this user is permitted is access to a number of stored procedures tailored specifically to the needs of the front-end. Among these stored procedures, there are procedures to deliver messages to the database, but not much more. There is most definitely no right granted for direct table access. In other words, the hacker can deliver messages to the incoming message table, but nothing else.

Behind the firewall there is another machine that has no connection to anything except the database. That machine has access to a small number of stored procedures tailored for its use, among which are procedures to pick up new incoming messages and deliver decrypted messages back to the database.

The crypto server first verifies that each message it picks up from the database carries a valid digital signature from a registered user system, and only then does it decrypt the message with its private key. If the hacker on the front-end had delivered fake messages, these would be detected during signature verification and discarded.
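The verify-before-decrypt ordering is the important part, so here it is in isolation. As before, HMAC over a shared key stands in for verifying the sender’s real digital signature, and base64 stands in for the real decryption; the names are mine.

```python
import base64
import hashlib
import hmac

SENDER_MAC_KEY = b"registered-client-key"  # stand-in for the sender's public key

def verify_then_decrypt(blob: bytes, tag: str):
    """Reject forged blobs before any decryption work touches them."""
    expected = hmac.new(SENDER_MAC_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None                        # fake message from a hacked front-end
    return base64.b64decode(blob)          # stand-in for real private-key decryption

blob = base64.b64encode(b"patient record")
good_tag = hmac.new(SENDER_MAC_KEY, blob, hashlib.sha256).hexdigest()
```

A forged blob never reaches the decryption code at all, which shrinks the attack surface a hacker could aim a tailored message at.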

With this design, the hacker on the front-end has just a very narrow channel through which to jump from the front-end to the database, namely port 1433 and the SQL server software itself. But let’s assume he succeeds, somehow. If we’re really paranoid, we’d split the database into two instances on different machines, completely isolated from each other and only bridged by the crypto servers, as in the next image.


To get at the plain text content of the messages, the hacker in this case, if coming from the internet at least, needs to:

  1. compromise the front-end
  2. crawl through 1433 and compromise the database (or compromise the firewall, then the database server)
  3. via a tailored message, compromise the crypto routines on the crypto server
  4. get at the secure database

The crypto server does not have any communication to the internet whatsoever, so even if it ever got compromised, it could only be controlled through messages passing through the database, and would need to exfiltrate that same way. Not impossible, but hardly easy, either. The hacker would probably choose some other way to get at the secure database.

So, what about outgoing?

The outgoing messages follow the exact opposite path. Do I really have to draw you a picture?

Privacy shield…

March 1st, 2016

Completely worthless. Same pig, different name. We can’t trust the EU.

5 things you need to know about the EU-US Privacy Shield agreement | Macworld

What social networks can become

December 18th, 2015

Really scary shit straight outta China. What actually stops our common social networks from becoming that same thing?


November 18th, 2015

One of the primary targets of Islamist terrorism is the vast majority of moderate Muslims. Sometimes physically, but always psychologically. And they want the rest of us to do their dirty work for them. A prime goal of these acts is to engineer a schism between Muslims and Western cultures. To create alienation, and to make Muslims a target of fear and anger. The resulting exclusion, xenophobia, suspicion, and implicit or explicit segregation is a tool of radicalisation.


Do yourself a favor and read the whole thing.

iPad Pro: it really is something else

November 16th, 2015

I’ve had the iPad Pro and the Logitech Create keyboard now for a couple of days and it’s really very, very different from what iPads used to be. I’m coming from the first iPad Retina, so it’s been a couple of generations in between. 

I’ve never before succeeded in writing anything more than emails with a short “yes” or “no”, or maybe a sentence, from any iPad or iPhone. It simply never was worth the pain. Now, I’m writing this very blog post on the iPad Pro. Using the Logitech keyboard, of course (there are limits; I’m still not prepared to attempt the on-screen keyboard).

I’m using 1Password for all my logins, and it used to be that any login would be an “oh, no, not again” moment, since it would require switching to 1Password, logging in to it slowly and painstakingly, painfully copying the password, memorising the user name, switching back to the original app, manually entering the user name, painfully (usually takes two or three tries) getting the password “paste” option, then pasting the password, then finally logging in. Now I can slide in the screen from the right, select 1Password there, open it with my thumbprint (YES!), select the username, copy it using cmd-C (!), switch back to Safari (or whatever app I’m in) with cmd-tab, select the password field (if it isn’t still selected) and hit cmd-V. Just like on a desktop or laptop. Most of the keyboard shortcuts we use on a laptop work, like cmd-tab, cmd-X/C/V, cmd-space for search. You’ve got cursor keys on the Logitech keyboard. They’ve also implemented cmd-arrow to go to the beginning and end of lines, and top and bottom of the document. Free at last!

My productivity on the iPad has gone up tenfold, from almost zero to near desktop level. It’s for all practical purposes as productive as a laptop, but with the added ability to be comfortably used for reading, and drawing/annotations with a pen (which I haven’t gotten yet).

I’m missing only a few apps on the iPad, most notably Apple Remote Desktop. I’m not seeing all that much justification, except for this, for keeping a MacBook Air. Especially since the Air’s screen is atrociously bad compared to the iPad Pro’s screen.

So, no, this isn’t just another iPad, this is a game changer.