Now I hate Microsoft even more

April 17th, 2016

Went to start up my iMac to Bootcamp Windows 10. This is what happened:

[Photo: the Windows update screen that appeared at startup]

No question if I wanted to upgrade, no warning, no option to cancel, no effing nothing. What a total dick move. After 10 minutes, it has gotten to 10%, so if this goes on at the same rate, I’m looking at between one and two hours of this. Interrupting it probably bombs the whole thing. MS effectively hijacked the machine without my permission. I had something I wanted to do, but MS clearly doesn’t give a flying shit about that. If they’d done this at shutdown, I could have, maybe, somehow, a little bit, lived with it. But at startup? Are they completely out of their minds?

And, if you wonder, this is a paid full version of Windows 10, not the free upgrade kind.

I wonder what this huge update is for. No idea. Windows 95?

Update: it took a total of around 50 minutes, then another 10 minutes to update Apple’s Bootcamp video driver. The “copying of files” took about 30 minutes of that time, which probably corresponds to downloading time. Why doesn’t Windows download this stuff beforehand? 

All this on a 20 Mbit/s download ADSL and on a pretty darn fast machine (i7, 4GHz, 16 GB RAM, and a 1TB SSD). What this would be on an average machine, I can only have nightmares about.

Horrible little law

April 15th, 2016

Feinstein-Burr senate bill, it’s getting crazier by the day:

No, this slippery little act says that when a company or person gets a court order asking for encrypted emails or files to be handed over and decrypted, compliance is the law.

How compliance actually happens isn’t specified. They don’t care how user security was broken (or if it were nonexistent), and the senators are making it clear that from now on, this isn’t their problem.

Being a werewolf

April 9th, 2016

Very interesting game with implications for our understanding of secure protocols and compromise detection.

Geoblocking your pastries

April 7th, 2016

If geoblocking was done on Main street… hilarious.

Enemy number one

March 24th, 2016

The US gov is quickly turning into corporate threat number one:

Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter. 

If this is really the case, if the US govt is tapping servers like this at any significant scale, then Apple implementing end-to-end encryption in most of their products must mean that the govt is losing a hell of a lot more data than just the data they could get with a warrant.

The ability to recover data with a warrant is then just a marginal thing. The real problem is that their illegal taps stop working. Which means that the FBI case is a sham on a deeper level than it appears. The real panic is then about the server compromises failing. 

And, of course, the end-to-end encryption with no keys server-side is also the solution for Apple. Implants in the servers then have relatively little impact, at least on their customers. The server-to-client communications (SSL) would be compromised, but not the content of the messages inside.

If the govt loses this battle, which I’m pretty sure they will, the next frontier must be the client devices. Not just targeted client devices, which can already be compromised in hardware and software, but we’re talking massive compromises of *all* devices. Having modifications in the chips and firmware of every device coming off the production lines. Anything less than this would mean “going dark” as seen from the pathological viewpoint of the government.

Interestingly, Apple has always tended to try to own their primary technologies, for all kinds of reasons. This is one reason more. As they’re practically the only company in a position to achieve that, to own their designs, their foundries, their assembly lines, with the right technology they could become the only trustworthy vendor of client devices in the world. No, they don’t own their foundries or assembly lines yet, but they could.

If this threat becomes real, or maybe is real already, a whole new set of technologies is needed to verify the integrity of designs, chips, boards, packaging, and software. That in itself will change the market significantly.

The opportunity of taking the high road to protect their customers against all evildoers, including their own governments, *and* finding themselves in almost a monopoly situation when it comes to privacy at the same time, is breathtaking. So breathtaking, in fact, that it would, in extremis, make a move of the whole corporation out of the US to some island somewhere not seem so farfetched at all. Almost reasonable, in fact.

Apple could become the first corporate state. They would need an army, though.

As a PS… maybe someone could calculate the cost to the USA of all this happening? 

Even the briefest of cost/benefit calculations as seen from the government’s viewpoint leads one to the conclusion that the leadership of Apple is the most vulnerable target. There is now every incentive for the government to have them replaced by more government-friendly people.

I can think of: smear campaigns, “accidents”, and even buying up a majority share in Apple through strawmen to have another board elected.

Number one, defending against smear campaigns, could partly explain the proactive “coming out” of Tim Cook.

After having come to the conclusion that the US govt has a definite interest in decapitating Apple, one has to realize this will only work if the culture of resistance to the government is limited to the very top, that is, only if eliminating Tim Cook would lead to an organisation more amenable to the wishes of the government.

From this, it’s easy to see that Apple needs to ensure that this culture of resistance, this culture of fighting for privacy, is pervasive in the organisation. Only if they can make that happen, and make it clear to outsiders that it is pervasive, only then will it become unlikely that the government will try, one way or the other, to get Tim Cook replaced.

Interestingly, only in the last week, a number of important but unnamed engineers at Apple have talked to news organisations, telling them that they’d rather quit than help enforce any court orders against Apple in this dispute. This coordinated leak makes a lot more sense to me now. It’s a message that makes clear that replacing Tim Cook, or even the whole executive gang, may not get the govt what it wants, anyway.

I’m sure Apple is internally making as sure as it possibly can that the leadership cadre is all on the same page. And that the government gets to realize that before they do something stupid (again).


March 19th, 2016

Protonmail, a secure mail system, is now up and running for public use. I’ve just opened an account and it looks just like any other webmail to the user. Assuming everything is correctly implemented as they describe, it will ensure your email contents are encrypted end-to-end. It will also make traffic analysis of metadata much more difficult. In particular, at least when they have enough users, it will be difficult for someone monitoring the external traffic to infer who is talking to whom and build social graphs from that.  Not impossible, mind you, but much more difficult.

If you want to really hide who you’re talking to, use the Tor Browser to sign up and don’t enter a real “recovery email” address (it’s optional), and then never connect to Protonmail except through Tor. Not even once. Also, never tell anyone what your Protonmail address is over any communication medium that can be linked to you, never even once. Which, of course, makes it really hard to tell others how to find you. So even though Protonmail solves the key distribution problem, you now have an address distribution problem in its place.

But even if you don’t go the whole way and meticulously hide your identity through Tor, it’s still a very large step forward in privacy.

And last, but certainly not least, it’s not a US or UK based business. It’s Swiss. 

John Oliver on the Apple/FBI thing

March 16th, 2016

If you for some reason missed John Oliver’s explanation of the Apple vs FBI thing, do watch it now.


March 12th, 2016

Developing the Metaverse: Kickstarter project. Yes, I’m a little bit a part of it, and I’ve signed up, too. It would be pretty awesome if it could be done.

The FBI in full Honecker mode

March 12th, 2016

Consider this:

Obama: cryptographers who don’t believe in magic ponies are “fetishists,” “absolutists”

…and even worse, this:

Surprise! NSA data will soon routinely be used for domestic policing that has nothing to do with terrorism

Let’s consider this for a bit. In particular the “going dark” idea. The idea that cryptography makes the governments of the world lose access to a kind of information they always had access to. That idea is plain wrong for the most part, since they never had access to this stuff.

Yes, some private information used to be accessible with warrants, such as contents of landline phone calls and letters in the mail, the paper kind that took forever to get through. But there never was much private information in those. We didn’t confide much in letters and phone calls.

But the major part of the information we carry around on our phones was never within reach of the government. Most of what we have there didn’t even exist back then, like huge amounts of photographs, and in particular dick pics. We didn’t do those before. Most of what we write in SMS and other messaging such as Twitter and even Facebook, was only communicated in person before. We didn’t write down stuff like that at all. Seriously, sit down and go through your phone right now, then think about how much of that you ever persisted anywhere only 10 or 15 years ago. Not much, right? It would have been completely unthinkable to have the government record all our private conversations back then, just to be able to plow through them in case they got a warrant at some future point in time.

So, what the government is trying to do under the guise of “not going dark” is shine a light where they never before had the opportunity or the right to shine a light, right into our most private conversations. The equivalence would be if they simply had put cameras and microphones into all our living spaces to monitor everything we do and say. That wasn’t acceptable then. And it shouldn’t be acceptable now, when the phone has become that private living space.

If they get to do this, there is only one space left for privacy, your thoughts. How long until the government will claim that space as well? It may very well become technically feasible in the not too distant future.

Patterns: authentication

March 5th, 2016

The server generally authenticates to the client by its SSL certificate. One very popular way of doing this is to have the client trust a well-known authority, a certificate authority (CA), and then verify that the name on the server’s certificate matches expectations, that the certificate is signed by the CA, and that it has not yet expired. This is an excellent method if previously unknown clients do drive-by connections to the server, but it has no real value if all the clients are pre-approved for a particular service on a group of particular servers. In that case, it’s just as easy, much cheaper, and arguably more secure, to provide all the clients with the server’s public certificate ahead of time, during installation, and let them verify the server certificate against that public key.
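The pinning check itself can be sketched in a few lines. This is my own illustration, not code from the pattern: it assumes the client stored a SHA-256 fingerprint of the server’s certificate at installation, and the helper name is hypothetical.

```python
# Sketch: verify a server certificate against a fingerprint pinned at install
# time. No CA involvement; the client simply checks that the certificate it is
# shown is byte-for-byte the one it was given during installation.
import hashlib

def matches_pin(cert_der: bytes, pinned_fingerprint: str) -> bool:
    """True if the SHA-256 fingerprint of the presented certificate
    (raw DER bytes) equals the fingerprint recorded at installation."""
    return hashlib.sha256(cert_der).hexdigest() == pinned_fingerprint

# At install time, record the fingerprint of the known server certificate:
install_time_cert = b"(server certificate DER bytes)"
pin = hashlib.sha256(install_time_cert).hexdigest()

# At connection time, obtain the peer certificate from the TLS layer
# (e.g. ssl.SSLSocket.getpeercert(binary_form=True)) and compare:
assert matches_pin(install_time_cert, pin)
assert not matches_pin(b"some other certificate", pin)
```

In a real client the DER bytes would come from the TLS handshake; the point is that no CA hierarchy is needed when the legitimate certificate is known in advance.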

But how do we authenticate a client to the server? Well, we provide the server with the client’s public key during installation and then verify that we have the right client on the line during connections.

We could let the client authenticate against the server using the built-in mechanisms in HTTPS, but the disadvantages of that are numerous, not least of which is getting it to work, and maintaining it over any number of operating system updates. “Fun” is not the name of that game. Another major disadvantage is that the protection and authentication that comes from the HTTPS session are ephemeral; they don’t leave a permanent record. If you want to verify after the fact that a particular connection was encrypted and verified using HTTPS, all you have to go on are textual logs that say so. You can’t really prove it.

What I’m describing now is under the precondition that you’ve set up a key pair for the server, and a key pair for the client, ahead of time, and that both parties have the other party’s public key. (Exactly how to get that done securely is the subject of a later post.)

Step 1

The client creates a block consisting of:

  • A sequence number, which is one greater than the last sequence number the client has ever used.
  • Current date and time
  • Client identifier
  • A digital signature on the above three elements together, using the client’s private key

The client sends this block to the server.
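Step 1 can be sketched as follows. This is an illustration under stated assumptions, not the pattern’s prescribed wire format: I’ve picked Ed25519 signatures via the third-party `cryptography` package, and the JSON serialization and field names are my own.

```python
# Sketch of Step 1: the client builds and signs its authentication block.
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_client_block(private_key, client_id, seq):
    """Build the client's signed block: sequence number, timestamp, identifier."""
    payload = {
        "seq": seq,                # one greater than any previously used number
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
    }
    # Sign a canonical (sorted-key) serialization of the three fields together.
    message = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = private_key.sign(message).hex()
    return payload

client_key = Ed25519PrivateKey.generate()   # normally created at installation
block = make_client_block(client_key, "client-42", seq=1)
```

The canonical serialization matters: signer and verifier must serialize the three fields identically, or the signature won’t check out.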

Step 2

The server (front-end) then uses the client identifier to retrieve the client’s public key, verifies the signature, and checks that the sequence number has never been used before. The server also checks that the date and time are not too far in the past or the future.

If everything checks out fine, the server records the session in the database and updates the high-water mark (last used sequence number from this client).
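The server-side checks of Step 2 might look like this. Again a sketch, not the pattern’s canonical code: it assumes the block layout from my Step 1 illustration, the five-minute skew window is an arbitrary choice, and the in-memory dicts stand in for the real database.

```python
# Sketch of Step 2: the front-end verifies the client's signed block.
import json
from datetime import datetime, timedelta, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

MAX_SKEW = timedelta(minutes=5)   # illustrative clock tolerance

def verify_client_block(block, public_keys, high_water_marks):
    """Check signature, sequence number, and timestamp window."""
    pub = public_keys[block["client_id"]]            # look up by identifier
    fields = {k: block[k] for k in ("seq", "timestamp", "client_id")}
    message = json.dumps(fields, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(block["signature"]), message)
    except InvalidSignature:
        return False
    if block["seq"] <= high_water_marks.get(block["client_id"], 0):
        return False                                 # sequence number reused
    ts = datetime.fromisoformat(block["timestamp"])
    if abs(datetime.now(timezone.utc) - ts) > MAX_SKEW:
        return False                                 # clock too far off
    high_water_marks[block["client_id"]] = block["seq"]   # record the session
    return True

# Demo: a client builds a block as in Step 1; the server verifies it.
key = Ed25519PrivateKey.generate()
fields = {"seq": 1,
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "client_id": "client-42"}
sig = key.sign(json.dumps(fields, sort_keys=True).encode()).hex()
block = dict(fields, signature=sig)

keys = {"client-42": key.public_key()}
marks = {}
accepted = verify_client_block(block, keys, marks)   # True on first use
replayed = verify_client_block(block, keys, marks)   # False: seq already used
```

Note how the high-water mark alone defeats replays of the whole block, while the timestamp window bounds how long a stolen-but-unused block stays valid.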

Step 3

The server creates a block consisting of:

  • The client’s sequence number
  • Current date and time
  • Client identifier
  • Server identifier
  • A digital signature on those elements together, using the server’s private key

This block is then sent to the client, allowing the client to verify the signature and save the record to its own local database. Since the block contains the client’s sequence number, which needs to match, it cannot be a replay.
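Step 3 mirrors Step 1 in the other direction. A minimal sketch, under the same assumptions as my earlier snippets (Ed25519 via the `cryptography` package, illustrative JSON field names):

```python
# Sketch of Step 3: the server's signed acknowledgement, and the client's check.
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_server_block(server_key, client_seq, client_id, server_id):
    """Server's signed block, echoing the client's sequence number."""
    payload = {
        "client_seq": client_seq,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "server_id": server_id,
    }
    message = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = server_key.sign(message).hex()
    return payload

def client_accepts(block, server_public_key, my_last_seq):
    """Verify the server's signature, then check that our own sequence
    number is echoed back -- which rules out a replayed acknowledgement."""
    fields = {k: block[k]
              for k in ("client_seq", "timestamp", "client_id", "server_id")}
    message = json.dumps(fields, sort_keys=True).encode()
    server_public_key.verify(bytes.fromhex(block["signature"]), message)
    return block["client_seq"] == my_last_seq

server_key = Ed25519PrivateKey.generate()
ack = make_server_block(server_key, client_seq=1,
                        client_id="client-42", server_id="server-1")
ok = client_accepts(ack, server_key.public_key(), my_last_seq=1)
```

Both sides now hold a signed record of the exchange, which is what makes the non-repudiation discussed below possible.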


Doing it this way creates a verifiable record in the database about the authentication. The signature is saved and can be verified again at any time. This allows secure non-repudiation.

Creating the authentication as a signed block also means that the client does not necessarily need to communicate directly with the server. If a client needs to deliver documents to another system, which in turn forwards them, it can deliver the authentication block the same way. The forwarding system does not need to hold any secrets for the actual client to be able to do that. This allows us any number of intermediate message stores, even dynamically changing paths, with maintained authentication characteristics.

I should also note that doing the authentication this way decouples the mechanism from the medium. If you replace HTTPS connections with FTP, or even with media such as tape or floppy disks (remember those?), this system still works. You can’t say the same of certificate verification using HTTPS.