Solution: less need for standards

Around 1996 I was part of the CEN TC251 crowd for a while, not as a member but as an observer. CEN is the European standards organization, and TC251 is “Technical Committee 251”, the committee that does all the medical IT standardization. The reason I was involved is that I was working at the time as a consultant for the University of Ghent in Belgium, and my task was to create a Belgian “profile” of the “Summary of Episode of Care” standard for the Belgian market. So I participated in a number of meetings of the TC251 working groups.

For those in the know, I must stress that this was the “original” standards effort, all based on EDIFACT-like structures and before XML arrived on the scene. I’ve heard from people that the standards that were later remade in XML form are considerably more useful than the stuff we had to work with.

I remember this period in my life as a time of meeting a lot of interesting people and having a lot of fun, but at the same time being excruciatingly frustrated by overly complex and utterly useless standards. The standards I had to work with simply didn’t compute. For months I went totally bananas trying to make sense of what was so extensively documented, but I never succeeded. After a serious talk with one of the chairpersons, a very honest Brit, I finally realized that nobody had ever tried this stuff out in reality, and that most, maybe even all, of my complaints about inconsistencies and impossibilities were indeed real and recognized, but that it was politically impossible to admit to that publicly. Oh boy…

I finally got my “profile” done by simply chucking out the whole lot and starting over, writing the entire thing as I would have done if I’d never even heard of the standards. That version was immediately accepted, and I was recently told it is still used with hardly any changes as the Belgian Prorec standard, or at least as a part of it.

The major lesson I learned from the entire CEN debacle (it was a debacle for me) is that the first rule in standardization of anything is to avoid it. Don’t ever start a project that requires a pre-existing standard to survive. It won’t survive. The second rule is: if it requires a standard, it should be small and functional, not semantic. The third is: if it is a semantic standard, it should comprise a maximum of a few tens of terms. Anything beyond a hundred is useless.

It’s easy to see that these rules hold in reality. HTTP is a hugely successful standard since it’s small and has just a few semantic terms, the verbs GET, PUT, and so on. XML: the same thing holds. Snomed CT: a few hundred thousand terms… you don’t want to hear what I think of that; you’d have to wash your ears with soap afterwards.

In all my years of developing software, I’ve never ever encountered a problem that needed a standard like Snomed CT and that couldn’t just as well be solved without it. And in all those years, I’ve never ever seen a project requiring a standards effort as massive as Snomed CT actually succeed. Never. I can’t say it couldn’t happen, I’m only saying I’ve never seen it happen.

The right way to design software, in my world, is to construct everything according to your own minimal coding needs, while always keeping in mind that anything your application handles may later need to be imported or exported using a standard that differs from what you do internally. That is, you should keep your data simple enough and flexible enough to allow a standard to be added later, if it is ever needed. In short: given the choice between simple and standard, always choose simple.
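
To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names) of what that separation can look like: a small internal model that only serves the application’s own needs, with the mapping to any external standard isolated in an adapter that can be added or swapped later.

```python
from dataclasses import dataclass, field


@dataclass
class Observation:
    """Internal record: only what the application itself needs."""
    patient_id: str
    code: str                                   # whatever coding the application uses internally
    value: str
    extras: dict = field(default_factory=dict)  # data we pass along but don't interpret


def export_to_external_format(obs: Observation) -> dict:
    """Adapter mapping the internal model onto some external standard.

    If a standard is ever required, only this function (and an import
    counterpart) needs to know about it; the core application does not.
    """
    return {
        "subject": obs.patient_id,
        "code": obs.code,
        "value": obs.value,
        **obs.extras,
    }


obs = Observation(patient_id="12345", code="BP-SYS", value="120")
print(export_to_external_format(obs))
```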

Exactly how to do this is complex, but not complex in the way standards are, only complex in the way you need to think about it. In other words, it requires that rarest of substances, brain sweat. Let me take a few examples.

If you need to get data from external systems, and you do that in your application in the form of synchronous calls only, waiting for a reply before proceeding, you severely limit the ability of others to change the way you interact with these systems. If you instead create as many of your interactions as possible as asynchronous calls, you open up a world of easy interfacing for others.
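
As a minimal sketch of the difference, assuming Python and a made-up external lab system: the application fires off the request and keeps working instead of blocking, which leaves whoever sits between it and the external system free to change how the reply is produced and routed.

```python
import asyncio


async def fetch_lab_results(patient_id: str) -> dict:
    """Stand-in for a call to an external system; in reality this could be
    a message queue, a callback, or any other non-blocking mechanism."""
    await asyncio.sleep(0.1)  # simulate network latency
    return {"patient": patient_id, "results": ["..."]}


async def main() -> None:
    # Fire the request and keep working; collect the answer when it arrives.
    pending = asyncio.create_task(fetch_lab_results("12345"))
    # ... do other useful work here instead of blocking ...
    results = await pending
    print(results)


asyncio.run(main())
```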

If you use data from other systems, try to treat it as opaque blocks. That is, if you need to get patient data, don’t assume you can actually read that data; let the external systems interpret it for you as much as possible. That allows other players to provide patient data you never expected, but as long as they also provide the engine to use that data, it doesn’t matter.
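
A minimal sketch of that idea, again in Python with hypothetical names: the payload is stored and passed around as-is, and the interpretation engine comes from the source system, so we never have to understand the data ourselves.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ExternalDocument:
    source_system: str
    payload: bytes                   # opaque to us; we never parse it
    render: Callable[[bytes], str]   # interpretation engine supplied by the source


def show_document(doc: ExternalDocument) -> None:
    # We only ask the source system's engine to present the data.
    print(f"From {doc.source_system}:")
    print(doc.render(doc.payload))


doc = ExternalDocument(
    source_system="some-lab-system",
    payload=b"\x00\x01\x02",  # we have no idea what this means, and that's the point
    render=lambda raw: f"<{len(raw)} bytes rendered by the source system's own engine>",
)
show_document(doc)
```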

Every non-trivial and distinct piece of functionality in your application should be a separate module, or even better, a separate application. That way it can easily be replaced or changed when needed. As I mentioned before, both the interfaces and the module itself will almost automatically be of better quality as well.
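
As a sketch of the kind of boundary I mean (hypothetical Python, nothing more): the application only knows a narrow interface, so the implementation behind it can be replaced, or moved out into a separate application, without touching the rest of the code.

```python
from typing import Protocol


class PrescriptionChecker(Protocol):
    """Narrow contract for one distinct piece of functionality."""
    def check_interactions(self, drug_codes: list[str]) -> list[str]: ...


class LocalChecker:
    """One possible implementation; a remote service could replace it."""
    def check_interactions(self, drug_codes: list[str]) -> list[str]:
        return [f"no known issues for {code}" for code in drug_codes]  # placeholder logic


def review_prescription(checker: PrescriptionChecker, drug_codes: list[str]) -> None:
    # The caller depends only on the interface, never on the implementation.
    for warning in checker.check_interactions(drug_codes):
        print(warning)


review_prescription(LocalChecker(), ["A01", "B02"])
```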

The most useful rule of thumb I can give you is this: if anyone proposes a project that includes the need for a standard containing more than 50 terms or so, say no. Or if you’re the kind of person who is actually making a living producing nothing but “deliverables” (as they call these stacks of unreadable documents), definitely say yes, but realize that your existence is a heavy load on humanity, and we’d all probably be better off without your efforts.

Solution: improved specifications

The quality of our IT systems for health-care is pretty darn poor, and I think most people agree on that. There have been calls for oversight and certification of applications to lessen the risk of failures and errors. In Europe there is a drive to have health-care IT solutions go through a CE process, which more or less amounts to a lot of documentation requirements and change control. In other words, the CE process certifies a part of the process used to produce the applications, not the applications themselves. But I dare claim this isn’t very useful.

If you want to get vendors to produce better code with fewer bugs, there is only one thing you can do to achieve that: inspect the code, directly or indirectly. Everything else is too complicated and ineffective. The only thing the CE process will achieve is more bureaucracy, more paper, slower and more laborious updates, fewer timely fixes, and higher costs. What it also will achieve, and this may be very intentional, is that only large vendors with massive overhead staff can satisfy the CE requirements, killing all smaller vendors in the process.

But back to the problem we wanted to solve, namely code quality. What should be done, at least in theory, is actual approval of the code in the applications. The problem here is that very few people are qualified to judge code quality correctly, and it’s very hard to define on paper what good code should look like. So as things stand today, we are not in a position to mandate a certain level of code quality directly, which leaves us no choice but to do it indirectly.

I think most experienced software developers agree that the public specifications and public APIs of a product very accurately reflect the inner quality of the code. There is no theoretical reason why this needs to be the case, but in practice it always is. I’ve never seen an exception to this rule. I’ll go even further and assert that a product that has no public specifications or no public API is also guaranteed to be of poor quality. Again, I’ve never seen an exception to this rule.

So instead of checking paper-based processes as CE does, let’s approve the specifications and APIs. Let the vendors submit these for approval by a public board of experts. If the specs make sense, and the APIs are clean and orthogonal and seem to serve the purpose the specs describe, then it’s an easy thing to test whether the product adheres to the specs and the APIs. If it does, it’s approved, and we don’t need to see the original source code at all.

There is no guarantee that you’ll catch all bad code this way, but it’s much more likely than if you use CE to do it. It also has the nice side effect of forcing all players to actually open up specs and APIs, else there’s no approval.

One thing I can tell you right off the bat: the Swedish NPÖ (National Patient Summary) system would flunk such an inspection, and flunk it hard. That API is the horror of horrors. Or to put it another way: if any approval process would let NPÖ pass, it’s a worthless approval process. Hmmm… maybe we can use NPÖ as an approval-process approval test? No approval process should be accepted for general use unless it totally flunks NPÖ. Sounds fine to me.