Problem: lack of connection to clinical guidelines

I’m at point 2 in the list of problems we need to solve. You can also find this text, possibly improved, on the iotaMed wiki.

As new discoveries are made in medicine, we need to get them out to “the factory floor” so they are applied in practice. If there’s a new, more efficient diagnostic method, or a treatment that cures more people with fewer side effects, we want the medical authorities to review this new knowledge, weigh costs against benefits, and have it applied in clinical practice as soon as possible. This review results in recommendations in a form that many prefer to call “clinical guidelines”. These guidelines strive to be a practical implementation of current knowledge, including diagnostic criteria, recommended diagnostic methods, recommended treatments, caveats, differential diagnoses (other possible diagnoses that should be considered), and so on. The best clinical guidelines are regionalized and contain telephone numbers of people to consult, links to forms to use, and more. Finally, we have to find a way to distribute these clinical guidelines so that they are effectively used in practice.

A recent review showed, however, that the average lag between the research expenditure on a new medical discovery and its effective health benefits is 17 years [1]. An important part of that lag is the delay between publication and the actual application of that knowledge by physicians.

The classic solution to the problem of disseminating new discoveries is training, or CPE (Continuing Professional Education). But for doctors that is highly inefficient, for a number of reasons, including:

  • There are more relevant subjects for a medical doctor to be trained in than there are opportunities to get training, so it’s largely a crap shoot whether you’ll be trained in something you’ll need often in practice
  • Knowledge fades if not used, so it’s even more of a crap shoot whether you’ve recently been trained in the subject you happen to need today

Now, even if I was trained last week in how to treat heart decompensation, I’m pretty certain to miss one or more steps of the fairly complicated clinical guideline if I saw a patient today and did not have a copy of the guideline at hand. So there is also the “list of details” problem, the same problem airline pilots solved with checklists. No matter how many times they land, they’re bound to forget some little fatal switch sooner or later if they don’t run through the pre-landing checklists. (For an excellent treatment of this subject as applied to surgery, do read Atul Gawande’s “The Checklist Manifesto”.)

Of course there are clinical guidelines everywhere, and that’s also a problem: they’re everywhere, except where you need them. When I see a patient with diabetes, I can’t take a trip to the library to read up on the clinical guidelines; there’s no time for that. And it would scare the sh*t out of the patient. I can, though, find the guideline at the local government site, if it’s there, or over there at the other site, maybe, um, nope, that one is old, or here at the… darn… what was that URL again? or… darn it, I know diabetes, I don’t need it. So maybe next time Uncle Bob comes along, I’ll remember to do that retinogram I should have done but forgot about, since I didn’t find the guideline and couldn’t remember exactly how often it should be done. It changes, you know.

Let’s assume you do find the right clinical guideline and follow it. In that case, you want to continue to follow that same guideline when the patient shows up the next time, even if he is seen by somebody else. But there is no way to signal in the medical notes which guideline you are following. You could, of course, just paste in the URL right there, which I often do. There are two problems, though:

  • The URL generally isn’t clickable, due to our poor EHR implementations, and long URLs wrap, making a mess that the average doctor isn’t likely to want to understand or use
  • Some clinical guideline sites have the bad taste to still use IFRAMEs, so there is no URL available that leads to the right guideline

Sometimes I simply copy text from the guideline right into the “planning” field of the notes, but it’s ugly. And it soon becomes unread history due to the scrolling nature of the medical record. In other words, nobody will read it anyway.

Finally, I want you to consider how much suffering and needless expense we incur by treating patients using methods that are 17 years out of date, and even then forgetting some of the steps of that diagnosis or treatment because we do it all from memory. It’s scary, isn’t it? Imagine if airlines worked like this. Terrorists would have nothing to do.

References
  1. Medical Research: What’s It Worth? Estimating the Economic Benefits from Medical Research in the UK. Wellcome Trust, Medical Research Council, and the Academy of Medical Sciences. Briefing, November 2008.

OSX Mail and IMAP tamed

Oh, boy, this wasn’t easy. I’ve been trying for years to get my email life organized. The problem is this:

  • I’ve got almost ten different email accounts
  • I subscribe to tens of mailing lists
  • I want mailing lists to be automatically moved to dedicated folders
  • I want the same folder setup on different machines
  • I want messages read on one machine to be marked read on all the others
  • I use three different Macs to read mail on
  • I want to be able to get at both inboxes and discussion list folders even through webmail
  • I want all mail to be available offline as well
  • I don’t want to depend on my web host to not lose my saved mail
  • My regular inbox should only contain stuff I need to act on, everything else is either deleted or moved to a single (or possibly a few) archive folder
  • That archive folder must automatically be available and updated on all my machines and webmail client
  • Oh, I almost forgot: everything should be available on the iPhone as well, of course

Hey, that’s not too much to ask, is it? But until now, there was always something screwing it up. Now I think I’ve got it beat. Since I didn’t find all that much about this on the ’net, just little scraps here and there, I figured I ought to collect my notes here for posterity. Everything that follows was done on Snow Leopard, both client and server side.

Step 1 – Make it all IMAP

First things first: change all your accounts in OSX Mail to IMAP. None of this works with POP3 access. I’m not going to describe how to do this, since it’s not rocket science and no secret tricks are involved. Take care that you don’t lose messages, though. (Not that I know why you would, but I don’t want to take responsibility if you found a way to screw up the only storage you had of those priceless emails.)

Step 2 – Get yourself an OSX Server

Not as bad as you might imagine: the Mac mini with OSX Server is pretty cheap. Set it up as safely as possible, using mirrored drives. If you have a NAS with RAID, you could set up iSCSI to that. I’m not going into this here either, since it’s a separate subject, but I will assume you have an OSX Server at least. Most of what I describe below can also be done on a webhost, but I didn’t want gigabytes of mail storage entrusted to some cheapo webhost out there. You decide, of course.

Step 3 – Enable webmail on your OSX Server

Since OSX Mail doesn’t seem to allow folder management under IMAP (that is, I can see no way to add new IMAP folders server-side), you need to do that using webmail. You only need it when adding or removing folders, though; it’s not a daily thing.

Assuming you have your user account on your OSX Server, that your DNS is set up right and that you can access your email account on the OSX server over IMAP, you now have to enable webmail on that server. That isn’t in the most obvious place, so I took a screenshot to help you find it (click image for full size):

Server Admin screenshot for setting web services

You have to check the “Mail” checkbox. Actually, I didn’t do that here; I selected the “default” entry on port 443 (upper pane) and checked it there, so that webmail is only available over HTTPS, not plain HTTP.

Step 4 – Create your folders

Via a browser, log in to your webmail account on the OSX Server. It does come with SquirrelMail already installed and running (if you did step 3 at least). Once in SquirrelMail, click the “Folders” link and you get the following screen:

The first field lets you create a folder. Leave “as a subfolder of” set to “INBOX”; that’s a pretty good choice. It may not seem all that intuitive, but the OSX Mail client will not show these contents as part of the regular “Inbox” collection even though they’re subfolders, so leave it set that way. As you create subfolders, you’ll see them in the listbox at the lower left.

When you return to the main screen in SquirrelMail and refresh the folders (click “check mail”), you’ll see this:

…which doesn’t seem right. It looks as if your new folders are children under “Sent Messages”, but that’s just an interface bug in SquirrelMail. Don’t worry about it.

Step 5 – Back to OSX Mail

Just quit OSX Mail and restart it; it’ll find the new folders. If it doesn’t, check the IMAP Path Prefix field, which you can find by going to preferences, then accounts, selecting the account, and opening the “Advanced” tab:

If it says “INBOX” in that field, just empty it and try saving, quitting and restarting OSX Mail. I’m not sure if it should or should not have the “INBOX” set there, but try either way if you have a problem.

Step 6 – Rules

Now comes the fun part. You can set up mail rules in the OSX Mail client that move messages into one of the new subfolders you created, even though the original mail came in on another mail account. Think about this for a while until it sinks in: the rules let you move messages from one IMAP server to another, not just between folders on the same server or your client machine.

So, I’ve got mail rules that sort mail coming in on several different IMAP accounts. As messages match the different rules, they are effectively moved from the original IMAP server somewhere in the USA to my own IMAP server in the back room, and all those sorted messages are then available in real time from all my Macs and my iPhone (and iPad, once I have one). I do have public IPs, so my OSX Server is reachable from anywhere, which helps, of course.

You can still easily see where the messages originated, since the “To:” field does not change when you move the messages. If you open a message in the common archive and click “Reply”, OSX Mail client will automatically select to reply from the account the message was originally sent to, not from the account that your archive is in. Exactly as I’d want it to.

Oh, wait, there’s more

I can easily add another IMAP account that is shared with coworkers, and move or copy messages there, manually or automatically, say for support or for some mailing list I want to share with them all. Think about that for a sec.

It becomes even neater if you have MobileMe and have set mail accounts and mail rules to sync across your machines. I don’t even need to re-create the rules as they change: I change them on any one of the machines, and the other machines update their rules. I may, occasionally, have to re-enable the rules (I don’t know why), but their content is updated.

This is so cool.

Update: You can actually create the IMAP folders just as easily from inside OSX Mail. Just go to the right IMAP account in the side panel, click the “+” at the lower left, and select “New Mailbox”; if you scroll far enough, you’ll find your IMAP accounts if one is not already selected. Select one of those and you can create a folder on the IMAP server. It was too obvious for me, I guess.

Problem: lack of overview

Let’s expand on the first item in the list I made in the previous post. I called this item “Lack of overview of the patient”, and that’s actually a pretty serious understatement of the problem. What we get in most electronic healthcare records systems is an evenly thick layer of prose stretching from a variable point in the far past to some point in the near past, without any bumps or changes of scenery. It’s like a collection of badly written novels, intensely unreadable and intensely boring. And somewhere in there is that one fact you really need to know about, but since you don’t know what you’re looking for, you’ll never find it.

There is no overarching structure to it. You’d think that if a patient’s life, from a doctor’s viewpoint, is a sequence of medical problems, either solved and thus in the past, or ongoing and still present in the future, that is what we’d see. But no, that is not what we see. Instead we see, for some reason only a government bureaucrat could fathom, a tree structure of how healthcare is organized in your area. That one administrative or political statement (what else can it be?) follows us around all through the medical record system, occupying prime user interface area, presumably to make sure you never once forget how many departments there actually are in your organization and what they are called. It’s as if the most critical knowledge you need while treating Uncle Bob’s diabetes is how many kinds of ear, nose and throat departments we have, what their names are, and under what other larger department they reside. It could be useful for the postman, though.

(Click on image for full size.)

Just look at this screenshot from Cosmic. Most other EHR systems have the same layout, so Cosmic isn’t any worse than any others in this respect. As you can see, the workflow seems to be:

  1. Select a department
  2. Read everything that happened to the patient at that department in reverse chronological order
  3. Repeat from step 1 until you’ve read it all (unlikely) or you grow old and die (more likely)

Or, possibly, select “All physician notes” and then try to guess which departments were not included (yes, there are some, but they’re not going to tell you which ones unless you beg for mercy). This is obviously not the way we want to learn about a patient’s medical history. The only reason I can think of why EHR systems are built this way is that some bureaucrat who habitually views healthcare as a collection of departments, but has no idea of how a patient is actually diagnosed and treated, got the ultimate say in how the system was designed. Whatever the case may be, it’s a perfectly useless structure for an EHR.

What it should have been is a view of diseases, and then an outline of criteria, treatment options, and followup according to current science. We’ll get to that when discussing solutions later.

Getting organized

As the interest in iotaMed, and in the problems it is intended to solve, clearly increases, we need to get our ducks in a row and make the case simple to follow and to argue. Let’s do it the classic way:

  1. What is the problem?
  2. What is the solution?
  3. How do we get there?

Let’s do these three points, one by one.

What is the problem?

The problem we are trying to solve is actually a multitude of problems. I don’t think the list below is complete, but it’s a start.

  1. Lack of overview of the patient
  2. No connection to clinical guidelines
  3. No connection between diseases and prescriptions, except very circumstantial
  4. No ability to detect contraindications
  5. No archiving or demoting of minor or solved problems, things never go away
  6. Lack of current status display of the patient, there is only a series of historical observations
  7. In most systems, no searchability of any kind
  8. An extreme excess of textual data that cannot possibly be read by every doctor at every encounter
  9. Rigid, proprietary, and technically inferior interfaces, making extensions with custom functionality very difficult

What is the solution?

The solution consists of several parts:

  1. The introduction of a structural high-level element called “issues”
  2. The connection of “issues” to clinical guidelines and worksheets
  3. The support of a modular structure across vendors
  4. The improvement of quality in specifications and interfaces
  5. The lessening of dependence on overly large standards
  6. Lessening of the rigidity of current data storage designs
  7. The opening of the market to smaller, best-of-breed entrepreneurs

How do we get there?

Getting there is a multiphase project. Things have to be done in a certain order:

  1. Raising awareness of the problems and locating interested parties (that is what this blog is all about right now)
  2. Creating a functioning market
  3. Developing the first minimal product conforming to this market and specs
  4. Evolving the first product, creating interconnections with existing systems
  5. Demonstrating the advantages of alternate data storage designs
  6. Inviting and supporting other entrepreneurs to participate
  7. Inviting dialog with established all-in-one vendors and buyer organizations
  8. Formalizing cooperation, establishing lean working groups and protocols

Conclusion

None of this is simple, but all of it is absolutely necessary. Current electronic health care systems are leading us down a path to disaster, which is increasingly clear to the physicians and nurses working with these systems. They are, in short, accidents waiting to happen, due to the problems summed up in the first section above. We have no choice but to force a change in the design process, the deployment process, and not least the purchasing process that has led us down this destructive path.

I’ll spend another few posts detailing the items in these lists. I may change the exact composition of the lists as I go along, but you’ll always find the current list on the iotaMed wiki.

If you want to work on the list yourself, register on the iotaMed wiki and just do it. That’s what wikis are for. Or discuss it on the Vård IT Forum.

SQL is dead, long live RDF

…um, at least as far as medical records go. SQL remains useful for a lot of other things, of course. But as far as electronic medical records are concerned, SQL is a really bad fit and should be taken out back and shot.

Medical records, in real life, consist of a pretty unpredictable stack of document types, so some form of graph database is very obviously the best fit for storage. Anything with rows and columns and predeclared types is a very poor fit, except maybe for patient demographics and lab data. Or maybe not even that.

The problem so far was the lack of viable implementations, so instead of doing the right thing and creating the right database mechanism, most of us (me included) forced our data into some relational database, often sprinkling loose documents around the server for all those things that wouldn’t fit even when hit with a sledgehammer. All this caused mayhem with the data, concurrency, integrity, and not least, security.

I have to add here that I never personally committed the crime of writing files outside of the SQL database; I squeezed them into the database however much effort it cost. But judging from the horrors I’ve encountered in the field, it seems not many were as driven as I was. I have, though, used Mickeysoft’s “structured storage” files for that, a bizarre experience at best.

You have to admit this is ridiculous. It leads to crazy bad databases, bad performance, horrible upgrade scenarios, and, adding insult to injury, high costs. Object-relational frameworks don’t help much, and without going into specifics, I can claim they’re all junk, one way or the other, simply because the idea itself sucks.

From now on, though, there’s no excuse for cramming and mutilating medical records data into SQL databases anymore. Check out RDF first, to get a feel for what it can do. It’s part of the “semantic web” thing, so there’s a buzzword for you already.

A very good place to start is the rdf:about site, and right in the first section, there’s a paragraph that a lot of people involved in the development of medical records really should pause and contemplate, so let me quote:

“What is meant by ‘semantic’ in the Semantic Web is not that computers are going to understand the meaning of anything, but that the logical pieces of meaning can be mechanically manipulated by a machine to useful ends.”

Once you really grok this, you realize that any attempt to make the computer understand the actual contents of the semantic web is meaningless. Not only is it far too difficult to ever achieve, but there is actually nothing to be gained: there’s nothing useful a computer can do with the information except present it to a human reader in the right form at the right time. What is important, however, is to let the computer understand just enough of the document types and links to be able to sort and arrange documents and data in such a way that the human user can benefit from accurate, complete, and manageable information displays.

It is almost trivial to see that this applies just as well to medical records. In other words, standardizing the terms of the actual contents of medical documents is a fool’s errand. It’s a pure waste of energy and time.

If a minimal effort were expended to standardize link types and terms instead, we could fairly easily create semantic medical records, allowing human operators to utilize the available information effectively. All it would take for the medical community to realize this is to raise their gaze, check out what the computer science community is doing with the web, and copy that. At least, that’s what we are aiming to do with the iotaMed project, and I hope we won’t remain alone. What is being done using RDF on the web makes a trainload of sense, and we’re going to exploit that.

In practice, this means expressing medical data as RDF triples and graphs. This turns out to be nearly trivial, just as it is for the semantic web. It’s a lot harder, and largely useless, for typical accounting data, flight booking systems, and other systems of that kind, but those should really keep using SQL as their main storage technique.
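To make that a bit more concrete, here is a minimal sketch of what a few such triples could look like, written in Turtle notation. The namespace, property names, and identifiers are all invented for illustration; this is not a finished iotaMed vocabulary:

    @prefix iota: <http://example.org/iota#> .
    @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

    # A patient has an ongoing issue, and the issue follows a guideline.
    <http://example.org/patient/4711> iota:hasIssue <http://example.org/issue/123> .
    <http://example.org/issue/123> iota:diagnosis "Diabetes mellitus type 2" ;
        iota:followsGuideline <http://example.org/guideline/dm2> ;
        iota:hasObservation <http://example.org/obs/987> .

    # An observation ties an item to a value at a point in time.
    <http://example.org/obs/987> iota:item "creatinine concentration in serum" ;
        iota:value "82"^^xsd:decimal ;
        iota:observedOn "2010-04-12"^^xsd:date .

Note that the machine never needs to understand what diabetes is; it only needs to understand the link types (hasIssue, followsGuideline, and so on) well enough to fetch and present the right documents together.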

We also need a graph database implementation, and we’re currently looking into Neo4j, an excellent little mixed-license product that seems to fill most, if not all, requirements. But if it turns out it doesn’t, there are others out there, too. After all the years I’ve spent swearing at SQL Server and inventing workarounds for the bad fit, Neo4j and RDF are a breath of fresh air. The future is upon us, and it’s time to leave the computing middle ages behind as far as electronic medical records are concerned.

Design for updates

When designing new system architectures, you really must design for updating, unless the system is totally trivial. This isn’t hard to do if you do it systematically and from the ground up. You can tack it on afterwards, which is more work than it needs to be, but even then it’s worth it.

I’ll describe how it’s done for a system based on a relational database. It does not matter what is above the database; even if it’s an object-relational layer, the method is still applicable.

I have a strong feeling that the problem is trivial on graph databases, since the nodes and relations themselves allow versions by their very nature. I haven’t designed using these graph databases yet, so I’m not 100% sure, though.

The reason I’m going into this now is that the iotaMed system must be designed with upgrades in mind, avoiding downtime and big-bang upgrades. Just this weekend, the Cambio Cosmic system in our province (Uppsala, Sweden) is undergoing yet another traumatic upgrade involving taking the entire system offline for a couple of days. Very often it doesn’t come back up on time, or comes back with serious problems, after these upgrades, putting patients at risk entirely unnecessarily. The only reason these things need to happen is poor system design. A little forethought when building the system could have made a world of difference. The PMI communication system I built, which is used in Sweden, has never (as far as I know) been taken down for upgrades, even though upgrading both client and server systems is an easy and regular activity, and Cosmic could relatively easily have been built the same way. But it wasn’t, obviously. It’s not rocket science, exactly; just see for yourself in what follows.

The problem

The first step is to understand why the database structure is so tightly bound to the code layers, necessitating a simultaneous upgrade of the database structure and the code that accesses it. In its common and naive form, the data access layer is application code outside the database itself which directly accesses tables for reads and writes. If there are several data access layer modules, all of them contain direct knowledge of table structures, and all of them need to be updated simultaneously with the table structures. This means that if you change table structures, you need to do all of the following in one fell swoop, while all users are taken offline:

  1. Change table structures
  2. Run through all data and convert it with scripts
  3. Replace all application code that directly accesses table data
  4. And while you’re at it, compound the disaster by also replacing business layer code and user interface code to use new functionality added in the data access layer and database while removing old application code that won’t work anymore with the new data structures

This is what we call a “big bang” upgrade, and it practically always goes wrong (that’s the “bang” part). It’s extremely hard to avoid disasters, since you have to do very detailed testing on secondary systems, and you’ll never achieve full coverage of all the code, or even of all the little deviant data in the production database that will inevitably screw up the upgrade. And once you’re in the middle of the upgrade, you can’t back out, unless you have full downgrade scripts ready that have been fully tested as well. The upgrade scripts, hard as they are to get right, are actually simpler than the emergency downgrade scripts, since the latter must take into consideration that the downgrade may be started from any point in the upgrade.

The result is that these upgrades are usually done without any emergency downgrade scripts. It’s like walking the high wire without a safety net. The only recourse is a total restore from backups, taking enormous amounts of time and leaving you shamefaced and nervous back at the spot where you started, after a long traumatic weekend. And since backing down is a public defeat, you’re under emotional pressure to press ahead at almost any cost, even in the face of evidence that things will probably go badly wrong, as long as there’s a chance of muddling through. Your ego is on the line.

This is no way to upgrade critical medical systems, but this is how they do it. Shudder.

The solution

The solution is to isolate the database structures from the application code. If different data access layers in application code can coexist, each using its own view of how the database is structured, while still accessing the same actual data within transactions, you can let two versions of the application code stack coexist while accessing the same data. If you can swing this, you’re free to change the database table structure without the application code in the data access layer even noticing. This means you can simply add a new view on the database for a new version of the application, add the new application code, and start running it without removing the old application code or clients. New-version clients, business layers, and data access layers run in parallel with the old versions, so the upgrade can be slowly rolled out over the user population, allowing you to reverse the rollout at any point in time. Let it take weeks, if you wish. Or even leave some old clients in place forever, if there’s a need for that.

To achieve the needed insulation, simply disallow any direct access to any tables whatsoever from application code. All accesses must be done through stored procedures or views.

SQL VIEWs were actually designed to achieve exactly this: different views on the table data, removing the direct dependency of application code on the tables. The problem was clearly defined, known, and solved even before SQL hit the scene, so why are we even arguing about this now? As an aside, I never use VIEWs, only stored procedures, since I can achieve the same effect with fewer constraints, but that does not detract anything from the argument that the problem was both recognized and solved ages ago.
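As a minimal sketch of the principle, in T-SQL-style syntax with invented table and procedure names: the application only ever calls the procedure, so the table behind it can be reorganized at will.

    -- The table the application is never allowed to touch directly.
    CREATE TABLE Patients (
        PatientId  INT IDENTITY PRIMARY KEY,
        FamilyName NVARCHAR(100) NOT NULL,
        GivenName  NVARCHAR(100) NOT NULL
    );
    GO

    -- The only access path for the application. The parameters and the
    -- result set are the contract; the table structure is not.
    CREATE PROCEDURE IP_FindPatient
        @FamilyName NVARCHAR(100)
    AS
    BEGIN
        SELECT PatientId, FamilyName, GivenName
        FROM Patients
        WHERE FamilyName = @FamilyName;
    END;
    GO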

Let’s assume you need to change the name of a table for some reason. (I never do that; I’m just taking a really radical example that ought to make your hair stand on end.) First, you edit the stored procedures that access the table to check whether the old table name still exists, and if not, to use the new table name. Then you create the new table. Then you create trigger code on the old table that updates the new table with any changes. Then you use the core of that trigger code in a batch to transfer all the old table contents that aren’t being accessed over to the new table. You check a couple of times during actual use that the tables keep matching in contents. You drop the old table (or rename it if you’re chicken, and drop it later). Finally, you remove the check for the old table name from the stored procedures that access the table. Yes, this is somewhat “exciting”, but I’ve done it a number of times on critical systems and it works. And if you can do this, you can do anything.
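Continuing the same invented T-SQL example, the rename dance could look roughly like this (delete handling and the verification queries omitted for brevity):

    -- Step 1: the procedure checks which table exists and uses it.
    ALTER PROCEDURE IP_FindPatient
        @FamilyName NVARCHAR(100)
    AS
    BEGIN
        IF OBJECT_ID('dbo.Patients') IS NOT NULL
            SELECT PatientId, FamilyName, GivenName
            FROM Patients WHERE FamilyName = @FamilyName;
        ELSE
            SELECT PatientId, FamilyName, GivenName
            FROM Persons WHERE FamilyName = @FamilyName;
    END;
    GO

    -- Step 2: the new table, same structure, new name.
    CREATE TABLE Persons (
        PatientId  INT PRIMARY KEY,
        FamilyName NVARCHAR(100) NOT NULL,
        GivenName  NVARCHAR(100) NOT NULL
    );
    GO

    -- Step 3: keep the new table in sync with writes to the old one.
    CREATE TRIGGER TR_Patients_Sync ON Patients AFTER INSERT, UPDATE
    AS
    BEGIN
        MERGE Persons AS tgt
        USING (SELECT PatientId, FamilyName, GivenName FROM inserted) AS src
            ON tgt.PatientId = src.PatientId
        WHEN MATCHED THEN
            UPDATE SET FamilyName = src.FamilyName, GivenName = src.GivenName
        WHEN NOT MATCHED THEN
            INSERT (PatientId, FamilyName, GivenName)
            VALUES (src.PatientId, src.FamilyName, src.GivenName);
    END;
    GO

    -- Step 4: batch-copy the rows the trigger hasn't already moved.
    INSERT INTO Persons (PatientId, FamilyName, GivenName)
    SELECT p.PatientId, p.FamilyName, p.GivenName
    FROM Patients p
    WHERE NOT EXISTS (SELECT 1 FROM Persons x WHERE x.PatientId = p.PatientId);

    -- Steps 5 to 7: verify that the contents match, DROP TABLE Patients
    -- (or rename it first, if you're chicken), then remove the OBJECT_ID
    -- check from the procedure.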

A much simpler scenario is adding columns to a table for a new version of your app. If the new column can safely remain invisible to the old version, just add it on the fly, using the right default values so no constraints fire. Add a new stored procedure for the new version of the application, implementing the parameters and functionality the new version needs. The old stored procedure won’t even know the column is there. If the new column must be set in some particular way depending on old values, add a trigger for that and update the new column using the core of that trigger in a batch command. Again, there is absolutely no need to take down the database, or even kick out users, while doing all this. Once the new stored procedure is there, you can roll out new applications and have them come online one by one, leaving old versions running undisturbed.
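Again as a sketch on the same invented table (the column name and default are made up), note how the new procedure version gets the numeric postfix described in the naming scheme below:

    -- Add the column on the fly; the default keeps old rows and any
    -- constraints happy, and the old application never sees it.
    ALTER TABLE Patients
        ADD PersonalIdNumber CHAR(12) NOT NULL
            CONSTRAINT DF_Patients_Pnr DEFAULT '000000000000';
    GO

    -- A new version of the procedure for the new app version; the old
    -- IP_FindPatient keeps running unchanged right beside it.
    CREATE PROCEDURE IP_FindPatient_1
        @FamilyName NVARCHAR(100)
    AS
    BEGIN
        SELECT PatientId, FamilyName, GivenName, PersonalIdNumber
        FROM Patients
        WHERE FamilyName = @FamilyName;
    END;
    GO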

You can dream up almost any change in table structure, including new tables, splitting one table into two, combining and splitting columns, and creating or removing dependencies between columns and tables, and I can catch all of it using a fairly simple combination of stored procedures, triggers, and the occasional user function. And all of it can be done on a live production database with very low risk (caveat: if you know what you’re doing).

To keep things easy and clean, I always use a set of stored procedures per application, with an application prefix in the name of each stored procedure. That lets you see at a glance which app uses which stored procedure. I never let two different apps use the same procedure (you can always have both stored procedures call a common third procedure to reduce code duplication). Versions of a procedure are named with a numeric postfix so they can coexist. Versions are only created if they behave differently as seen from the application code. So a procedure to find a patient, used by an iotaPad app, could be named “IP_FindPatient_2” if it was the third version (the first is without a postfix, the second version gets _1, and so on).

Finally, since application code only uses stored procedures, no application needs any read or write access at all to your database, only execute access to the stored procedures with a prefix matching the app. This makes for a very easily verified set of GRANTs and a secure database.
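A sketch of what those GRANTs could look like, with an invented role name for the iotaPad application:

    -- The app's role gets execute rights on its own procedures, nothing else.
    GRANT EXECUTE ON OBJECT::dbo.IP_FindPatient   TO iotaPadAppRole;
    GRANT EXECUTE ON OBJECT::dbo.IP_FindPatient_1 TO iotaPadAppRole;
    -- No SELECT, INSERT, UPDATE, or DELETE on any table: as far as the
    -- application is concerned, the tables don't exist.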

Why this scheme isn’t used in products like Cambio Cosmic is a mystery to me. It can’t be laziness, since they’re working their butts off trying not to annihilate all the medical records we have, while stressing out both medical staff and their own programmers to the point of cardiac arrest every time something goes wrong. A little forethought would have saved so much time and trouble it’s almost unreal.

But on one point you can be certain: that little forethought is a definite requirement for the iotaMed project. It wouldn’t hurt if the other guys tried it as well, though.

Darn neat stocks feature on iPhone

Just read this in a forum post (Swedish, sorry ’bout that), and simply had to tell you.

In the Stocks app that comes with the iPhone, you can add quite a few things beyond the obvious ticker symbols, like this:

Add the exchange as a suffix. For example, TLSN.ST gives you the share price of TeliaSonera on the Stockholm exchange. The price and graph will be in Swedish crowns.

The OMX index can be added as: ^OMXSPI

Even currency exchange rates can be added: the base currency followed by the target currency and then the string “=X”. Examples:

USDSEK=X
GBPSEK=X
JPYSEK=X
EURUSD=X

The first one shows the current exchange rate in Swedish crowns per US dollar. The others follow the same pattern.

Kudos to signature sebastian_r on the 99.se forum for this info. It’s just so damn neat…

iotaDB

iotaMed is intended as an organizing overlay on existing information. It adds the guideline layer, including knowledge transfer and checklist functionality, to the classic electronic health care record. The information needed cannot be stored in the existing EHR systems, simply because they lack the necessary concepts. In other words, we do need our own database for the iotaMed information, even though we may leave all the “old” information where it already resides in the legacy EHR. New information added through iotaMed will also have to be written back to the legacy EHR systems, even though that information, by necessity, will be as poor as the information currently stored there. There simply is no way to turn this particular toad (the legacy EHR) into a prince.

So, what do we choose for an iotaMed database, that is, for the iotaDB? It’s painfully obvious to most that a relational database is very unsuited to this purpose, and always has been. Using relational databases for medical information has never been a good idea, due to the unstructured or semi-structured nature of medical data. Every EHR I’ve seen that uses a relational database has been a Frankenstein’s monster of inefficiency and inflexibility. It’s the wrong choice, pure and simple.

Lately, graph databases have been gaining popularity, and they seem to be a much better fit for this purpose. Entities used in medical databases are practically always of variable structure, with a huge number of possible value types but only a small number of them used in each instance. Implemented in a relational database, this results in sparse tables, or more often in blatant misuse of the database, where one column is used to define the value type of another column so that just about anything can be dumped into a single table. It’s horrible.
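For the uninitiated, the misuse I mean looks roughly like this invented, but depressingly typical, table:

    -- One column defines what the other column means: the classic
    -- everything-goes-into-one-table dump.
    CREATE TABLE ClinicalData (
        PatientId  INT NOT NULL,
        ValueType  VARCHAR(50) NOT NULL,   -- 'bp_systolic', 'smoker', 'note', ...
        ValueText  VARCHAR(MAX) NULL,      -- numbers, booleans, prose, all as text
        RecordedAt DATETIME NOT NULL
    );

Every query then has to know the magic ValueType codes, and the database can enforce nothing whatsoever about the values.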

The “issue”, “issue worksheet”, “clinical guideline”, “item”, and “observation” objects clearly form a graph that would be difficult to squeeze into a relational DB, but would fit nicely into a graph database.

Currently, I’m looking at Neo4j, since it seems to be one of the more evolved open source graph databases. There’s a nice intro to Neo4j here. Neo4j does not seem to have a language binding for Objective-C yet, and it needs Java to run, but since I’m planning on deploying iPads logically tethered to an OSX machine, Neo4j can run there. The interface between Objective-C and Java would then effectively be a server-side proxy app. If somebody has already done this, I’d appreciate hearing from you.

Use iPad to run Norway

Verbatim quote from the CNN iPhone app today, due to the Icelandic volcanic revenge:

“Jens Stoltenberg, who was in New York for President Barack Obama’s nuclear summit, is running the Norwegian government from the United States via his new iPad, his press secretary Sindre Fossum Beyer said.”

That’s one amazing gadget. Gotta have one.

A medication change

In the previous post I did a simple medication domain model. After a brief exchange of comments on my Swedish blog, I was quickly made to realize it was far too simple, so let’s see how it looks today:

In this diagram, I’ve replaced the “prescription” object with a “prescription worksheet” that works very much like an issue worksheet. It’s all very logical: each pharmacological therapy or product comes with a set of guidelines that includes when it’s to be used, what steps to take before using it, what values to check, and so on. All of this is very similar to a clinical guideline, but centered around a prescription. So it makes a lot of sense to treat it almost exactly the same way as issues, except that it isn’t kept at the same position in the object dependency tree. We don’t want the prescriptions up there commingling with diseases, but rather somewhere lower in the tree.

However, the items and the values of those items (“observations”) can overlap with those used in issues. For instance, the item “creatinine concentration in serum” can be used both in an issue and in a prescription worksheet, if the prescription guideline says that the product should not be used, or should be used differently, in renal failure. So some items will be used in prescriptions, some in issues, and some in both.
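In the same invented Turtle vocabulary as in the RDF post, the point is simply that one item node can be referenced from both kinds of worksheet (the drug and the property names are made up for illustration):

    @prefix iota: <http://example.org/iota#> .

    # One shared item definition...
    <http://example.org/item/s-creatinine>
        iota:itemName "creatinine concentration in serum" .

    # ...referenced from an issue worksheet...
    <http://example.org/worksheet/renal-4711>
        iota:tracksItem <http://example.org/item/s-creatinine> .

    # ...and from a prescription worksheet for a renally excreted drug.
    <http://example.org/rxsheet/metformin-4711>
        iota:checksItem <http://example.org/item/s-creatinine> .

Any observation recorded against that item then becomes visible from both worksheets automatically.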