Google Health – Dead on Arrival due to duff data quality?

It would seem that poor quality information has caused some decidedly embarrassing and potentially risky outcomes in Google’s new on-line Personal Health Record service. The story has featured (amongst other places):

  • Here (Boston.com, the website of the Boston Globe)
  • Here (InformationWeek.com’s Global CIO Blog)

‘Patient Zero’ for this story was this blog post by “e-Patient Dave” over at e-patients.net, in which he shared his experiences migrating his personal health records over to Google Health. To say that the quality of the information that was transferred was poor is an understatement. Amongst other things:

Yes, ladies and germs, it transmitted everything I’ve ever had. With almost no dates attached.

So, to someone looking at e-Patient Dave’s medical records in Google Health it would appear that his middle name might be Lucky, as he seemed to have every ailment he’s ever had… all at the same time.

Not only that, but for the items where dates did come across in the migration, there were factual errors in the data. For example, the date given for e-Patient Dave’s cancer diagnosis was out by four months. To cap things off, e-Patient Dave tells us that:

The really fun stuff, though, is that some of the conditions transmitted are things I’ve never had: aortic aneurysm and mets to the brain or spine.

The root cause that e-Patient Dave uncovered by talking to some doctors was that the migration process transferred billing code data rather than actual diagnostic data to Google Health. As readers of Larry English’s Improving Data Warehouse and Business Information Quality will know, the quality of that data isn’t always *ahem* good enough. As English tells us:

An insurance company discovered from its data warehouse, newly loaded with claims data, that 80% of the claims from one region were paid for a claim with a medical diagnosis code of “broken leg”. Was that region a rough neighborhood? No, claims processors were measured on how fast they paid claims, rather than for accurate claim information. Only needing a “valid diagnosis code” to pay a claim, they frequently allowed the system to default to a value of “broken leg”.

(Historical note: while this example features in Larry’s book, it originally featured in an article he wrote for DM-Review (now Information-Management.com) back in 1996.)
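
This kind of default-value skew is easy to spot with even a crude profiling check. The sketch below is a minimal illustration in Python (the field name `diagnosis_code`, the code values, and the 50% threshold are all hypothetical, chosen just to mirror English’s example) that flags any diagnosis code accounting for a suspiciously large share of claims:

```python
from collections import Counter

def flag_default_code_skew(claims, threshold=0.5):
    """Flag diagnosis codes that account for an implausibly large share
    of claims -- a hint that processors may be letting the system
    default rather than recording the real diagnosis."""
    counts = Counter(claim["diagnosis_code"] for claim in claims)
    total = len(claims)
    return {code: n / total for code, n in counts.items() if n / total >= threshold}

# Hypothetical sample mirroring English's story: 80% "broken leg"
claims = (
    [{"diagnosis_code": "S82.2"}] * 8
    + [{"diagnosis_code": "J18.9"}] * 2
)
print(flag_default_code_skew(claims))  # {'S82.2': 0.8}
```

No single claim in that sample is invalid on its own; it is only the distribution across claims that gives the game away, which is exactly why record-by-record “valid code” checks never caught it.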

“e-Patient Dave” adds another wrinkle to this story:

[i]f a doc needs to bill insurance for something and the list of billing codes doesn’t happen to include exactly what your condition is, they cram it into something else so the stupid system will accept it. (And, btw, everyone in the business is apparently accustomed to the system being stupid, so it’s no surprise that nobody can tell whether things are making any sense: nobody counts on the data to be meaningful in the first place.)

To cap it all off, a lot of the key data that e-Patient Dave expected to see transferred wasn’t there, and what was transferred was either inaccurate or horridly incomplete:

  • what they transmitted for diagnoses was actually billing codes
  • the one item of medication data they sent was correct, but it was only my current BP med. (Which, btw, Google Health said had an urgent conflict with my two-years-ago potassium condition, which had been sent without a date). It sent no medication history, not even the fact that I’d had four weeks of high dosage Interleukin-2, which just MIGHT be useful to have in my personal health record, eh?
  • the allergies data did NOT include the one thing I must not ever, ever violate: no steroids ever again (e.g. cortisone) (they suppress the immune system), because it’ll interfere with the immune treatment that saved my life and is still active within me. (I am well, but my type of cancer normally recurs.)
So, it would seem that information quality problems that have been documented in the information quality literature for over a decade are at the root of an embarrassing information quality trainwreck – one that could (potentially) have an effect on how a patient might be treated at a new hospital, given that they appear to have all these ailments at once yet present as asymptomatic. To cap it all off, failures in the mapping of critical data resulted in an electronic patient record that was dangerously inaccurate and incomplete.

    Hugh Laurie as Dr. Gregory House

    What would Dr. Gregory House make of e-Patient Dave’s notes?

    e-Patient Dave’s blog post makes interesting reading (and at 2,800+ words covers a lot of ground). He details a number of other reasons why quality problems exist in electronic patient records:

    • nobody’s in the habit of actually fixing errors (he cites an x-ray record that lists him as female).
    • processes for data integrity in healthcare are largely absent, by ordinary business standards. He suspects there are few, if any, processes in place to prevent wrong data from entering the system, or to track down the cause when things do go awry.
    • Data doesn’t seem to get transferred consistently from paper forms to electronic records (specifically, e-Patient Dave’s requirement never to be given steroids).
    • Lack of sufficient edit controls and governance over data and patient records, including audit trails.
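
    The absence of such edit controls is not a hard problem to begin addressing. As a minimal sketch (assuming a simplified, hypothetical record layout – the field names here are illustrative, not Google Health’s actual schema), even a few lines of Python can catch the specific failures Dave describes: undated conditions and a missing allergy list.

```python
def validate_record(record):
    """Return a list of integrity problems in a (hypothetical, simplified)
    patient record: undated conditions and a missing allergy section --
    the very gaps e-Patient Dave found in his migrated data."""
    problems = []
    for condition in record.get("conditions", []):
        if not condition.get("diagnosis_date"):
            problems.append(f"condition '{condition.get('name')}' has no diagnosis date")
    if "allergies" not in record:
        problems.append("no allergy list (absence of data is not absence of allergies)")
    return problems

# A record exhibiting both failures from the post
record = {"conditions": [{"name": "aortic aneurysm"}]}
for problem in validate_record(record):
    print("-", problem)
```

    Checks like these belong at the point of data entry and at every migration boundary; running them only after the data has landed in a new system, as happened here, merely documents the trainwreck.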

    e-Patient Dave is at pains to make it clear that the problem isn’t with Google Health. The problem is with the data that was migrated across to Google Health from his existing electronic patient record.

    Google Health – DOA after an IQ Trainwreck?

    7 thoughts on “Google Health – Dead on Arrival due to duff data quality?”

    1. Pingback: Headlines for April 7-18 | Health Content Advisors

    2. admin (post author)

      DCO,

      Thanks for the link to the Consumerist article. 80% of hospital bills having errors counts as an IQTrainwreck in its own right, so we’ll be investigating that story further.

    3. e-Patient Dave

      I can’t believe I didn’t comment here – I’ve been telling people everywhere that this is one of the most accurate posts anywhere about what did and didn’t happen, and what it means. (And that’s going some – googling “e-patient dave” +”google health” now produces thousands of hits.)

      Okay, you’re the expert(s) – is it overreaching to assert that screwed-up, mismanaged, automated processes for medical data could potentially produce one of the biggest large-scale trainwrecks ever?

      Count me as an ally in whatever research you want to do. This isn’t my day job, obviously (I’m “just a patient”) but we did dig some more. Interesting example: the armpit cyst item was submitted for billing during an abdominal ultrasound. Hmmmm.

      I hear tell that “upcoding” is a common (and widely known) practice – people called “coders” sit in hospital basements combing through records of your visit and picking out keywords that justify billing for the highest-priced item on the insurance company’s reimbursement menu. Aside from the ethical issues, you and I know the implications of then reading back that data as if it were an earnest attempt to express reality.

      I’ve now gotten invited into some policy discussions and I’m arguing that “IT grown-ups” should scrutinize any plans to go large on EMRs. I’ll subscribe here; keep in touch.

      I mean, this is personal: we’re talking about the quality of YOUR data a few years from now, YOUR mom’s, YOUR kid’s, whoever it might be.

      A related story: The Data Model That Nearly Killed Me.

      1. Daragh O Brien

        e-Patient Dave,
        Congratulations on getting such a high search score on Google (one chuckles at the irony), and thanks for taking the time to stop by and share here as well (particularly the bit about this being “one of the most accurate posts”).

        You are not wide of the mark when you say that “screwed-up, mismanaged, automated processes for medical data could potentially produce one of the biggest large-scale trainwrecks ever”.

        In industries as diverse as telecommunications, financial services, government services, healthcare, and education (to name but a few), in every country of the world, crummy quality information is costing money and often lives. Throwing more automation at bad processes and crummy data just leads to faster arrival at a trainwreck scenario.

        Your story just has so many specific examples of the common types of error that it was too good a write up to miss out on.

        If what you have uncovered about upcoding is true, then it is a classic example of the objectives for which information is captured (maximising billing) being at odds with the purposes for which it will later be used – and it is a recipe for an IQTrainwreck, and potential litigation, in an e-patient record scenario where a doctor misdiagnoses or mis-medicates based on intentionally inaccurate information.

        See this article from the IAIDQ and this article as well for some insight into the legal issues that arise in Information Quality (the latter article is restricted to members of the IAIDQ at the moment but I can email you a copy if you are interested).

        The International Association for Information & Data Quality is an all-volunteer non-profit dedicated to raising awareness of these types of issues, and promoting the development of a strong professional approach to tackling these issues. Your offer of input/support for research is a generous and intriguing one as we do have research partnerships with a number of Universities in the US and elsewhere. I’ll talk to our partners there and see if we can’t put something together on this.

    4. Pingback: I am not a number - I’m a human being! | IQTrainwrecks.com

    5. Pingback: Imagine someone had been managing your data: next anecdote | e-Patients.net
