April 25, 2014

Going to the doctor and worrying about cybersecurity

For most people, going to the doctor means thinking about co-pays and when they’ll feel better. For me, though, it means thinking about those plus the cybersecurity of the computer systems the medical professionals are using.

I’ve spent more time than usual visiting doctors recently. I broke my hand – sure, I’ll tell you how. It was a hit-and-run accident with a woodchuck. I was riding my bike, the woodchuck ran in front of me, I ran over him, and he fled into the woods, leaving me lying on the ground moaning in pain. Okay, now that we’ve got that out of the way…

So the emergency room doctor ordered a CT scan (to check for a concussion and the presence of a brain) and various x-rays. I thought about the computer controls while in the CT scanner, but what was really interesting was when the hospital emergency room digitized the results and gave them to me on a CD to provide to the orthopedist.

Before my visit to the orthopedist, the office had me fill out a bunch of forms online. As I provided the detailed medical information, I wondered how secure the web interface was, and whether someone could attack the medical record system through the patient input interface.

When I got to the orthopedist’s office a few days later, I gave the receptionist the CD, which she promptly read into the medical records computer and returned to me. It occurred to me that the risk taken in reading a CD or other media from an unknown source is pretty substantial – something we’ve known in the security world for decades, but that hasn’t filtered well into other fields. On the other hand, every time I’m on a conference program committee I open PDFs from people I may never have heard of, so it’s not as if I’m immune from this risk myself.

When I got home, I read the CD on my Mac laptop, and discovered that it had an autorun.inf file to start the application that reads the x-ray data files. I don’t know whether the doctor’s office disables AutoRun on their computers; undoubtedly some doctors do and others don’t.
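For readers who haven’t looked at one, an autorun.inf of this kind is just a tiny INI file. The entries below are a typical reconstruction – the filenames are invented, not copied from my CD:

```ini
[autorun]
; with AutoRun enabled, Windows launches this program on disc insertion
open=viewer\DicomViewer.exe
icon=viewer\viewer.ico
label=Imaging Study
```

The point, of course, is that `open=` can name any executable on the disc, benign or otherwise.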

And even if the doctors’ computers have disabled AutoRun and don’t use the software on the CD to view the test results, how secure are they against data-driven attacks, such as we saw a number of years ago against JPEG files in browsers?

So given this experience, how would I use the information if I were a bad guy? Patient-provided removable media are a part of the attack surface that may not have been considered. If the security model assumes that the media is coming from a trustworthy source, there needs to be a way to validate that trust. Relying on an imprint on the media is not much of a protection. Creating a CD with a legitimate-looking imprint from a hospital isn’t hard; and if I didn’t know what an imprint looked like, I would make one up and put an address in a state or country far enough away that it’s unlikely it would ever have been seen before by the doctor’s office staff.

Next, the attacker needs to make an appointment with a doctor who is inclined to read data off a CD. In addition to orthopedists, that probably includes many other specialties, such as oncologists and cardiologists, given an appropriate explanation of what the data is. Finally, the attacker needs to create appropriate malware. But that’s easier than a web attack against a medical application, since the victim’s machine is going to run whatever program is put on the disk, and there’s no need to find new vulnerabilities.

But that begs the question, why would someone bother? I’m not really sure, but blackmail, identity theft, or just the kicks of knowing you could, all seem like possible motivations. Then again, I doubt many of us could have predicted the varied motivations that exist for malware on the web today.

I (obviously) didn’t infect my doctor’s computers with malware, however tempting the thought may be, especially after I got the bill. But the lesson learned for me was that attack surfaces show up in the most unanticipated places.

[Postscript: Thanks to David J for pointing out several typos which have been corrected. The side effect of being a novice at using speech-to-text, thanks to the above-cited broken hand!]

Comments

  1. rjh says:

    The media probably complied with the specifications found in DICOM, http://medical.nema.org/standard.html, and IHE’s Portable Data for Imaging Interchange Profile (PDI), http://www.ihe.net/Technical_Framework/index.cfm#radiology, see Volumes 1 and 3. Media readers are generally well secured and hard to penetrate, but by no means impenetrable. One factor that helps in their defense is the strict encoding rules for DICOM content. These make it very easy to avoid malware because it is almost certainly improperly encoded, and hence rejected at a very early stage. Most media readers ignore all non-DICOM content found on the media.

    Some of your concerns are specifically addressed by DICOM and IHE. Others are potentially addressed as part of FDA regulatory approvals for medical devices. (The reader might or might not fall into the category of regulated medical device. They are on the borderline for regulation, and other details will determine this.)

    Improvements to the Security Considerations in Volume 1 for PDI are welcome.

    • RonK says:

      Given the recent fiascoes we’ve seen with SSL certificate authorities, one would imagine that infiltrating the computer systems of a manufacturer of DICOM reader software wouldn’t be unimaginable, or even very hard. And the payoff could be access to whatever market share of the medical providers that manufacturer has.

      • rjh says:

        This started with a discussion of media. If you mean penetrating the software development and release process, yes that is a potential threat. It’s a threat in all sorts of dimensions if there is covert penetration software inside medical systems software. I wouldn’t attack the media reader if I had that kind of access. I’d go after the main database servers first.

        I would expect it to be hard, not easy. The QA/QC processes needed for FDA medical devices make it hard to go undetected. You have the developers, the code review team, and the testing teams all looking at the software. It’s hard to avoid someone noticing a covert change.

        • Jeremy Epstein says:

          RJH, the systems in question aren’t medical devices which require FDA approval. Rather, it’s the doctor’s office medical records systems which could become infected, and thereby exfiltrate (or modify or destroy) patient medical information.

          • rjh says:

            CD reading devices are in the grey zone for medical device regulation. Some are regulated medical devices. Some are not. If you have a malicious attacker inside the development process of an unregulated device, then there are perhaps fewer protections. Independent code review, validation, and verification remain good engineering practice but would no longer be required by law. It will depend on the individual vendor’s choices.

            The current practices reduce the risk. Risk has not been eliminated.

        • RonK says:

          > You have the developers, the code review team, and the testing teams all
          > looking at the software. It’s hard to avoid someone noticing a covert change.

          In that case I think I’d attack the build/release system used to generate the files sent to the customer, rather than the application code itself. Or does the FDA also audit the compiler and build system? At some point there must be an interface where the FDA review process simply trusts the underlying layers, no?

    • Chuck Lauer Vose says:

      I think the assumption, because we don’t see the monitor that reads the CD, is that there’s a pdf or some equivalent on the CD. I’m pleased that there is some effort to ameliorate this attack vector. The risk seems to be medical offices that don’t use this system or are improperly set up; the reference to autorun being one possible place to inject code.

      • Jeremy Epstein says:

        @Chuck, yes, that’s what I was thinking. The medical data isn’t a PDF, but some other format (together with a reader). I don’t know if it’s a standardized format. But if the reader tool has vulnerabilities, or the systems are set up improperly (e.g., with Autorun enabled), then it’s a way to get into the medical records system.

        • rjh says:

          The format is DICOM. It’s an international standard. It’s freely available at the links I gave above.

    • Nacnud Nosmoht says:

      Hah, that’s funny. You missed the part about Autorun.inf. Sure, it SHOULD tell the doctor’s computer to run some DICOM reader, which may very well be hard to penetrate. But instead it can tell the doctor’s computer to run anything. And the doctor’s computer will say, “Yes sir!” and do what it’s asked.

  2. eas says:

    Next time, consider also the security of the software that controls the X-ray dose you receive in the CT scanner and X-ray machines.

  3. Ian says:

    @rjh: I don’t think you addressed the concern of executable code on the CD. Just because the CD is *supposed* to contain clean data doesn’t mean that it does. Like the author, I had advanced imagery performed last year. In my case, it was an MRI that discovered torn cartilage in my shoulder. I received a CD containing the imagery from the MRI as well as a bundled reader to display that image. I made a copy of the CD and gave it to my orthopedic surgeon for evaluation. I could have loaded *any* software onto that CD, to include the malware of my choice. I could have configured the CD to install a rootkit and a remote access tool, then run the media displayer, or hooked the autorun.INF to drop the malware on disc insert. I believe that implicit trust by most outside the cybersecurity arena is what Jeremy was concerned about.

    • Anonymous says:

      Ian, yes, that’s what I was thinking. Jeremy

      • rjh says:

        Yes, you could provide a modified CD. Hence the recommendations that only the DICOM-formatted data be processed and all other content be ignored. This includes explicit recommendations that executables be ignored and autorun capabilities be disabled.
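A minimal version of rjh’s “process only DICOM, ignore everything else” rule can be sketched in Python. The one fact assumed is the DICOM Part 10 file layout – a 128-byte preamble followed by the magic bytes “DICM”; the function and variable names are illustrative, and a real reader would go on to validate the full encoding, not just the signature:

```python
import os

def looks_like_dicom(path):
    """True if the file starts with the DICOM Part 10 signature:
    a 128-byte preamble followed by the magic bytes b'DICM'."""
    try:
        with open(path, "rb") as f:
            header = f.read(132)
    except OSError:
        return False
    return len(header) == 132 and header[128:132] == b"DICM"

def files_to_ingest(mount_point):
    """Walk the media and keep only plausible DICOM files;
    executables, autorun.inf, and everything else are ignored."""
    keep = []
    for root, _dirs, names in os.walk(mount_point):
        for name in names:
            path = os.path.join(root, name)
            if looks_like_dicom(path):
                keep.append(path)
    return keep
```

This only closes the easy path (executables on the disc); a malformed-but-signed DICOM file aimed at a parser bug would still get through, which is why the strict encoding checks rjh mentions matter.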

        • Nacnud Nosmoht says:

          What percentage of doctor’s offices follow the recommendations? It may take a while, but I bet it wouldn’t be long before you find one who doesn’t.

  4. Doug D says:

    You’ve got me thinking about some kind of “removable media content certification fingerprint” standard… like, a file in an extra session/track on the CD that contains a fingerprint of the ISO9660 filesystem in another track and a signature from authorized content authoring software. I think CD-ROM could be made to work flawlessly in “normal” computers, but secure computers could be designed to check for the fingerprint/signature before agreeing to mount media. (Not perfect, but better than nothing.)

    You’ve also got me wondering how you’d react to the “23andMe” service.
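Doug’s media-fingerprint idea can be sketched with standard-library pieces. A real scheme would use public-key signatures so that readers need only the authoring authority’s public key; HMAC stands in here purely to keep the sketch self-contained, and all the names are invented:

```python
import hashlib
import hmac

CHUNK = 64 * 1024

def fingerprint_media(image_path):
    """SHA-256 over the raw filesystem image (e.g. the ISO9660 track)."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(CHUNK), b""):
            h.update(chunk)
    return h.digest()

def sign_media(image_path, key):
    """The tag authorized authoring software would write to the extra track."""
    return hmac.new(key, fingerprint_media(image_path), hashlib.sha256).digest()

def verify_media(image_path, tag, key):
    """What a security-conscious reader would check before agreeing to mount."""
    expected = sign_media(image_path, key)
    return hmac.compare_digest(expected, tag)
```

As Doug says, not perfect – it moves the trust problem to key distribution – but better than trusting whatever is printed on the disc.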

  5. Brandon says:

    “But that begs the question, why would someone bother?”

    Imagine you are MI6 and want health information about a certain person. Your malware could snoop for info already there, or install a backdoor for future info. Now imagine you are the Mossad and want to feed MI6 misinformation for whatever reason. You edit information, or again install a backdoor allowing you to.

    • Dave Page says:

      Or you want to find people who are HIV+, see if any of them are famous, and blackmail them… there are lots of possible motivations for an attack.

  6. Nick P says:

    Fortunately, many common attack vectors can be stopped easily. Commenters have overcomplicated the CDROM issue. Disabling autorun and digitally signing/checking the files stops that attack vector with no modification to hardware or the application. Good format design, safe languages, parser generators, and sandbox technologies like Sandboxie can prevent or contain issues with malicious files or apps. If good design is applied, preventing con jobs is the most important next step. Most black hats I know of just con their way in.

    Nick P
    schneier.com

    • rjh says:

      The design recommendation is similar but simpler. The recommendation is to disable autorun and ignore all executables. The data format, DICOM (links above), does not permit executable content, and it is easy to verify format correctness of the content. This closes the easy paths for malware. Good design and code review make it hard to penetrate via that path.

      You are correct that con jobs, bribery, and abuse of authority are a higher risk to privacy. The public exposure records show many exposures resulting from them and none yet through the CDROM devices.

      • Nacnud Nosmoht says:

        You are relying on system administrators to follow the recommendations. Any system that relies on humans not making mistakes is certain to fail soon enough, because, simply put, humans DO make mistakes. Always have, always will.

  7. Deborah C. Peel, MD says:

    The healthcare system has huge amounts of the most sensitive personal data of all, but does not provide meaningful or comprehensive data security or privacy. 80% of hospitals have never invested in even the most basic security protections, like data encryption. They claim they don’t see the ROI.

    We are in a very dangerous environment where health data is not secure and patients have no control over who can see, use, or sell it.

    The dangers of health data being used to discriminate against people in jobs and credit, or for extortion, are very high because obtaining health data is easy. But the worst harms by far are that patients who know their health records are not private REFUSE critical diagnosis and treatment, to keep from losing jobs and reputations. The lack of privacy causes BAD outcomes.

    At the same time, the federal government is putting $29 billion into building the infrastructure for electronic health records and data exchange. Rather than testing systems and software, or requiring industry to produce better products, the govt is buying poorly designed, legacy systems. Billions in tax dollars are going to buy the equivalent of Model T Fords, not Priuses or electric cars.

    It’s shocking how few computer scientists look at health IT systems and what is being done with data without the patient’s knowledge or consent. Poorly designed electronic health records systems or intrusions/attacks can both cause deaths. Yet electronic health records (a medical device) are not required to be evaluated by the FDA.

    I would like to invite all of you to attend the 2nd International Summit on the Future of Health Privacy June 6-7 in DC at Georgetown Law Center. Registration is free to attend or watch via live-streamed video at: http://tiny.cc/27e1dw

    Ross Anderson and Latanya Sweeney are both speaking about some of the problems with health IT.

    Best,
    Deborah C. Peel, MD
    http://www.patientprivacyrights.org

    • ChumpusRex says:

      Thank you Dr Peel. You’ve said exactly what I wanted to say.

      Presently, patient data is very poorly secured on most systems and in most hospitals. This whole CD farce is a very common one, and unfortunately, the relevant technical standards in the medical industry (DICOM and IHE) don’t provide support for security protocols.

      CDs are frequently given to patients and sent between hospitals when a patient’s care needs to be transferred. There is no industry standard way of encrypting these. In the UK, the government made encryption of medical CDs a legal necessity – as a result, a lot of medical device vendors developed proprietary encryption techniques. These would usually work on individual PCs, but would rarely, if ever, work when inserted into a certified “diagnostic quality” medical imaging workstation (which is usually running a heavily restricted embedded OS – many run Linux).

      As an example of how troublesome this lack of standardised encryption can be, I recount a story told to me by a neurosurgeon. He would frequently have CDs of CT scans urgently couriered to him for advice on whether to accept a patient with a head injury from a regional hospital. These CDs were encrypted, so they could not be accessed on the CT workstations at his hospital. The regular desktop PCs were also heavily restricted, and the decrypter failed to run. While he could usually get IT to decrypt the CD, overnight this wasn’t an option. He had to bring his personal laptop to work, decrypt the CD on the laptop, burn the decrypted contents to a new CD (in the clear), and then load that CD on a hospital workstation (taking care to scrub his laptop and shred the newly burned CD).

      Increasingly, in the UK, hospitals and clinics are turning to direct electronic transfer – but again, there is no secure standard for this. Instead, it relies on VPN tunnels which have to be manually configured for each individual source-destination pair, or proprietary file transfer systems run by 3rd parties.

      Additionally, contrary to all the reassuring statements in the early replies to this thread about how trustworthy medical software is, I’m afraid that this is simply false. Medical software is generally appallingly bad in terms of stability, security and overall quality control. I’ve used many products which are riddled with bugs, including race conditions which can result in patient data being misfiled in another patient’s file. Stability is frequently poor; in one installation of a PACS (digital X-ray viewing system), the servers struggled to maintain 95% uptime for the first 12 months, until about 18 patches and updates, and removal of several key features, finally got the uptime closer to 98%.

      Additionally, much medical software is poorly engineered with a view to data validation, range checking, etc. I know of one PACS installation (quite old, but still in use) which doesn’t honor some of the image file metadata (most notably “bits per pixel”). If you load a dataset from a modern CT scanner (which produces 16 bpp images), the PACS assumes 12 bit (as “most” CT scanners use 12 bpp). The results are interesting. Rather than giving an error, the image is displayed with lots of trippy colors. Basically, the conversion from monochrome value in the image to display pixel value is done by: RGBval = monoPixVal * 0x00010101. The problem is that there is no range checking on the “monoPixVal”, leading to overflows (and therefore trippy colors, reversed contrast, and all sorts of general chaos).
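That arithmetic failure is easy to reproduce. The function below is an illustration of the described formula, not the vendor’s code: grayscale-to-RGB replication with no range check, so a sample wider than expected bleeds its high bits across channels:

```python
def mono_to_rgb(v):
    """Naive grayscale-to-RGB replication with no range check on v.
    Correct only when 0 <= v <= 255."""
    rgb = (v * 0x00010101) & 0xFFFFFF  # overflow wraps into the wrong channels
    return ((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF)
```

An in-range sample behaves: mono_to_rgb(200) gives (200, 200, 200). An out-of-range sample such as mono_to_rgb(0x1FF) gives (1, 0, 255) – channels that should be equal come out wildly different, which is exactly the “trippy colors” effect.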

      I recently dissected an electronic patient record system that is currently in use at a number of major hospitals. The system is 100% based on dynamically generated SQL statements, with a number which are not correctly escaped. The database action is done under the SYSTEM account, and the connection string is stored in clear text in the application directory. Oh, and to make things better, the user passwords are protected with a Vigenere cipher – seriously, a Vigenere cipher (modified to accept numerals), and encrypted using the username as a key. This wouldn’t be so bad, but for performance reasons, the client software caches the [Users] table on the local hard drive!!!
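To make concrete how weak that password “protection” is: a classical Vigenère cipher (sketched below over A–Z only; the product described reportedly also handled numerals, so this is a simplified reconstruction) is instantly reversible by anyone who holds the cached table, since the key is just the username sitting in the adjacent column:

```python
from itertools import cycle

A = ord("A")

def vigenere(text, key, decrypt=False):
    """Classical Vigenère over A-Z; shift each letter by the
    corresponding key letter (subtract when decrypting)."""
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(c) - A + sign * (ord(k) - A)) % 26 + A)
        for c, k in zip(text, cycle(key))
    )

# Anyone holding the cached [Users] table can do this per row:
#   vigenere(stored_password, username, decrypt=True)
```

No cryptanalysis needed – with the key stored next to the ciphertext, “decryption” is a one-liner.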

      Then there is just simple bad practice (which often comes because the systems available are bordering on unmanageable). E.g. I know one major children’s hospital in London (known by its street address) where the medical records systems are so difficult to manage and staff change so frequently that everyone just shares a generic login (with a ludicrously obvious username/password – I was able to guess it on the 3rd try). Of course, on the surface, everything is secure. There are even posters up all over the hospital which reassure parents how their children’s data are secured electronically and access restricted only to authorized personnel directly involved with their child’s care.

      Ross Anderson wrote, in the most recent edition of his book Security Engineering, in reassuring terms about a smartcard system used in the UK for securing medical records. In theory, he’s correct. In practice, the implementation was so bad that it was unusable – sharing cards was rife (because the log-out process would crash the client computer), users weren’t able to change their PIN codes, leading to frequent lock-outs and a tendency for the card administrators just to use simple PINs like 1234; business continuity was critically affected because card certificates would expire with no warning, leading to sudden withdrawal of access from surgeons and other doctors overnight or at weekends, with no proper support system to find out what the causes of failed access were, let alone anyone to correct the problem (this would often require the surgeon to make a trip to a town on the other side of the county, in person, to get the card reprogrammed).

      The quality of medical IT is a true scandal. It’s amazing that it doesn’t appear to have been targeted in a systematic way.

  8. Chris Tscharner says:

    A somewhat different issue, but Continuity of Operations is equally dismal. Every physician or dentist I’ve used (not to mention my dog’s veterinarian :-) ) has the patient records in paper folders in a flimsy wood (or maybe thin sheet-metal) cabinet. I’ll bet there is no effective digitization/backup plan in most cases. One fire and they would be in for a major hassle (plus a possible negative effect on the health status of patients).

  9. David says:

    The DICOM format is a horrible mess of a standard. It feels like several different systems were lumped together and called a standard. Although there is formal testing of DICOM readers I would strongly suspect that there are many vulnerabilities in reader libraries. There are not a lot of reader libraries used (a few are used in many locations) and so a single vulnerability would have wide applicability.

    • Steve Lodin says:

      In a past job, I had the opportunity to look at some of these standards (HL7, DICOM, etc.) and it’s not the standard I worry so much about. It’s the implementation of the standard. A similar situation existed with early Internet protocols such as telnet, ftp, dns, etc. The protocol description and format is fairly robust, but the particular client implementation is usually where the errors come in, typically the standard SDLC-type software bugs such as range checking, buffer overflows, etc. These were never designed and implemented in a security-aware development environment like you see nowadays at companies like Microsoft or Adobe. I always thought it would be very interesting to see what happens when you run a slightly smart fuzzer against some of these healthcare data exchange programs. I’ve changed employers since then, so I never explored that path.

      Finding problems, though, would be ugly. Many of these legacy medical devices do not have good/easy/effective ways of software updates.
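For what it’s worth, that fuzzing experiment doesn’t need much machinery to start. A purely dumb mutation pass over a valid sample message would look like the sketch below; protocol awareness – keeping lengths and checksums consistent so mutants get past the first parse stage – is the “slightly smart” part deliberately left out:

```python
import random

def mutate(sample, n_flips=8, seed=None):
    """Return a copy of a valid message with a few bytes XOR-flipped
    at distinct positions -- the crudest possible fuzzing step."""
    rng = random.Random(seed)
    buf = bytearray(sample)
    for i in rng.sample(range(len(buf)), n_flips):
        buf[i] ^= rng.randrange(1, 256)  # nonzero XOR guarantees a change
    return bytes(buf)

# Harness idea: feed mutate(valid_message_bytes) to the parser under
# test in a loop and watch for crashes, hangs, or memory errors.
```

Even this crude a loop has historically been enough to shake bugs out of parsers that were never fuzzed during development.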

  10. krs says:

    This is all very interesting, but when dealing with my doctors (and I have several), I am more concerned with the amount of data they are asking me to supply, supposedly for HIPAA or billing reasons – and with the other attack vectors besides CDs or thumb drives.

    Over two years ago, they wanted to take my picture to add to my file, so that they could be sure of my identity when I came to their office. I have been going to the same Doctor’s office and seeing the same Doctors since 1996!

    This year they are requesting a Debit or Credit Card number so they can immediately get the co-pay before the Insurance pays them. The alternative is to hand them a $100 bill or check, which is what I do.

    They have had wireless installed for over 3 years. I have not attempted to check their security, but their wireless-enabled Windows tablets appear to only require an 8-character password (from observation).

    This is an all-in-one office, e.g. they have their own X-Ray room and Testing Lab, and naturally all the results are in their computer system.

    I agree with all of Dr. Peel’s comments above.

  11. Adrian Gropper MD says:

    Physical CD / DVD media for medical imaging is as obsolete as it is for movies and music. It is now possible to stream radiology and pathology images directly to any modern HTML5 web browser (including iPads) without the hassle, delay and risk of physical copies. (Security aside, the biggest problem with physical copies is that they’re unreliable to read).

    DICOM is not a bad intramural vendor neutral file format. It was never meant to have security or to be shipped with viewers. If someone using a streaming viewer wants a copy, they should be able to download the original DICOM to good effect.

    This, of course all requires the imaging provider to operate a secure web portal. It’s 2012, and even healthcare should be able to manage that.

    • JT says:

      I recently saw a DICOM-based CT application that was in fact run through a web browser.

      What confuses me about the comments is that everyone assumes the CD would be put straight into a DICOM system. Yes, it will be eventually, but I’ve seen doctors take a CD from me and put it straight into the exam room PC…

      That’s the machine that probably isn’t locked down tight against this threat.

  12. antibozo says:

    I gave my doctor’s office an email address when i first started visiting them. They promptly started emailing me unencrypted lab results. I told them to delete the email address, and started having them fax me the results instead. I soon noticed from the fax headers that the faxes were being delivered via an email-to-fax gateway.

    So, basically, we’re screwed.

    Also, BTW, http://begthequestion.info/

  13. Dennis Blankenship says:

    Speaking only to motivation for malware/data mining attack on a medical staff’s computer: the first thing that came to my mind was its potential for use in civil litigation, especially where one party’s mental or physical state may be questioned, such as a worker’s compensation claim. There is a long and nefarious history of abuse of EAPs, for example, in these circumstances, long before cyber security of these data became an issue. While criminal proceedings are subject to the exclusionary rule on evidentiary matters, I do not think this has ever been applied to civil litigation. So if an attorney can access some damning medical data on a party to the suit, I believe he/she is free to use it. I am not an attorney, by the way, and I would be interested to have an actual attorney weigh in on this.

  14. Elf says:

    I work in litigation support, scanning & converting records. It’s likely that:
    1) The scans are JPGs displayed in a proprietary reader/viewer program;
    2) Nobody who works with the hardware or puts data in the medical database has any idea how to look at the actual files;
    3) The scans probably take up 1-8 MB, and the program probably takes a few more, and there’s a *lot* of empty space on that disc.

    Also, you wouldn’t need a reasonable facsimile of the hospital’s official label and logo – a fake has a good chance of being noticed by the receptionist, who sees a *lot* of those exact labels. What’s likely to work is… a blank CD with the patient’s last name written in Sharpie, and a mumble about the office being out of label/CD supplies.

    Odds are, the files from the disc could be put on a flash drive instead, and handed to the receptionist with the explanation, “they said they were out of CDs and wouldn’t get them in for another couple of hours but I had a spare thumb drive with me so they used that instead”–allowing whatever’s on the computer to be immediately copied to the flash drive. A clever malware programmer would put in pop-ups warning the user to leave the program open for an extra couple of minutes “to avoid file corruption” while the files were being copied.

    It’s possible the flash drive trick wouldn’t work; some offices have security that blocks external drive use entirely. But those are very rare – and the blank CD with Sharpie label would probably still work; security procedures implemented at the IT level are often not actually explained to the people working with the data. The assumption is that if you configure the hardware/software on the computer correctly, the operator won’t be able to cause any problems.

  15. John Moehrke says:

    I agree that there is concern with the completeness and consistency of the security (and privacy) of the operational environment. HIPAA has tried really hard to provide a framework, and does the correct thing in focusing on a Risk Assessment. There are plenty of ways to mess this up, plenty of ways to be ignorant, and plenty of ways to simply assume that there is no problem. However this is true across the board, not specific to CDROM formatted image import.

    RJH has outlined how the standards have tried to include capabilities and impart urgency to the software designers and operational environments. It should be noted that the standards organizations in healthcare are ahead of most in that there are Risk Assessment processes that have been in place for years, to make sure that each standard developed considers security (and privacy) risks. But these can only provide guidance; there is no way to enforce perfection.

    That said, I think that the scenario that you outline does have some inherent security (and privacy) built into it. I will be an optimist and presume that the software and operational environment have considered these. For example: When you hand over your CDROM, they know who you are. They knew this because you made an appointment, or better yet, were referred. A walk-in will likely cause more investigation into who you are. They have surely done some background work to make sure you have insurance or can pay upfront. Thus, if you were to infect their computer there would be a history. It is true they may not know it was your CDROM, but they know when their system was good and when it went bad, and can investigate all the patients in between.

    There is also plenty of off-the-shelf software that can help here. It is common today for anti-malware (antivirus) software to automatically scan removable media – remember, the first malware was floppy-based. Given that the registration clerk processed your CDROM without question, I optimistically assume that the clerk has done this workflow multiple times, and thus they have considered how to get the data off the CDROM – likely using the formats that RJH indicated, as DICOM-formatted CDROM is quite common today and will likely continue to be quite common for the next 5-10 years. Although exchange through the Internet is possible, there is caution in enabling this universally. I am doing what I can to make it happen.