Few people are more qualified to raise awareness of cybersecurity in health care than Eric Perakslis.
The executive director of the Harvard Medical School Center for Biomedical Informatics and the Francis A. Countway Library of Medicine, Perakslis is also an instructor in pediatrics at Boston Children’s Hospital. But his expertise extends far beyond academia.
Prior to arriving at HMS in 2013, Perakslis served as chief information officer and chief scientist (informatics) at the U.S. Food and Drug Administration. Before that he spent 13 years at Johnson & Johnson Pharmaceutical Research and Development, where he eventually became the senior vice president and CIO of R&D Information Technology and the head of informatics for the Corporate Office of Science and Technology.
With deep experience in the IT infrastructure of both industry and regulatory bodies, Perakslis worries that entities that should be coordinating their efforts are not, a missed opportunity that should alarm the health care industry. He hopes that his recent New England Journal of Medicine Perspective piece, “Cybersecurity in Health Care,” will raise further awareness.
Perakslis recently sat down to discuss his concerns, paint a few unnerving scenarios and emphasize the need for a broad discussion on the issue.
HMS: So, why should we be concerned about cybersecurity in health care?
PERAKSLIS: I think medicine and research have always had to balance benefit against risk. So if you look both at the explosion of technology that we’ve all lived through in medicine over the last 10 years and at the recent surge of things like electronic medical records, a lot of people took very quick advantage of the benefits of these technologies but are only now realizing some of the risks.
Medicine has an additional complexity in that you often don’t have time to dwell on technology security. If you’ve got a patient connected to a device, the doctor can’t go through five security screens to get to the data at the bedside. Now, if you’re running a Swiss bank, you have time; you can go have your second cup of coffee while your computer works through all the security protocols.
Banks, utilities and other industries had to think about this long before medicine did. But now there’s been this huge push of new technology into hospitals, and hospitals haven’t actually had time to think about how to lock it all down.
HMS: In your article you talk about medicine’s inherent risks. What do you mean?
PERAKSLIS: If you’re hacking into a medical system you don’t need to steal data to cause great harm—you may just have to change data. For example, if a hacker raised every inpatient’s potassium value in an electronic medical records system by one, the number of medication errors that could happen in an hour would be horrifying. You are also in an environment where human beings are directly wired to networked devices, which brings the potential risk of direct physical harms. To be clear, I don’t mean to imply that there’s nothing being done. There certainly are safeguards in place. But the maturity curve is lagging. I think the medical system is realizing that some of our new capabilities also pose new threats.
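To make that data-tampering scenario concrete: a fleet-wide edit is mechanically trivial for an attacker with database access, but it also leaves a statistical fingerprint. The sketch below is purely illustrative and not drawn from Perakslis’s article; the reference range, historical mean and sample readings are all invented. It shows the kind of plausibility check that could flag a batch of potassium results that has been uniformly shifted.

```python
# Hypothetical plausibility check on a batch of serum potassium results.
# The reference range, historical mean and sample data are invented for
# illustration only.
from statistics import mean

NORMAL_K_RANGE = (3.5, 5.0)  # typical serum potassium, mmol/L
EXPECTED_MEAN = 4.2          # assumed historical mean for this unit

def check_batch(values, tolerance=0.5):
    """Flag a batch whose distribution looks physiologically implausible."""
    out_of_range = [v for v in values
                    if not NORMAL_K_RANGE[0] <= v <= NORMAL_K_RANGE[1]]
    mean_shifted = abs(mean(values) - EXPECTED_MEAN) > tolerance
    if mean_shifted or len(out_of_range) > len(values) // 4:
        return "ALERT: implausible batch; verify against source instruments"
    return "ok"

readings = [3.9, 4.4, 4.1, 4.7, 3.6, 4.3]
print(check_batch(readings))                     # ok
print(check_batch([v + 1.0 for v in readings]))  # flags the tampered batch
```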
Here’s an illustration to give some perspective. Many years ago, after my freshman year of college, I had a student internship at a local hospital. That was in 1985, right when Hurricane Gloria came through Boston. While the hurricane was passing through, we had a patient in the emergency room who needed a CT scan. We couldn’t do it, because we found that every single part of the radiology suite was on emergency power except the CT scanner. Of course, all the blueprints, wiring diagrams and other materials indicated that the CT was on emergency power, but it didn’t work. Fortunately, a solution was found. I use that example to show that, while proper planning may have occurred, diligent testing had not.
I’ve spoken to many CIOs and technology leaders and asked, “Have you guys ever looked in a wiring closet? Have you ever walked by when the phone guy was working on your wires and seen what’s really in there? You’ve got to test this stuff, and you don’t want the test to be during an emergency when your only option is heroics.”
HMS: You write about your concerns over what you call “the Internet of Things.” Can you explain?
PERAKSLIS: What I’m referring to are devices. Everybody’s walking around with tablets and cell phones and everything is networked. Things are connected to printers and often to many different public networks within a single day. Do we have a systematic way to think about what the biggest threats and risks are in this environment? And, are our resources focused toward them?
There’s a huge gap between compliance and security. Take HIPAA, for example. The HIPAA Security Rule and Privacy Rule are obviously intended to ensure privacy and security. But telling me something is HIPAA compliant doesn’t mean I can’t hack it. The people who best understand compliance do not necessarily understand technology, and many technologists do not understand compliance. There is a big gap there. In many institutions, these functions are isolated from each other, and each is assuming the other guy’s got it taken care of. If I’m an adversary, that’s exactly what I want. I don’t want the Army talking to the Navy, so to speak. There’s a really big domain gap, and it exists in many health care delivery institutions.
The medical profession is going to have to think about quantifying this risk in the same way it quantifies things like hospital-acquired infections or medication delivery errors, or other harms that could result either from error or from bad intention.
HMS: Some say we’ve jumped the gun on EMRs, while others say we’ve been dragging our heels for too long on this. What is your position?
PERAKSLIS: I tend to be in the middle on that. But you know, what’s done is done. The issue is, do we understand what we did? Are we going back over it? Do we know what mistakes we might have made?
For example, one hospital may have used their EMR project to really modernize and consolidate their technology architecture and infrastructure. Another might have just dropped it on top of everything else and made an even more complex mess.
But what is especially concerning is that the FDA and other agencies are not yet treating cyberthreats as a high priority. They’re not getting involved with things like cyberprotection within medical device safety.
And that goes back to the Internet of Things problem we just mentioned. Think about your cell phone. You often get alerted to a patch you need to download and the reason given is “security.” Most medical devices don’t do that, and their manufacturers aren’t responsible. So, when you have a new medical device, like an EKG monitor or an infusion pump, no one is responsible for maintaining virus patching and other things on those systems.
So, over time, they become a mess. In fact, a lot of the manufacturers would say that if you try to apply patches to those machines, you’re voiding the warranty. It’s their machine, and you don’t know what you’re putting on it. And they’re not obligated by regulatory statute to do this work. So it’s a huge regulatory hole. Nobody’s watching it.
HMS: So who should be in charge of getting this done?
PERAKSLIS: The Institute of Medicine is very good at influencing regulatory pathways. But I think people have to get together. Also, hospitals themselves have to heed this call, because right now, they’re the ones getting hacked.
Part of the problem is a genuine lack of available expertise in this field. Experienced, qualified information security professionals are in high demand, and those who truly understand health care delivery are almost nonexistent. That’s one of the reasons I’m starting a program on risk-based analytics and health care here in my lab at the Center for Biomedical Informatics. We need a pipeline of talent, people who really understand health care, informatics, IT and security. You know, because someone is going to have to fix all these problems.
HMS: The challenge here seems overwhelmingly daunting.
PERAKSLIS: It’s easy to feel ineffectual. But what you’ve got to start doing is applying intention and effort. The hospitals have to do this. And this is why I talk about risk and threat-based analytics. What are you most scared of? Can you quantify and qualify the risk? Are you proportionally resourcing these efforts accordingly?
If you’re most scared of interruption of services, or if you’re most scared of someone tampering with data, are you budgeting for that? You’re not going to be able to address all your issues at once. But are you proportionally resourcing? Is your security posture going to get stronger or weaker based on what you do within every IT project?
You’ve got to be approaching it in a way in which you know you’re consciously getting better, even if you can’t fix it all at once. Most important is to do risk management, not risk avoidance.
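As an illustration of what “proportional resourcing” could look like in practice, here is a minimal sketch of threat- and risk-based budgeting. Everything in it is a hypothetical placeholder rather than real figures or a method from the article: each threat is assigned an expected annual loss (likelihood times impact), and its share of total risk is compared with its share of the security budget.

```python
# Minimal sketch of risk-based resource allocation. All threats, likelihoods,
# impacts and budgets below are hypothetical placeholders for illustration.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: float  # estimated probability of an incident per year
    impact: float      # estimated cost of one incident, in dollars
    budget: float      # current annual spend mitigating this threat

threats = [
    Threat("service interruption", likelihood=0.30, impact=2_000_000, budget=150_000),
    Threat("data tampering",       likelihood=0.05, impact=5_000_000, budget=40_000),
    Threat("record theft",         likelihood=0.20, impact=1_500_000, budget=200_000),
]

total_risk = sum(t.likelihood * t.impact for t in threats)
total_budget = sum(t.budget for t in threats)

for t in threats:
    risk = t.likelihood * t.impact  # expected annual loss
    risk_share = risk / total_risk
    budget_share = t.budget / total_budget
    flag = "under-resourced" if budget_share < risk_share else "ok"
    print(f"{t.name:22s} expected loss ${risk:>9,.0f}  "
          f"risk {risk_share:5.1%} vs budget {budget_share:5.1%}  {flag}")
```

Even a toy model like this forces the questions Perakslis is raising: what are you most scared of, and is your spending proportional to it?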
HMS: What should a public conversation about this look like? Who should come together?
PERAKSLIS: Hospital administrators, along with their technology and risk management experts and regulators, need to sit down and have data-driven dialogue. By regulators I mean not only the FDA but also the Office of the National Coordinator for Health Information Technology and the Federal Communications Commission.
HMS: Ultimately, how much of what threatens us here is proactively malicious?
PERAKSLIS: Today, statistically, only about a third of incidents are caused by a bad guy. Another third stem from technology errors, and another third from human error. I used to see this all the time when I worked in industry. A nurse at some clinical center, late at night, is about to email a report to a doctor. She types the first few letters of his last name, hits send and inadvertently sends a whole batch of patient records to someone else’s email address. She didn’t mean to do it. It was an accident. But that type of thing happens a lot. And that data might then allow a hacker to come in and get into everything else.
Again, the point isn’t to terrify, because terrifying people usually doesn’t help them make the right decisions. We want to raise awareness, so that people know they need to act on it. Fear is not a good motivator for sound decision making.
HMS: History has demonstrated that.
PERAKSLIS: You have to cross the awareness threshold to spawn credible action. But you don’t want to spawn overreaction. What you want is to engage the right people in the conversation, inspire the right level of awareness. And understand that you’ve got to intersect business, technology and compliance. Because right now, these folks often aren’t talking. That makes it really difficult to grasp how “threat” relates to “risk.” I can tell a hospital what threats are out there, but they need to figure out their risk based on their environment.
On any given day, what should you be worried about if you’re doing risk management? It’s hard to know unless you have studied it very carefully.