Disaster Preparedness: Protecting Data and Maintaining Communications

Transcript

Heather Dorset:  …disaster preparedness protecting data and maintaining communication. I’m Heather Dorset, Director of Marketing for PerfectServe. I will be your moderator today. Thank you for joining us. Before we get started, let’s review the webinar platform.

In the middle of your browser, you will see a box containing today’s slides. They will advance automatically throughout the presentation. To the right of the slide, you’ll see a Q&A box to submit a question at any time. Your questions will be addressed during the Q&A portion at the end of the presentation.

Go ahead and put questions in there. We’ll collect them throughout the presentation and then address them at the end. Underneath the Q&A box, you’ll see a Twitter feed with highlights from today’s presentation. Feel free to tweet any commentary you find interesting and use the hashtag #PSWebcast.

Finally, in the lower left‑hand corner, you’ll find a resource list containing a PDF of today’s slides, and a few other helpful resources.

In addition, today’s webinar is being recorded for our friends and colleagues who are unable to join us. We’ll have the recording available a few days after the webinar. As an audience member, you are in listen‑only mode. Please do make use of that Q&A box if you need to communicate with our speaker or with me today.

Finally, at the end of the webinar today, we will ask that you complete a short survey and tell us what you thought of today’s event. For those of you who may not know PerfectServe, we provide healthcare’s only comprehensive and secure communications and collaboration platform, used in more than 200 hospitals, and 25,000 practices and post‑acute providers.

PerfectServe is an integrated system of secure, cloud‑based communication applications. What does that mean? We are a communication platform. We provide secure messaging, secure calls, and a way for clinicians and other care team members to communicate with each other in a secure environment.

Not only do we do that, but we route communications to the appropriate care team member, who is on call or providing care for any given patient at any given time. You can be assured that your communication, whether it’s a call or a text, is going to the right person that you need to reach. You’re not wasting time trying to track that person down.

We connect care team members across settings, whether it’s inpatient or outpatient. Like I said, we have a technology called Dynamic Intelligent Routing that automatically identifies and connects you with the right care team member you need at that moment. We support all communication devices and modalities, from smartphones to pagers.

Ultimately, we’re really focused on driving that last‑mile communication workflow with the EMR and other point solutions, including nurse call, notifications, scheduling, and more. We integrate with all the technologies that you are used to already in your organization, which makes it much easier for you to do what you need to do when you’re collaborating on patient care.

This webinar series is a part of our ongoing promise to drive meaningful improvement in care delivery processes. Thanks for joining us. Now, it is my pleasure to introduce Drex Deford as today’s speaker.

Drex comes to us with a long career as a healthcare executive including his experience as co‑founder and CEO of Next Wave Connect, EVP and CIO at Steward Healthcare in Boston, SVP and CIO at Seattle Children’s Health System and Research Institute, and Corporate VP and CIO at Scripps Health in San Diego.

Prior to that, Drex spent 20 years in the US Air Force where he served as a regional CIO, a medical center CIO, and Chief Technology Officer for the US Air Force Health System’s World‑Wide Operations. Drex, after all that, I will pass it over to you.

Drex Deford:  Thanks very much, Heather. I appreciate it. Hello to everyone on the webinar. Thanks for having me here. I was super excited when the team from PerfectServe asked me to join them for a webinar on Disaster Preparedness. It’s one of my favorite topics and it’s one of the things that I definitely have a passion for way back since my days in the Air Force.

I’ll talk about that more here in just a minute, but let me take a minute and introduce myself. You heard Heather talk a lot about me already. I’ll try to keep this brief. I’m a farm kid from Indiana. I didn’t have money to go to college, so I enlisted in the military. I had a good boss right out of the gate who signed the paperwork for me to go to college at night as long as I would work the overnight shift.

For three and a half years in the Air Force as an enlisted guy, I worked on graveyard shifts. I went to school, I finished my bachelors, and the Air Force offered me a commission as an officer, as a hospital administrator, which was really a good mission inside of an already good mission.

As an officer and a hospital administrator, I was lucky enough to do lots of different things.

I did supply chain, finance, disaster preparedness, readiness ‑‑ I’ll talk about that more in a minute ‑‑ and health‑care information technology, obviously. I was a CIO at a small hospital, then the Deputy CIO at the Air Force School of Health‑Care Sciences. I took a role as a regional CIO, then onto one of our large medical centers.

Finally, as Heather talked about, I went to the US Air Force Surgeon General’s Office as the Chief Technology Officer for Air Force Health’s worldwide operations. Before I knew it, 20 years and 21 days had flown by. Finally, this time I meant it: I was really going to leave the Air Force, so I retired. I went to Scripps Health as their CIO. I really loved it ‑‑ a great place to be.

I’m actually in San Diego today as I do this webinar. Then I was recruited up to the CIO job at Seattle Children’s. I’d always wanted to do pediatrics. I’d always wanted to do an academic medical center and a research institute, and I was able to do all that there. They were also a Toyota Lean production operation. I’d been a Lean practitioner for much of my career.

I’d lived in Japan at that point in my life for three and a half years, so I’d seen Lean work in a lot of different facilities. It was another big reason to go to Children’s. Then I was recruited to Boston to Steward Health Care, one of the original pioneer ACOs and for‑profit, venture‑capital‑backed health‑care systems. It was for me a whole new adventure.

New business and clinical models to learn, and it was great working at one of the country’s biggest value‑based, risk‑based health‑care leaders. I left Steward to become CEO of a startup called Next Wave Connect, the health‑care social‑collaboration platform. I learned a ton there. I got an informal PhD in running a startup.

Then I realized maybe I was at a point in my life where I needed to re‑balance and think less about climbing and more about work‑life balance. I started my own practice three years ago. I’m truly an independent consultant. I don’t have any employees. It turns out that’s a pretty awesome thing.

I have a ton of great clients, health systems, vendor partners, startups, and VC firms. I do a bunch of other stuff, too. In full disclosure, here I’m on the Board of Directors at CynergisTek, which is a best‑in‑class health‑care security consulting firm based in Austin, Texas. Along the way, I served on the CHIME board, on the HIMSS board, and on the HIMSS state boards.

That’s a little bit about me. That little overview of my career might give you some idea about how I’ve developed this passion for disaster preparedness. In fact, after being invited by PerfectServe to do this webinar, I really started thinking about how and where I got this readiness, disaster preparedness, incident response ‑‑ whatever you want to call it ‑‑ branded into my brain.

As best as I could self‑diagnose, it went back to my time as a young Air Force officer deployed to Desert Shield, Desert Storm. If you’ll bear with me for the balance of the presentation, I’ll try to help you move into a position to think more about disaster preparedness and develop that mindset, too.

It ultimately, to me, helps you succeed not only in health care, but I think it helps you with a lot of other things in business and life ‑‑ the whole disaster‑preparedness mindset. A little more than 27 years ago, the air war started in Iraq during the first Gulf War, affectionately known to all of us who were there as Desert Storm.

A lot of us had been there for several months. Prior to the start of the air war, I was a young second lieutenant with the 23rd Tactical Hospital, which was an air‑transportable hospital. Think of it like MASH. You’ve seen the TV show, right? It’s like an Air Force version of MASH only the air‑transportable hospitals were more modern, and we didn’t have a Corporal Klinger. [laughs]

Iraq invaded Kuwait in August. I was the readiness officer, disaster‑preparedness officer, for the 23rd Med Group out of Alexandria, Louisiana. The morning after the invasion, being the curious little lieutenant I was, I went over to the base war‑planner office. I pulled out all the classified documents about who supports a possible air war in Southwest Asia.

Lo and behold, that was one of the spots that our hospital was asked to support. Later that morning, I talked to the hospital commanders and the other senior officers in our stand‑up meeting. I mentioned this fact, which got a nice chuckle from everybody in the room, and a strong reassurance from the senior officers in the room that nobody was going to Southwest Asia.

What happens two weeks later? I’m loading a C‑141 cargo aircraft with 40,000 pounds of cargo ‑‑ enough to get the basic transportable hospital up and running. I found myself as part of an advance on‑site team at King Khalid International Airport in Saudi Arabia.

Now, King Khalid International Airport, KKIA, would turn out to be one of the biggest medical hubs for everyone from the US Army to the French and the British and every other coalition country that was involved in the war. We were there pretty fast. We had the initial hospital up and running within 24 hours.

Over the course of the next three days, we built up the entire hospital and we were ready for war, which didn’t come for four months. Let me pause here and make my point. That’s Bob Hope, by the way. If you don’t know who he is, you should Google him.

We all loved Bob, by the way, because when we saw him during his Desert Storm show, he reminded us that everyone back home was rooting for us, and that if we hung in there, the war would be over and our lives would return to something more normal in short order.

That’s really the thing that we all long for during any kind of disaster: a return to normalcy. There’s a little Bob Hope in all of us, longing for the good old days, hoping that we can get past this disaster as soon as possible and return to our normal lives and our normal work, and back to everything else that passes for normal.

Before a disaster, we tend to spend a lot of time worrying about what might happen. We sometimes pacify ourselves by saying things like, “Well, it is what it is. If it happens, there’s not much I can do about it.”

That’s the trick to this disaster preparedness stuff. You actually can do something about it. For me, it always starts with thinking about risk.

A good plan for success involves considering all the bad things that might happen and making some estimates about how likely a given scenario is. Then you prioritize the most likely bad things as the activities you want to spend the most time training for in your disaster preparedness exercises.

If you do that, you’re measuring and prioritizing risk. That’s another key to your disaster preparedness planning: preparing for the most likely unexpected events that could negatively impact you, your healthcare business, and its resources.
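As a rough illustration of the kind of prioritization Drex is describing, here is a minimal sketch in Python. The scenario names and the 1‑to‑5 likelihood and impact scores are invented for illustration, not anything presented in the webinar.

```python
# Minimal sketch: rank disaster scenarios by likelihood x impact.
# Scenario names and 1-5 scores are hypothetical placeholders.
scenarios = [
    {"name": "ransomware outage",      "likelihood": 4, "impact": 5},
    {"name": "regional power failure", "likelihood": 3, "impact": 4},
    {"name": "earthquake",             "likelihood": 2, "impact": 5},
    {"name": "unexpected snowfall",    "likelihood": 3, "impact": 2},
]

for s in scenarios:
    s["risk_score"] = s["likelihood"] * s["impact"]

# Highest score first: these are the scenarios to spend the most exercise time on.
for s in sorted(scenarios, key=lambda s: s["risk_score"], reverse=True):
    print(f"{s['name']:<24} likelihood={s['likelihood']} impact={s['impact']} score={s['risk_score']}")
```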

Now, it’s good to remember, too, that healthcare has for years and years really been concerned about dealing with disasters and mass casualties. We’re normally part of local and state disaster response systems. For health systems and clinics, it’s part of our Joint Commission requirements, and for others, it’s generally speaking part of a community obligation.

Then, here’s where I bring a little bias to all of this after years as a CIO. We can oftentimes talk about all kinds of disasters: hurricanes, and floods, and earthquakes, and plane crashes.

I think a lot of senior healthcare executives have yet to internalize one specific kind of disaster. For most of us, the disaster we’re most likely to have deals with an information systems outage that may very well be tied to a cyber‑security attack. Don’t get me wrong.

I think CEOs and other execs understand that they need to prepare and train for disasters. I think they worry a lot about a security breach like a ransomware attack. I don’t think most healthcare executives have completely put those two things together yet. They know there’s risk, but they’re not training to deal with that risk. At least, not in a big way.

They’re mostly spending their time and their resources hoping to avoid a problem, and that’s not a bad thing. Preparing for risks absolutely includes a lot of balancing work, though.

On one hand, we’re preparing for the worst‑case scenario with disaster training and exercises, so we’ll be ready to deal with the problem when it happens.

On the other hand, we’re simultaneously investing in the tools and people you need to try to keep the problem from happening altogether ‑‑ decreasing the chances of it happening by expending those resources to decrease the risk.

For the most part, when it comes to cyber‑security solutions in particular, I think senior healthcare executives still have a lot of work to do in this category. There was something released on cyber‑risk management from a risk management firm, Marsh, just a couple of days ago.

It said that 90 percent of executives are confident in their cyber‑security risk response, but only 30 percent of them said they at least have a plan. That’s what’s most worrisome to me: they’re not regularly running full‑scale business continuity exercises under information services outage conditions.

I worry they’ve not spent a lot of time thinking about how to operate for protracted periods of time without significant parts of their information technology infrastructure being available. Like everyone else, they’ve become so used to having all those information systems up and running well. We all curse them from time to time.

Let’s face it, information technology is a critical component of how we care for patients and families now. For those of you in IT, you’ve become so good at running your information systems that when something breaks down, health systems limp along for a few hours, and then they start diverting patients. Then they usually wind up wishing they had a better plan.

Now it’s not every health system, of course. I don’t need to paint with an ugly, broad brush. But when I talk to most healthcare execs about disaster preparedness plans, we rarely get past a few questions before they say something like, “And then we just have to figure it out. We have to be agile.”

I’m a big fan of agility, believe me. I’ll go on endlessly about flexibility and adaptability as keys to making it through any kind of disaster relatively unscathed. Agility, to me, really comes from planning and practicing ‑‑ making mistakes, being honest and critical about those mistakes, building a better process, being more prepared, and running the exercise again.

Ultimately, you’re more agile because you practiced. That really is true of lots of different things, including, as I take this return trip back to Desert Storm, the 23rd Tactical Hospital. Most of us arrived in September 1990.

Of course, before deploying, we regularly practiced putting up our TEMPER tents and setting up the hospital. We practiced war‑time scenarios, including wearing this kind of MOPP gear here, this chemical gear, for long periods of time under simulated chemical attack.

We’d done all of that back home in Louisiana, so we were able to set up the initial hospital pretty quickly and feel pretty comfortable that we could protect ourselves should something happen, like a chemical attack.

But we had not operated this hospital or done any of this kind of work in a desert on the other side of the planet. While we had some agility, thanks to our exercises and practice and planning, we really needed to sharpen our skills and we had to customize to our current situation.

The beauty of the pause between September and January, when the war started, was that we had time to prepare. We didn’t just sit around and worry about what might happen. We ran a lot of exercises. We had a lot of different scenarios.

We changed the volume and the severity of patients. We did math around when we thought certain kinds of supplies might run out. We worked closely with the other coalition hospitals to role‑play personnel exchanges. What happens if we have patient overflow and we need to shift patients to them or receive patients from them?

We practiced air evac‑ing patients out of the theater and off to Germany. We even did things like work with the local Saudi hospitals on transporting or receiving patients should there be an attack on Riyadh or other nearby cities.

In many of these scenarios, we had to involve outside organizations, like the transportation squadron and the security forces team. We did worst‑case scenarios, best‑case scenarios. We got pretty good at it. We were really as prepared as we could be, and that made us agile and competent and calm when things got ugly, which they eventually did.

For everyone involved and for you, those skills ‑‑ agility, confidence, and calm ‑‑ turn out to be critical when you’re dealing with a crisis, when a disaster happens. For me, even if you’ve been practicing and preparing when a disaster happens, when something unexpected happens in a big way, your heart will pump fast and your adrenaline will dump.

Sometimes even, you feel like you want to barf. Many of you have been there. If you’ve done the exercises, if you’ve done the practice, then your training kicks in and your nerves calm down.

Because it’s not all new, you’ve seen a situation something like this before, you know how to begin to deal with it. You’re cool under pressure. You have perspective on the situation.

Let’s talk about some ideas about how you help coach the executives and all your teammates into a scenario where agility, confidence, and calm rule the day. We want everybody to be cool under pressure.

Readiness, as I said, is the key to the operation. You may call it incident response or disaster preparedness or anything else you like. How do you get to the state of readiness?

Here are some tips, not necessarily in this order, except the first one. I’m probably going to slip and talk about this in the context of an information security disaster plan from time to time, because being ready for that scenario ultimately can prepare you for a lot of other disaster types.

Here we go. Number one, start now. The best time to start an information security disaster preparedness program that’s integrated into your hospital’s disaster preparedness program was five years ago.

The second‑best time is right now. You have to have an overall plan, so start building it. What’s the most important thing to your healthcare system or to your company if you’re a vendor?

We’ve certainly seen it with Nuance and with Allscripts recently ‑‑ cloud‑based vendors having their own issues. If your health system or your company can only keep one server up and operating, what is it?

These are the kinds of things you have to think through. What’s the first service you would abandon in a disaster scenario? If you’re thinking the answer to a lot of these questions is “it depends,” that’s exactly right.

You need to start creating as many scenarios as possible to do the planning and the exercises around. Here’s a tip, too. If you’re a hospital, if you’re a health system, there is a disaster preparedness person in your organization already.

Do you know who they are? Have you looked at their plans? Do you know where their office is? What’s the most likely disaster to happen in their opinion? Expand your mind and their mind on this question.

If you live in Seattle, where I do, you might say an earthquake. It might actually be an unexpected snowfall, or a massive power failure, or a catastrophic network failure, or ransomware.

All of those are legitimate. Supporting the mission while coming back from those disasters could require substantially similar work, but you have to make the security breach incident one of the scenarios you’re working on aggressively, especially these days.

Like I said in the beginning, most of us agree it’s not if, but when this kind of an information security incident will occur at your healthcare organizations. We are really big targets.

Enough on that. Number two, you don’t have to do this all by yourself. There’s lots of smart people out there who can help you. Many of you are lucky enough to have an incident response analyst or maybe a disaster response business continuity analyst.

This is one of those things that I’ve learned over time. There are health systems who are way ahead of you and are probably very willing to share their plans and their scenarios. They might even offer up a site visit, so take advantage of that.

The government has tons of good resources online. Just take advantage of those by doing a bit of googling. There’s a lot of really interesting stuff ‑‑ lots of really interesting plans and scenarios and checklists online that you can find easily.

Is there a military base nearby? I always loved it when civilians asked if they could come and be observers during one of our exercises. Ask. If you don’t ask, the answer is automatically no. Ask and see if you can be a part of those.

It may also be smart to engage the services [inaudible 24:34] of an information systems security consulting firm to help you build out plans and exercise them.

As you build this network of smart readiness disaster preparedness people, they’ll introduce you to other smart readiness disaster preparedness people. It’s like we’re a cult, a bunch of disaster preparedness nerds, and we embrace it.

Many of these folks spent a lot of time thinking about the information security scenarios and plans and checklists. They have a lot of this stuff built, so borrow and steal and learn from those plans.

I have a saying that I’m sure I got from someone else at some point in the past that plagiarism is the most sincere form of flattery. Don’t think you have to invent everything totally on your own.

Political buy‑in ‑‑ this is a big one, too, and can definitely be a challenge. Your board or your CEO might view this, especially the information security version of this, as an IT security department problem, but it’s not.

Again, you can’t do this all by yourself. You have to get your boss involved and you have to get your boss’ boss involved. You have to convince them to invite your board of trustees to be involved.

This whole section might really be the hardest part, helping the executive team and the board understand that this information security disaster isn’t something that might happen. It’s something that will happen.

That can be tough for them to swallow. What will happen, in my experience, is that they will want to hand over money and FTEs, and tell you that your job is to never ever, ever, ever, ever let this happen. You have to push back on that.

Now, of course, my advice would be to take all the money and take all the FTEs, but help them understand ‑‑ I use this analogy all the time ‑‑ that security is like a picket fence. You can make the pickets higher and you can move them closer together, but it’s never going to be airtight. There’s never going to be an airtight seal on that fence.

Helping them to understand that they have to be bought into this, not only from a prevention side but from a how‑do‑we‑deal‑with‑it when‑it‑happens side is a big part of disaster preparedness.

Assign roles internally. Look at your information services team internally. Hopefully, like I said before, you already have an incident response person or analyst. You probably have an incident response process.

If you have those things, you’re already a little bit ahead of the game. You have a disaster recovery plan. Have you worked on that? Leverage that as part of this. You probably already have an operation center.

Unfortunately, you probably already have some experience with downtimes. What do you do in those situations? These are the same types of skills ‑‑ maybe you ad‑lib a lot of that today.

If you actually wrote it down and documented it, you would have a good start on a plan. Who inside the department has what role when it comes to networks, and clinical apps, and voice, and business apps, and interface engines, and all that?

Here’s a good one. Who’s in charge when the number one person that you’ve just identified as being in charge of networks or clinical apps is not there? This is another one that I think everyone struggles with a bit.
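One simple way to capture those internal role assignments, including who steps in when the primary owner is out, is a plain lookup table. This is only an illustrative sketch; the names and role areas below are hypothetical placeholders, not anything from the webinar or from PerfectServe.

```python
# Sketch of a role map with a primary and an alternate for each area.
# Names and role areas are hypothetical placeholders.
roles = {
    "network":          {"primary": "A. Rivera", "alternate": "J. Chen"},
    "clinical_apps":    {"primary": "M. Patel",  "alternate": "S. Okafor"},
    "interface_engine": {"primary": "L. Gomez",  "alternate": "A. Rivera"},
    "voice":            {"primary": "K. Nguyen", "alternate": "M. Patel"},
}

def who_is_in_charge(role, unavailable):
    """Return the responsible person for a role, falling back to the alternate."""
    entry = roles[role]
    if entry["primary"] not in unavailable:
        return entry["primary"]
    if entry["alternate"] not in unavailable:
        return entry["alternate"]
    return "ESCALATE: no assigned owner is available"

# Example: the primary network person is out when the incident starts.
print(who_is_in_charge("network", unavailable={"A. Rivera"}))  # -> J. Chen
```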

Is biomedical equipment part of the IS department, or are they separate? How do you work them into the team? The same thing really goes for facilities, because a lot of facilities gear now is really information‑technology‑like.

You have to assign those roles internal to IS, but you can see that, in fact, I’m reaching out to the external roles, too. Like I said earlier, this isn’t an information services department responsibility alone.

Lab, and radiology, and pharmacy, and nursing, and the physicians, and MarCom all have a role to play when a disaster unfolds. Does your CFO have a Bitcoin account? Do you know your local FBI cybersecurity contact?

Time will be urgent when this stuff happens, so making it up as you go along has unintended consequences. You won’t have time for that in the middle of a disaster, so bring them all into the planning. Bring them all into the exercise.

Hopefully, your hospital already has some kind of a disaster preparedness program, and your hospital is part of those exercises when there are big community exercises. I wonder, have they done an exercise where information security is the focus?

From there, start building plans and checklists. There are things that you can figure out during these exercises that you’ll do over and over again, regardless of the scenario like contacting other agencies, or shutting down equipment, or calling to confirm a fuel delivery, or contacting your vendor partners and service providers for support.

You’re going to want to build books and checklists so that you don’t have to keep all of this in your head or you don’t have to go find that file or that scrap of paper.

This is another thing, too. Create binders with instructions and indexes that you can easily access and review. Unfortunately, I’m going to tell you, I think in many cases, you have to put those things on paper.

If you have an outage, you may not have access to all those electrons. Your SharePoint site may be down, your Google Drive might be down, and you might not be able to get to those. Make sure that you have some printed material to work from.

Then exercise. Just like in military missions from my past, the only way you get more fit to fight is to exercise. You build your scenarios. You run them.

Start easy. Start with tabletop exercises. Then expand to something that involves more of the whole health system, the whole organization, or the whole company over time.

Work on developing scenarios that focus on weak parts of the previous exercise. Have a designated exercise observer throw a wrench into the plan by adding an unanticipated problem during the course of the exercise.

Here is another tip. Do your outage exercises at different times of the day. Trust me, if you’re at a hospital, the overnight shift is already pretty good at downtimes, because that’s when you run all of your planned downtimes today.

I know because I’ve spent a lot of time with the overnight shift, watching what they do and figuring out how to document and steal a lot of the work that they do, because they’ve been able to figure it out.

You don’t want to see some people freak out during unscheduled downtime exercises during the day. Remember that political buy‑in line on the last slide? That’s how you know you’ve really got top cover for something like this. Remember, you can’t get stronger if you don’t exercise.

There’s one other thing, too. If you have a break‑glass agreement with a security vendor or some of your other vendors or agencies, make sure that they’re read into your exercise, too.

One thing before I move on. I did not tell you to go run a disaster preparedness exercise during the day without making sure everybody who needs to know knows about that. That will definitely get you into trouble. It’s definitely the kind of thing that you should aspire to work up to.

With those plans, review, edit, and update. Update the information in those plans and checklists regularly. People move. Phone numbers change. Emails change.

That update process, put it in a calendar. Put it in your calendar so that you know that you’re supposed to go in and pull those books and look at that information. Make sure everything is updated.

In fact, you can go so far as to lay some of this onto your compliance folks and say, “We want you to come and check us from time to time, to make sure that you’re keeping us honest about keeping all this information tied to our disaster response updated.”

This is really one of those things that is tied to disaster preparedness. As much as anything else, it’s tied to good operations. This for me is general order one every place that I go.

If you have a network and application portfolio that’s grown up organically over the years, and you haven’t put forth the resources to streamline, de‑duplicate, un‑complicate, understand, and simplify your architecture, then not only are you making operations harder than they need to be, you’re also creating a situation where doing good security is painfully difficult.

If you don’t have a clear view of how your operations really work, north, south, east, and west, if you’re operating them blind, you’ve got trouble. I mean, how does your data center look? Is it well organized? Is everything labeled?

When was the last time you toured all the communications closets? Are the cables cleanly routed? Do they look like a bowl of spaghetti? Do you have the right tools to see into your infrastructure and understand what’s really happening?

These fundamental operations, if you do them well, make security easier. They make outages less likely, even the ones that aren’t a ransomware attack ‑‑ just one of those things that happens.

Simplify, or the disasters you’ll experience are much more likely to be self‑inflicted. Another way to say that is that simple and clean architectures are easier and less expensive to both operate and secure.

This cat doesn’t have anything to do with this presentation. I just wanted to see if you are still paying attention. Not my fault if you just spit soda out onto your keyboard. It’s good. I’m glad you’re keeping up. That’s Manny, the selfie‑taking cat on Instagram, by the way, if you’re interested.

The last thing I would say is communicate. Communicate, communicate, communicate. Have a plan for all kinds of communications.

Being transparent during a disaster helps everyone stay cool under pressure, because they’re not sitting around wondering what is happening. They actually know what’s happening. They know when the next update is coming. You’ve got to have those kinds of schedules.

How do you tell everyone that you’re having an outage if the network is down and you can’t send email? What if the cellphones were out?

Do you have a recall roster with all of your employees’ names, addresses, and alternate phone numbers? That’s a military‑type recall roster, where people can drive to other people’s houses and tell them, “We have a disaster. We have a problem. I need you to come in.”
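As a sketch of what keeping that roster current and printable might look like, you could store it in a simple structure and regularly export a paper copy, flagging entries that haven’t been verified recently. The contacts, dates, and file name below are hypothetical placeholders, not anything from the webinar.

```python
# Sketch: export a paper-friendly recall roster and flag stale entries.
# Names, addresses, phone numbers, and dates are hypothetical placeholders.
from datetime import date

roster = [
    {"name": "A. Rivera", "address": "123 Elm St",  "alt_phone": "555-0101", "last_verified": date(2017, 6, 1)},
    {"name": "J. Chen",   "address": "456 Oak Ave", "alt_phone": "555-0102", "last_verified": date(2018, 1, 15)},
]

def export_printable(roster, path="recall_roster.txt", stale_after_days=180):
    """Write a printable roster; mark contacts not verified within the window."""
    today = date.today()
    with open(path, "w") as f:
        for person in roster:
            age_days = (today - person["last_verified"]).days
            flag = "  ** VERIFY **" if age_days > stale_after_days else ""
            f.write(f"{person['name']:<12} {person['address']:<14} {person['alt_phone']}{flag}\n")

export_printable(roster)
```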

Do you have an IS department private Twitter account? This is actually something that I saw at an organization the other day.

It was one of those things where it was private. You can only get in if you’re a member of the department. People knew that if they couldn’t communicate in any other way, and they could get to Twitter and see that account, they would be updated on what was happening.

Is there a standing bridge line number that goes live during a disaster for your IT team? Maybe as important, is there a bridge line that the hospital or clinic leadership knows to automatically dial into when there’s an IT disaster?

Those should not be the same lines. You don’t need the CEO or the chief operating officer or the clinic president dialing in on the line where you’re actually trying to solve problems on the IT side of the house.

Where is the rally point in case there is a disaster? Where does your IT team get together in that event? In case that place doesn’t exist anymore, or there’s a problem there, where is the alternate rally point?

We talked a little bit about, do you know your local FBI contact? Do you know your state police cyber contact? Have they been involved in your exercises? Are you part of the NH‑ISAC? If you don’t know what that is, you should definitely google that from an information security perspective.

Don’t forget the patients. How and what do you tell the patients, both those in the hospital if there’s an incident, and those in the community who may have appointments or who may not have appointments?

A lot of this communication can be drafted in advance. It can be plagiarized from others who have done it well. It’s not just a “we’re winging it” moment. It’s one of those things where you can actually be well prepared.

In that particular case, it really is something that is not done in information services. That kind of an external communication should be prepared by your marketing and communication department. They will get better at that if they’re involved in your exercises.

Winter is coming. Not to be too excited about generating fear ‑‑ it’s just the reality of my background and where I come from. Winter is coming. Stuff is going to hit the fan. Murphy’s law will rule the day. It’s just a matter of when.

If you do some of the stuff that I talked about today, then you’ll be prepared. You’ll be calm, cool, collected, and truly agile in the heat of a real‑world event.

I know there’s a lot more that we could talk about. I wanted to leave plenty of time for questions. I want to open it up to questions, and hear what you’re thinking and what you’ve been doing to become more ready and to prepare for your disasters and outages.

Let me say thanks to PerfectServe and you for having me here today. I’m ready for questions, if you have some.

Heather:  Thanks, Drex. Yes. Fantastic information that you shared with us today. We do have some questions coming in. Before we get to those, I do want to remind all of our listeners that you can submit any questions for Drex, using the Q&A box located in the upper right‑hand corner of the webinar platform.

Let’s get to some of these questions that have come in. Let’s see. This question says, “We’re a small clinic. We don’t have a big IT staff. What’s the best way for us to get started?”

Drex:  I would say, really, a lot of this is to follow the steps above. Start with the buy‑in from your clinic leaders. Start small. I know that in a lot of small clinics that don’t have a big IT staff, and may actually have an IT army of one, this can be a real challenge.

Depending on the size of the clinic, you can do very simple disaster training scenarios, even with just part of the office staff. I’ve seen folks do a good job, even doing this individual provider by individual provider in the beginning.

Once you have them thinking about this, then it’s easier to move them toward a bigger disaster preparedness exercise and get it onto the calendar for a more regular practice. I also know that it’s a challenge because, especially in a small practice or a small clinic, every minute you’re not seeing patients is a minute that you’re not making money.

It’s important to be reasonable. Be calm and be flexible. Show other examples of outages that have happened in other places. Not to spread fear but to help them understand that this situation may be more likely than they think and help them get to a better place to get started.

Heather:  Thank you. This question is related to policies and procedures. They’re asking, “Our policies need to be revised and updated. Do you have any advice on where to start? Should we start with a business impact analysis first? Where should we begin on revising those?”

Drex:  That idea of starting with a business impact analysis, early on in the presentation, I talked quite a bit about risk ‑‑ or I talked a bit about risk ‑‑ and figuring out what are the most likely scenarios.

When you’re looking at policies and procedures, I’m making the assumption that it’s policies and procedures around disaster preparedness, a lot of this goes back to risk.

The scenario that you think is most likely and…When I work with a lot of clients and have a lot of these conversations, and we talk about cybersecurity risk or information services downtimes or things like that, those aren’t necessarily the big things that hospitals are thinking about. They are thinking about earthquakes, plane crashes, tornadoes, and things like that.

It’s making sure that you don’t leave that one out. It doesn’t have to be number one. It should be easy for the folks involved to understand how likely that is, because almost everyone has unscheduled downtimes today, and to figure out how you build out those scenarios, policies, and procedures around that.

I’m a Toyota production systems Lean person. I look at everything like that. It’s incremental improvement.

You learn something in one exercise. As you update one set of policies and procedures, you figure out how to take that goodness, spread it over to the other policies and procedures, and make them better.

If you do that often enough, you will wind up with the kinds of documents that you’re hoping to wind up with at the end of all of these. The thing is, too, they’re never done. You’re always going to be updating them.

Heather:  Very good point. Thank you for that. We have another question that’s come in.

This person says, “We’re in a small office. My staff and patients really only use cellphones nowadays. What have you seen in your experience, Drex, as a good suggestion to contact patients in time of a community disaster if cellphones are out?”

Drex:  Sure. I go back to this idea of social media. A lot of this probably is how I’m wired. Some of this is also helping the patients understand that when something happens, your clinic or your hospital may not be able to call all of them individually.

If they’re wondering, they may have to take some responsibility to go to the website. See if we’re actually publishing anything about that. There are certainly tools where you could set up and send text to all the patients.

There are auto‑dialer kinds of tools, too, that can do robocalls ‑‑ call all the patients and tell them what’s going on as far as whether appointments are being kept, or “We’re delaying them,” or “We’re canceling these appointments today and we’re going to be calling to reschedule.”
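As a sketch of what the bulk‑texting approach might look like, here is a minimal example using Twilio’s Python client as one such tool. The credentials, phone numbers, and message are placeholders, and this is not a description of PerfectServe’s platform ‑‑ just one way such a notification could be scripted.

```python
# Sketch: text opted-in patients about a schedule change using Twilio's Python client.
# The account SID, auth token, phone numbers, and message are all placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID_PLACEHOLDER", "AUTH_TOKEN_PLACEHOLDER")

patients = ["+15555550100", "+15555550101"]  # hypothetical opted-in patient numbers
message = ("Our clinic is experiencing a system outage. Today's appointments are "
           "being rescheduled, and we will call you to set a new time.")

for number in patients:
    client.messages.create(
        to=number,
        from_="+15555550199",  # the clinic's sending number (placeholder)
        body=message,
    )
```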

Those are some of the best things that I’ve seen. I’m sure that PerfectServe may have some comments about how to do that better, too.

Heather:  Absolutely. We can provide some further information about PerfectServe and how our cloud‑based platform really addresses that communication between clinicians, and also when patients are calling into the practice as well.

Because we’re cloud‑based, we’re not tied to an answering service or something that’s local to a facility, and we have 99.9 percent uptime. Our systems are always up and available, and provide that communication platform between clinicians, and then also for patients to call into the practice. That is something that we provide as well.

Drex:  Nice.

Heather:  Thank you for that. I’ve got a few more questions here. Just as another reminder, if you have questions, anyone from the audience, go ahead and put it in that Q&A box. We’ll continue just going through these questions.

Let’s see. This person says, “We only do a disaster preparedness exercise once per year. You talked a lot about exercise, Drex.” [laughs]

This person says, “Even then, once per year is pretty limited in the groups that participate. What is your recommendation on driving wider participation?”

Drex:  That is a good question. That is a question that a lot of places are challenged with. This ultimately comes back to the bullet point about, it’s a leadership issue. You have to get executive buy‑in.

If they’re not interested and they don’t understand it…maybe I should say it in a different way. If they’re interested and they understand it, they’ll work at it. If not, they won’t. Your mission, should you choose to accept it, is to better understand what motivates key organizational leaders and get them involved in the exercises.

As an example, one of the techniques I’ve used before is to find a clinician or administrator who’s anxious to show their leadership skills. Maybe they’re not somebody who’s at the big table today, but they’re respected as an informal leader.

If you can give them an opportunity to grab on to and sink their teeth into something that nobody else is doing, but they understand why it’s important and why the organization should be doing it, that turns out to be a good one.

If you can find somebody that has a military background or is a leader in your organization, you wind up with somebody who has a pre‑built understanding of the value of the work. Talk to them about it, too.

Most importantly, don’t give up. Incremental progress is good. Rome wasn’t built in a day, and neither was a good disaster preparedness program. Stay at it.

Heather:  Thank you. This person is asking, “Do you have any recommendation as to how we can put a dollar figure on application loss and/or patient impact for disasters that occur?”

Drex:  I wish I had that right here in front of me, but I don’t. I know there are definitely calculators that help you do that kind of calculation. This might be something that I have to look into after the fact. Maybe I can share it with you after the fact.

I know there are companies who do support service work, who specialize in this. I know that I’ve seen in the disaster preparedness material some calculators for things like that.

Basically, it comes down to really doing the math around the specific calculations of how much you make per visit, or some scenario like that. You work that math out.
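To make that math concrete, here is a back‑of‑the‑envelope sketch. The per‑visit revenue, visit volume, staffing cost, and outage length are all invented figures for illustration, not numbers from the webinar.

```python
# Back-of-the-envelope downtime cost estimate. All figures are hypothetical.
visits_per_hour = 12           # average clinic visits per hour
revenue_per_visit = 150.00     # average net revenue per visit, in dollars
overtime_cost_per_hour = 400   # extra staffing/recovery labor per outage hour, in dollars
outage_hours = 9

lost_revenue = visits_per_hour * revenue_per_visit * outage_hours
recovery_labor = overtime_cost_per_hour * outage_hours
total_cost = lost_revenue + recovery_labor

print(f"Lost revenue:   ${lost_revenue:,.2f}")    # 12 * 150 * 9 = $16,200.00
print(f"Recovery labor: ${recovery_labor:,.2f}")  # 400 * 9     = $3,600.00
print(f"Estimated cost: ${total_cost:,.2f}")      # $19,800.00
```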

It’s definitely out there. I’m positive that you’re not the first person to do this. I know I’ve seen some of those kinds of calculations in the past.

I might have to go back and dig a little bit and see if I can give you something after the fact. I’m sorry. I can’t answer that one right off the bat. I know they exist. I just don’t have them at hand.

Heather:  That’s perfectly fine. We will send a follow‑up email out to all of our attendees. We can certainly include any information in that follow‑up communication. That would be great. All right.

Drex:  Thanks for asking that question. That’s a good one.

Heather:  Yes, thank you. Thanks. We got a couple more here. This person says, “In the event of a disaster, is there a goal time frame as to when normalcy should return, assuming it varies based on what type of disaster it is? Is there a goal that we should shoot for?”

Drex:  Yes. When you go through and do a business continuity and disaster recovery piece of work ‑‑ I talked about this specifically with an information services focus ‑‑ part of that work is that you need to talk with end users and leaders in the various departments who use the applications that you run.

Through a very structured survey process, they can give you some good ideas about how long they can be down before certain types of services start to be degraded. From that kind of information, you start to be able to derive how soon, from the time the network is up and running, you can get a given application up and running.

The thing that you get out of a lot of this kind of work ‑‑ that you don’t really realize when you’re starting from scratch ‑‑ is how many things are prerequisites to other things. You have to have the interface engine up and running before you can bring some applications back online.

Sometimes people get wrapped around the idea of, for example, “I can have my Cerner electronic health record up within x number of minutes or x number of hours from scratch.” Realistically, the network has to be up, the interface engine has to be up, etc., before you can even start bringing the Cerner system up.
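As a sketch of how those prerequisites can be written down explicitly rather than kept in someone’s head, Python’s standard‑library graphlib can compute a valid bring‑up order from a dependency map. The systems and dependency edges below are hypothetical; Cerner appears only because Drex uses it as his example.

```python
# Sketch: derive a restore order from recovery prerequisites.
# Systems and dependency edges are hypothetical examples.
from graphlib import TopologicalSorter

# Each key lists the systems that must be up before it can be restored.
prerequisites = {
    "network": set(),
    "active_directory": {"network"},
    "storage": {"network"},
    "interface_engine": {"network", "active_directory"},
    "lab_system": {"interface_engine"},
    "cerner_ehr": {"network", "storage", "interface_engine"},
}

restore_order = list(TopologicalSorter(prerequisites).static_order())
print(restore_order)
# e.g. ['network', 'active_directory', 'storage', 'interface_engine', 'lab_system', 'cerner_ehr']
```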

The important part of this whole conversation, this whole exercise, is to understand how long that takes. It’s good to know how soon an end user says, “We now have significant degradation in our ability to deliver a service.” That gives you a goal and an understanding of what’s happening with your partners in the healthcare system.

It’s also good for them to know, “This is how long it actually takes from the time we figure everything out to bring everything back.” That helps them drive a process around, “OK, we’d like to have it back in three hours, but in fact, it might be nine hours, so what are we going to do for that six‑hour period?”

Those are the kinds of things, from a disaster response perspective or a disaster preparedness perspective, that end users can prepare for. We do need to have paper. We are going to have to have those flow sheets. We’re going to have to train nurses on how to use those old‑fashioned flow sheets instead.

We’re going to have to make sure that we have a printer connected to a computer that isn’t networked ‑‑ the printer isn’t networked ‑‑ so that we can print out the latest information on lab results and pharmacy results that we have set up to download to this computer every hour if you’re in a unit.

Those kinds of things are the things that you figure out as you go through a lot of this process. Unfortunately, then the bottom line answer, I don’t think there is one good standard response to that, that you should expect to be up in 12 hours. A lot of it just depends on what the problem is, how it affected your infrastructure.

Do you have the backups and all of the things that you should have? A lot of folks find out that they thought they were doing backups, but it turns out they had never actually tested the backups. The backups that they were doing religiously weren’t very good.

Certainly, if you’re working with a cloud provider, you’re probably better off in that situation than doing the work on your own. They spend a lot more time ‑‑ I shouldn’t say they spend a lot more time. Generally speaking, they spend a lot more time checking and double‑checking that kind of work, because it’s all they do. It’s all they’re focused on.

I had a long rambling answer there. The short answer is I don’t think there’s a good specific answer for that. It depends.

Heather:  Great. Thank you for that.

Drex:  Sure.

Heather:  Very helpful. We have two final questions here, and they’re very similar. In the spirit of “imitation is the highest form of flattery,” do you have templates or a step‑by‑step guide that our listeners today can follow?

Drex:  I would say the best place to get that really would be to have conversations with peer organizations who might be ahead of you or might be in the same place as you, hopefully, a little ahead of you. Those are great places to go steal those things. I think if you’d look online and you do some Google searches, you’ll find some good templates for everything from policy to communication plans.

The government has a lot of great stuff online. Those are really good places to start. I would say that the listeners have my contact info, and I’m certainly willing to help however I can help.

Heather:  All right. Good. Thank you for that. At this time there are no additional audience questions, so we’ll wrap up the Q&A portion. But PerfectServe has a question for you, our audience. Please let us know: would you like more information on how PerfectServe can support your disaster‑preparedness strategy?

Please take a moment and answer that polling question right on your screen there.

That does wrap up our Q&A segment today. As I mentioned at the start of today’s webinar, PerfectServe is dedicated to supporting improvement in care delivery. These webinars are just one of the ways that we do that. We do have educational webinars throughout the year. Our next webinar is coming up in May. These are free to attend, so we encourage you to sign up for these.

You can do that by clicking on the icon that’s in the center of your screen that looks like a link to register. As soon as we wrap up this poll ‑‑ there we go. Our upcoming webinar is on March 27th, so in about a month, with The Advisory Board Company. It’s focused on innovations in clinical technology. You can, like I said, sign up for that by clicking the link in your webinar platform there.

I once again would like to thank Drex Deford for a unique and engaging presentation. Thank you, Drex. I learned quite a bit of information myself. I appreciate you…

Drex:  Thank you.

Heather:  …being on with us today. Thanks to our audience. We know your time is extremely valuable, so we thank you for spending some of it with us today. That does conclude our webinar. Please take a moment to complete our short survey and tell us how we did. Have a great afternoon.