
How to Respond to a Data Breach


Wednesday, July 15, 2015

By John Parkinson, affiliate partner at Waterstone Management Group; published on CFO.com

It’s been a bad year (or several) for information security. Incidents seem to be getting bigger and more frequent.

According to the 2015 Verizon Data Breach Investigations Report (which should be required reading for all business executives), there were more than 7 million vulnerability exploits in 2014 — and 70% of breaches affected at least one organization beyond the initial victim, with the secondary impact occurring within one day.

A few years ago I spent about a year responsible for information security at a business that held personally identifiable information (PII) on more than 500 million individuals (and depended on a business model that required tens of thousands of external network connections moving terabytes of data). I have a good deal of sympathy, therefore, for those managers who must wrestle with the challenges of effective cybersecurity. Back in 2010 (when the cyber threat landscape was simpler), we were monitoring millions of unauthorized access attempts each day, and occasionally more than 90% of the external traffic at our perimeter was suspicious or clearly malicious.

I am reasonably certain that we never had an incident that touched the PII data, although intruders did occasionally get past our outer layers of defense. Our controls and technology, supplemented by careful hiring practices, periodic awareness training, and policy reviews, made it unlikely that we would suffer a major breach, but not a day went by when I did not worry about the possibility and look for ways to strengthen our defenses.

It was a regular executive management issue (we had a security committee of senior executives that met monthly) and a periodic board issue. It also created some interesting challenges when dealing with our regulators — on several occasions I had to decline to describe the details of our security architecture and methods because I could not afford to trust the regulators to keep them confidential. I was always willing to share what we were doing, but not exactly how.

And despite our confidence (and the evidence to support it), we had an incident response plan in place.

Incident response plans are one of the most neglected aspects of information security, and not just the technical response. That part was relatively straightforward: much of what we needed to do technically was already being done to some degree every day by the security team responding dynamically to the internal and external activity we monitored.

Just as critical, however, was the business response plan. Who must be informed? Who should be informed? Who should be involved? What would we say about an incident, when, and to whom? How would we communicate, especially if our normal tools (email, network, etc.) were compromised or unavailable because of the incident or our technical response?

Back in 2010 there was already a patchwork of disclosure requirements, varying by state and generally inconsistent with each other, plus some federal rules (and federal agencies that were likely to be useful in tracking down perpetrators). Our plan had to account for more than a dozen different notifications (it would be more today). Plus we had a series of stakeholder groups that needed to be notified:

  • The incident response team, who would need to get together to manage the process;
  • The executive team (and in some cases the board, although for us, it had been agreed in advance that the CEO had that responsibility);
  • Business managers who had customers that might be impacted by a data loss or service interruption;
  • Employees, especially if employee-related data (and not just HR data — employee information could also be gathered from their web browsing or online shopping activity) had been compromised; and
  • Our investors, especially if the incident could impact revenues and earnings.

The plan had to include how to contact each of these groups (with options depending on what communications methods were working) with what priority, and had to assign responsibility for crafting an appropriate message and then delivering it and making sure it was received.
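
To make that concrete, here is a minimal sketch (in Python, chosen only for illustration) of how such a notification plan might be captured as data rather than prose. The group names, owners, channels, and priorities below are hypothetical, not the actual plan described above.

    # Hypothetical sketch of a notification plan as data: who gets told,
    # in what order, over which channels, and who owns the message.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class NotificationTarget:
        group: str                  # stakeholder group to inform
        priority: int               # 1 = contact first
        owner: str                  # role responsible for crafting and delivering the message
        channels: List[str] = field(default_factory=list)  # ordered fallback channels

    # Illustrative entries only; a real plan names specific people and numbers.
    NOTIFICATION_PLAN = [
        NotificationTarget("incident response team", 1, "security lead",
                           ["pager", "personal cell", "out-of-band chat"]),
        NotificationTarget("executive team", 2, "CIO",
                           ["personal cell", "in person"]),
        NotificationTarget("business managers with affected customers", 3, "COO office",
                           ["phone tree", "printed memo"]),
        NotificationTarget("employees", 4, "HR communications",
                           ["all-hands meeting", "SMS broadcast"]),
        NotificationTarget("investors", 5, "CEO / investor relations",
                           ["press release", "regulatory filing"]),
    ]

    def call_order(plan):
        """Return targets in the order they should be contacted."""
        return sorted(plan, key=lambda target: target.priority)

Holding the plan in a structured form like this makes it easy to confirm, during a dry run, that every group has an owner and at least one fallback channel.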

One of the complications you run into early on is that everyone inside the business (and often a lot of people outside) wants to know what’s happening, almost always before there is an accurate understanding of the incident, its impact, and its root cause. A second complication is keeping the messages that do go out consistent and accurate — the more people that are involved in the telling, the greater the risk of confusion and inaccurate comments that have to be retracted or corrected later.

You want to communicate as much as you can (and are sure you know), but avoid speculation (remember, it’s always an “incident,” not a “breach,” until you are sure it is a breach). So it’s important that the plan makes clear who gets to say what and to whom. In our case, we had a single spokesperson nominated for each kind of incident (and a series of fallbacks if that person was unavailable for any reason). That puts a lot of strain on the nominated individual, but it keeps the communications consistent and controls the flow of information. Because it’s likely that interested outsiders have multiple contacts within the organization, it’s essential to have (and publish and enforce) a policy that routes all questions to the response team.
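
The spokesperson rule can be captured the same way. The sketch below is hypothetical (the incident categories and roles are illustrative, not taken from the plan above), but it shows the idea of one nominated voice per incident type with ordered fallbacks.

    # Hypothetical mapping of incident type to nominated spokesperson,
    # with ordered fallbacks if the primary is unavailable.
    SPOKESPERSON = {
        "data exposure":        ["chief privacy officer", "general counsel", "CIO"],
        "service interruption": ["head of operations", "CIO", "CEO"],
        "insider incident":     ["general counsel", "head of HR", "CEO"],
    }

    def spokesperson_for(incident_type, unavailable=()):
        """Return the first nominated spokesperson who is available."""
        for candidate in SPOKESPERSON.get(incident_type, []):
            if candidate not in unavailable:
                return candidate
        return "incident response team lead"   # last-resort default

    # Example: the primary is unreachable, so the first fallback speaks.
    print(spokesperson_for("data exposure", unavailable={"chief privacy officer"}))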

Our plan also had a timetable for communications and a set of standardized message templates that could be rapidly customized to the specifics of an incident. It took us a while to work out a viable schedule and an effective set of templates, but once they were in place, only minor tweaks (mostly from the after-incident reviews) were necessary.
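
As one illustration of what a standardized, rapidly customizable template might look like, here is a minimal sketch using Python's string.Template. The wording, field names, and contact address are invented for the example, not taken from the plan described above.

    # Hypothetical customer-notice template; blanks are filled in once the
    # facts of a specific incident are confirmed.
    from string import Template

    CUSTOMER_NOTICE = Template(
        "On $date we identified a security incident affecting $system. "
        "We are investigating and will provide an update by $next_update. "
        "At this time we have no evidence that $data_category was accessed. "
        "Questions should be directed to $contact."
    )

    message = CUSTOMER_NOTICE.safe_substitute(
        date="July 14",
        system="our customer portal",
        next_update="5:00 p.m. Central",
        data_category="payment card data",
        contact="response@example.com",
    )
    print(message)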

Everything needed to execute the plan was stored offline (actually on a number of laptop PCs with their network cards and wireless facilities disabled, stored in separate locations). Updates had to be made from verified external media. This was possibly more paranoid than necessary, but a smart attacker would probably try to find and delete or damage the plan details if they were stored on the network.
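
One simple way to implement the “verified external media” rule is to accept an update only when its checksum matches a value confirmed out of band (for example, read over the phone). The sketch below assumes SHA-256 and a hypothetical file path; it illustrates the idea rather than the author's actual procedure.

    # Accept a plan update from removable media only if its SHA-256 digest
    # matches a value verified through a separate, trusted channel.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Compute the SHA-256 digest of a file in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_update(path, expected_hex):
        """Return True only if the file matches the expected digest."""
        return sha256_of(path) == expected_hex.lower()

    # Example (hypothetical values):
    # verify_update("/media/usb/response_plan_v12.txt", "3b0c44298fc1c149...")

The same check can be run periodically against the offline copies themselves to detect tampering.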

A plan like this can of course be used for incidents other than those that are cybersecurity-related, and we did indeed use it that way. Whenever we had any kind of actual or potential service interruption (rare, with a good high-availability architecture, but incidents still happen for reasons you don’t control), the plan determined how we should react. So it got tested in use several times a year — and we did a dry run at least once a quarter and any time a key resource on the response team changed.

Putting a plan like this together, keeping it up to date, and exercising it periodically is a lot of work — a major reason that it doesn’t always get done. But when something bad happens (and it will), having the plan available and the experience that only comes from practice will save a lot of time, and potentially avoid embarrassment at best and litigation at worst.

Related links:
http://ww2.cfo.com/cyber-security-technology/2015/07/respond-data-breach/
