Regulatory Enforcement and Litigation

One way to measure the increasing importance of cybersecurity to American businesses is to track how often the issue arises as a risk factor in corporate filings with the Securities and Exchange Commission.

A recent analysis by Bloomberg BNA charted a dramatic rise over the past six years, with only a tiny fraction of businesses citing cybersecurity risks in 2011 SEC filings compared to a substantial percentage in the first six months of 2017.

The report notes that a likely reason for the increase was SEC guidance issued in 2011 that clarified when cyber incidents should be disclosed in financial filings, leading to cybersecurity’s being “elevated into the general counsel’s office [and onto] the board’s agenda.”

Read more at Bloomberg BNA’s article Corporate Cyber Risk Disclosures Jump Dramatically in 2017.

Acting Federal Trade Commission (FTC) Chairman Maureen K. Ohlhausen made it clear that she expects the FTC’s enforcement role in protecting privacy and security to encompass automated and connected vehicles. In her opening remarks at a June 28, 2017 workshop hosted by the FTC and National Highway Traffic Safety Administration (NHTSA), she said the FTC will take action against manufacturers and service providers of autonomous and connected vehicles if their activities violate Section 5 of the FTC Act, which prohibits unfair and deceptive acts or practices.

Such concern is warranted, as new technologies allow vehicles not only to access the Internet, but also to independently generate, store, and transmit all types of data – some of which could be very valuable to law enforcement, insurance companies, and other industries. For example, such data can show not only a car’s precise location, but also whether it exceeded posted speed limits, tailgated other cars, or cut them off.

Acting Chairman Ohlhausen noted that the FTC wants to coordinate its regulatory efforts with NHTSA, and envisions that both organizations will have important roles, similar to the way the FTC and the Department of Health and Human Services both have roles with respect to the Health Insurance Portability and Accountability Act (HIPAA).

Traditionally, NHTSA has dealt with vehicle safety issues rather than privacy and data security. The FTC, which has already been a major player on privacy and data security in other industries, is therefore likely to take a key role on those issues as they apply to connected cars.

Acting Chairman Ohlhausen also encouraged Congress to consider data breach and data security legislation for these new industries, but speakers at the workshop (video available here and embedded below) noted that legislation in this area will have difficulty keeping up with the fast pace of change of these technologies.

[Workshop video: Parts 1, 2, and 3]

Specific federal legislation, or even laws at the state level, may be slow in coming given the many stakeholders who have an interest in the outcome. Until then, the broad mandate of Section 5 may be one of the main sources of enforcement. Companies that provide goods or services related to autonomous and connected vehicles should be familiar with the basic FTC security advice we have already blogged about here, and should work with knowledgeable attorneys as they pursue their design and manufacture plans.

In one of the best examples we have ever seen that it pays to be HIPAA compliant (and that it can cost A LOT when you are not), the U.S. Department of Health and Human Services, Office for Civil Rights, issued the following press release about its $2.5 million settlement with CardioNet.  It is worth a quick read and some soul searching if your company has not been meeting its HIPAA requirements.

FOR IMMEDIATE RELEASE
April 24, 2017
Contact: HHS Press Office
202-690-6343
media@hhs.gov

$2.5 million settlement shows that not understanding HIPAA requirements creates risk

The U.S. Department of Health and Human Services, Office for Civil Rights (OCR), has announced a Health Insurance Portability and Accountability Act of 1996 (HIPAA) settlement based on the impermissible disclosure of unsecured electronic protected health information (ePHI). CardioNet has agreed to settle potential noncompliance with the HIPAA Privacy and Security Rules by paying $2.5 million and implementing a corrective action plan. This settlement is the first involving a wireless health services provider, as CardioNet provides remote mobile monitoring of and rapid response to patients at risk for cardiac arrhythmias.

In January 2012, CardioNet reported to the HHS Office for Civil Rights (OCR) that a workforce member’s laptop was stolen from a parked vehicle outside of the employee’s home. The laptop contained the ePHI of 1,391 individuals. OCR’s investigation into the impermissible disclosure revealed that CardioNet had an insufficient risk analysis and risk management processes in place at the time of the theft. Additionally, CardioNet’s policies and procedures implementing the standards of the HIPAA Security Rule were in draft form and had not been implemented. Further, the Pennsylvania-based organization was unable to produce any final policies or procedures regarding the implementation of safeguards for ePHI, including those for mobile devices.

“Mobile devices in the health care sector remain particularly vulnerable to theft and loss,” said Roger Severino, OCR Director. “Failure to implement mobile device security by Covered Entities and Business Associates puts individuals’ sensitive health information at risk. This disregard for security can result in a serious breach, which affects each individual whose information is left unprotected.”

The Resolution Agreement and Corrective Action Plan may be found on the OCR website at https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/agreements/cardionet

HHS has gathered tips and information to help protect and secure health information when using mobile devices:  https://www.healthit.gov/providers-professionals/your-mobile-device-and-health-information-privacy-and-security

To learn more about non-discrimination and health information privacy laws, your civil rights, and privacy rights in health care and human service settings, and to find information on filing a complaint, visit us at http://www.hhs.gov/hipaa/index.html

The “new age” of the internet and dispersed private data is not so new anymore, but that doesn’t mean the law has caught up.  A few years ago, plaintiffs’ cases naming defendants like Google, Apple, and Facebook were at an all-time high, but now plaintiffs’ firms aren’t as interested.  According to a report in The Recorder, a San Francisco-based legal newspaper, privacy lawsuits against these three digital behemoths dropped from upwards of thirty cases in the Northern District of California in 2012 to fewer than five in 2015.  Although some have succeeded monumentally—with Facebook writing a $20 million check to settle a case over its use of users’ images without their permission in its “sponsored stories” section—payouts like that are the exception.  One of the issues is that much of the law in this arena hasn’t developed yet.  Since there is no federal privacy law directly pertaining to the digital realm, many complaints depend on older laws like the Electronic Communications Privacy Act and Stored Communications Act (1986), as well as the Video Privacy Protection Act (1988).  The internet and its capacities were likely not the target of these laws—they were meant to prohibit behavior such as tapping a neighbor’s phone or collecting someone’s videotape rental history.

Further, it now seems unavoidable that personal data ends up somewhere, somehow.  Privacy lawsuits attempting to become class actions have difficulty succeeding for the same reason data breach class actions do: the plaintiffs face the challenge of proving concrete harm.  In a case later this year, Spokeo v. Robins, the Supreme Court may change this area of law: it will decide whether an unemployed plaintiff can sue Spokeo under the Fair Credit Reporting Act after Spokeo stated that he was wealthy and held a graduate degree.  The issue will turn on proving actual harm.  Companies that deal with private information on a consistent basis should protect themselves by developing privacy policies that, at the very least, may limit their liability.  The reality is that data is everywhere, and businesses will constantly be finding creative and profitable ways to use it.

To keep up with the Spokeo v. Robins case, check out the SCOTUSblog here.

http://www.scotusblog.com/case-files/cases/spokeo-inc-v-robins/

Freedom from automated calls at random hours of the evening may seem like the true American dream these days, as more and more companies rely on these calls to reach out and communicate with customers.  Unfortunately, now that the Federal Communications Commission (“FCC”) has voted to expand its rules under the Telephone Consumer Protection Act (“TCPA”) with stringent yet vague restrictions on telemarketing robocalls, it may not be a dream for everyone.

In June of this year, in a 3-2 vote, the FCC adopted a ruling under the TCPA that bars companies from using “autodialers” to call consumers without their consent, allows no more than one call to numbers that have been reassigned to different customers, and requires companies to stop calling when a customer asks.  These restrictions may seem reasonable, but dissenting Commissioner Ajit Pai warned that the rule’s broad language will create issues because it does not distinguish between legitimate businesses trying to reach their customers and unwanted telemarketers.  Some attorneys have further commented that the rule’s use of “autodialer” opens up a can of worms of interpretation: it could be read to cover any device with even the potential to dial randomly or sequentially generated numbers, including a smartphone.  Companies using even slightly modernized tactics to reach their customer base are now at risk of facing litigation—and it won’t stop there.  Businesses that legitimately need to reach their customers will be caught between a rock and a hard place: they face a one-call limit on reassigned numbers and may also open themselves up to litigation if a customer decides to take that route.

The FCC Chairman, Tom Wheeler, attempted to quash concerns by stating that “Legitimate businesses seeking to provide legitimate information will not have difficulties.”  That statement unfortunately won’t stop plaintiffs’ attorneys from greasing their wheels to go after companies that make even “good faith efforts” to abide by the new rule.  Attorneys who defend businesses have recognized that the rule is riddled with issues that could harm companies that simply do not have the mechanisms to fully control and restrict repeated calls, or the technology that makes those calls.  But, long story short, just because this rule has been put in motion does not mean it will stand as is.  Litigation and court action will likely be a natural consequence, and that may result in changes down the road.  For now, businesses that use automated phone calls should be wary of the technology they use and should, at a minimum, keep track of the numbers dialed and calls made.  When in doubt, talk to an attorney to make sure you are taking the appropriate precautions.

A recent District of Nevada ruling could cause issues for consumers in data breach class actions moving forward.  On June 1, 2015, the court ruled that a consumer class action against Zappos.com Inc. could not proceed because the class did not state “instances of actual identity theft or fraud.”  The suit arose from a 2012 data breach in which Zappos customers’ personal information was stolen, including names, passwords, addresses, and phone numbers.  Even though the information was stolen, the court dismissed the case because the class could not show that it had been materially harmed and could not otherwise establish standing under Article III.

If a data breach has occurred but the victims cannot claim any harm beyond the fear that a hacker has their information, courts have been willing to grant defendants’ motions to dismiss.  The District of Nevada’s ruling is the most recent decision in a trend of blocking consumer class actions relating to data breaches.  Many of these recent rulings have been influenced by the Supreme Court’s 2013 decision in Clapper v. Amnesty International USA.  In Clapper, the Supreme Court held that claims of future injury can satisfy the Article III standing requirement only if the injury is “certainly impending” or there is a “substantial risk” that the harm will occur.  Unfortunately for the consumer class in the Zappos case, this means that unless their stolen information has been used to harm them, the data breach alone does not give them standing to sue.

However, some district courts have found sufficient standing for data breach victims in spite of the Clapper decision.  In Moyer v. Michaels Stores, a district court in the Northern District of Illinois ruled that data breach victims had standing to sue.  The court relied on Pisciotta v. Old National Bancorp, a pre-Clapper Seventh Circuit decision, which held that the injury requirement could be satisfied by an increased risk of identity theft, even absent financial loss.  The Moyer court further distinguished Clapper by explaining that it dealt with national security issues, not general consumer data breaches.  Other district courts have distinguished Clapper by holding that it involved harm too speculative to quantify, while consumer data breach cases involve the concrete possibility of identity theft.

Although Clapper set the tone for consumer data breach claims, district courts have been divided because of differing interpretations of the ruling.  The Supreme Court recently granted certiorari in another Article III standing case, Spokeo, Inc. v. Robins, which deals with a private right of action grounded in a violation of a federal statute.  Although it does not directly involve consumer data breaches, the decision may lead the Supreme Court to clarify standing requirements more generally.  Given society’s increasing use of technology and inclination to store personal information electronically, consumer data breach claims will only increase in the future.  The courts’ standing requirements must adapt to meet the changing needs of individuals and businesses alike.

Guest Blogger: Violetta Abinaked, Summer Associate

As noted in Dittman et al. v. The University of Pittsburgh Medical Center, Case No. GD-14-003285, previously reported on here, Pennsylvania has firmly adopted the approach that the risk of harm is not enough in data breach actions. Still, data breaches have produced some of the most noteworthy headlines in recent news. The resulting increase in litigation has brought with it efforts to shrink the caseload through the Article III requirement of standing, meaning that courts are finding that plaintiffs have not sufficiently established a concrete injury entitling them to seek remedies. One of the main problems with data breaches is that once data has been extracted or accessed, tangible harm does not necessarily follow. For that reason, the Third Circuit has established that, in data breach actions, the mere risk of future harm does not suffice to save a claim. The seminal case, Reilly v. Ceridian Corp., held that where no actual misuse is alleged, “allegations of hypothetical, future injury do not establish standing under Article III.” 664 F.3d 38, 41 (3d Cir. 2011).

Courts are making it tougher to pursue a data breach claim when the plaintiff cannot show actual or certainly impending misuse of the information. Reilly’s narrow view of standing is guiding courts’ decisions to dismiss these cases. A defendant will likely have a better chance of obtaining a dismissal in a data breach action if the plaintiff cannot point to any actual misuse of the information—at least in the Third Circuit. For a company that may be at risk of a data breach, this heightened requirement that plaintiffs show tangible harm may prove a saving grace if litigation arises.

The Federal Trade Commission recently announced that it has settled charges against a health billing company and its former CEO alleging that they misled consumers who signed up for the company’s online billing portal by failing to inform them that the company would seek detailed medical information from pharmacies, medical labs, and insurance companies.

The Atlanta-based medical billing provider operated a website where consumers could pay their medical bills, but in 2012, the company developed a separate service, Patient Health Report, that would provide consumers with comprehensive online medical records.  In order to populate the medical records, the company altered its registration process for the billing portal to include permission for the company to contact healthcare providers to obtain the consumer’s medical information, such as prescriptions, procedures, medical diagnoses, lab tests and more.

The company obtained a consumer’s “consent” through four authorizations presented in small windows on the webpage that displayed only six lines of the extensive text at a time and could be accepted by clicking one box to agree to all four authorizations at once.  According to the complaint, consumers registering for the billing service would have reasonably believed that the authorizations related only to billing.

The settlement requires the company to destroy any information collected relating to the Patient Health Report service.

This case is a good reminder for companies in the healthcare industry looking to offer new online products involving consumer health information: care must always be taken to ensure that consumers understand what the product offers and what information will be collected.


On October 24, the Federal Communications Commission (FCC) threw its hat into the data security regulation ring when it announced it intends to fine two telecommunications companies $10 million for allegedly failing to safeguard the personal information of their customers.

Both TerraCom, Inc. (TerraCom) and YourTel America, Inc. (YourTel) allegedly collected customers’ personal information, including names, addresses, Social Security numbers, and driver’s licenses, and stored it on servers that were publicly accessible online through a simple Google search.  The information could be accessed by “anyone in the world,” exposing the companies’ customers “to an unacceptable risk of identity theft and other serious consumer harms.”

According to the FCC, TerraCom and YourTel violated Sections 201(b) and 222(a) of the Communications Act of 1934 by:

  • Failing to properly protect the confidentiality of consumers’ personal information, including names, addresses, Social Security numbers, and driver’s licenses;
  • Failing to employ reasonable data security practices to protect consumer information;
  • Engaging in deceptive and misleading practices by representing to consumers in the companies’ privacy policies that they employed appropriate technologies to protect consumer information when they did not; and
  • Engaging in unjust and unreasonable practices by not notifying consumers that their information had been compromised by a breach.

Whether the FCC’s announcement signals its intention to become yet another regulator of data security remains to be seen.  But companies that collect and store customer personal information must take the initiative to ensure information is stored properly with appropriate data security safeguards in place.  And safeguards are not enough.  If, after investigation, a company uncovers a breach, it must timely notify customers in accordance with state law and federal regulations.

For more information about the FCC’s announcement, click here.


The Federal Trade Commission entered into a settlement with the social networking site Twitter on Thursday, June 25th.  The settlement was the result of two 2009 hacker breaches, which compromised 35 user accounts (mostly those of celebrities and politicians) and exposed their passwords.  For those wondering, the first breach occurred in January 2009, when a hacker used a password-guessing tool to gain access to a Twitter administrative account protected by a weak, all-lowercase password and then reset user account passwords.  The second breach, in April 2009, gave a hacker access to a Twitter employee's email account, where the employee had stored "similar" passwords in plain text, resulting in further user password resets.  You may recall hearing about (or receiving) the "Tweet" from President-elect Obama offering you an opportunity to receive $500 in free gas.  Seriously, that happened.

According to the FTC press release, "[u]nder the terms of the settlement, Twitter will be barred for 20 years from misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information, including the measures it takes to prevent unauthorized access to nonpublic information and honor the privacy choices made by consumers. The company also must establish and maintain a comprehensive information security program, which will be assessed by an independent auditor every other year for 10 years."

What did Twitter do wrong, you may ask?  The FTC alleged in its complaint that Twitter was really bad at preventing unauthorized access to its system.  Really, really bad.  Specifically, Twitter failed to take reasonable steps to:

  • require employees to use hard-to-guess administrative passwords that they did not use for other programs, websites, or networks;
  • prohibit employees from storing administrative passwords in plain text within their personal e-mail accounts;
  • suspend or disable administrative passwords after a reasonable number of unsuccessful login attempts;
  • provide an administrative login webpage that is made known only to authorized persons and is separate from the login page for users;
  • enforce periodic changes of administrative passwords, for example, by setting them to expire every 90 days;
  • restrict access to administrative controls to employees whose jobs required it; and
  • impose other reasonable restrictions on administrative access, such as by restricting access to specified IP addresses.

Sounds like pretty reasonable steps for Twitter to have taken.  Frankly, it sounds like pretty reasonable expectations in 2000, not just 2009.  Your IT Department surely has at least these requirements, right?  Right?
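
For readers on the technical side, here is a minimal sketch of what two of the controls on that list (suspending an account after repeated failed logins, and expiring passwords periodically) can look like in practice. It is purely illustrative: the class name, the five-attempt threshold, and the 90-day window are assumptions drawn from the bullet points above, not a description of Twitter’s systems or of anything required by the FTC order.

    # Hypothetical sketch of two controls from the list above: lockout after
    # repeated failed logins and periodic password expiry. All names and
    # thresholds are illustrative assumptions, not taken from the FTC order.
    from datetime import datetime, timedelta

    MAX_FAILED_ATTEMPTS = 5                 # assumed lockout threshold
    PASSWORD_MAX_AGE = timedelta(days=90)   # mirrors the "expire every 90 days" example

    class AdminAccount:
        """Hypothetical administrative account with lockout and expiry checks."""

        def __init__(self, username, password_set_on):
            self.username = username
            self.password_set_on = password_set_on
            self.failed_attempts = 0
            self.locked = False

        def record_failed_login(self):
            """Count a failed login and suspend the account once the threshold is hit."""
            self.failed_attempts += 1
            if self.failed_attempts >= MAX_FAILED_ATTEMPTS:
                self.locked = True

        def password_expired(self, now=None):
            """Return True if the password is older than the allowed maximum age."""
            now = now or datetime.utcnow()
            return now - self.password_set_on > PASSWORD_MAX_AGE

    # Example: a password set 100 days ago, followed by five bad login attempts.
    account = AdminAccount("admin", datetime.utcnow() - timedelta(days=100))
    for _ in range(5):
        account.record_failed_login()
    print(account.locked)              # True - further attempts should be rejected
    print(account.password_expired())  # True - the next login should force a reset

The sketch only tracks account state; a real system would also enforce the other items on the list, such as a separate administrative login page and IP-based restrictions on administrative access.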

To many, this settlement is further evidence that the "we are serious this time, seriously" approach touted by the FTC in recent years is merely lip service. 

That being said, the ban on misleading customers for 20 years is not just empty words.  If Twitter allows another privacy breach to occur, it will find itself without much leniency from the FTC.  The settlement also puts the FTC in a position to immediately fine Twitter up to $16,000 per incident for future lapses, a power the FTC would not have absent the settlement and the resulting (future, expected) order.

The comment period on the settlement will end on July 26, 2010, at which time it is expected that the order will be entered and the settlement will become final.