Acting Federal Trade Commission (FTC) Chairman Maureen K. Ohlhausen made it clear that she expects the FTC’s enforcement role in protecting privacy and security to encompass automated and connected vehicles. In her opening remarks at a June 28, 2017, workshop hosted by the FTC and the National Highway Traffic Safety Administration (NHTSA), she said the FTC will take action against manufacturers and service providers of autonomous and connected vehicles if their activities violate Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices.

Such concern is warranted as new technologies allow vehicles not only to access the Internet, but also to independently generate, store and transmit all types of data – some of which could be very valuable to law enforcement, insurance companies, and other industries. For example, such data can show not only a car’s precise location, but also whether it exceeded posted speed limits, tailgated other vehicles, or cut them off.

Acting Chairman Ohlhausen noted that the FTC wants to coordinate its regulatory efforts with NHTSA, and envisions that both organizations will have important roles, similar to the way the FTC and the Department of Health and Human Services both have roles with respect to the Health Insurance Portability and Accountability Act (HIPAA).

Traditionally, NHTSA has dealt with vehicle safety issues rather than privacy and data security. That division of labor suggests the FTC will have a key role on these issues as they apply to connected cars, just as it has already been a major player on privacy and data security in other industries.

Acting Chairman Ohlhausen also encouraged Congress to consider data breach and data security legislation for these new industries, but speakers at the workshop (video available here and embedded below) noted that legislation in this area will have difficulty keeping up with the rapid pace of change in these technologies.

Part 1:

Part 2:

Part 3:

Specific federal legislation, or even laws at the state level, may be slow in coming given the many stakeholders who have an interest in the outcome. Until then, the broad mandate of Section 5 may be one of the main sources of enforcement. Companies that provide goods or services related to autonomous and connected vehicles should be familiar with the basic FTC security advice we have already blogged about here, and should work with knowledgeable attorneys as they pursue their design and manufacturing plans.

Eric Bixler has posted on the Fox Rothschild Physician Law Blog an excellent summary of the changes coming to Medicare cards as a result of the Medicare Access and CHIP Reauthorization Act of 2015.  Briefly, the Centers for Medicare and Medicaid Services (“CMS”) must remove Social Security Numbers (“SSNs”) from all Medicare cards. Therefore, starting April 1, 2018, CMS will begin mailing new cards with a randomly assigned Medicare Beneficiary Identifier (“MBI”) to replace the existing use of SSNs.  You can read the entire blog post here.

The SSN removal initiative represents a major step in the right direction for preventing identity theft of particularly vulnerable populations.  Medicare provides health insurance for Americans aged 65 and older, and in some cases to younger individuals with select disabilities.  Americans are told to avoid carrying their Social Security card to protect their identity in the event their wallet or purse is stolen, yet many Medicare beneficiaries still carry their Medicare card, which contains their SSN.  CMS stated that people age 65 or older are increasingly the victims of identity theft, as incidents among seniors increased to 2.6 million from 2.1 million between 2012 and 2014.  Yet the change took over a decade of formal CMS research and discussions with other government agencies to materialize, in part due to CMS’s estimates of the prohibitive costs associated with the undertaking.  In 2013, CMS estimated that the costs of two separate SSN removal approaches were approximately $255 million and $317 million, including the cost of efforts to develop, test and implement modifications that would have to be made to the agency’s IT systems (see the United States Government Accountability Office report dated September 2013).

We previously blogged (here and here) about the theft of 7,000 student SSNs at Purdue University and a hack that put 75,000 SSNs at risk at the University of Wisconsin.  In addition, the Fox Rothschild HIPAA & Health Information Technology Blog discussed (here) the nearly $7 million fine imposed on a health plan for including Medicare health insurance claim numbers in plain sight on mailings addressed to individuals.

On July 23, 2017, Washington State will become the third state (after Illinois and Texas) to statutorily restrict the collection, storage and use of biometric data for commercial purposes. The Washington legislature explained its goal in enacting Washington’s new biometrics law:

The legislature intends to require a business that collects and can attribute biometric data to a specific uniquely identified individual to disclose how it uses that biometric data, and provide notice to and obtain consent from an individual before enrolling or changing the use of that individual’s biometric identifiers in a database.

— Washington Laws of 2017, ch. 299 § 1.  (See complete text of the new law here).

Washington’s new biometrics act governs three key aspects of commercial use of biometric data:

  1. collection, including notice and consent,
  2. storage, including protection and length of time, and
  3. use, including dissemination and permitted purposes.

The law focuses on “biometric identifiers,” which it defines as

data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual.

— Id. § 3(1).

The law excludes all photos, video or audio recordings, or information “collected, used, or stored for health care treatment, payment or operations” subject to HIPAA from the definition of “biometric identifiers.” Id.  It also expressly excludes biometric information collected for security purposes (id. § 3(4)), and does not apply to financial institutions subject to the Gramm-Leach-Bliley Act.  Id. § 5(1).  Importantly, the law applies only to biometric identifiers that are “enrolled in” a commercial database, which it explains means capturing a biometric identifier, converting it to a reference template that cannot be reconstructed into the original output image, and storing it in a database that links the biometric identifier to a specific individual.  Id. §§ 2, 3(5).
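
To make the statute’s “enrollment” concept concrete, below is a minimal, purely illustrative Python sketch of the process the law describes: capturing a biometric sample, converting it to a reference template that cannot be reconstructed into the original image, and storing it in a database linked to a specific individual.  The record structure, the identifiers, and the use of a salted hash as the non-reversible transform are assumptions made for illustration only; real biometric systems rely on vendor-specific feature extraction and fuzzy matching rather than plain hashes.

```python
# Illustrative sketch only: models the statutory notion of "enrolling" a
# biometric identifier -- capture a sample, convert it to a reference template
# that cannot be reconstructed into the original image, and store it in a
# database that links the template to a specific individual.
# The salted hash stands in for a non-reversible transform; it is NOT how
# commercial biometric matching actually works.
import hashlib
import os
from dataclasses import dataclass


@dataclass
class EnrollmentRecord:
    individual_id: str  # the specific, uniquely identified individual
    template: bytes     # non-reversible reference template
    salt: bytes         # per-record salt, so templates differ across databases


def enroll(individual_id: str, raw_sample: bytes) -> EnrollmentRecord:
    """Convert a captured biometric sample into a stored, non-reversible template."""
    salt = os.urandom(16)
    template = hashlib.sha256(salt + raw_sample).digest()
    return EnrollmentRecord(individual_id, template, salt)


# A commercial "database" in the statute's sense: templates keyed to individuals.
database = {}
record = enroll("customer-123", b"<captured fingerprint image bytes>")
database[record.individual_id] = record
```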

Statutory Ambiguity Creates Confusion

Unfortunately, ambiguous statutory language, combined with rapidly advancing technology, virtually guarantees confusion in each of the three key aspects of the new law.

Regarding collection, the new law states that a company may not “enroll a biometric identifier in a database for a commercial purpose” unless it: (1) provides notice, (2) obtains consent, or (3) “provid[es] a mechanism to prevent the subsequent use of a biometric identifier for a commercial purpose.”  Id. § 2(1).  Confusingly, the law does not specify what type of “notice” is required, except that it must be “given through a procedure reasonably designed to be readily available to affected individuals,” and its adequacy will be “context-dependent.”  Id. § 2(2).

If consent is obtained, a business may sell, lease or disclose biometric data to others for commercial use.  Id. § 2(3).  Absent consent, a business may not disclose biometric data to others except in very limited circumstances listed in the statute, including in litigation, if necessary to provide a service requested by the individual or as authorized by other law. Id. However, the new law may ultimately be read by courts or regulators as including a “one disclosure” exception because it says disclosure is allowed to any third party “who contractually promises that the biometric identifier will not be further disclosed and will not be enrolled in a database for a commercial purpose” inconsistent with the new law.  Id.

The new law also governs the storage of biometric identifiers.  Any business holding biometric data “must take reasonable care to guard against unauthorized access to and acquisition of biometric identifiers that are in the possession or control of the person.”  Id. § 2(4)(a).  Moreover, businesses are barred from retaining biometric data for any longer than “reasonably necessary” to provide services, prevent fraud, or comply with a court order.  Id. § 2(4)(b).  Here, too, the law fails to provide certainty: it sets no bright-line time limits on retention after a customer relationship ends, and it does not explain how these rules apply to ongoing but intermittent customer relationships.

The Washington legislature also barred companies that collect biometric identifiers from using them for any other purpose “materially inconsistent” with the purpose for which they were originally collected unless they first obtain consent.  Id. § 2(5).  Confusingly, even though notice alone is enough to authorize the original collection, it is not sufficient by itself to authorize a new use.

Interestingly, the new Washington law makes a violation of its collection, storage or use requirements a violation of the Washington Consumer Protection Act (the state analog to Section 5 of the FTC Act).  Id. § 4(1).  However, it specifically excludes any private right of action under the statute and provides for enforcement solely by the Washington State Attorney General, leaving Illinois’s Biometric Information Privacy Act as the only state biometrics law authorizing private enforcement.  Id. § 4(2).

Washington’s new law was not without controversy.  Several state legislators criticized it as imprecise and pushed to more specifically detail the activities it regulates; proponents argued that its broad language was necessary to allow flexibility for future technological advances. Ultimately, the bill passed with less than unanimous approval and was signed into law by Washington’s governor in mid-May.  It takes effect on July 23, 2017.  A similar, but not identical, Washington law takes effect the same day governing the collection, storage and use of biometric identifiers by state agencies.  (See Washington Laws of 2017, ch. 306 here).

France’s data protection regulator – the  Commission Nationale de L’Informatique et des Libertés (CNIL) – ordered Alphabet Inc.’s Google in 2015 to comply with the right to be forgotten.

If CNIL’s order is upheld, that approach to personal privacy threatens the equally legitimate, competing rights of freedom of expression and access to information held by businesses and consumers outside the European Union.

Scott L. Vernick and Jessica Kitain recently authored the Bloomberg BNA Privacy and Security Law Report article “The Right To Be Forgotten – Protection or Hegemony?” We invite you to read the full article.

Reproduced with permission from Privacy and Security Law Report, 15 PVLR 1253, 6/20/2016. Copyright © 2016 by The Bureau of National Affairs, Inc. (800.372.1033) http://www.bna.com

Fox Partner and Chair of the Privacy and Data Security Practice Scott L. Vernick was a guest on Fox Business’ “The O’Reilly Factor” and “After the Bell” on February 17, 2016, to discuss the controversy between Apple and the FBI over device encryption.

A federal court recently ordered Apple to write new software to unlock the iPhone used by one of the shooters in the San Bernardino attacks in December. Apple CEO Tim Cook has vowed to fight the court order.

The Federal Government vs. Apple (The O’Reilly Factor, 02/17/16)

Apple’s Privacy Battle With the Federal Government (After the Bell, 02/17/16)

In February 2013, President Obama issued his Improving Critical Infrastructure Cybersecurity executive order, which presented a plan to decrease the risk of cyberattacks on critical infrastructure.  The US Department of Commerce’s National Institute of Standards and Technology (NIST) was charged with creating the plan, which became known as the Framework for Improving Critical Infrastructure Cybersecurity (Framework).  NIST worked with over three thousand individuals and business organizations to create the Framework.  The goal of the Framework is to help businesses develop cybersecurity programs within their organizations and to create industry standards for dealing with cybersecurity issues.

The Framework is designed to help businesses of any size, sector, or current level of security reach a sufficient level of cybersecurity protection.  The Framework consists of three parts: (1) the Framework Core, (2) the Framework Implementation Tiers, and (3) the Framework Profiles.  The Framework Core is a grouping of cybersecurity activities based on industry indicators, desired outcomes, and practices.  It assists businesses in developing Framework Profiles, which are used to create cybersecurity plans.  Essentially, the Core characterizes all aspects of a business’s cybersecurity protection so that the Framework can assist the business in creating a secure network.

The Framework Implementation Tiers assess how a business approaches cybersecurity issues and rank the business in one of four tiers.  Ranked from weakest to strongest, the four tiers are: (1) Partial, (2) Risk Informed, (3) Repeatable, and (4) Adaptive.  The Partial Tier is for businesses that may not consult risk objectives or environmental threats when deciding cybersecurity issues.  The Risk Informed Tier is for businesses that have cybersecurity risk management processes, but may not implement them across the entire organization.  The Repeatable Tier is for businesses that regularly update their cybersecurity practices based on risk management.  The Adaptive Tier is for businesses that adapt cybersecurity procedures frequently and implement knowledge gained from past experiences and risk indicators.  The Tier assignment helps a business better understand the impact of cybersecurity issues on its organizational procedures.

After a business has gone through the necessary steps with the Framework Core and Implementation Tiers, it can create a Framework Profile based on its individual characteristics.  A “Current” Profile allows a business to have a clear sense of where it stands in terms of cybersecurity and what aspects of its cybersecurity program need improvement.  A “Target” Profile represents the cybersecurity state that a business wants to achieve through the use of the Framework.  By comparing its “Current” Profile and “Target” Profile, a business is able to prioritize its actions and measure its progress.
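
As a rough illustration of that comparison, the Python sketch below models hypothetical “Current” and “Target” Profiles and ranks the gaps between them.  The category names, the numeric scoring, and the use of the four Implementation Tier labels as a per-category scale are assumptions made purely for illustration; in the Framework itself, Profiles are built from the Core’s functions, categories and subcategories, and the Tiers characterize an organization’s overall risk management practices rather than individual categories.

```python
# Hypothetical sketch: compare a "Current" Profile to a "Target" Profile and
# rank the gaps so the largest shortfalls can be prioritized first.
# Category names and scores are invented; the 1-4 scale borrows the
# Implementation Tier labels only as a convenient illustration.
TIER_SCORES = {"Partial": 1, "Risk Informed": 2, "Repeatable": 3, "Adaptive": 4}

current_profile = {
    "Asset Management": "Partial",
    "Access Control": "Risk Informed",
    "Incident Response": "Risk Informed",
}
target_profile = {
    "Asset Management": "Repeatable",
    "Access Control": "Repeatable",
    "Incident Response": "Adaptive",
}

# Gap = how many levels the current state falls short of the target.
gaps = {
    category: TIER_SCORES[target_profile[category]] - TIER_SCORES[current_profile[category]]
    for category in target_profile
}

# Largest gaps first, to guide prioritization and measure progress over time.
for category, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    print(f"{category}: {gap} level(s) below target")
```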

There are several resources that support the Framework, including NIST’s Roadmap for Improving Critical Infrastructure Cybersecurity, NIST’s Cybersecurity Framework Reference Tool, and the Department of Homeland Security’s Critical Infrastructure Cyber Community C3 Voluntary Program.  A business that wants to utilize the Framework should visit NIST’s Framework website at http://www.nist.gov/cyberframework/.

Fox Rothschild partner Scott L. Vernick was quoted in The New York Times article, “Hacking Victims Deserve Empathy, Not Ridicule.” Full text can be found in the September 2, 2015, issue, but a synopsis is below.

While some data breach victims may face only minor frustrations – changing a password or getting a new credit card – it is a different story for the more than 30 million Ashley Madison users who had their accounts for the infidelity website compromised.

Many of the victims of this latest massive data breach have been plunged into despair, fearing they could lose jobs and families, and expecting to be humiliated among friends and colleagues.

“It’s easy to be snarky about Ashley Madison, but just because it’s unpopular or even immoral, it doesn’t mean this sort of activity shouldn’t be protected,” said Scott L. Vernick, a noted privacy attorney. “This gets at fundamental issues like freedom of speech and freedom of association – today it’s Ashley Madison, tomorrow it could be some other group that deserves protection.”

On July 20, 2015, in Remijas v. Neiman Marcus Group, LLC, No. 14-3122 (7th Cir. 2015), the Seventh Circuit held that the United States District Court for the Northern District of Illinois wrongly dismissed a class action suit brought against Neiman Marcus after hackers stole its customers’ data and debit card information.  The District Court originally dismissed the plaintiffs’ claims because they had not alleged sufficient injury to establish standing.  The District Court based its ruling on a United States Supreme Court decision, Clapper v. Amnesty Int’l USA, 133 S.Ct. 1138 (2013), which held that to establish Article III standing, an injury must be “concrete, particularized, and actual or imminent.”

However, the Seventh Circuit clarified that Clapper “does not, as the district court thought, foreclose any use whatsoever of future injuries to support Article III standing.”  Rather, “injuries associated with resolving fraudulent charges and protecting oneself against future identity theft” are sufficient to confer standing.

In Remijas, the Seventh Circuit explained that there is a reasonable likelihood that the hackers will use the plaintiffs’ information to commit identity theft or credit card fraud.  “Why else would hackers break into a store’s database and steal consumers’ private information?” the Seventh Circuit asked.  The Seventh Circuit held that the plaintiffs should not have to wait until the hackers commit these crimes to file suit.

The Seventh Circuit also considered that some of the plaintiffs had already paid for credit monitoring services to protect their data, which it held is a concrete injury.  Neiman Marcus also offered one year of credit monitoring services to its customers affected by the breach, which the Seventh Circuit considered an acknowledgment by the company that there was a likelihood that its customers’ information would be used for fraudulent purposes.

Ultimately, this decision may serve to soften the blow dealt by Clapper to data breach plaintiffs.  Specifically, based on this ruling, plaintiffs who have not incurred any fraudulent charges, but have purchased credit monitoring services, or have spent time and money protecting themselves against potential fraud may argue that they have standing.

On June 30, 2015, Connecticut Governor Dannel Malloy signed into law Senate Bill 949, “An Act Improving Data Security and Agency Effectiveness,” a data privacy and security bill that creates stricter data breach response requirements.  S.B. 949 specifies that an entity that experiences a data breach must give notice to those affected no “later than [90] days after discovery of such breach, unless a shorter time is required under federal law.”  Previously, Connecticut law only required entities to provide notice of a data privacy breach to affected individuals “without unreasonable delay.”

During a press conference on June 2, 2015, Attorney General George Jepsen clarified that 90 days is the floor – not the ceiling.  He stated that “[t]here may be circumstances under which it is unreasonable to delay notification for 90 days.”  Projected to become effective October 1, 2015, S.B. 949 also requires entities affected by breaches to provide at least one year of free identity theft prevention services for breaches involving the resident’s name and Social Security number.

Guest Blogger: Kevin P. Demody, Summer Associate

Cyberattacks are not reserved for science fiction or corporate America; they can also impact professional sports.  An example of cybercrime is currently unfolding in Major League Baseball, where the St. Louis Cardinals are under investigation for cyberattacks.  The F.B.I. and Justice Department prosecutors are investigating whether the Cardinals hacked into the Houston Astros’ computer systems to obtain confidential baseball data.

Investigators have discovered evidence suggesting that Cardinals’ front office employees hacked the Astros’ computer systems containing information regarding possible trades, injury reports, and scouting evaluations.  If the allegations prove to be accurate, the attack would be the first known instance of corporate cyber warfare between professional sporting organizations.  The Cardinals organization, one of the most successful baseball clubs over the past two decades, has been served with subpoenas to obtain electronic correspondence that may have been related to the attacks.

In a written statement from Major League Baseball, the organization assured the public that it “has been aware of and has fully cooperated with the federal investigation into the illegal breach of the Astros’ baseball operations database.”  The League also promised to “evaluate the next steps” and “make decisions promptly” after the federal investigation concludes.

The cyberattacks may have been a revenge tactic by Cardinals’ employees against former Cardinals executive and current Astros’ general manager, Jeff Luhnow.  Mr. Luhnow, a scouting and player development executive with the Cardinals, was instrumental in the team’s World Series success by developing a unique way to evaluate players and manage talent.  Much of Luhnow’s success with the Cardinals was attributed to a computer system, named “Redbird,” which contained the organization’s collective baseball knowledge.  When Mr. Luhnow’s polarizing tenure with the Cardinals came to an end after the 2011 season, he left to become the general manager of the Astros.  Once with the Astros, he used his computer expertise to create an electronic baseball knowledge system similar to the Cardinals’ “Redbird.”

The Astros’ system, known as “Ground Control,” was a collection of the team’s baseball data that weighted information based on the opinions of the team’s physicians, scouts, statisticians, and coaches.  Investigators believe that members of the Cardinals organization used Luhnow’s old passwords to hack into the team’s system and steal data.  This is a common practice among cybercriminals, who attempt to use previous passwords to gain access to other restricted networks.  The investigation initially began last year, when the Astros believed that the cyberattacks had originated from rogue outside hackers.  It was only after further investigation that the F.B.I. determined the source of the cyberattacks to be a home occupied by a Cardinals’ employee.

At this point, the investigation is ongoing, and federal officials would not comment on which Cardinals employees were involved in the matter or whether front office executives had any knowledge of the cyberattacks.  No Cardinals employees have been suspended or put on leave yet.