A number of employers in Illinois are involved in pending class action litigation regarding violations of the Illinois Biometric Information Privacy Act, 740 ILCS 14/1, et seq. (the “BIPA”). The BIPA, which was enacted in 2008, addresses the collection, use and retention of biometric information by private entities. Any information that is captured, stored, or shared based on a person’s biometric identifiers, such as fingerprints, iris scans, or voiceprints, is considered “biometric information.” The Illinois Legislature enacted the BIPA because biometric information is unlike any other unique identifier: it can never be changed, even once it has been compromised.

The BIPA requires that, before a private entity can obtain and/or possess an individual’s biometric information, it must first inform the individual, or the individual’s legally authorized representative, in writing of the following: (1) that biometric information is being collected or stored; (2) the specific purpose for the collection, storage, and use of the biometric information; and (3) the length of time for the collection, storage, and use of the biometric information. Furthermore, before collecting any biometric information, the private entity must receive a written release for the collection of the biometric information from the individual or the individual’s legally authorized representative after the above notice has been given.

The BIPA additionally requires the private entity to develop a written policy that establishes a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information. That policy must be made available to the public. The collected information must be destroyed once “the initial purpose for collecting or obtaining such information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever occurs first.” 740 ILCS 14/15. In the pending cases, the private entity employers failed to obtain informed written consent prior to the collection, storage, and use of fingerprints and other biometric information. The employers also failed to publish any data retention and deletion policies for the biometric information.

The BIPA also restricts a private entity’s right to sell, lease, trade or otherwise profit from a person’s biometric identifier or biometric information. An employer who adheres to the requirements of the BIPA will be able to avoid class action litigation on this issue and maintain compliance with industry standards.

On Tuesday, November 7th from 2:00 to 6:30, Fox Rothschild and Kroll will be presenting the CLE: Staying One Step Ahead: Developments in Privacy and Data.  The CLE will take place at Fox Rothschild’s offices at 353 N. Clark Street in Chicago.  The speakers are Bill Dixon from Kroll, and Dan Farris and Mark McCreary from Fox Rothschild.  Cocktails and networking will follow the presentations.

If you are in the Chicago area on November 7th, I hope you will join us.  Click here to register for this free event.

Elizabeth Litten (Fox Rothschild Partner and HIPAA Privacy & Security Officer) and Mark McCreary (Fox Rothschild Partner and Chief Privacy Officer) will be presenting at the New Jersey Chapter of the Healthcare Financial Management Association on August 30, 2017, from 12:00 to 1:00 pm Eastern time.  The presentation is titled: “Can’t Touch That: Best Practices for Health Care Workforce Training on Data Security and Information Privacy.”

This webinar is a comprehensive review of information privacy and data security training, with an emphasis on imparting practical know-how and a fluency with the terminology involving phishing, ransomware, malware and other common threats. We will cover best practices for sensitizing health care industry workers to these threats as part of their ongoing HIPAA compliance efforts and, more generally, for training workers in any business on the proper handling of sensitive data. We will cover the adoption of policies and a training regimen for the entire workforce, as well as tailored training for those in positions responsible for implementing security policies.

More information and a registration link can be found here.

Eric Bixler has posted on the Fox Rothschild Physician Law Blog an excellent summary of the changes coming to Medicare cards as a result of the Medicare Access and CHIP Reauthorization Act of 2015.  Briefly, the Centers for Medicare and Medicaid Services (“CMS”) must remove Social Security Numbers (“SSNs”) from all Medicare cards.  Accordingly, starting April 1, 2018, CMS will begin mailing new cards with a randomly assigned Medicare Beneficiary Identifier (“MBI”) in place of the SSN.  You can read the entire blog post here.

The SSN removal initiative represents a major step in the right direction for preventing identity theft among particularly vulnerable populations.  Medicare provides health insurance for Americans aged 65 and older, and in some cases for younger individuals with certain disabilities.  Americans are told to avoid carrying their Social Security card to protect their identity in the event their wallet or purse is stolen, yet many Medicare beneficiaries still carry their Medicare card, which contains their SSN.  CMS stated that people age 65 or older are increasingly the victims of identity theft, as incidents among seniors increased from 2.1 million to 2.6 million between 2012 and 2014.  Yet the change took over a decade of formal CMS research and discussions with other government agencies to materialize, in part due to CMS’ estimates of the prohibitive costs associated with the undertaking.  In 2013, CMS estimated that the costs of two separate SSN removal approaches were approximately $255 million and $317 million, including the cost of efforts to develop, test and implement modifications to the agency’s IT systems (see the United States Government Accountability Office report dated September 2013).

We previously blogged (here and here) about the theft of 7,000 student SSNs at Purdue University and a hack that put 75,000 SSNs at risk at the University of Wisconsin.  In addition, the Fox Rothschild HIPAA & Health Information Technology Blog discussed (here) the nearly $7 million fine imposed on a health plan for including Medicare health insurance claim numbers in plain sight on mailings addressed to individuals.

The “new age” of the internet and dispersed private data is not so new anymore, but that doesn’t mean the law has caught up.  A few years ago, plaintiffs’ cases naming defendants like Google, Apple, and Facebook were at an all-time high; now, plaintiffs’ firms have largely lost interest.  According to a report in The Recorder, a San Francisco-based legal newspaper, privacy lawsuits against these three digital behemoths dropped from upwards of thirty cases in the Northern District of California in 2012 to fewer than five in 2015.  Although some have succeeded monumentally (Facebook wrote a $20 million check to settle a case over its use of users’ images, without their permission, in its “sponsored stories” section), this type of payout is the exception.  One of the issues is that much of the law in this arena has yet to develop.  Because there is no federal privacy law directly pertaining to the digital realm, many complaints depend on older statutes such as the Electronic Communications Privacy Act and Stored Communications Act (1986) and the Video Privacy Protection Act (1988).  The internet and its capacities were likely not the target of these laws; instead, they were meant to prohibit conduct such as tapping a neighbor’s phone or collecting someone’s videotape rental history.

Further, it now seems unavoidable that one’s personal data exists somewhere, somehow.  Privacy lawsuits seeking class action status struggle in much the same way data breach class actions do: the plaintiffs face the challenge of proving concrete harm.  In a case later this year, Spokeo v. Robins, the Supreme Court may change this area of law, as it will decide whether an unemployed plaintiff can sue Spokeo for violating the Fair Credit Reporting Act by reporting that he was wealthy and held a graduate degree.  The issue will turn on proving actual harm.  Companies that deal with private information on a consistent basis should protect themselves by developing privacy policies that, at the very least, may limit their liability.  The reality is that data is everywhere, and businesses will constantly find creative and profitable ways to use it.

To keep up with the Spokeo v. Robins case, check out the SCOTUSblog here.

http://www.scotusblog.com/case-files/cases/spokeo-inc-v-robins/

New innovations come hand in hand with new privacy issues.  Privacy policies may seem like a last-minute add-on to some app developers, but they are actually an important aspect of an app.  Data breaches are an ever-present risk, and a privacy policy is a business’s first line of defense against potential problems.

Fordham University in New York hosted its Ninth Law and Information Society Symposium last week, where policy and technology leaders came together to discuss current privacy pitfalls and solutions.  Joanne McNabb, the California attorney general’s privacy education director and a leader in shaping the privacy agreements of companies such as Google and Apple, emphasized in a panel that she “wants to make the case for the unread privacy policy.”  She noted that the policy mainly promotes “governance and accountability [and] it forces an organization to be aware of their data practices to some degree, express them and then therefore to stand behind them.”  The privacy policy still matters because it protects businesses from the risks that come with holding large amounts of data.  It is especially necessary for businesses that depend solely on private information, because they are at a higher risk of breach.

The Federal Trade Commission (“FTC”) has suggested an approach called “Privacy By Design,” a method of embedding privacy protections into the infrastructure of an app.  This approach removes the concern of implementing privacy policies post-development.  Another way to simplify the privacy policy is the alert prompt that some apps have employed to consistently give consumers notice of when and where their information is used.  McNabb and her fellow panelists found this method of “short, timely notices” helpful in closing the gap between unread privacy policies and the claimed “surprise” of consumers who blame an app for the dissemination of their information.

As the industry moves forward, privacy will become an even greater part of the equation. Whether a privacy policy is read is insignificant. The protections it puts in place for all parties involved are crucial. As apps and technologies become more connected to the private preferences of consumers, businesses with a leg up on privacy protections will thrive against the backdrop of those who view privacy as a second tier requirement.

For more information on “Privacy By Design” click here.

The freedom from automated calls at random hours of the evening may seem like the true American dream these days as more and more companies rely on these calls to reach out and communicate with customers.  Unfortunately, now that the Federal Communications Commission (“FCC”) voted to expand the Telephone Consumer Protection Act (“TCPA”) to include stringent yet vague restrictions on telemarketing robocalls, it may not be a dream for everyone. 

In June of this year, in a 3-2 vote, the FCC adopted a rule under the TCPA that bars companies from using “autodialers” to dial consumers, disallows more than one phone call to numbers that have been reassigned to different customers, and mandates that calls stop at a customer’s request.  These restrictions may seem reasonable, but dissenting Commissioner Ajit Pai recognized that the rule’s broad language will create issues because it does not distinguish between legitimate businesses trying to reach their customers and unwanted telemarketers.  Some attorneys have further commented that the rule’s use of “autodialer” opens up a can of worms of interpretation: the term can be read to cover any device with even the potential to randomly sequence numbers, including a smartphone.  Companies using even slightly modernized tactics to reach their customer base are now at risk of facing litigation, and it won’t stop there.  Businesses that legitimately need to reach their customers will be caught between a rock and a hard place: they face a one-call restriction and may also open themselves up to litigation if a customer decides to take that route.

The FCC Chairman, Tom Wheeler, attempted to quash concerns by stating that “Legitimate businesses seeking to provide legitimate information will not have difficulties.”  That statement unfortunately won’t stop plaintiffs’ attorneys from greasing their wheels to go after companies that make even “good faith efforts” to abide by the new rule.  Attorneys who defend businesses have recognized that the rule is riddled with issues that could harm companies that simply do not have the mechanisms to fully control and restrict repeated calls, or the technology that makes those calls.  But, long story short, just because this rule has been put in motion does not mean it will stand as is.  Litigation and court action will likely follow, and that may bring changes in the future.  For now, businesses that use automated phone calls should be wary of the technology they use and should at least keep track of the numbers and calls made.  When in doubt, talk to an attorney to make sure you are taking the appropriate precautions.

A recent District of Nevada ruling could cause problems for consumers in data breach class actions moving forward.  On June 1, 2015, the court ruled that a consumer class action against Zappos.com Inc. could not proceed because the class did not state “instances of actual identity theft or fraud.”  The suit arose from a 2012 data breach in which Zappos customers’ personal information was stolen, including names, passwords, addresses, and phone numbers.  Even though the information was stolen, the court dismissed the case because the class could not show that it had been materially harmed and therefore lacked standing under Article III.

If a data breach has occurred but the victims cannot claim any harm beyond the fear that a hacker has their information, courts have been willing to grant defendants’ motions to dismiss.  The District of Nevada’s ruling is the most recent decision in a trend of blocking consumer class actions relating to data breaches.  Many of these recent rulings have been influenced by the Supreme Court’s 2013 decision in Clapper v. Amnesty International USA.  In Clapper, the Supreme Court held that claims of future injury could satisfy the Article III standing requirement only if the injury was “certainly impending” or if there was a “substantial risk” that the harm was going to occur.  Unfortunately for the consumer class in the Zappos case, this means that unless their stolen information has actually been used to harm them, the data breach alone does not give them standing to sue.

However, some district courts have been able to find sufficient standing for data breach victims in spite of the Clapper decision.  In Moyer v. Michaels Stores, a district court in the Northern District of Illinois ruled that data breach victims had standing to sue.  The court relied on Pisciotta v. Old National Bancorp, a Seventh Circuit pre-Clapper decision, which held that the injury requirement could be satisfied by an increased risk of identity theft, even if there was no financial loss.  Moyer further distinguished itself from Clapper by explaining that Clapper dealt with national security issues, and not general consumer data breaches.  Other district courts have distinguished their cases from Clapper by holding that Clapper dealt with harm that was too speculative to quantify, while consumer data breach cases deal with the concrete possibility of identity theft.

Although Clapper set the tone for consumer data breach claims, district courts have divided over differing interpretations of the ruling.  The Supreme Court recently granted certiorari in another Article III standing case, Spokeo, Inc. v. Robins, which deals with a private right of action grounded in a violation of a federal statute.  Although it does not directly deal with consumer data breaches, the decision may lead the Supreme Court to expand the standing requirements generally.  Given society’s increasing use of technology and inclination to store personal information electronically, consumer data breach claims will only increase in the future.  The courts’ standing requirements must adapt to meet the changing needs of individuals and businesses alike.

With 2013 being dubbed the “Year of the Mega Breach,” it comes as no surprise that on June 30, 2015, the Federal Trade Commission (“FTC”) published “Start with Security: A Guide for Businesses” to educate and inform businesses on protecting their data.  The FTC is tasked with protecting consumers from “unfair” and “deceptive” business practices, and with data breaches on the rise, it has come to take that job much more seriously.  The lessons in the guide are meant to aid businesses in protecting data, and the FTC cites real examples from its data breach settlement cases to help companies understand each lesson and the real-world consequences that some companies have faced.  Here are the lesson headlines:

1. Start with security;
2. Control access to data sensibly;
3. Require secure passwords and authentication;
4. Store sensitive personal information securely and protect it during transmission;
5. Segment networks and monitor anyone trying to get in and out of them;
6. Secure remote network access;
7. Apply sound security practices when developing new products that collect personal information;
8. Ensure that service providers implement reasonable security measures;
9. Implement procedures to help ensure that security practices are current and address vulnerabilities; and
10. Secure paper, physical media and devices that contain personal information.

Katherine McCarron, a Bureau of Consumer Protection attorney, explained that when evaluating an organization’s conduct, the Bureau “look[s] at a company’s security procedures and determine[s] whether they are reasonable and appropriate in light of all the circumstances.”  This guide will likely become the FTC’s road map for future enforcement actions and will help businesses remain on the safe side of the data breach fence.

Whether you run a mom-and-pop shop or a multi-million-dollar company, this guide is a must-read for any business that processes personal information.

Start reading here.

https://www.ftc.gov/tips-advice/business-center/guidance/start-security-guide-business

Last week we posted about A Brief Primer on the NIST Cybersecurity Framework.  Our partner and HIPAA/HITECH expert Elizabeth Litten took the NIST Cybersecurity Framework and created a post for the HIPAA, HITECH and Health Information Technology Blog titled How the NIST Cybersecurity Framework Can Help With HIPAA Compliance: 3 Tips, which can be read here.  For those facing any HIPAA-related issues, it is a worthwhile read.