The Federal Emergency Management Agency has published its “Exercise Starter Kit for Workshop on Reconstituting Operations,” which is available on the agency’s website.

This excellent resource will get many businesses started as they prepare to resume limited or full operations, but entities should also be careful to address any safety, privacy and insurance issues (to name a few) with their counsel.

The kit, issued May 12, includes sample documents organizations can use to conduct their own planning workshops or tabletop exercises on returning to normal operations, as well as a facilitator guide and slides for conducting the exercise. Suggested discussion questions focus on the topics of People, Facilities, Messaging & Communications, and Resources & Logistics.

There is no “one-size-fits-all” way for a business to reopen, and we are already seeing plaintiffs’ lawyers circling, looking for missteps and opportunities to bring lawsuits. There are also many data security and privacy considerations that must be addressed.

Additional Information

While it received little publicity in the midst of the COVID-19 pandemic, Washington State recently passed a landmark facial recognition law regulating state and local government agencies’ deployment of facial recognition software. The law becomes effective July 1, 2021, and could ultimately forecast future private sector regulation.

The law regulates facial recognition services, defined as technology that analyzes facial features for the “identification, verification, or persistent tracking of individuals.” The law requires agencies to conduct operational and independent testing of facial recognition technology, train the employees who operate it, and ensure meaningful human review. Agencies that use facial recognition technology must comply with multiple requirements, including filing a notice of intent with the relevant legislative authority. The agency must also file an accountability report detailing the following:

  • The name of the facial recognition technology, its proposed use and a description of its capabilities and limitations
  • The type of data input and a description of how data is collected and processed
  • The type of data the facial recognition technology will ultimately generate
  • A use and data management policy, including a description of how the agency will deploy the technology and whether another entity will use it on the agency’s behalf
  • Principles of data minimization, data integrity, and data retention
  • The agency’s training procedure for personnel who operate the technology or process personal data obtained from use of the technology
  • A plan to address error rates and unauthorized use
  • Potential impacts on privacy and on marginalized communities and a plan to mitigate those concerns

Prior to finalizing the accountability report, the agency must permit public review, comment, and community meetings. The report must be updated once every two years. Further, the agency must require vendors to disclose any complaints of bias found in the technology.

Importantly, meaningful human review is required where an agency uses facial recognition technology to make decisions that produce legal or “similarly significant effects” concerning the “provision or denial of financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, health care services, or access to basic necessities such as food and water, or that impact civil rights of individuals.” Prior to making these decisions, the agency must first test the technology in operational conditions and make the tool available for independent testing.

While other states and localities have simply banned the deployment of facial recognition tools, Washington State has positioned itself as a leader in regulating the technology. What remains to be determined, however, is the law’s influence on the use and regulation of this technology in the private sector.

Synthetic data, defined as artificial data having the same statistical properties as real data, has gained much attention recently as a privacy-enhancing technology. If generated properly, the artificial data acts as a proxy for the real data: it is anonymous, de-identified, and cannot be connected to the original data. Not only can synthetic data provide badly needed access to the data that fuels research, it also offers a potential remedy to privacy concerns.

Synthetic data is created from original individual data. A synthetic data engine and algorithms process this “real” data, learning correlations, trends, and individual behaviors. As the algorithm learns how customers behave, it generates new artificial individuals with the same correlations, patterns, and trends as the original data set, but no connection to actual individuals. The result, if done properly, is synthetic data that cannot be re-identified.
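
To make the mechanics concrete, the following is a minimal, hypothetical Python sketch of one common approach, a Gaussian copula: it learns each column’s distribution and the correlations between columns from the real data, then samples entirely new rows with the same statistical shape. The function name and the toy age/income columns are invented for illustration; commercial synthetic data engines are considerably more sophisticated.

    # A toy Gaussian-copula synthesizer -- illustrative only, not any
    # particular vendor's engine. Assumes purely numeric columns.
    import numpy as np
    from scipy import stats

    def synthesize(real: np.ndarray, n_rows: int, seed: int = 0) -> np.ndarray:
        """Sample synthetic rows that mimic the marginal distributions and
        correlations of `real` (n x d) without copying any real row."""
        rng = np.random.default_rng(seed)
        n, d = real.shape

        # 1. Map each column to (0, 1) via its empirical CDF (rank transform).
        u = (stats.rankdata(real, axis=0) - 0.5) / n

        # 2. Move to standard-normal space and estimate the correlation
        #    structure there -- the "copula" capturing how columns co-move.
        z = stats.norm.ppf(u)
        corr = np.corrcoef(z, rowvar=False)

        # 3. Draw brand-new latent rows with that same correlation structure.
        z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_rows)

        # 4. Map back through each column's empirical quantiles: every value
        #    is plausible for its column, but no row maps to a real person.
        u_new = stats.norm.cdf(z_new)
        return np.column_stack(
            [np.quantile(real[:, j], u_new[:, j]) for j in range(d)]
        )

    # Toy usage: two correlated columns (age, income).
    rng = np.random.default_rng(1)
    age = rng.normal(45, 12, 500)
    income = 1_000 * age + rng.normal(0, 10_000, 500)
    real = np.column_stack([age, income])
    fake = synthesize(real, n_rows=500)
    print(np.corrcoef(real, rowvar=False)[0, 1])  # correlation in real data
    print(np.corrcoef(fake, rowvar=False)[0, 1])  # preserved in synthetic data

The key property is in step 3: the sampled rows are fresh draws rather than perturbed copies of real records, which is what severs the link to actual individuals.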

Original data carries with it use limitations. Original data is considered personal information if it identifies, relates to, is capable of being associated with, or could reasonably be linked with an individual. Original data therefore carries legal obligations to obtain individual consent, implement security controls, and protect privacy rights. Once the synthetic data is produced, however, even broad definitions of personal information seem to exclude it, as it cannot reasonably be said to be linked with a particular individual. Thus, synthetic data may provide a viable option, with less privacy risk, for entities operating in a heavily regulated privacy landscape.

Synthetic data, however, is not without limitations, and some factors may cause the synthetic data not to be truly anonymized. Consider, for example, outlier information. If the original data contains unique outliers captured by a synthetic data engine, the synthetic data may well reproduce those outliers and, depending on how unique the data set is, could identify an individual. In addition, there should be strong privacy provisions in agreements between the business and the vendors who generate the synthetic data. Those provisions should address the proper handling and disposition of the original data, including prohibitions against re-identification that would defeat the benefits of synthetic data.
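
One way to operationalize the outlier concern is a “distance to closest record” check: flag any synthetic row that sits closer to a real record than real records typically sit to each other. The sketch below is a simplified illustration of that idea, not a formal anonymization test; the function name and the percentile threshold are assumptions.

    # A simple "distance to closest record" check -- illustrative only.
    import numpy as np

    def flag_risky_rows(real: np.ndarray, synthetic: np.ndarray,
                        percentile: float = 1.0) -> np.ndarray:
        """Return indices of synthetic rows whose nearest real neighbor is
        unusually close, relative to how close real rows are to each other."""
        # Standardize columns so no single feature dominates the distance.
        mu, sd = real.mean(axis=0), real.std(axis=0)
        r = (real - mu) / sd
        s = (synthetic - mu) / sd

        # Distance from each synthetic row to its closest real row.
        d_syn = np.sqrt(((s[:, None, :] - r[None, :, :]) ** 2).sum(-1)).min(axis=1)

        # Baseline: distance from each real row to its closest *other* real row.
        d_real = np.sqrt(((r[:, None, :] - r[None, :, :]) ** 2).sum(-1))
        np.fill_diagonal(d_real, np.inf)
        threshold = np.percentile(d_real.min(axis=1), percentile)

        # Synthetic rows closer than the baseline threshold deserve review --
        # they may be reproducing a unique real outlier.
        return np.where(d_syn < threshold)[0]

Rows flagged by such a check are candidates for suppression or re-sampling before the synthetic data set is shared.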

Although a relatively new technology, synthetic data, when generated properly and without any one-to-one mapping to the original records, appears to provide an avenue for companies to use, share, and perhaps monetize data.

Italy’s Garante and France’s CNIL have published updated guidelines on privacy in the workplace as workplaces open up for a phased return to normal.

Per CNIL:

  • Automatic collection of temperatures (e.g., by thermal cameras) is not allowed.
  • Taking temperatures with a manual thermometer (for example, a contactless infrared model) at the entrance to a site is permissible and does not fall under data protection regulations, provided no record is kept and no other processing is carried out (such as logging the readings or reporting the information).

Per Garante:

  • If you are required by law to record temperatures, you may do so, but record only whether or not the temperature exceeds the legally established threshold and, where documentation is required, the reasons that prevented access to the workplace.

Read the CNIL guidance.

Read the Garante guidance.

Is the Federal Trade Commission (FTC) considering amending its health data breach notification rule?

“The Federal Trade Commission is seeking comment on whether proposed changes should be made to a decade-old rule that requires certain companies that provide or service personal health records to notify consumers and the Commission of a data breach,” the agency writes in a recent notice published in the Federal Register.

The Health Breach Notification Rule, which went into effect in 2009, requires vendors of personal health records and related entities that are not covered by the Health Insurance Portability and Accountability Act (HIPAA) to notify individuals, the FTC, and, in some cases, the media of a breach of unsecured personally identifiable health data.

Currently, the rule requires such entities to provide notifications within 60 days after discovery of the breach. If more than 500 individuals are affected by a breach, however, entities must notify the FTC within 10 business days.

Read the FTC’s comment notice.

The “COVID-19 Consumer Data Protection Act of 2020” has been filed and purports to regulate the data collection that will likely increase as businesses and workplaces reopen. Provisions include:

  • Prior consent for the collection of personal information for the purpose of COVID-19 tracking
  • The ability to revoke this consent at any time
  • Enforcement by the FTC, including over nonprofits
  • Enforcement by attorneys general

Of interest is the definition of de-identified information as information that:

  • Does not identify and is not reasonably linkable to an individual;
  • Does not contain any information that could be used to re-identify the individual;
  • Is subject to a public commitment
    • To refrain from attempting to use such information to identify any individual; and
    • To adopt technical and organizational measures to ensure that such information is not linked to any individual and is not disclosed unless the disclosure is subject to a contractual or other legally binding requirement that:
      • The recipient not use the information to identify any individual;
      • All onward disclosures of the information be subject to the same limitation.

Read the press release announcing the measure.

According to a new Pew Research survey:

  • Six in ten Americans say that if the government tracked people’s locations through their mobile phones, it wouldn’t make much of a difference in limiting the spread of COVID-19
  • 52% say it would be at least somewhat acceptable for the government to use people’s cellphones to track the location of people who have tested positive for COVID-19
  • 72% believe that all, almost all or most of what they do online or while using a cellphone is being tracked by advertisers, technology firms or other companies
  • 63% said in 2019 that they knew very little or nothing at all about the laws and regulations currently in place to protect their data privacy

Read the full Pew Research study.

The European Data Protection Supervisor addressed the coronavirus crisis in a post titled “Carrying the torch in times of darkness.”

“The outbreak of Covid-19 is affecting our lives at an unprecedented pace. It is testing the resilience of our societies as we respond to this global crisis and try to contain its consequences, both in the short and in the long run.”

“Personal data have and will continue to play an important role in the fight against the pandemic.”

“Humanity does not need to commit to a trade-off between privacy and data protection from one side, and public health, on the other. Democracies in the age of Covid-19 must and can have them both.”

“As we discuss digital solutions to manage the pandemic, and we subject them to public and democratic debate, we shall keep sight of the endemic problems of the digital ecosystem and have them subject to democratic oversight and deliberations.”

“Where to draw the line – in times of emergencies – can be prescribed by the law, but it definitely implies answering prominent ethical questions, in particular where we wish to head to as democratic societies, based on the rule of law.”

Read the full article.

On May 7, 2020, the New York Attorney General announced she will not sue Zoom after it agreed to adopt enhanced data security and privacy measures to protect the data of its more than 300 million users. As COVID-19 social distancing policies radically change the way individuals and industries communicate, Zoom saw a reported 3,000 percent increase in meeting participants per day. According to the AG, reports of privacy and data security issues soon followed, including conferences interrupted by uninvited participants (“Zoombombing”), a lack of end-to-end encryption, unauthorized disclosure of users’ personal information to other users without consent, and the sharing of personal information with Facebook.

The New York Attorney General has closed her investigation into Zoom after it agreed to (1) a comprehensive information security program and enhanced data security practices, (2) increased privacy controls, and (3) protection of users from online abuse, among other measures, including audits. The Attorney General maintains that the agreement will protect New Yorkers and users nationwide by ensuring Zoom’s compliance with the Children’s Online Privacy Protection Act (COPPA) and New York’s statute prohibiting deceptive acts or practices, in addition to other protective laws.

Comprehensive Information Security Program and Enhanced Data Security Practices

Zoom has agreed to implement and maintain a comprehensive data security program run by the company’s Head of Security. The program will be designed to protect the security, confidentiality, and integrity of personal information that Zoom collects, receives, or processes, and will include administrative, technical, and physical safeguards, including, among others:

  • conducting risk assessments and software code reviews to mitigate vulnerabilities to hackers;
  • enhancing encryption protocols by encrypting users’ information both in transit and at rest on its cloud servers;
  • operating a software vulnerability management program; and
  • conducting annual penetration testing.

Increased Privacy Controls

Zoom has agreed to enhanced privacy controls for free accounts and education accounts by allowing hosts to:

  • control access to their video conferences by requiring a password or digital waiting room prior to accessing a meeting;
  • control access to private messages in a Zoom chat;
  • control access to email domains in a Zoom directory;
  • control who can share screens; and
  • limit participants of a meeting to specific email domains.

In addition, Zoom removed the Facebook SDK (which enabled users to log in via Facebook) and removed its LinkedIn Sales Navigator feature to curtail unnecessary data disclosure.

Protection of Users from Abuse

Zoom agreed to continue to maintain reasonable procedures to enable users to easily report violations of Zoom’s Acceptable Use Policy and will update its policy to clarify that prohibitions against abusive conduct include hatred against others based on race, religion, ethnicity, national origin, gender, or sexual orientation.

Argentina has issued guidance on the use of geolocation data in the context of battling the COVID-19 pandemic.

When processing geolocation data, you must at all times respect the principle of data quality, namely:

  • Use only accurate and relevant data
  • Collect only the data that you need
  • Use it only lawfully
  • Use it only for the purpose for which it was collected
  • Store it in a way that allows you to accommodate access and deletion rights
  • Destroy it when it is no longer necessary for that purpose.

Read a detailed analysis in this client alert.