
Privacy Compliance & Data Security

Information on Data Breach Prevention and the Appropriate Response

Can Third Party Certification Ensure Data Privacy Best Practices?

Posted in Data Protection Law Compliance, Electronic Data Security, FTC, Privacy Policy

This week the Federal Trade Commission (FTC) fined TRUSTe, a company that endorses the data privacy practices of businesses, for misrepresenting its certification programs to consumers. TRUSTe offers Certified Privacy Seals, representing TRUSTe’s guarantee that e-commerce websites, mobile apps, cloud-based services, and child-centric websites are compliant with applicable regulatory mandates and employ best practices in protecting consumer information. To earn a Certified Privacy Seal, businesses must share their data privacy practices with TRUSTe, meet TRUSTe’s requirements for consumer transparency, and allow consumers to choose how personal information is collected and used.

According to the FTC, however, once TRUSTe bestowed a Certified Privacy Seal on some companies, it did little to ensure that those companies continued to follow TRUSTe’s best practices. TRUSTe admitted that it failed to conduct annual audits of some previously certified websites, but reiterated that fewer than 10% of its certifications were affected by this lapse. You can read TRUSTe’s statement on its blog.

So, if you’re a business that deals with consumer personal information, is it worth the time and expense to receive third party certifications like those given by TRUSTe? It depends. Third party oversight may be valuable reassurance for your business, instilling confidence that all best practices and regulatory frameworks are identified and followed. However, don’t rely too heavily on such third party certification. While the FTC was silent on any ramifications for customers of TRUSTe, businesses should engage any third party certification with the mindset that the business itself is ultimately responsible for ensuring its privacy practices follow industry standards and meet all regulatory requirements.


Michael Kline’s “List of Considerations” for Indemnification Provisions in Business Associate Agreements

Posted in Uncategorized

I strongly urge every covered entity and business associate faced with a Business Associate Agreement that includes indemnification provisions to read Michael Kline’s “List of Considerations” before signing.  Michael’s list, included in an article he wrote that was recently published in the American Health Lawyers Association’s “AHLA Weekly” and available here, highlights practical yet not always obvious considerations.  For example, will indemnification jeopardize a party’s cybersecurity or other liability coverage?

Data use and confidentiality agreements used outside of the HIPAA context may also include indemnification provisions that are triggered in the event of a privacy or security breach.  Parties to these agreements should take a close look at these “standard” provisions and Michael’s list and proceed carefully before agreeing to indemnify and/or be indemnified by the other party.

The FCC – A New Data Security Regulator?

Posted in Data Protection Law Compliance, Data Security Breach Response, Data Theft, Electronic Data Security, FCC Rules and Regulations, Privacy Rights, Regulatory Enforcement and Litigation

On October 24, the Federal Communications Commission (FCC) threw its hat into the data security regulation ring when it announced it intends to fine two telecommunications companies $10 million for allegedly failing to safeguard the personal information of their customers.

Both TerraCom, Inc. (TerraCom) and YourTel America, Inc. (YourTel) allegedly collected customers’ personal information, including names, addresses, Social Security numbers, and driver’s licenses, and stored it on publicly accessible Internet servers where it could be found through a simple Google search.  According to the FCC, the information could be accessed by “anyone in the world,” exposing the companies’ customers “to an unacceptable risk of identity theft and other serious consumer harms.”

According to the FCC, TerraCom and YourTel violated Sections 201(b) and 222(a) of the Communications Act of 1934 by:

  • Failing to properly protect the confidentiality of consumers’ personal information, including names, addresses, Social Security numbers, and driver’s licenses;
  • Failing to employ reasonable data security practices to protect consumer information;
  • Engaging in deceptive and misleading practices by representing to consumers in the companies’ privacy policies that they employed appropriate technologies to protect consumer information when they did not; and
  • Engaging in unjust and unreasonable practices by not notifying consumers that their information had been compromised by a breach.

Whether the FCC’s announcement signals its intention to become yet another regulator of data security remains to be seen.  But companies that collect and store customer personal information must take the initiative to ensure information is stored properly with appropriate data security safeguards in place.  And safeguards are not enough.  If, after investigation, a company uncovers a breach, it must timely notify customers in accordance with state law and federal regulations.

For more information about the FCC’s announcement, click here.


HIPAA Does Not Preempt State Privacy Cause of Action But May Represent “Standard of Care”, Says Connecticut Supreme Court

Posted in Data Protection Law Compliance

As if compliance with the various federal privacy and data security standards weren’t complicated enough, we may see state courts begin to import these standards into determinations of privacy actions brought under state laws.  Figuring out which federal privacy and data security standards apply, particularly if the standards conflict or obliquely overlap, becomes a veritable Rubik’s cube puzzle when state statutory and common law standards get thrown into the mix.

A state court may look to standards applied by the Federal Communications Commission (“FCC”), the Federal Trade Commission (“FTC”), the Department of Health and Human Services (“HHS”), or some other federal agency asserting jurisdiction over privacy and data security matters, and decide whether the applicable standard or standards preempt state law.  The state court may also decide that one or more of these federal agencies’ standards represent the “standard of care” to be applied in determining a matter under state law.  Or, as shown in a recent Connecticut Supreme Court decision described below, a court may decide that state law is not preempted by federal law or standards in one respect, while recognizing that the federal law or standard may embody the “standard of care” to be applied in deciding a privacy or data security matter under state law.

Fox Rothschild LLP partner Michael Kline posted the following at http://hipaahealthlaw.foxrothschild.com/:

The Connecticut Supreme Court handed down a decision in the case of Byrne v. Avery Center for Obstetrics and Gynecology, P.C., --- A.3d ----, 2014 WL 5507439 (2014) that:

[a]ssuming, without deciding, that Connecticut’s common law recognizes a negligence cause of action arising from health care providers’ breaches of patient privacy in the context of complying with subpoenas, we agree with the plaintiff and conclude that such an action is not preempted by HIPAA and, further, that the HIPAA regulations may well inform the applicable standard of care in certain circumstances.

Interestingly, the decision is dated November 11, 2014, the federal holiday of Veterans Day, but was available on Westlaw on November 7, 2014.  The Court’s decision was rendered 20 months after the date that the case was argued on March 12, 2013.

The decision adds the Connecticut Supreme Court to a growing list of courts that have found that HIPAA’s lack of a private right of action does not necessarily foreclose action under state statutory and common law.  The Byrne case, however, has added significance, as it appears to be the first decision by the highest court of a state that says that state statutory and judicial causes of action for negligence, including invasion of privacy and infliction of emotional distress, are not necessarily preempted by HIPAA.  Moreover, it recognized that HIPAA may be the appropriate standard of care to determine whether negligence is present.

The Byrne case has important implications for HIPAA matters beyond the rights of individuals to sue under state tort law, using HIPAA regulations as the standard of care.  For example, in the area of business associate agreements (“BAAs”) and subcontractor agreements (“SCAs”), as was discussed in a posting in October 2013 on this blog relating to indemnification provisions,

there should be a negation of potential third party beneficiary rights under the BAA or SCA. For example, HIPAA specifically excludes individual private rights of action for a breach of HIPAA – a [p]arty does not want to run a risk of creating unintentionally a separate contractual private right of action in favor of a third party under a[n indemnification] [p]rovision.

A party should, therefore, endeavor to limit the number of persons that may assert a direct right to sue for indemnification resulting from a breach of a BAA or SCA.  Failing to do so can be costly indeed, especially if the number of states that follow the Byrne case principles increases.

Efforts to use HIPAA regulations as standards for causes of action under state law can be expected to rise as a result of the Byrne decision.  Covered entities, business associates and subcontractors should consider acquiring sufficient cybersecurity insurance with expanded coverage and limits.

Medical Device, “Heal Thyself” from Data Hacking

Posted in HIPAA

Innovative health care-related technology and developing telemedicine products have the potential to dramatically change the way in which health care is accessed.  The Federation of State Medical Boards (FSMB) grappled with some of the complexities that arise as information is communicated electronically in connection with the provision of medical care and, in April 2014, issued a Model Policy to guide state medical boards in deciding how to regulate the practice of “telemedicine,” a term whose definition is likely to become outdated as quickly as the next technology or product is developed.

Interestingly, the development and use of medical devices and communication technology seem to outpace agency definitions and privacy laws as quickly as hackers outpace security controls.  So how can we encourage innovation and adopt new models without throwing privacy out with the bathwater of the traditional, in-person patient-physician relationship?  A first step is to see and understand the gaps in privacy protection and figure out how they can be narrowed.

HIPAA does not protect all information, even when the information is clearly health information and a specific individual can be identified in connection with the health information.  A guidance document issued jointly by the U.S. Department of Health and Human Services (HHS) and the Food and Drug Administration (FDA) on October 2, 2014 (FDA Guidance Document) contains the agencies’ “non-binding recommendations” to assist the medical device industry with cybersecurity.  The FDA Guidance Document defines “cybersecurity” as “the process of preventing unauthorized access, modification, misuse or denial of use, or the unauthorized use of information that is stored, accessed, or transferred from a medical device to an external recipient.”  If my medical device creates, receives, maintains, or transmits information related to my health status or condition, I likely expect that information to be secure and private – but unless and until my doctor (or other covered entity or business associate) interfaces with it, it’s not protected health information (PHI) under HIPAA.

The FSMB’s Model Policy appropriately focused on the establishment of the physician-patient relationship.  In general, HIPAA protects information created, received, maintained or transmitted in connection with that relationship.  A medical device manufacturer, electronic health application developer, or personal health record vendor that is not a “health care provider” or other covered entity as defined under HIPAA, and is not providing services on behalf of a covered entity as a business associate, can collect or use health-related information from an individual without abiding by HIPAA’s privacy and security obligations.  The device, health app, or health record may still be of great value to the individual, but the individual should recognize that the information it creates, receives, maintains or transmits is not HIPAA-protected until it comes from or ends up with a HIPAA covered entity or business associate.

The FDA Guidance Document delineates a number of cybersecurity controls that manufacturers of FDA-regulated medical devices should develop, particularly if the device has the capability of connecting (wirelessly or hard-wired) to another device, the internet, or portable electronic media.  Perhaps these controls will become standard features of medical devices, but they might also be useful to developers of other types of health-related products marketed to or purchased by consumers.  In the meantime, though, it’s important to remember that your device is not your doctor, and HIPAA may not be protecting the health data created, received, maintained or transmitted by your medical device.

Litigation Roundup: Cybersecurity

Posted in Data Security Breach Response

Scott L. Vernick, Chair of the Privacy and Data Security Practice Group at Fox Rothschild, spoke at the Practising Law Institute (PLI) seminar, Cybersecurity 2014: Managing the Risk, held in New York, on September 10, 2014.

Among other topics, the day-long conference addressed the rapidly changing litigation environment created by cyber-attacks and data breaches. Click here to view Scott’s PowerPoint presentation.

Updates on State Breach Notification Laws in First Half of 2014

Posted in Data Protection Law Compliance, Data Security Breach Response, Proposed Law

It is midway through 2014, and there have been updates to four existing state breach notification laws, plus one new law.  Iowa and Florida have substantively amended their breach notification laws, with both sets of amendments taking effect on July 1, 2014, and Kentucky has become the 47th state to implement a breach notification law, which took effect on July 14, 2014.

Idaho and Vermont also amended their data breach laws.  Idaho’s amendments were merely technical and did not change the substance of the law.  Vermont’s amendments were similarly technical, but a provision was added that requires a Vermont law enforcement agency to notify a business in writing if it has a reasonable belief that a security breach has or may have occurred at the business.

Iowa’s Breach Notification Law

Effective July 1, 2014, Iowa’s amended breach notification law makes several changes that affect when an individual or business must provide notice of a data breach and to whom.  The highlights of the amendments are as follows:

  •          A “Breach of Security” now includes an unauthorized acquisition of Personal Information that was transferred from computerized form to any medium, including paper.
  •          “Personal Information” now includes encrypted, redacted, or otherwise altered data elements if the keys to unencrypt, unredact, or otherwise read the data elements were acquired through the security breach.
  •          An expiration date is now included as a data element for combination with account numbers or credit or debit card numbers.
  •          Notification must now be provided to the Director of the Consumer Protection Division of the Office of the Attorney General if the breach affects more than 500 Iowa residents.

Florida’s Breach Notification Law

Florida enacted the Florida Information Protection Act of 2014, which repeals the existing data breach law and imposes strengthened notification requirements.  The new law was signed by Governor Rick Scott on June 20, 2014, and went into effect on July 1, 2014.  It redefines a Covered Entity, expands the definition of Personal Information, and expands the notification requirements if there is a data breach.

Florida’s new breach notification law redefines a “Covered Entity” as any sole proprietorship, partnership, corporation, trust, estate, cooperative, association, or other commercial entity or governmental entity that acquires, maintains, stores, or uses Personal Information.

In addition to what the original law included, “Personal Information” now includes a username or email address in combination with a password or security question and answer that would permit access to an online account.  Further, “Personal Information” includes the following new data elements:

  •          A passport number, military identification number, or other government issued number used to verify identity.
  •          The medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional.
  •          The health insurance policy number or subscriber identification number in combination with a unique identifier used by the health insurer.

The new Florida law also provides that Personal Information does not include information that is encrypted, secured, or modified by any other method or technology that removes elements that personally identify an individual or that otherwise renders the information unusable.

If there is a data breach, notice must be provided to individuals in Florida as expeditiously as practicable and without unreasonable delay, but no later than 30 days after the “Covered Entity” concludes that a breach occurred or has reason to believe a breach occurred.  Notice of a data breach may be delayed by a federal, state, or local law enforcement agency if the agency believes notice of the data breach will interfere with a criminal investigation.  Notice of a data breach must be provided to consumer reporting agencies without unreasonable delay if the data breach requires notification of more than 1,000 individuals at a single time.  The new Florida law expands the notification requirement to include the Department of Legal Affairs.  Notifying the Department of Legal Affairs is only required if the security breach affects 500 or more individuals in Florida (Florida’s breach notification law does not refer to residents, unlike other states’ breach notification laws).  Notice to the Department of Legal Affairs must be provided as expeditiously as practicable, but no later than 30 days after the “Covered Entity” concludes that a breach occurred or has reason to believe a breach occurred.
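For readers who want to see the timing and headcount triggers above in one place, here is a minimal sketch in Python of how the Florida notification logic described in this post might be modeled. The class and function names are invented for illustration only; the 30-day deadline and the 1,000- and 500-individual thresholds are taken from the summary above.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class FloridaBreach:
    """Illustrative model of a breach under the Florida law summarized above."""
    affected_individuals: int            # individuals in Florida affected by the breach
    law_enforcement_delay: bool = False  # a law enforcement agency has requested delay


def florida_notification_steps(breach: FloridaBreach) -> List[str]:
    """Sketch of the notification triggers described in the post above."""
    steps = []

    if breach.law_enforcement_delay:
        steps.append("Individual notice may be delayed while it would interfere "
                     "with a criminal investigation.")
    else:
        steps.append("Notify affected individuals as expeditiously as practicable, "
                     "and no later than 30 days after concluding (or having reason "
                     "to believe) that a breach occurred.")

    # Consumer reporting agencies: required when more than 1,000 individuals
    # must be notified at a single time.
    if breach.affected_individuals > 1000:
        steps.append("Notify consumer reporting agencies without unreasonable delay.")

    # Department of Legal Affairs: required when 500 or more individuals in
    # Florida are affected, within the same 30-day window.
    if breach.affected_individuals >= 500:
        steps.append("Notify the Department of Legal Affairs no later than 30 days "
                     "after determining that a breach occurred.")

    return steps


if __name__ == "__main__":
    for step in florida_notification_steps(FloridaBreach(affected_individuals=750)):
        print("-", step)
```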

The new Florida law also requires specific information to be included in a data breach notification, depending on to whom such notification is addressed.  When notifying an individual of a data breach by written or email notice, the notice must include:

  •          the date, estimated date, or estimated date range of the breach;
  •          a description of the “Personal Information” accessed or reasonably believed to have been accessed during the breach; and
  •          contact information that the individual can use to reach the entity.

Substitute notice may be used when written or email notice is not feasible because the cost of providing notice would exceed $250,000, the number of affected individuals exceeds 500,000, or the “Covered Entity” does not have a mailing address or email address for the affected individuals.  When notifying individuals of a data breach by substitute notice, the notice shall include:

  •          a conspicuous notice on the entity’s website, if the entity maintains a website; and
  •          notices in print media and in broadcast media, including major media in urban and rural areas where the affected individuals reside.

When notifying the Department of Legal Affairs of a data breach, the notice must be in writing and include:

  •          a synopsis of the breach;
  •          the number of Florida residents affected by the breach;
  •          any services being offered to the affected individuals;
  •          a copy of the notice to the individuals or an explanation of other actions taken; and
  •          the contact information of an employee or agent the Department of Legal Affairs may contact to obtain further information about the breach.

Kentucky’s Breach Notification Law

Kentucky became the 47th state to pass a breach notification law.  Governor Steve Beshear signed H.B. 232 into law on April 10, 2014, and the law went into effect on July 14, 2014.  The new law will require any individual or business entity that conducts business in Kentucky and maintains computerized data that includes Personal Information to notify residents of Kentucky of a Breach of Security.  A “Breach of Security” is an unauthorized acquisition of unencrypted and unredacted computerized data that compromises the security, confidentiality, or integrity of Personal Information maintained by the individual or business entity and actually causes, or leads the individual or business entity to reasonably believe has caused or will cause, identity theft or fraud against any resident of Kentucky.

“Personal Information” means an individual’s first name or first initial and last name combined with any one or more of the following data elements, when the name or data is not redacted:

  •          Social Security number;
  •          driver’s license number; or
  •          account number, credit or debit card number, in combination with any security code, access code, or password that would permit access to an individual’s financial account.

The timing of the breach notification shall comply with the following requirements:

  •          The breach notification shall be made in the most expedient time possible and without unreasonable delay, consistent with the legitimate needs of law enforcement or any measure necessary to determine the scope of the breach and restore the reasonable integrity of the data system.
  •          The breach notification may be delayed if a law enforcement agency determines that notification will impede a criminal investigation.  The notification shall be made promptly after the law enforcement agency determines that it will not compromise the investigation.

With respect to the manner of the breach notification, the notice may be provided by one of the following methods:

  •          written notice;
  •          electronic notice, if the notice provided is consistent with the provisions regarding electronic records and signatures set forth in Section 7001 of Title 15 of the United States Code; or
  •          substitute notice, if the individual or business entity demonstrates that the cost of providing notice would exceed $250,000, or that the affected class of subject persons to be notified exceeds 500,000, or that the individual or business entity does not have sufficient contact information.  Substitute notice shall consist of the following: (a) email notice, when the individual or business entity has an email address for the subject persons; (b) conspicuous posting of the notice on the individual or business entity’s website, if the individual or business entity maintains a website; or (c) notification to major statewide media.

Notwithstanding the above, any individual or business entity that maintains its own notification procedures as part of an information security policy for the treatment of “Personal Information,” and is otherwise consistent with the timing requirements, shall be deemed to be in compliance with the notification requirements of the Kentucky statute if the individual or business entity notifies the subject persons in accordance with its policies in the event of a breach of security of the system.
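Read as a decision rule, the notice-method provisions described above boil down to a simple check. The sketch below is illustrative only; the function name is invented for this post, and the dollar and headcount thresholds come from the summary of the Kentucky statute above.

```python
def kentucky_notice_method(cost_of_notice: float,
                           affected_persons: int,
                           has_contact_info: bool) -> str:
    """Pick a notice method under the Kentucky rules summarized above (illustrative)."""
    # Substitute notice is available only if the cost of notice would exceed
    # $250,000, the affected class exceeds 500,000 persons, or the notifying
    # party lacks sufficient contact information.
    if (cost_of_notice > 250_000
            or affected_persons > 500_000
            or not has_contact_info):
        return ("substitute notice: email where addresses are known, conspicuous "
                "posting on the entity's website, and notification to major "
                "statewide media")
    # Otherwise, written notice or electronic notice (consistent with
    # 15 U.S.C. 7001) may be used.
    return "written notice or electronic notice"


print(kentucky_notice_method(cost_of_notice=40_000,
                             affected_persons=600_000,
                             has_contact_info=True))
```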

Will Unearthing the FTC’s Data Security Standards Help the Health Care Industry?

Posted in HIPAA

As a regulatory lawyer, I frequently find myself parsing words and phrases crafted by legislators and agencies that, all too often, are frustratingly vague or contradictory when applied to a particular real-world and perhaps unanticipated (at the time of drafting) scenario.  So when an agency crafting guidance for a regulated industry has advisors on hand who have first-hand knowledge and expertise about particular real-world occurrences, such as data security breaches, it would seem that agency would be in an ideal position to create relevant, clear, and sufficiently detailed guidance that the affected industry could use to prevent certain occurrences and achieve compliance with the agency’s requirements.

As described in posts on our HIPAA, HITECH & HIT blog, the Federal Trade Commission (FTC) has brought numerous enforcement actions against businesses based on its decision that the businesses’ data security practices were “deceptive” or “unfair” under Section 5 of the FTC Act.  When I last checked the FTC’s website, there were 54 cases listed under the “Privacy and Security” topic and “Data Security” subtopic, one of which is the LabMD case filed on August 29, 2013.  Blog readers may have “discerned” (as do smart businesses when reviewing these cases and trying to figure out what the FTC’s data security “standards” might be) that I am intrigued by the LabMD case.  My interest arises, in part, from the stark contrast between the ways the FTC and the Department of Health and Human Services (HHS) identify the data security standards applicable to the entities they regulate.  Of course, HHS’s standards apply specifically to the subset of data that is protected health information (PHI) – precisely the type of data involved in the LabMD case – but that hasn’t stopped the FTC from insisting that its own “standards” also apply to covered entities and business associates regulated by HIPAA.

The latest development in the LabMD case is particularly intriguing.  On May 1, 2014, FTC Chief Administrative Law Judge D. Michael Chappell granted LabMD’s motion to compel deposition testimony as to “what data security standards, if any, have been published by the FTC or the Bureau [of Consumer Protection], upon which … [FTC] Counsel intends to rely at trial to demonstrate that … [LabMD’s] data security practices were not reasonable and appropriate.”  The FTC had fought to prevent this testimony, arguing that its “data security standards” are “not relevant to” the factual question of whether LabMD’s data security procedures were “unreasonable” in light of those standards.

The FTC does publish a “Guide for Business” on “Protecting Personal Information” on its website.  This “Guide” is very basic (15 pages in total, with lots of pictures), and includes bullet points with tips such as “Don’t store sensitive consumer data on any computer with an Internet connection unless it’s essential for conducting your business.”  The “Guide” does not reference HIPAA, and does not come close to the breadth and depth of the HIPAA regulations (and other HHS published materials) in terms of setting forth the agency’s data security standards.

LabMD’s Answer and Defenses to the FTC’s Complaint was filed on September 17, 2013.  In that document, LabMD admits to having been contacted in May of 2008 by a third party, Tiversa, claiming that it had obtained an “insurance aging report” containing information about approximately 9,300 patients.  Tiversa, a privately-held company that provides “intelligence services to corporations, government agencies and individuals based on patented technologies” and can “locate exposed files … and assist in remediation and risk mitigation,” boasts an impressive advisory board.  According to Tiversa’s website, advisory board member Dr. Larry Ponemon “has extensive knowledge of regulatory frameworks for managing privacy and data security including … health care,” and “was appointed to the Advisory Committee for Online Access & Security” for the FTC.

Perhaps the FTC might consult with Dr. Ponemon in crafting data security standards applicable to the health care industry, since Tiversa apparently identified LabMD’s data security breach in the first place.  If (as published by the Ponemon Institute in its “Fourth Annual Benchmark Study on Patient Privacy and Data Security”) criminal attacks on health care systems have risen 100% since the Ponemon Institute’s first study conducted in 2010, the health care industry remains vulnerable despite efforts to comply with HIPAA and/or discern the FTC’s data privacy standards.  Bringing Dr. Ponemon’s real-world experience to bear in crafting clear and useful FTC data privacy standards (that hopefully complement, not contradict, already-applicable HIPAA standards) might actually help protect PHI from both criminal attack and discovery by “intelligence service” companies like Tiversa.

FTC Updates COPPA FAQs to Address Student Privacy Issues

Posted in COPPA

On Tuesday, April 22nd, the Federal Trade Commission announced that it has updated its “Complying with COPPA: Frequently Asked Questions: A Guide for Business and Parents and Small Entity Compliance Guide” to address consent for the collection of student information.

The recent updates to Section M, repeated in full below with the entire FAQs available here, focus on the disclosure and use of students’ data by third party website and web service providers in the education setting.  The rights of parents under COPPA to be informed and notified of such use are front and center.

The updates come after many schools have set the standard for disclosure by creating Acceptable Use Policies and otherwise informing parents how their child’s information is disclosed and used.

The full, revised Section M follows:

M. COPPA AND SCHOOLS

1. Can an educational institution consent to a website or app’s collection, use or disclosure of personal information from students?

Yes. Many school districts contract with third-party website operators to offer online programs solely for the benefit of their students and for the school system – for example, homework help lines, individualized education modules, online research and organizational tools, or web-based testing services. In these cases, the schools may act as the parent’s agent and can consent to the collection of kids’ information on the parent’s behalf. However, the school’s ability to consent on behalf of the parent is limited to the educational context – where an operator collects personal information from students for the use and benefit of the school, and for no other commercial purpose. Whether the website or app can rely on the school to provide consent is addressed in FAQ M.2 below. FAQ M.5 provides examples of other “commercial purposes.”

Whether the operator gets consent from the school or the parent, the operator must still comply with other COPPA requirements. For example, the operator must provide the school with all the required notices, as noted above, and must provide parents, upon request, a description of the types of personal information collected; an opportunity to review the child’s personal information and/or have the information deleted; and the opportunity to prevent further use or online collection of a child’s personal information.

In addition, the school must consider its obligations under the Family Educational Rights and Privacy Act (FERPA), which gives parents certain rights with respect to their children’s education records. FERPA is administered by the U.S. Department of Education. For general information on FERPA, see http://www2.ed.gov/policy/gen/guid/fpco/ferpa/index.html. Schools also must comply with the Protection of Pupil Rights Amendment, which is also administered by the Department of Education. See http://www2.ed.gov/policy/gen/guid/fpco/index.html.

2. Under what circumstances can an operator of a website or online service rely upon an educational institution to provide consent?

Where a school has contracted with an operator to collect personal information from students for the use and benefit of the school, and for no other commercial purpose, the operator is not required to obtain consent directly from parents, and can presume that the school’s authorization for the collection of students’ personal information is based upon the school having obtained the parents’ consent. However, the operator must provide the school with full notice of its collection, use, and disclosure practices, so that the school may make an informed decision. See FAQ M.6 below.

If, however, an operator intends to use or disclose children’s personal information for its own commercial purposes in addition to the provision of services to the school, it will need to obtain parental consent. Operators may not use the personal information collected from children based on a school’s consent for another commercial purpose because the scope of the school’s authority to act on behalf of the parent is limited to the school context.

Where an operator gets consent from the school rather than the parent, the operator’s method must be reasonably calculated, in light of available technology, to ensure that a school is actually providing consent, and not a child pretending to be a teacher, for example.

3. Who should provide consent – an individual teacher, the school administration, or the school district?

As a best practice, we recommend that schools or school districts decide whether a particular site’s or service’s information practices are appropriate, rather than delegating that decision to the teacher. Many schools have a process for assessing sites’ and services’ practices so that this task does not fall on individual teachers’ shoulders.

4. When the school gives consent, what are the school’s obligations regarding notifying the parent?

As a best practice, the school should consider providing parents with a notice of the websites and online services whose collection it has consented to on behalf of the parent. Schools can identify, for example, sites and services that have been approved for use district-wide or for the particular school. In addition, the school may also want to make the operators’ direct notices regarding their information practices available to interested parents. This allows the parent to assess the site’s or service’s practices and to exercise their rights under COPPA – for example, to review the child’s personal information. Many school systems have implemented Acceptable Use Policies for Internet Use (AUPs) to educate parents and students about in-school Internet use; the school could maintain this information on a website or provide a link to the information at the beginning of the school year.

5. What information should a school seek from an operator before entering into an arrangement that permits the collection, use, or disclosure of personal information from students?

In deciding whether to use online technologies with students, a school should be careful to understand how an operator will collect, use, and disclose personal information from its students. Among the questions that a school should ask potential operators are:

• What types of personal information will the operator collect from students?

• How does the operator use this personal information?

• Does the operator use or share the information for commercial purposes not related to the provision of the online services requested by the school? For instance, does it use the students’ personal information in connection with online behavioral advertising, or building user profiles for commercial purposes not related to the provision of the online service? If so, the school cannot consent on behalf of the parent.

• Does the operator enable parents to review and have deleted the personal information collected from their children? If not, the school cannot consent on behalf of the parent.

• What measures does the operator take to protect the security, confidentiality, and integrity of the personal information that it collects?

• What are the operator’s data retention and deletion policies for children’s personal information?

6. I’m an educator and I want students in my school to share information for class projects using a publicly available online social network that permits children to participate with prior parental consent. Can I register students in lieu of having their parents register them?

This question assumes that your school hasn’t entered into an arrangement with the social network for the provision of school-related activities, but rather that you intend to use a service that is more broadly available to children and possibly other users. The Commission has recognized the school’s ability to act in the stead of parents in order to provide in-school Internet access. However, where the activities and the associated collection or disclosure of children’s personal information will extend beyond school-related activities, the school should, as a best practice, effectively notify parents of its intent to allow children to participate in such online activities before giving consent on parents’ behalf.

The Wild West of Data Breach Enforcement by the Feds

Posted in FTC, HIPAA

Imagine you have completed your HIPAA risk assessment and implemented a robust privacy and security plan designed to meet each criterion of the Omnibus Rule. You think that, should you suffer a data breach involving protected health information as defined under HIPAA (PHI), you can show the Secretary of the Department of Health and Human Services (HHS) and its Office for Civil Rights (OCR), as well as media reporters and others, that you exercised due diligence and should not be penalized. Your expenditure of time and money will help ensure your compliance with federal law.

Unfortunately, however, HHS is not the only sheriff in town when it comes to data breach enforcement. In a formal administrative action, as well as two separate federal court actions, the Federal Trade Commission (FTC) has been battling LabMD for the past few years in a case that gets more interesting as the filings and rulings mount (In the Matter of LabMD, Inc., Docket No. 9357 before the FTC). LabMD’s CEO Michael Daugherty recently published a book on the dispute with a title analogizing the FTC to the devil, with the subtitle, “The Shocking Exposé of the U.S. Government’s Surveillance and Overreach into Cybersecurity, Medicine, and Small Business.” Daugherty issued a press release in late January attributing the shutdown of operations of LabMD primarily to the FTC’s actions.

Among many other reasons, this case is interesting because of the dual jurisdiction of the FTC and HHS/OCR over breaches that involve individual health information.

On one hand, the HIPAA regulations detail a specific, fact-oriented process for determining whether an impermissible disclosure of PHI constitutes a breach under the law. The pre-Omnibus Rule breach analysis involved consideration of whether the impermissible disclosure posed a “significant risk of financial, reputational, or other harm” to the individual whose PHI was disclosed. The post-Omnibus Rule breach analysis presumes that an impermissible disclosure is a breach, unless a risk assessment that includes consideration of at least four specific factors demonstrates there was a “low probability” that the individual’s PHI was compromised.
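To make the contrast concrete, the post-Omnibus Rule analysis can be thought of as a presumption of breach that is overcome only by a documented risk assessment showing a low probability of compromise. The sketch below merely illustrates that structure: the factor names track the four factors listed in the breach definition at 45 C.F.R. § 164.402, while the yes/no framing and the “all factors favorable” test are simplifications invented for this post, not the legal standard.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RiskAssessment:
    """Simplified yes/no framing of the four factors at 45 C.F.R. 164.402."""
    phi_limited_in_nature_and_extent: bool     # nature and extent of the PHI involved
    recipient_obligated_to_protect: bool       # the unauthorized person who used or received the PHI
    phi_not_actually_acquired_or_viewed: bool  # whether the PHI was actually acquired or viewed
    risk_fully_mitigated: bool                 # extent to which the risk has been mitigated


def is_reportable_breach(assessment: Optional[RiskAssessment]) -> bool:
    """Post-Omnibus Rule structure: an impermissible disclosure is presumed to be
    a breach unless a risk assessment demonstrates a low probability that the
    PHI was compromised.  The all-factors-favorable test here is an illustrative
    stand-in for that judgment."""
    if assessment is None:
        # No documented risk assessment: the presumption of breach stands.
        return True
    low_probability_of_compromise = all([
        assessment.phi_limited_in_nature_and_extent,
        assessment.recipient_obligated_to_protect,
        assessment.phi_not_actually_acquired_or_viewed,
        assessment.risk_fully_mitigated,
    ])
    return not low_probability_of_compromise
```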

In stark contrast to HIPAA, the FTC files enforcement actions based upon its decision that an entity’s data security practices are “unfair”, but it has not promulgated regulations or issued specific guidance as to how or when a determination of “unfairness” is made. Instead, the FTC routinely alleges that entities’ data security practices are “unfair” because they are not “reasonable” – two vague words that leave entities guessing about how to become FTC compliant.

In 2013, in an administrative action, LabMD challenged the FTC’s authority to institute these types of enforcement actions. LabMD argued, in part, that the FTC does not have the authority to bring actions under the “unfairness” prong of Section 5 of the FTC Act. LabMD further argued that there should only be one sheriff in town – not both HHS and the FTC. Not surprisingly, in January 2014, the FTC denied the motion to dismiss, finding that HIPAA requirements are “largely consistent with the data security duties” of the FTC under the FTC Act. The opinion speaks of “data security duties” and “requirements” of the FTC Act, but these “duties” and “requirements” are not spelled out (much less even mentioned) in the FTC Act. As a result, how can anyone conclude that the standards are consistent? In practice, entities that suffer a data security incident must work through the detailed analysis required under HIPAA while also contending with the absence of any clear guidance under the FTC Act.

In a March 10, 2014 ruling, the administrative law judge held that he would permit LabMD to depose an FTC designee regarding consumers harmed by LabMD’s allegedly inadequate security practices. However, the judge also ruled that LabMD could not “inquire into why, or how, the factual bases of the allegations … justify the conclusion that [LabMD] violated the FTC Act.” So while the LabMD case may eventually provide some guidance as to the factual circumstances involved in an FTC determination that data security practices are “unfair” and have caused, or are likely to cause, consumer harm, the legal reasoning behind the FTC’s determinations is likely to remain a mystery.

In addition to the challenges mounted by LabMD, Wyndham Worldwide Corp. has also spent the past year contesting the FTC’s authority to pursue enforcement actions based upon companies’ alleged “unfair” or “unreasonable” data security practices. On Monday, April 7, 2014, the United States District Court for the District of New Jersey sided with the FTC and denied Wyndham’s motion to dismiss the FTC’s complaint. The Court found that Section 5 of the FTC Act permits the FTC to regulate data security, and that the FTC is not required to issue formal rules about what companies must do to implement “reasonable” data security practices.

These recent victories may cause the “other sheriff” – the FTC – to ramp up its efforts to regulate data security practices. Unfortunately, because it does not appear that the FTC will issue any guidance in the near future about what companies can do to ensure that their data security practices are reasonable, these companies must monitor closely the FTC’s actions, adjudications or other signals in an attempt to predict what the FTC views as data security best practices.