Innovative health care-related technology and developing telemedicine products have the potential to dramatically change the way in which health care is accessed.  The Federation of State Medical Boards (FSMB) grappled with some of the complexities that arise as information is communicated electronically in connection with the provision of medical care and issued a Model Policy in April of 2014 to guide state medical boards in deciding how to regulate the practice of “telemedicine”, a definition likely to become outdated as quickly as the next technology or product is developed.

Interestingly, the development and use of medical devices and communication technology seem to outpace agency definitions and privacy laws as quickly as hackers outpace security controls.  So how can we encourage innovation and adopt new models without throwing privacy out with the bathwater of the traditional, in-person patient-physician relationship?  A first step is to see and understand the gaps in privacy protection and figure out how they can be narrowed.

HIPAA does not protect all information, even when the information is clearly health information and a specific individual can be identified in connection with it.  A guidance document issued jointly by the U.S. Department of Health and Human Services (HHS) and the Food and Drug Administration (FDA) on October 2, 2014 (FDA Guidance Document) contains the agencies’ “non-binding recommendations” to assist the medical device industry with cybersecurity.  The FDA Guidance Document defines “cybersecurity” as “the process of preventing unauthorized access, modification, misuse or denial of use, or the unauthorized use of information that is stored, accessed, or transferred from a medical device to an external recipient.”  If my medical device creates, receives, maintains, or transmits information related to my health status or condition, I likely expect that information to be secure and private – but unless and until my doctor (or other covered entity or business associate) interfaces with it, it’s not protected health information (PHI) under HIPAA.

The FSMB’s Model Policy appropriately focused on the establishment of the physician-patient relationship.  In general, HIPAA protects information created, received, maintained or transmitted in connection with that relationship.  A medical device manufacturer, electronic health application developer, or personal health record vendor that is not a “health care provider” or other covered entity as defined under HIPAA, and is not providing services on behalf of a covered entity as a business associate, can collect or use health-related information from an individual without abiding by HIPAA’s privacy and security obligations.  The device, health app, or health record may still be of great value to the individual, but the individual should recognize that the information it creates, receives, maintains or transmits is not HIPAA-protected until it comes from or ends up with a HIPAA covered entity or business associate.

The FDA Guidance Document delineates a number of cybersecurity controls that manufacturers of FDA-regulated medical devices should develop, particularly if the device has the capability of connecting (wirelessly or hard-wired) to another device, the internet, or portable electronic media.  Perhaps these controls will become standard features of medical devices, but they might also be useful to developers of other types of health-related products marketed to or purchased by consumers.  In the meantime, though, it’s important to remember that your device is not your doctor, and HIPAA may not be protecting the health data created, received, maintained or transmitted by your medical device.

Imagine you have completed your HIPAA risk assessment and implemented a robust privacy and security plan designed to meet each criterion of the Omnibus Rule. You think that, should you suffer a data breach involving protected health information as defined under HIPAA (PHI), you can show the Secretary of the Department of Health and Human Services (HHS) and its Office for Civil Rights (OCR), as well as media reporters and others, that you exercised due diligence and should not be penalized. Your expenditure of time and money will help ensure your compliance with federal law.

Unfortunately, however, HHS is not the only sheriff in town when it comes to data breach enforcement. In a formal administrative action, as well as two separate federal court actions, the Federal Trade Commission (FTC) has been battling LabMD for the past few years in a case that gets more interesting as the filings and rulings mount (In the Matter of LabMD, Inc., Docket No. 9357 before the FTC). LabMD’s CEO Michael Daugherty recently published a book on the dispute with a title analogizing the FTC to the devil, with the subtitle, “The Shocking Exposé of the U.S. Government’s Surveillance and Overreach into Cybersecurity, Medicine, and Small Business.” Daugherty issued a press release in late January attributing the shutdown of LabMD’s operations primarily to the FTC’s actions.

Among many other reasons, this case is interesting because of the dual jurisdiction of the FTC and HHS/OCR over breaches that involve individual health information.

On one hand, the HIPAA regulations detail a specific, fact-oriented process for determining whether an impermissible disclosure of PHI constitutes a breach under the law. The pre-Omnibus Rule breach analysis involved consideration of whether the impermissible disclosure posed a “significant risk of financial, reputational, or other harm” to the individual whose PHI was disclosed. The post-Omnibus Rule breach analysis presumes that an impermissible disclosure is a breach, unless a risk assessment that includes consideration of at least four specific factors demonstrates there was a “low probability” that the individual’s PHI was compromised.
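The post-Omnibus analysis described above can be illustrated as a simple decision sketch. The four factors below are paraphrased from the Omnibus Rule's breach definition (45 C.F.R. § 164.402); the boolean framing and the function itself are my own simplification for illustration only, since an actual risk assessment is a fact-specific judgment, not a computation.

```python
# Illustrative sketch of the post-Omnibus Rule breach presumption.
# The four factors are paraphrased from 45 C.F.R. 164.402; the
# all-or-nothing boolean logic is a simplification, not legal advice.

def is_reportable_breach(low_risk_factors):
    """An impermissible disclosure is PRESUMED to be a breach unless a
    risk assessment demonstrates a low probability that the PHI was
    compromised. low_risk_factors maps each factor to True when that
    factor supports a low-probability finding."""
    required = {
        "nature_and_extent_of_phi",         # identifiers, re-identification risk
        "unauthorized_recipient",           # who used or received the PHI
        "phi_actually_acquired_or_viewed",  # was the PHI actually seen?
        "risk_mitigated",                   # extent the risk has been mitigated
    }
    assert set(low_risk_factors) == required
    # The presumption of breach stands unless every factor supports
    # a low probability that the PHI was compromised.
    return not all(low_risk_factors.values())

# Example: tapes stolen, specialized hardware needed to read them, but
# mitigation incomplete -- the presumption is not overcome.
print(is_reportable_breach({
    "nature_and_extent_of_phi": False,
    "unauthorized_recipient": True,
    "phi_actually_acquired_or_viewed": True,
    "risk_mitigated": False,
}))  # -> True (reportable)
```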

In stark contrast to HIPAA, the FTC files enforcement actions based upon its decision that an entity’s data security practices are “unfair”, but it has not promulgated regulations or issued specific guidance as to how or when a determination of “unfairness” is made. Instead, the FTC routinely alleges that entities’ data security practices are “unfair” because they are not “reasonable” – two vague words that leave entities guessing about how to become FTC compliant.

In 2013, in an administrative action, LabMD challenged the FTC’s authority to institute these types of enforcement actions. LabMD argued, in part, that the FTC does not have the authority to bring actions under the “unfairness” prong of Section 5 of the FTC Act. LabMD further argued that there should only be one sheriff in town – not both HHS and the FTC. Not surprisingly, in January 2014, the FTC denied the motion to dismiss, finding that HIPAA requirements are “largely consistent with the data security duties” of the FTC under the FTC Act. The opinion speaks of the “data security duties” and “requirements” of the FTC Act, but these “duties” and “requirements” are neither spelled out nor even mentioned in the FTC Act. How, then, can anyone determine that the standards are consistent? Entities that suffer a data security incident must navigate both the detailed analysis required under HIPAA and the absence of any clear guidance under the FTC Act.

In a March 10, 2014 ruling, the administrative law judge ruled that he would permit LabMD to depose an FTC designee regarding consumers harmed by LabMD’s allegedly inadequate security practices. However, the judge also ruled that LabMD could not “inquire into why, or how, the factual bases of the allegations … justify the conclusion that [LabMD] violated the FTC Act.” So while the LabMD case may eventually provide some guidance as to the factual circumstances involved in an FTC determination that data security practices are “unfair” and have caused, or are likely to cause, consumer harm, the legal reasoning behind the FTC’s determinations is likely to remain a mystery.

In addition to the challenges mounted by LabMD, Wyndham Worldwide Corp. has also spent the past year contesting the FTC’s authority to pursue enforcement actions based upon companies’ alleged “unfair” or “unreasonable” data security practices. On Monday, April 7, 2014, the United States District Court for the District of New Jersey sided with the FTC and denied Wyndham’s motion to dismiss the FTC’s complaint. The Court found that Section 5 of the FTC Act permits the FTC to regulate data security, and that the FTC is not required to issue formal rules about what companies must do to implement “reasonable” data security practices.

These recent victories may cause the “other sheriff” – the FTC – to ramp up its efforts to regulate data security practices. Unfortunately, because it does not appear that the FTC will issue any guidance in the near future about what companies can do to ensure that their data security practices are reasonable, these companies must monitor closely the FTC’s actions, adjudications or other signals in an attempt to predict what the FTC views as data security best practices.

[This blog posting was previously posted on the HIPAA, HITECH and Health Information blog.]

The recent release of the HIPAA/HITECH “mega rule” or “omnibus rule” has given bloggers and lawyers like us plenty of topics for analysis and debate, as well as some tools with which to prod covered entities, business associates and subcontractors to put HIPAA/HITECH-compliant Business Associate Agreements (“BAAs”) in place. It’s also a reminder to read BAAs that are already in place, and to make sure the provisions accurately describe how and why protected health information (“PHI”) is to be created, received, maintained, and/or transmitted. 

If you are an entity that participates in the Medicare Shared Savings Program as a Medicare Accountable Care Organization (“ACO”), your ability to access patient data from Medicare depends on your having signed the CMS Data Use Agreement (the “Data Use Agreement”). Just as covered entities, business associates, and subcontractors should read and fully understand their BAAs, Medicare ACOs should make sure they are aware of several Data Use Agreement provisions that are more stringent than provisions typically included in a BAA and that may come as a surprise. Here are ten provisions from the Data Use Agreement worth reviewing, whether you are a Medicare ACO or any other business associate or subcontractor, as these may very well resurface in some form in the “Super BAA” of the future:

 

1. CMS (the covered entity) retains ownership rights in the patient data furnished to the ACO.

2. The ACO may only use the patient data for the purposes enumerated in the Data Use Agreement.

3. The ACO may not grant access to the patient data except as authorized by CMS.

4. The ACO agrees that, within the ACO and its agents, access to patient data will be limited to the minimum amount of data and minimum number of individuals necessary to achieve the stated purposes.

5. The ACO will only retain the patient data (and any derivative data) for one year or until 30 days after the purpose specified in the Data Use Agreement is completed, whichever is earlier, and the ACO must destroy the data and send written certification of the destruction to CMS within 30 days.

6. The ACO must establish administrative, technical, and physical safeguards that meet or exceed standards established by the Office of Management and Budget and the National Institute of Standards and Technology.

7. The ACO acknowledges that it is prohibited from using unsecured telecommunications, including the Internet, to transmit individually identifiable, bidder identifiable or deducible information derived from the patient files.

8. The ACO agrees not to disclose any information derived from the patient data, even if the information does not include direct identifiers, if the information can, by itself or in combination with other data, be used to deduce an individual’s identity.

9. The ACO agrees to abide by CMS’s cell size suppression policy (which stipulates that no cell of 10 or less may be displayed).

And last, but certainly not least:

10. The ACO agrees to report to CMS, by telephone or email within one hour, any breach of personally identifiable information from the CMS data file(s), any loss of these data, or any disclosure to an unauthorized person.

While the undertakings of a Medicare ACO and the terminology in the Data Use Agreement for protection of patient data may differ from those of covered entities, business associates and subcontractors and their BAAs under the HIPAA/HITECH regulations, the two share many striking similarities in substance and purpose.
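The cell size suppression policy in item 9 lends itself to a short illustration. This is a minimal sketch: the threshold of 10 comes from the Data Use Agreement language described above, but the function name and the shape of the data are hypothetical.

```python
# Minimal sketch of cell size suppression: any cell with a count of 10
# or less is masked before display. The threshold reflects the Data Use
# Agreement provision discussed above; everything else is illustrative.

SUPPRESSION_THRESHOLD = 10

def suppress_small_cells(table):
    """Replace any count of 10 or less with None so it cannot be displayed."""
    return {
        cell: (count if count > SUPPRESSION_THRESHOLD else None)
        for cell, count in table.items()
    }

counts = {"county_a_discharges": 842, "county_b_discharges": 7}
print(suppress_small_cells(counts))
# -> {'county_a_discharges': 842, 'county_b_discharges': None}
```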

 

The following was recently posted in substantially the same form on the Fox Rothschild LLP HIPAA, HITECH and Health Information Technology blog.

Elizabeth Litten and Michael Kline write:

 

We have posted several blogs, including those here and here, tracking the reported 2011 theft of computer tapes from the car of an employee of Science Applications International Corporation (“SAIC”) that contained the protected health information (“PHI”) affecting approximately 5 million military clinic and hospital patients (the “SAIC Breach”).  SAIC’s recent Motion to Dismiss (the “Motion”) the Consolidated Amended Complaint filed in federal court in Florida as a putative class action (the “SAIC Class Action”) highlights the gaps between an incident (like a theft) involving PHI, a determination that a breach of PHI has occurred, and the realization of harm resulting from the breach. SAIC’s Motion emphasizes this gap between the incident and the realization of harm, making it appear like a chasm so wide it practically swallows the breach into oblivion. 

 

SAIC, a giant publicly-held government contractor that provides information technology (“IT”) management and, ironically, cybersecurity services, was engaged to provide IT management services to TRICARE Management Activity, a component of TRICARE, the military health plan (“TRICARE”) for active duty service members working for the U.S. Department of Defense (“DoD”).  SAIC employees were responsible for transporting backup tapes containing TRICARE members’ PHI from one location to another.

 

According to the original statement published in late September of 2011 (the “TRICARE/SAIC Statement”), the PHI “may include Social Security numbers, addresses and phone numbers, and some personal health data such as clinical notes, laboratory tests and prescriptions.” However, the TRICARE/SAIC Statement said that there was no financial data, such as credit card or bank account information, on the backup tapes. Note 17 to the audited financial statements (“Note 17”) contained in the SAIC Annual Report on Form 10-K for the fiscal year ended January 31, 2012, dated March 27, 2012 (the “2012 Form 10-K”), filed with the Securities and Exchange Commission (the “SEC”), includes the following:

 

There is no evidence that any of the data on the backup tapes has actually been accessed or viewed by an unauthorized person. In order for an unauthorized person to access or view the data on the backup tapes, it would require knowledge of and access to specific hardware and software and knowledge of the system and data structure.  The Company [SAIC] has notified potentially impacted persons by letter and is offering one year of credit monitoring services to those who request these services and in certain circumstances, one year of identity restoration services.

 

While the TRICARE/SAIC Statement contained similar language to that quoted above from Note 17, the earlier TRICARE/SAIC Statement also said, “The risk of harm to patients is judged to be low despite the data elements . . . .” Because Note 17 does not contain such “risk of harm” language, it would appear that (i) there may have been a change in the assessment of risk by SAIC six months after the SAIC Breach or (ii) SAIC did not want to state such a judgment in an SEC filing.

 

Note 17 also discloses that SAIC has reflected a $10 million loss provision in its financial statements relating to the  SAIC Class Action and various other putative class actions respecting the SAIC Breach filed between October 2011 and March 2012 (for a total of seven such actions filed in four different federal District Courts).  In Note 17 SAIC states that the $10 million loss provision represents the “low end” of SAIC’s estimated loss and is the amount of SAIC’s deductible under insurance covering judgments or settlements and defense costs of litigation respecting the SAIC Breach.  SAIC expresses the belief in Note 17 that any loss experienced in excess of the $10 million loss provision would not exceed the insurance coverage.  

 

Such insurance coverage would, however, likely not be available for any civil monetary penalties or counsel fees that may result from the current investigation of the SAIC Breach being conducted by the Office for Civil Rights of the Department of Health and Human Services (“HHS”) as described in Note 17.

  

Initially, SAIC did not deem it necessary to offer credit monitoring to the almost 5 million reportedly affected individuals. However, SAIC urged anyone suspecting they had been affected to contact the Federal Trade Commission’s identity theft website. Approximately six weeks later, the DoD issued a press release stating that TRICARE had “directed” SAIC to take a “proactive” response by covering a year of free credit monitoring and restoration services for any patients expressing “concern about their credit as a result of the data breach.”  The cost of such a proactive response can easily run into millions of dollars in the SAIC Breach. The extent, if any, to which insurance coverage would be available to cover the cost of the proactive response mandated by the DoD is unclear, even if the credit monitoring, restoration services and other remedial activities of SAIC were to become part of a judgment or settlement in the putative class actions.

 

We have blogged about what constitutes an impermissible acquisition, access, use or disclosure of unsecured PHI that poses a “significant risk” of “financial, reputational, or other harm to the individual” amounting to a reportable HIPAA breach, and when that “significant risk” develops into harm that may create claims for damages by affected individuals. Our partner William Maruca, Esq., artfully borrows a phrase from former Defense Secretary Donald Rumsfeld in discussing a recent disappearance of unencrypted backup tapes reported by Women and Infants Hospital in Rhode Island. If one knows PHI has disappeared, but doubts it can be accessed or used (due to the specialized equipment and expertise required to access or use the PHI), there is a “known unknown” that complicates the analysis as to whether a breach has occurred. 

 

As we await publication of the “mega” HIPAA/HITECH regulations, continued tracking of the SAIC Breach and ensuing class action litigation (as well as SAIC’s SEC filings and other government filings and reports on the HHS list of large PHI security breaches) provides some insights as to how covered entities and business associates respond to incidents involving the loss or theft of, or possible access to, PHI.   If a covered entity or business associate concludes that the incident poses a “significant risk” of harm, but no harm actually materializes, perhaps (as the SAIC Motion repeatedly asserts) claims for damages are inappropriate. When the covered entity or business associate takes a “proactive” approach in responding to what it has determined to be a “significant risk” (such as by offering credit monitoring and restoration services), perhaps the risk becomes less significant. But once the incident (a/k/a, the ubiquitous laptop or computer tape theft from an employee’s car) has been deemed a breach, the chasm between incident and harm seems to open wide enough to encompass a mind-boggling number of privacy and security violation claims and issues.

The Centers for Medicare & Medicaid Services (CMS) recently published proposed rules setting forth the “Stage 2” criteria that eligible providers (EPs), eligible hospitals (EHs), and critical access hospitals (CAHs) (referred to herein collectively as “providers”) would be required to meet in order to qualify for Medicare and/or Medicaid incentive payments for the use of electronic health records (EHRs) (“Stage 2 Proposal”).  The Stage 2 Proposal is a small-font, acronym-laden, tediously detailed 131-page document that modifies and expands upon the criteria included in the “Stage 1” final rule published on July 28, 2010, and is likely to be of interest primarily to providers concerned with receiving or continuing to receive added payments from CMS for adopting and “meaningfully using” EHR.

 

The Stage 2 Proposal is not, at first glance, particularly relevant reading for those of us generally interested in issues involving the privacy and security of personal information — or even for those of us more specifically interested in the privacy and security of protected health information (PHI).  Still, two new provisions caught my attention because they measure the meaningful use required for provider incentive payments based not simply on the providers’ use of EHR, but on their patients’ use of it.  

One provision of the Stage 2 Proposal would require a provider to give at least 50% of its patients the ability to timely “view online, download, and transmit” their health information (“timely” meaning within 4 business days after the provider receives it, and subject to the provider’s discretion to withhold certain information).  Moreover, it would require that more than 10% of those patients (or their authorized representatives) actually view, download or transmit the information to a third party.  There’s an exception for providers that conduct a majority (more than 50%) of their patient encounters in a county that doesn’t have 50% or more of “its housing units with 4Mbps broadband availability as per the most recent information available from the FCC” (whew!) as of the first day of the applicable EHR reporting period.
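The two patient-engagement measures reduce to a pair of threshold tests. Here is a rough sketch: the 50% and 10% thresholds follow the description above (with the 10% measure applied, per the post's wording, to the patients who were given access), while the function name and the counts are hypothetical.

```python
# Hedged sketch of the two Stage 2 patient-access measures described
# above: at least 50% of patients must be given timely online access,
# and more than 10% of those patients must actually view, download, or
# transmit their information. All counts below are hypothetical.

def meets_stage2_access_measures(total_patients,
                                 given_timely_access,
                                 actually_viewed):
    access_rate = given_timely_access / total_patients
    # Per the description above, the 10% measure applies to the
    # patients who were given the ability to access their information.
    use_rate = actually_viewed / given_timely_access
    return access_rate >= 0.50 and use_rate > 0.10

# 1,000 patients: 620 offered timely access, 130 of them used it.
print(meets_stage2_access_measures(1000, 620, 130))  # -> True
```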

For a continuation of this post, please refer to our sister blog at http://hipaahealthlaw.foxrothschild.com/

 

By Elizabeth Litten

The widely publicized pre-Christmas breach of confidential data held by Stratfor Global Intelligence Service (“Stratfor”), a company specializing in data security, reminded me that very little (if any) electronic information is truly secure. If Stratfor’s data can be hacked into, and the health information of nearly 5 million military health plan (TRICARE) members maintained by multi-billion dollar Department of Defense contractor Science Applications International Corporation (SAIC) can be accessed (the subject of a five-part series of blog postings, Parts 1, 2, 3, 4 and 5, on Fox Rothschild’s HIPAA, HITECH and HIT Blog), can we trust that any electronically transmitted or stored information is really safe?

I had the pleasure of having lunch yesterday with my friend Al, an IT guru who has worked in hospitals for years. Al understands and appreciates the need for privacy and security of information, and has the technological expertise to know where and how data can be hacked into or leaked out. Perhaps not surprisingly, Al does not do his banking on-line, and tries to avoid making on-line credit card purchases.

Al and I discussed the proliferation of the use of iPhones and other mobile technology by physicians and staff in hospitals and other settings, a topic recently discussed in a newsletter published by the American Medical Association. Quick access to a patient’s electronic health record (EHR) is convenient and may even be life-saving in some circumstances, but use of these mobile devices creates additional portals for access to personal information that should be protected and secured. Encryption technology and, perhaps most significantly, use of this technology, barely keeps pace with the exponential rate at which we are creating and/or transmitting data electronically.  

On the other hand, trying to reverse the exponential growth of electronic communications and transactions would be futile and probably counter-productive. The horse is out of the gate, and expecting it to stop mid-stride and retreat with a false-start call is irrational. The horse will race ahead just as surely as my daughter will text and check her Facebook page, my son will recharge his iPad, and I will turn around and head back to my office if I forget my iPhone. We want and need technology, but seem to forget or fail to fully understand the vast, unprotected and ever-expanding universe into which we send information when we use this technology.

If we expect breaches or, at least, question our assumptions that personal information will be protected, perhaps we will get better at discerning how and when we disclose our personal information. An in-person conversation or transaction (for example, when Al goes to his bank in person or when a physician speaks directly to another physician about a patient’s care) is less likely to be accessed and used inappropriately than an electronic one. We can better assess the risks and benefits of communicating information electronically when we appreciate the security frailties inherent in electronic communication and storage. 

Perhaps Congress should take the lead in enacting laws that will help protect against data breaches that could compromise “critical infrastructure systems” (as proposed in the “PRECISE Act” introduced by Rep. Daniel E. Lungren (R-CA)), but more comprehensive, potentially expensive, and/or use-impeding cybersecurity laws might have the effect of tripping the racehorse mid-lap rather than controlling its pace or keeping it safely on course.