Roger Severino, director of the Department of Health and Human Services’ Office of Civil Rights, told HIMSS18 conference attendees this week that he plans no slowdown in HIPAA enforcement.

“I come from the Department of Justice Office for Civil Rights; I bring that mindset to OCR. We’re still looking for big, juicy egregious cases” for enforcement, Severino said, according to this report in Data Breach Today. That doesn’t mean smaller companies should assume they are off the radar, he added.

He said 2017 was OCR’s second-biggest year for HIPAA settlements, with $19.4 million collected – trailing only 2016, when OCR collected nearly $25 million.

On our HIPAA & Health Information Technology Blog, associate Ankita Patel discusses how Millennials’ embrace of newer forms of social media such as Snapchat and Instagram poses HIPAA challenges for health care organizations.

“With just a few taps and swipes, an employee can post a seemingly innocuous disclosure of PHI. Interns and residents of the younger generation may innocently upload a short-term post (be it a picture for two-seconds or an eight-second long video) of a busy hospital room or even an innocent ‘selfie’ without realizing that there is visible and identifiable PHI in the corner,” Ankita writes.

It’s an intriguing read exploring the intersection of health care and privacy law, social sharing and the rapid pace of technological change. Read the full post here.

Physicians have their hands full on the best of days. It’s not difficult to imagine why using a voice assistant such as Amazon’s Alexa or Apple’s Siri might be attractive.

In fact, a recent survey showed nearly one in four physicians uses the assistants for work-related purposes, such as researching prescription drug dosing. It’s likely many are unaware of the information security dangers they pose.

In an interview with SCG Health Blog, Fox Rothschild attorneys Elizabeth Litten and Michael Kline explain that the labor-saving devices pose a bevy of data privacy and security risks, and offer doctors six helpful tips for protecting their practices.

In its ongoing guidance* initiatives, the Office for Civil Rights (OCR) has continued to interpret key obligations within the HIPAA Privacy and Security Rules (45 C.F.R. Parts 160, 162, and 164) (HIPAA Rules). Most recently, OCR has added FAQ details about cloud service providers (CSPs) as business associates (Cloud Guidance) under the HIPAA Rules. It should be noted that all CSPs, despite varying levels of functionality and service, are viewed equally in the Cloud Guidance.

OCR first addressed whether a CSP is a business associate if it stores encrypted Protected Health Information (PHI) without access to the encryption key.

CSPs Are Business Associates Despite Encryption Practices

OCR made clear that when a CSP handles electronic PHI (ePHI) – that is, creates, receives, maintains or transmits it – the CSP becomes a “Business Associate” under the HIPAA Rules, even if the data is encrypted and the CSP holds no encryption key. Even though such a CSP cannot view the ePHI, the fact that it handles and/or maintains that data makes it a Business Associate. OCR reasons that encryption limits viewing of ePHI but can neither protect it from corruption by malicious software nor assure its availability at all times – two requirements that must be fulfilled under the HIPAA Security Rule.

However, OCR added that a CSP handling encrypted ePHI without an encryption key can still satisfy certain Security Rule obligations – for both the Covered Entity and the CSP – through the safeguard measures of the Covered Entity. OCR explained:

[I]f a customer implements its own reasonable and appropriate user authentication controls and agrees that the CSP providing no-view services need not implement additional procedures to authenticate (verify the identity of) a person or entity seeking access to ePHI, these Security Rule access control responsibilities would be met for both parties by the action of the customer.

Notably, a CSP will not be held responsible for compliance shortfalls that arise from its Covered Entity/Business Associate customers. Relevant compliance responsibility agreements that protect the CSP will also remain valid. OCR added further interpretations of the Privacy Rule requirements for CSPs performing “no-view services.” A CSP may not use or disclose PHI unless the Business Associate Agreement (BAA) and the Privacy Rule permit those actions, and a CSP may not restrict its Covered Entity or Business Associate customer from accessing its own ePHI.
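The “no-view” arrangement OCR describes – the customer encrypts client-side and keeps the key, while the CSP stores only ciphertext – can be sketched as follows. This is a toy illustration only: the XOR “cipher” is not real encryption (a production system would use a vetted algorithm such as AES), and the class and function names are invented for this example.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy stream 'cipher' for illustration only -- NOT real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class NoViewCloudStore:
    """Hypothetical CSP offering 'no-view' storage: it holds ciphertext only."""
    def __init__(self):
        self._blobs = {}

    def put(self, record_id: str, ciphertext: bytes):
        # The CSP maintains the ePHI -- making it a Business Associate --
        # even though it never receives a key and cannot read the data.
        self._blobs[record_id] = ciphertext

    def get(self, record_id: str) -> bytes:
        return self._blobs[record_id]

# The covered entity encrypts client-side and retains the key.
key = secrets.token_bytes(32)
phi = b"Patient: J. Doe; Dx: hypertension"
store = NoViewCloudStore()
store.put("rec-001", xor_bytes(phi, key))

# Only the key holder can recover the plaintext.
assert xor_bytes(store.get("rec-001"), key) == phi
```

The point of the sketch is OCR’s reasoning: the store still *maintains* the ePHI (and must assure its integrity and availability) even though it can never view it.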

PHI Storage and Retention Does Not Make a CSP a ‘Mere Conduit’

OCR, in another FAQ, goes on to clarify that a CSP is not a “mere conduit,” a designation that would exempt it from the HIPAA Rules governing Business Associates.** The conduit exception applies only in very narrow cases: a CSP is a conduit if its services are limited to transmission and do not involve any data storage beyond the functions needed to properly execute those transmission services. By these standards, a CSP that provides both transmission and data retention services is a Business Associate.

Business Associate Status Extends to Downstream CSPs

CSPs have worried that, where no BAA has been formed, they may be unaware that their services are being provided to a Business Associate or a downstream subcontractor. OCR states that if a CSP provides services that make it a Business Associate, the CSP assumes Business Associate liabilities. However, per OCR, when a CSP lacks “actual or constructive knowledge that a covered entity or another business associate is using its services to create, receive, maintain, or transmit ePHI,” the CSP may address all HIPAA compliance shortcomings within 30 days of learning of this circumstance. Acting within that timeframe affords the CSP a liability waiver of sorts, and OCR may extend the period by an additional 30 days based on the specific issues of noncompliance. A CSP shown to have willfully neglected to investigate the potential for this circumstance will not be afforded similar corrective opportunities. A CSP that finds itself in noncompliance should document all of its efforts to comply with the HIPAA Rules, or remove or safeguard the ePHI in question.
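The corrective window described above lends itself to simple date arithmetic. A sketch, assuming the 30-day clock starts when the CSP learns it is handling ePHI (the function name is ours, and none of this substitutes for how OCR actually computes deadlines):

```python
from datetime import date, timedelta

def correction_deadline(discovery: date, extension_granted: bool = False) -> date:
    """Deadline to address compliance shortfalls after a CSP learns
    (actually or constructively) that it is handling ePHI:
    30 days, or 60 if OCR extends the period by an additional 30."""
    days = 60 if extension_granted else 30
    return discovery + timedelta(days=days)

d = date(2017, 3, 1)
print(correction_deadline(d))                          # 2017-03-31
print(correction_deadline(d, extension_granted=True))  # 2017-04-30
```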

ePHI Audits, Offshoring, and Maintenance and Cloud Security

Audit Requirements: OCR affirms that the HIPAA Rules obligate Covered Entities and Business Associates to document and possess security assurances from contractors and vendors in the form of BAAs. Auditing those entities is not required.

Offshoring: Concerns arise when CSPs store or retain data on servers outside the U.S., which affects both security and HIPAA enforcement. Notably, OCR points out that offshoring is neither prohibited nor addressed in the HIPAA Rules, but data storage beyond U.S. borders obligates the CSP and all contracting parties to account for the added vulnerabilities in the risk analyses and risk management plans required by the HIPAA Security Rule.

ePHI Maintenance: A CSP need not maintain ePHI beyond the services it agreed to provide. OCR notes that the HIPAA Privacy Rule requires the BAA to address whether a CSP must return or destroy ePHI at the expiration of the BAA. If return or destruction of the data is not feasible, the CSP is obligated to continue securing and protecting the data in a way that adequately addresses why it cannot be returned or destroyed.

Important Notes

  • CSPs typically utilize Service Level Agreements (SLAs) that contain language affecting HIPAA compliance. SLAs address service performance details such as system availability/reliability, data back-up and recovery, and data return/termination requirements. OCR advised that BAAs and SLAs should be consistent with each other and executable under the HIPAA Rules. Further, an SLA cannot restrict a Covered Entity from accessing its own PHI, and SLA conditions that violate the HIPAA Rules will create noncompliance issues for the Covered Entity.
  • A CSP must have security reporting policies for its Covered Entity and Business Associate customers that comply with the Security Rule and the Breach Notification Rule.
  • OCR will not make any kind of recommendation for technology and products that offer HIPAA-compliant cloud services.
  • Mobile devices may be used by Covered Entities and Business Associates to access CSP-stored ePHI in the same way as non-cloud means. The BAA addressing ePHI access via mobile device should require the CSP to maintain satisfactory physical and technical safeguards that preserve all necessary data protection and security.

*See OCR Guidance on Ransomware, July 11, 2016, and OCR Guidance for Long Term Care Facilities, May 2016.

**See OCR’s analysis of the “conduit” exemption at 78 Fed. Reg. 5565, 5571 (January 25, 2013).

A small single-site compounding pharmacy in Colorado has reached a $125,000 settlement with the Department of Health and Human Services’ (DHHS) Office of Civil Rights (OCR) to address deficiencies in its HIPAA compliance program.

Under the resolution agreement, the $125,000 cost of which does not include time, expenses and legal fees associated with the investigation, Cornell Prescription Pharmacy will also adopt a corrective action plan.

It’s a stark reminder that no matter what the size of the company, taking proactive measures to protect patient information and making sure employees are trained on those measures reduces costs and limits exposure to regulatory enforcement and increasing state litigation around data breaches.

What happened

Cornell’s troubles started in January 2012 after a Denver TV news reporter found the records of 1,610 people in an unlocked, open, publicly accessible container outside its offices. The intact records had not been shredded, and identities had not been stripped. Federal authorities launched an investigation of potential HIPAA violations.

That investigation led OCR to identify additional HIPAA violations, including a failure to implement HIPAA policies and procedures and to properly train its workforce.

Cornell’s settlement requires it to develop and implement written HIPAA policies and procedures, submit them to DHHS within 30 days, and implement them within 30 days of the agency’s approval. It must also get all of its employees to certify in writing that they have read, understand and will follow the new policies. The company must report back to DHHS on the status of implementation within 60 days of the policies’ approval, and annually for at least two years.

The settlement, combined with a similar $100,000 settlement reached recently with Phoenix Cardiac Surgery, demonstrates that size does not matter to OCR when it comes to HIPAA enforcement.

“Regardless of size, organizations cannot abandon protected health information or dispose of it in dumpsters or other containers that are accessible by the public,” said OCR Director Jocelyn Samuels.

Questions about HIPAA compliance or securing protected health information? Contact a member of Fox Rothschild’s Privacy & Data Security or Health Law practices.

Innovative health care-related technology and developing telemedicine products have the potential for dramatically changing the way in which health care is accessed.  The Federation of State Medical Boards (FSMB) grappled with some of the complexities that arise as information is communicated electronically in connection with the provision of medical care and issued a Model Policy in April of 2014 to guide state medical boards in deciding how to regulate the practice of “telemedicine”, a definition likely to become outdated as quickly as the next technology or product is developed.

Interestingly, the development and use of medical devices and communication technology seems to outpace agency definitions and privacy laws as quickly as hackers outpace security controls.  So how can we encourage innovation and adopt new models without throwing privacy out with the bathwater of the traditional, in-person patient-physician relationship?  A first step is to see and understand the gaps in privacy protection and figure out how they can be narrowed.

HIPAA does not protect all information, even when the information is clearly health information and a specific individual can be identified in connection with the health information.   A guidance document issued jointly by the U.S. Department of Health and Human Services (HHS) and the Food and Drug Administration (FDA) on October 2, 2014 (FDA Guidance Document) contains the agencies’ “non-binding recommendations” to assist the medical device industry with cybersecurity.  The FDA Guidance Document defines “cybersecurity” as “the process of preventing unauthorized access, modification, misuse or denial of use, or the unauthorized use of information that is stored, accessed, or transferred from a medical device to an external recipient.”  If my medical device creates, receives, maintains, or transmits information related to my health status or condition, it’s likely I expect that information to be secure and private – but unless and until my doctor (or other covered entity or business associate) interfaces with it, it’s not protected health information (PHI) under HIPAA.

The FSMB’s Model Policy appropriately focused on the establishment of the physician-patient relationship.  In general, HIPAA protects information created, received, maintained or transmitted in connection with that relationship.  A medical device manufacturer, electronic health application developer, or personal health record vendor that is not a “health care provider” or other covered entity as defined under HIPAA, and is not providing services on behalf of a covered entity as a business associate, can collect or use health-related information from an individual without abiding by HIPAA’s privacy and security obligations.  The device, health app, or health record may still be of great value to the individual, but the individual should recognize that the information it creates, receives, maintains or transmits is not HIPAA-protected until it comes from or ends up with a HIPAA covered entity or business associate.

The FDA Guidance Document delineates a number of cybersecurity controls that manufacturers of FDA-regulated medical devices should develop, particularly if the device has the capability of connecting (wirelessly or hard-wired) to another device, the internet, or portable electronic media.  Perhaps these controls will become standard features of medical devices, but they might also be useful to developers of other types of health-related products marketed to or purchased by consumers.  In the meantime, though, it’s important to remember that your device is not your doctor, and HIPAA may not be protecting the health data created, received, maintained or transmitted by your medical device.

Imagine you have completed your HIPAA risk assessment and implemented a robust privacy and security plan designed to meet each criterion of the Omnibus Rule. You think that, should you suffer a data breach involving protected health information as defined under HIPAA (PHI), you can show the Secretary of the Department of Health and Human Services (HHS) and its Office of Civil Rights (OCR), as well as media reporters and others, that you exercised due diligence and should not be penalized. Your expenditure of time and money will help ensure your compliance with federal law.

Unfortunately, however, HHS is not the only sheriff in town when it comes to data breach enforcement. In a formal administrative action, as well as two separate federal court actions, the Federal Trade Commission (FTC) has been battling LabMD for the past few years in a case that gets more interesting as the filings and rulings mount (In the Matter of LabMD, Inc., Docket No. 9357 before the FTC). LabMD’s CEO Michael Daugherty recently published a book on the dispute with a title analogizing the FTC to the devil, with the byline, “The Shocking Expose of the U.S. Government’s Surveillance and Overreach into Cybersecurity, Medicine, and Small Business.” Daugherty issued a press release in late January attributing the shutdown of operations of LabMD primarily to the FTC’s actions.

Among many other reasons, this case is interesting because of the dual jurisdiction of the FTC and HHS/OCR over breaches that involve individual health information.

On one hand, the HIPAA regulations detail a specific, fact-oriented process for determining whether an impermissible disclosure of PHI constitutes a breach under the law. The pre-Omnibus Rule breach analysis involved consideration of whether the impermissible disclosure posed a “significant risk of financial, reputational, or other harm” to the individual whose PHI was disclosed. The post-Omnibus Rule breach analysis presumes that an impermissible disclosure is a breach, unless a risk assessment that includes consideration of at least four specific factors demonstrates there was a “low probability” that the individual’s PHI was compromised.
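The four factors the post-Omnibus Rule assessment must consider (per 45 C.F.R. § 164.402) are the nature and extent of the PHI involved, the unauthorized person who used or received it, whether the PHI was actually acquired or viewed, and the extent of mitigation. As a documentation aid only – not a substitute for the regulatory analysis, and with field names of our own invention – the presumption-of-breach logic can be sketched as:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Documents the four factors a post-Omnibus Rule risk assessment
    must consider (45 C.F.R. 164.402). Each field records the
    evaluator's finding in plain language."""
    nature_and_extent_of_phi: str
    unauthorized_recipient: str
    phi_actually_acquired_or_viewed: str
    mitigation: str
    low_probability_of_compromise: bool = False  # breach is presumed

    def is_reportable_breach(self) -> bool:
        # An impermissible disclosure is presumed to be a breach unless
        # the assessment demonstrates a low probability of compromise.
        return not self.low_probability_of_compromise

ra = RiskAssessment(
    nature_and_extent_of_phi="Names and appointment dates only",
    unauthorized_recipient="Another HIPAA-covered provider",
    phi_actually_acquired_or_viewed="Fax returned unopened; recipient attests not viewed",
    mitigation="Fax destroyed; recipient signed confidentiality attestation",
    low_probability_of_compromise=True,
)
print(ra.is_reportable_breach())  # False
```

Note how the default captures the regulation’s structure: absent an affirmative “low probability” finding, the disclosure is treated as a breach.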

In stark contrast to HIPAA, the FTC files enforcement actions based upon its decision that an entity’s data security practices are “unfair,” but it has not promulgated regulations or issued specific guidance as to how or when a determination of “unfairness” is made. Instead, the FTC routinely alleges that entities’ data security practices are “unfair” because they are not “reasonable” – two vague words that leave entities guessing about how to become FTC-compliant.

In 2013, in an administrative action, LabMD challenged the FTC’s authority to institute these types of enforcement actions. LabMD argued, in part, that the FTC does not have the authority to bring actions under the “unfairness” prong of Section 5 of the FTC Act. LabMD further argued that there should be only one sheriff in town – not both HHS and the FTC. Not surprisingly, in January 2014, the FTC denied the motion to dismiss, finding that HIPAA requirements are “largely consistent with the data security duties” of the FTC under the FTC Act. The opinion speaks of “data security duties” and “requirements” of the FTC Act, but these “duties” and “requirements” are not spelled out (much less even mentioned) in the FTC Act. As a result, how can anyone conclude that the standards are consistent? Instead, entities that suffer a data security incident must comply with the detailed analysis under HIPAA while navigating the absence of any clear guidance under the FTC Act.

In a March 10, 2014 ruling, the administrative law judge ruled that he would permit LabMD to depose an FTC designee regarding consumers harmed by LabMD’s allegedly inadequate security practices. However, the judge also ruled that LabMD could not “inquire into why, or how, the factual bases of the allegations … justify the conclusion that [LabMD] violated the FTC Act.” So while the LabMD case may eventually provide some guidance as to the factual circumstances involved in an FTC determination that data security practices are “unfair” and have caused, or are likely to cause, consumer harm, the legal reasoning behind the FTC’s determinations is likely to remain a mystery.

In addition to the challenges mounted by LabMD, Wyndham Worldwide Corp. has also spent the past year contesting the FTC’s authority to pursue enforcement actions based upon companies’ alleged “unfair” or “unreasonable” data security practices. On Monday, April 7, 2014, the United States District Court for the District of New Jersey sided with the FTC and denied Wyndham’s motion to dismiss the FTC’s complaint. The Court found that Section 5 of the FTC Act permits the FTC to regulate data security, and that the FTC is not required to issue formal rules about what companies must do to implement “reasonable” data security practices.

These recent victories may cause the “other sheriff” – the FTC – to ramp up its efforts to regulate data security practices. Unfortunately, because it does not appear that the FTC will issue any guidance in the near future about what companies can do to ensure that their data security practices are reasonable, these companies must monitor closely the FTC’s actions, adjudications or other signals in an attempt to predict what the FTC views as data security best practices.

[This blog posting was previously posted on the HIPAA, HITECH and Health Information blog.]

The recent release of the HIPAA/HITECH “mega rule” or “omnibus rule” has given bloggers and lawyers like us plenty of topics for analysis and debate, as well as some tools with which to prod covered entities, business associates and subcontractors to put HIPAA/HITECH-compliant Business Associate Agreements (“BAAs”) in place. It’s also a reminder to read BAAs that are already in place, and to make sure the provisions accurately describe how and why protected health information (“PHI”) is to be created, received, maintained, and/or transmitted. 

If you are an entity that participates in the Medicare Shared Savings Program as a Medicare Accountable Care Organization (“ACO”), your ability to access patient data from Medicare depends on your having signed the CMS Data Use Agreement (the “Data Use Agreement”). Just as covered entities, business associates, and subcontractors should read and fully understand their BAAs, Medicare ACOs should make sure they are aware of several Data Use Agreement provisions that are more stringent than provisions typically included in a BAA and that may come as a surprise. Here are ten provisions from the Data Use Agreement worth reviewing, whether you are a Medicare ACO or any other business associate or subcontractor, as these may very well resurface in some form in the “Super BAA” of the future:

1. CMS (the covered entity) retains ownership rights in the patient data furnished to the ACO.

2. The ACO may only use the patient data for the purposes enumerated in the Data Use Agreement.

3. The ACO may not grant access to the patient data except as authorized by CMS.

4. The ACO agrees that, within the ACO and its agents, access to patient data will be limited to the minimum amount of data and minimum number of individuals necessary to achieve the stated purposes.

5. The ACO will only retain the patient data (and any derivative data) for one year or until 30 days after the purpose specified in the Data Use Agreement is completed, whichever is earlier, and the ACO must destroy the data and send written certification of the destruction to CMS within 30 days.

6. The ACO must establish administrative, technical, and physical safeguards that meet or exceed standards established by the Office of Management and Budget and the National Institute of Standards and Technology.

7. The ACO acknowledges that it is prohibited from using unsecured telecommunications, including the Internet, to transmit individually identifiable, bidder identifiable or deducible information derived from the patient files.

8. The ACO agrees not to disclose any information derived from the patient data, even if the information does not include direct identifiers, if the information can, by itself or in combination with other data, be used to deduce an individual’s identity.

9. The ACO agrees to abide by CMS’s cell size suppression policy (which stipulates that no cell of 10 or less may be displayed).

And last, but certainly not least:

10. The ACO agrees to report to CMS, by telephone or email within one hour, any breach of personally identifiable information from the CMS data file(s), any loss of these data, or any disclosure to an unauthorized person.

While the undertakings of a Medicare ACO and the terminology in the Data Use Agreement for protection of patient data may differ from those of covered entities, business associates and subcontractors and their BAAs under the HIPAA/HITECH regulations, they have many striking similarities and purposes. 
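CMS’s cell size suppression policy (provision 9 above) translates directly into code: any cell representing 10 or fewer individuals is masked before display. A minimal sketch – the threshold of 10 comes from the Data Use Agreement, while the function name and masking character are our own choices:

```python
def suppress_small_cells(table, threshold=10):
    """Mask any cell whose count is <= threshold before display,
    per the CMS cell size suppression policy (no cell of 10 or
    less may be displayed)."""
    return [
        [count if count > threshold else "*" for count in row]
        for row in table
    ]

counts = [[120, 4], [15, 10]]
print(suppress_small_cells(counts))  # [[120, '*'], [15, '*']]
```

Note that a cell of exactly 10 is suppressed: the policy says “10 or less.”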

 

The following was recently posted in substantially the same form on the Fox Rothschild LLP HIPAA, HITECH and Health Information Technology blog.

Elizabeth Litten and Michael Kline write:

 

We have posted several blogs, including those here and here, tracking the reported 2011 theft of computer tapes from the car of an employee of Science Applications International Corporation (“SAIC”) that contained the protected health information (“PHI”) affecting approximately 5 million military clinic and hospital patients (the “SAIC Breach”).  SAIC’s recent Motion to Dismiss (the “Motion”) the Consolidated Amended Complaint filed in federal court in Florida as a putative class action (the “SAIC Class Action”) highlights the gaps between an incident (like a theft) involving PHI, a determination that a breach of PHI has occurred, and the realization of harm resulting from the breach. SAIC’s Motion emphasizes this gap between the incident and the realization of harm, making it appear like a chasm so wide it practically swallows the breach into oblivion. 

 

SAIC, a giant publicly-held government contractor that provides information technology (“IT”) management and, ironically, cyber security services, was engaged to provide IT management services to TRICARE Management Activity, a component of TRICARE, the military health plan (“TRICARE”) for active duty service members working for the U.S. Department of Defense (“DoD”).  SAIC employees had been contracted to transport backup tapes containing TRICARE members’ PHI from one location to another.

 

According to the original statement published in late September of 2011 (the “TRICARE/SAIC Statement”), the PHI “may include Social Security numbers, addresses and phone numbers, and some personal health data such as clinical notes, laboratory tests and prescriptions.” However, the TRICARE/SAIC Statement said that there was no financial data, such as credit card or bank account information, on the backup tapes. Note 17 to the audited financial statements (“Note 17”) contained in the SAIC Annual Report on Form 10-K for the fiscal year ended January 31, 2012, dated March 27, 2012 (the “2012 Form 10-K”), filed with the Securities and Exchange Commission (the “SEC”), includes the following:

 

There is no evidence that any of the data on the backup tapes has actually been accessed or viewed by an unauthorized person. In order for an unauthorized person to access or view the data on the backup tapes, it would require knowledge of and access to specific hardware and software and knowledge of the system and data structure.  The Company [SAIC] has notified potentially impacted persons by letter and is offering one year of credit monitoring services to those who request these services and in certain circumstances, one year of identity restoration services.

 

While the TRICARE/SAIC Statement contained similar language to that quoted above from Note 17, the earlier TRICARE/SAIC Statement also said, “The risk of harm to patients is judged to be low despite the data elements . . . .” Because Note 17 does not contain such “risk of harm” language, it would appear that (i) there may have been a change in the assessment of risk by SAIC six months after the SAIC Breach or (ii) SAIC did not want to state such a judgment in an SEC filing.

 

Note 17 also discloses that SAIC has reflected a $10 million loss provision in its financial statements relating to the  SAIC Class Action and various other putative class actions respecting the SAIC Breach filed between October 2011 and March 2012 (for a total of seven such actions filed in four different federal District Courts).  In Note 17 SAIC states that the $10 million loss provision represents the “low end” of SAIC’s estimated loss and is the amount of SAIC’s deductible under insurance covering judgments or settlements and defense costs of litigation respecting the SAIC Breach.  SAIC expresses the belief in Note 17 that any loss experienced in excess of the $10 million loss provision would not exceed the insurance coverage.  

 

Such insurance coverage would, however, likely not be available for any civil monetary penalties or counsel fees that may result from the current investigation of the SAIC Breach being conducted by the Office of Civil Rights of the Department of Health and Human Services (“HHS”) as described in Note 17.

  

Initially, SAIC did not deem it necessary to offer credit monitoring to the almost 5 million reportedly affected individuals. However, SAIC urged anyone suspecting they had been affected to contact the Federal Trade Commission’s identity theft website. Approximately 6 weeks later, the DoD issued a press release stating that TRICARE had “directed” SAIC to take a “proactive” response by covering a year of free credit monitoring and restoration services for any patients expressing “concern about their credit as a result of the data breach.”   The cost of such a proactive response easily can run into millions of dollars in the SAIC Breach. It is unclear the extent, if any, to which insurance coverage would be available to cover the cost of the proactive response mandated by the DoD, even if the credit monitoring, restoration services and other remedial activities of SAIC were to become part of a judgment or settlement in the putative class actions.

 

We have blogged about what constitutes an impermissible acquisition, access, use or disclosure of unsecured PHI that poses a “significant risk” of “financial, reputational, or other harm to the individual” amounting to a reportable HIPAA breach, and when that “significant risk” develops into harm that may create claims for damages by affected individuals. Our partner William Maruca, Esq., artfully borrows a phrase from former Defense Secretary Donald Rumsfeld in discussing a recent disappearance of unencrypted backup tapes reported by Women and Infants Hospital in Rhode Island. If one knows PHI has disappeared, but doubts it can be accessed or used (due to the specialized equipment and expertise required to access or use the PHI), there is a “known unknown” that complicates the analysis as to whether a breach has occurred. 

 

As we await publication of the “mega” HIPAA/HITECH regulations, continued tracking of the SAIC Breach and ensuing class action litigation (as well as SAIC’s SEC filings and other government filings and reports on the HHS list of large PHI security breaches) provides some insights as to how covered entities and business associates respond to incidents involving the loss or theft of, or possible access to, PHI.   If a covered entity or business associate concludes that the incident poses a “significant risk” of harm, but no harm actually materializes, perhaps (as the SAIC Motion repeatedly asserts) claims for damages are inappropriate. When the covered entity or business associate takes a “proactive” approach in responding to what it has determined to be a “significant risk” (such as by offering credit monitoring and restoration services), perhaps the risk becomes less significant. But once the incident (a/k/a, the ubiquitous laptop or computer tape theft from an employee’s car) has been deemed a breach, the chasm between incident and harm seems to open wide enough to encompass a mind-boggling number of privacy and security violation claims and issues.

The Centers for Medicare & Medicaid Services (CMS) recently published proposed rules setting forth the “Stage 2” criteria that eligible providers (EPs), eligible hospitals (EHs), and critical access hospitals (CAHs) (referred to herein collectively as “providers”) would be required to meet in order to qualify for Medicare and/or Medicaid incentive payments for the use of electronic health records (EHRs) (“Stage 2 Proposal”).  The Stage 2 Proposal is a small-font, acronym-laden, tediously-detailed 131-page document that modifies and expands upon the criteria included in the “Stage 1” final rule published on July 28, 2010 and is likely to be of interest primarily to providers concerned with  receiving or continuing to receive added payments from CMS for adopting and “meaningfully using” EHR. 

 

The Stage 2 Proposal is not, at first glance, particularly relevant reading for those of us generally interested in issues involving the privacy and security of personal information — or even for those of us more specifically interested in the privacy and security of protected health information (PHI).  Still, two new provisions caught my attention because they measure the meaningful use required for provider incentive payments based not simply on the providers’ use of EHR, but on their patients’ use of it.  

One provision of the Stage 2 Proposal would require a provider to give at least 50% of its patients the ability to timely “view online, download, and transmit” their health information (“timely” meaning within 4 business days after the provider receives it, and subject to the provider’s discretion to withhold certain information).  Moreover, it would require that more than 10% of those patients (or their authorized representatives) actually view, download or transmit the information to a third party.  There’s an exception for providers that conduct a majority (more than 50%) of their patient encounters in a county that doesn’t have 50% or more of “its housing units with 4Mbps broadband availability as per the most recent information available from the FCC” (whew!) as of the first day of the applicable EHR reporting period.
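The two patient-engagement thresholds reduce to simple ratios. The sketch below is illustrative only: it assumes both percentages are measured against the same denominator of unique patients, ignores the broadband-availability exception, and uses a function name of our own invention.

```python
def meets_stage2_access_measures(total_patients: int,
                                 given_timely_access: int,
                                 actually_viewed_or_downloaded: int) -> bool:
    """Stage 2 Proposal patient-engagement measures: at least 50% of
    patients must be offered timely online access (within 4 business
    days), and more than 10% must actually view, download, or
    transmit their information."""
    offered = given_timely_access / total_patients >= 0.50
    used = actually_viewed_or_downloaded / total_patients > 0.10
    return offered and used

print(meets_stage2_access_measures(1000, 620, 130))  # True
print(meets_stage2_access_measures(1000, 620, 90))   # False
```

The second measure is the unusual one: it ties the provider’s incentive payment to behavior the provider does not fully control, namely whether patients actually use the access they are given.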

For a continuation of this post, please refer to our sister blog at http://hipaahealthlaw.foxrothschild.com/