The American Interest
Policy, Politics & Culture
Privacy Parts

A bill intended to help workers has morphed into a bureaucratic beast that endangers quality health care.

Published on November 1, 2006

Visiting the doctor has always been an exercise in filling out forms and waiting, but if you have been to a doctor’s office in the past three years, you have probably noticed a proliferation of forms, including a multi-page document warning you to “please review carefully” since it “describes how medical information about you may be used and disclosed.” You have probably also received a similar form from your insurer or your company’s benefits office. That document, sometimes called a “notice of privacy practices”, is the primary publicly visible tip of the Health Insurance Portability and Accountability Act (HIPAA), an immense regulatory iceberg that is wreaking havoc in the U.S. health care industry. If that document looks troublesome to you, try to imagine what HIPAA looks like from the other side of the receptionist’s desk.

Physicians, hospitals and health insurers are spending untold billions of dollars trying to comply with the medical-record privacy and security provisions of HIPAA, an opaque federal regulatory framework that places onerous and ill-defined restrictions primarily on hospitals and physicians, regulating how and when they can use and disclose medical information and the steps they must take to protect it. While the underlying intent of the regulations is laudable, the language of the requirements is vague, and the penalties for failing to comply can be huge. The whole apparatus is driving physicians and hospital administrators crazy.

I saw a television commercial recently that showed a man on the roof of his house with his son. The son, holding a toilet scrubber, is attached to a winch over the chimney, and the father is about to lower him into the chimney to clean it. The point is, you should leave some things to experts. But most interesting was the small print on the screen: “Do not attempt.” Forget about all those car commercials that say “professional driver on closed course”; I assume that there are fools out there who will try to drive their cars the way they see them driven in commercials and adventure movies. But do we really need to tell people that they shouldn’t winch their kids into chimneys? The answer, apparently, is yes.

One of the primary purposes of government has to be protecting you from others. But the combination of nanny-state government regulation and ferocious personal-injury lawyers has layered on the protections so thickly that it’s gone from being silly to dangerous: silly, as in the warnings in the commercial, but dangerous when the desire to protect produces unintended consequences of harm. HIPAA is a stellar example of dangerous overprotection. It intends to protect your medical records from improper disclosure, but as a consequence it “protects” your medical records from many proper disclosures that could save your life.

Stories now abound of hospitals refusing even to acknowledge to family members whether their loved ones are patients there. Physicians or family members seeking medical records on a patient may be unable to get a prior medical provider to hand them over. Potentially life-saving information might not be provided because the physician or hospital can’t determine whether the disclosure would violate HIPAA. You can’t really blame the provider, though; in addition to being long and confusing, HIPAA carries not only substantial fines but even potential jail time.

The virtual impossibility of understanding and adhering to HIPAA was made plain in the wake of Hurricane Katrina. In that storm and flood, many hospital and nursing home residents were relocated in the chaos of the moment to available hospitals, often without notice to family members or caretakers. A few weeks later, when Hurricane Rita threatened landfall, hospital and nursing home administrators reacted to the Katrina scenario with an even more extensive health care evacuation. In many of these relocation situations, too, family members could not determine where their loved ones had been sent. Providers’ fear of disclosing information in a manner that would violate HIPAA was so high that the Federal regulatory authorities felt compelled to lift the HIPAA obligations of those in the storm zone.

How did we get here? How did a law intended to ensure the privacy of medical records morph into a regulatory nightmare that impedes good health care? It’s a story about good intentions, cross purposes, an opportunity to score cheap political points, and the inability to see that, sometimes, too much protection defeats itself.

Good Medicine

The Health Insurance Portability and Accountability Act of 1996 (originally known as the Kennedy-Kassebaum bill) was, at the outset, all about the “P” in the name. “Portability” refers to the concept of allowing an employee to carry over health insurance from one employer to another. Actually, that’s an oversimplification: Portability allows an employee to transport his “insurability” from one employer to another.

A little background is required to make sense of this. Most Americans get their health insurance from their employers. There’s no particularly good reason for this, other than the fact that during and after World War II, many employers used health insurance incentives to attract new employees when there were more jobs than workers. The concept of employer-provided insurance took off with the passage of ERISA (the Employee Retirement Income Security Act) in 1974, and today employer-provided health insurance is the norm.

Few people pay much attention to the way insurance works, but it’s actually very simple. Take homeowner’s fire insurance, for example. Most years, your house won’t burn down. In fact, most people’s houses will never burn down. But if yours did, you would suffer the financial loss of an asset worth hundreds of thousands of dollars. So a homeowner buys fire insurance against an improbable but very consequential event. Imagine now that a group of homeowners gets together and decides to pool their risk of loss due to fire. Say there are 1,000 of them, their houses all cost $100,000, and they expect that one house will burn down that year. If they each pay $100, there will be a sufficient pool of money to help the homeowner whose house burns down to rebuild. Most years, each homeowner will pay more in premiums than he will take out in claims, but that’s necessary when you’re talking about spreading out the risk of a catastrophic event.

That is the basic structure of a mutual insurance company. But what if, instead of a mutual group of homeowners sharing risk among themselves, there was an insurance company that figured out the actuarial risk of a fire occurring and the premium that could be charged to cover that risk? That company would need to have some reserves in place, but if it collected more in premiums than it paid out in claims (plus other operating costs), it would be a profitable enterprise.

What if some of the homeowners had houses that were more prone to fire than others? Should they pay more? Should the homeowner who smokes in bed pay a higher premium than the nonsmoker? What if one of the homeowners decides not to buy insurance one year, gambling on the likelihood that this year he won’t have a fire? If the homeowner knew when the fire was going to occur, or if he could buy his fire insurance as soon as the fire started, he could “free ride” during the low-risk years and only purchase insurance during high-risk years. That would not be fair to the other homeowners, but it would profit the freerider.

Health insurance works on the same basic concept (albeit with a few complications): Insurance companies try to keep premiums high enough to cover costs and return a profit. This exposes a basic fact of insurance that people tend to gloss over: Insurance companies don’t make money by paying claims; they make money by not paying claims.

Now, a health insurance company can’t just refuse to pay legitimate claims for covered expenses (in theory, at least). Policyholders will stop buying policies, and state regulators may step in. But insurers do seek justifiable ways to limit payments. They might claim that the care provided was not medically necessary, or that the treatment was experimental, or that pre-certification was needed before that procedure was done. Or they could claim that the health malady being treated was a “pre-existing condition.”

The concept of the pre-existing condition relates to the freerider situation discussed above. It isn’t fair that an individual should refuse to pay into the “risk pool” when he’s at low risk and only pay when he’s at high risk, since his premium costs are the same as everyone else’s. And he shouldn’t be able to wait until he is already sick to decide that he wants insurance, since he’s not assuming risk like everyone else in the system. Health insurance companies, with a keen eye on ways to reduce payouts, saw the “pre-existing condition” issue as a legitimate ground for refusing coverage. If the disease or condition started after you bought insurance, then it should be covered; if you had it already when you showed up at the insurer’s door, the insurer can refuse to cover it. It’s a certainty, not a “risk”, at that point.

This makes sense as far as it goes, but it does not take into account the possibility that the person seeking the new insurance policy wasn’t freeriding, but merely switching from one risk pool to another. From the insurance company’s perspective, it doesn’t matter; it’s still an opportunity to deny a payout and keep more premium dollars. But since most insurance is offered as a benefit tied to your job, you may be stuck in “job lock.” Take, for instance, an employee in good health who has purchased insurance through his employer for years. The employee then develops an illness or condition that requires expensive medical care. The employee was in the risk pool prior to the illness, so it’s covered. But what if the employee wants to leave that job and take a better, higher-paying job? If the employee leaves, the new company’s insurer may refuse to cover him, saying the illness is a pre-existing condition. So the employee’s career is essentially paralyzed.

What if the new insurer was required by law to provide insurance? Making exclusions for pre-existing conditions illegal would fix the problem, but that would encourage even more freeriding: People would wait until they needed medical insurance and sign up only after they were sick. What if, instead of outlawing pre-existing condition exclusions entirely, they applied only to true freeriders, while non-freeriders (people who had bought insurance before they knew they’d need it, like the employee in the example) were exempt? Those employees would be able to “transport” their insurable status from the old employer to the new employer and would not be locked into dead-end jobs simply because of their need for affordable health insurance. Thus was the concept of health insurance “portability” hatched.

Who could be opposed to this? Nobody, of course. That’s why you had a Northeastern liberal Democrat and a Midwestern conservative Republican as Senate co-sponsors. But that’s also why a whole lot of “other” stuff got piled into the Act.

The Little Engine That Could

In the spring of this year, I found myself listening to an NPR story about Congress passing a spending bill. The Emergency Supplemental Appropriations Act for Defense, the Global War on Terror, and Hurricane Recovery, which in theory was an additional appropriation of funds for the troops fighting in Iraq and for the victims of Katrina, had been so laden with pork that it seemed to serve as the “bridge too far” that inspired the beginning of the “Porkbusters” movement. Senator Ted Stevens himself, the proponent of Alaska’s “bridge to nowhere”, was a primary Senate supporter of the legislation. When confronted with dissent over passing a spending bill with so much waste built into it, he declared that a vote against the bill was a vote against the troops.

This is just one example of a common tactic of America’s governing class: If you have a piece of legislation that nobody wants to stop (in this case, funding for the troops and hurricane victims) and some special interest business that would never get approval on its own, then attach the latter to the former and push them through together. No politician wants to vote against hurricane victims, the troops or insurance portability. So more often than not, we’re left with a new bridge, whether it goes anywhere or not.

So you have a piece of legislation—in this case about health insurance portability—that nobody could object to (except perhaps insurance companies, and nobody cares about their objections). That enlightened concept of “insurance portability” becomes a mighty little engine that can drag a heavily laden piece of legislation forward.

Back in 1996 a panoply of other health care-related policy initiatives found a spot in the train behind the portability engine. Health Savings Accounts? Sure, hook on up. New fraud and abuse protections? Get on board. And add into the mix, under the perverse subtitle of “administrative simplification”, a set of regulatory initiatives intended to increase electronic data interchange in the health care industry and standardize medical-record privacy and security.

The primary source of HIPAA’s troubles lies in these so-called AdminSimp provisions. The goals were good: Unlike most other major industries, the U.S. health care industry at the time remained tied to paper forms, which prevented standardization and digitization and sometimes resulted in transcription errors. Each insurance company had its own forms for physicians and hospitals to fill out, its own “explanation of benefits” form to send out in response to a request for payment, obtained different demographic data from different boxes on its forms, and so on. It was a world of maximum feasible non-standardized mayhem. Every provider had at least one person on staff to be the “insurance” person—someone who knew what information from which boxes on incoming forms belonged in which boxes on outgoing forms. Bigger providers, like hospitals, had whole staffs of insurance people. Conceptually, if the whole industry went to standardized forms for the most common transactions, more commerce could be done electronically, fewer administrative man-hours from insurance people would be required per unit of health care service, transactions could be processed faster, and the industry would operate at a more efficient level overall. Waste would be reduced and potentially life-threatening transcription or “crosswalking” errors could be eliminated.

However, there is a dark side to the push toward increased digitization: The more that information is stored, transmitted and processed electronically, the greater the potential for breaches of privacy or security, and therefore the greater the need for privacy and security protections. Imagine a drug company employee or some other miscreant who wants to identify every citizen of Des Moines who has diabetes so he can target-market them with his company’s latest diabetes treatment. The best place to find that information is in the medical records of doctors and hospitals in Des Moines, but if the information is in paper files, it will take forever to find what he’s looking for, and it would be impossible to snoop that way without getting caught. But if all Des Moines doctors and hospitals are storing and transmitting patient records in electronic format, the data thief can hack into these files and run a quick computer search for indications of diabetic patients (diagnoses, diagnosis codes, specific drug orders and the like). He could execute the search in seconds and nobody would ever know he had been there. That threat was the justification for the increased emphasis on medical-record privacy and security: If records were to be digitized, they had to be made more secure.
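The Des Moines scenario boils down to the difference between rifling through paper files and running a query. As a hypothetical illustration (the patient names and record layout are invented; only the ICD-10 convention that codes beginning “E11” denote type 2 diabetes is real), once records are structured and digitized, the search the article describes is a one-liner:

```python
# Invented sample records; a real breach would involve thousands of these.
records = [
    {"name": "Patient A", "city": "Des Moines", "diagnosis_codes": ["E11.9"]},   # type 2 diabetes
    {"name": "Patient B", "city": "Des Moines", "diagnosis_codes": ["J45.20"]},  # asthma
    {"name": "Patient C", "city": "Des Moines", "diagnosis_codes": ["E11.65"]},  # type 2 diabetes
]

# ICD-10-CM codes in the E11 category denote type 2 diabetes mellitus.
diabetics = [r["name"] for r in records
             if any(code.startswith("E11") for code in r["diagnosis_codes"])]
print(diabetics)  # ['Patient A', 'Patient C']
```

What would take weeks of conspicuous snooping in a paper file room takes milliseconds against a database, which is exactly why digitization raised the stakes for security.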

Thus HIPAA’s AdminSimp provisions simultaneously gave and took away. They helped health care industry participants save money on staff, but they layered in extra requirements for privacy and security. The staffers at the Department of Health and Human Services who wrote the regulations considered it a fair, reasonable and, in any event, necessary tradeoff.

Contrary to what you might expect, the privacy aspect of HIPAA also had a special interest lobby behind it. Conspicuous among the groups pushing for the privacy rule were Georgetown University’s Health Privacy Project and its director, Janlori Goldman (now a health care privacy expert at Columbia University). Goldman and others spoke to media members and congressional aides about the “many, many stories” of people being fired once their health information became known by their employers. They found a few verifiable examples of that actually happening, but not much more than that. Other stories told involved homeowners having their mortgages called because the bank found out about some health issue the homeowner was confronting. Privacy advocates never produced any large-scale studies, comprehensive surveys or other compelling evidence that such problems were widespread. If there had been such evidence, it certainly would have been quoted. Instead, the congressional record and the commentary to the HIPAA regulations identify lists of a handful of one-off, individual horror stories. But meaningful statistical evidence isn’t necessary when you’ve got a good story to tell. And forget whether the cure does anything to prevent the disease—in the cases of an employer improperly using health care information to terminate an employee or a bank calling a loan, unless the employer or banker is a physician, a hospital or a health insurer, HIPAA probably couldn’t prevent that anyway.

The AdminSimps Hangover

During the dorm-room days of the Clinton Administration, as Elizabeth Drew so ably described them, lots of misguided regulatory schemes were no doubt dreamed up, and HIPAA’s AdminSimp provisions were surely among them. While there were evidently “adults” still minding the store who nixed most of those plans, the HIPAA AdminSimps somehow managed to slip through. Maybe they slipped through as a consolation for the most emblematic policy failure of the Clinton Administration, HillaryCare. Maybe it was because Donna Shalala, who had run the Department of Health and Human Services for the entire Clinton Administration, saw this as a way to leave her mark. But judging by the timing of the publication of the privacy standards, something else seems to have been in play: an opportunity for a Democratic administration to enact a “progressive” policy while sticking the incoming Republican administration with the inevitable financial and political fallout.

The HIPAA statute was passed by Congress and signed by President Clinton in 1996. The law gave Congress a couple of years to enact its own privacy standards, but when Congress failed to do so, the obligation to determine the privacy standards passed to the Department of Health and Human Services. HHS had extremely broad latitude to come up with privacy standards, and while it theoretically had a deadline for drafting the standards, there was no requirement for the standards to come out when they did. So when did the HIPAA privacy rule come out?

The “Standards for Privacy of Individually Identifiable Health Information” were published in the Federal Register on December 28, 2000, less than a month before Clinton was set to leave office. If you hearken back to December 2000, you’ll remember something going on regarding George W. Bush, Al Gore, Florida and the U.S. Supreme Court. That’s right: In the superheated waning moments of the Clinton Administration, after it had been determined that George Bush would be following Bill Clinton into the White House, HHS published the HIPAA privacy rules, layering an enormous new regulatory burden on the health care industry, with an estimated annual cost of $10–20 billion. Like a burning bag of dog crap left on the front porch, the rule was a utopian regulatory scheme set out to fight a mostly imaginary evil, and Bush was going to get to stomp out the fire.

Not only did HHS place this regulatory mess into the Federal Register at the last moment, it didn’t do it right. An anti-regulatory rule passed by the Republican Congress elected in 1994, which was intended to discourage regulation drafting, requires that proposed new administrative regulations be sent to Congress for review concurrently with publication in the Federal Register. HHS failed to do this, so the privacy rule had to be re-published two months later.

The HIPAA privacy rule wasn’t the only mess the Clintons left on Bush’s doorstep. There was a huge regulatory dump by the outgoing Administration in its last few months, as can be seen by comparing the rate of newly introduced pages in the Federal Register. In the final quarter of Ronald Reagan’s last year in office (11/88–1/89), 14,447 pages were introduced. In George H.W. Bush’s last quarter (11/92–1/93), 20,147 pages were introduced. In the first three years of Bill Clinton’s second term, the number of new pages printed in the Federal Register during the same period averaged 17,622. But in Clinton’s last quarter (11/00–1/01), the number jumped to 26,541. The same story is told even more dramatically if we look at the number of major rules promulgated by a presidential administration. In 1997 the Clinton Administration published 19 new major regulatory rules (major defined as rules with annual budgetary impact of $100 million or more). That number dropped to 16 in 1998 and 15 in 1999. In 2000, the number rose to 20, while in 2001 it was back to 16. Sixteen major rules for all of calendar 2001 wouldn’t seem out of line. But Clinton was only president for the first 20 days of 2001; yet his Administration managed to publish as many new major regulatory rules in those 20 days as it had in the 365 days of 1998, and even more than it had in all of 1999.

Not only that, the original HIPAA privacy rule was almost unworkable. Every health plan, provider or clearinghouse (but not other entities like employers or pharmaceutical manufacturers) was prohibited from using any health information until it received written consent from the individual. That made sense enough for primary-care doctors (who can get your signature before they see you), but it wouldn’t work for other providers. For example, if you see your internist (after signing a consent) and he notices that your blood pressure is elevated, he might refer you to a cardiologist and call your pharmacy with a prescription for blood pressure medication. The cardiologist couldn’t look at your file to schedule an appointment until you arrived in person, since he can’t “use” your health information until you sign the consent. And the pharmacist can’t fill your prescription, since that would be a “use” prior to consent.

That problem was fixed (although there’s still a lawsuit floating around to re-instate the unworkable consent requirement), but there’s plenty still wrong with HIPAA. In many ways, it looks like HHS did not fully understand the industry it was trying to regulate. The regulations place obligations onto “health plans” that make perfect sense when applied to the Aetnas and Cignas of the world, but a much larger portion of the insurance industry is actually made up of individual employers’ self-funded health plans. These ERISA plans are much more cost-effective, and virtually every company that employs more than a few hundred employees has a self-funded plan. But HIPAA requires these plans to meet all its privacy and security requirements, even if doing so destroys the financial viability of these otherwise cost-effective plans.

Unaddressed Problems

There are a couple of major underlying problems that will keep any regulatory scheme for health information privacy from working as designed. The first is that while health information is potentially embarrassing, the vast majority of it is commonly shared with those around us. Everyone at work knows when you get sick, if you have a heart attack, if you’re pregnant. Every time you see someone walking down the street in a cast, you have been exposed to his “protected health information.” Sure, you wouldn’t want anyone to know about your erectile dysfunction, and you may not want them to know that it’s your inability to swing a bat, not your torn ACL, that keeps you off the company softball team. But do you really care if they know you had your gall bladder out last year? Celebrities and sports stars excluded, most of us don’t much care if our medical records—psychiatric data excepted—are publicly available. But the HIPAA regulatory scheme is designed to protect a minority by burdening the majority. Indeed, if you look at the major “health information” security and privacy breaches in the news recently, health care information as such isn’t the problem. Data thieves aren’t interested in your blood pressure medication or your antipsychotic prescriptions; they’re interested in your Social Security number.

Another underlying problem is that, while in some ways privacy can be beneficial to health care delivery (patients might not otherwise disclose all their information to their doctors), it is primarily a hindrance. If you want perfect medical-record privacy, don’t tell your personal medical information to your doctor. Don’t tell anyone. Of course, that’s not going to get you good health care. If you want the best health care, you should make sure your protected health information (PHI to insiders) gets out to as many people as possible. That way every physician in the world can have input on your case, and perhaps someone, somewhere has the perfect cure or is the right match for an organ donor. Perfect medical-record privacy and optimal health care delivery are opposed, if not mutually exclusive.

The occasional horror stories notwithstanding, the health care industry (particularly physicians, hospitals and health care systems) was almost universally scrupulous in protecting medical-record privacy from unauthorized access and disclosure before HIPAA. There certainly were instances where PHI was used or disclosed in improper ways, but those instances were (and are) the very rare exception, not the rule. Burdening the entire health care delivery system to prevent these few instances is unwise and unnecessary.

More to the point, there is a non-regulatory way to cure those outlier problems: the tort system. While HIPAA’s grand regulatory infrastructure is loaded onto the health care system ostensibly to protect patients from improper uses or disclosures, HIPAA doesn’t even contain a private right of action. If your health care provider misuses or improperly discloses your PHI, you can’t sue him for violating your HIPAA rights. Only the Federal government can bring an action against a HIPAA violator. Of course, you may have a private cause of action for breach of confidentiality, breach of privacy or some other statutory or common-law claim, and the HIPAA standards are probably good evidence of what a reasonable person must do to protect the information. But that cause of action existed before HIPAA ever reached the Federal Register, and it still does.

Finally, HIPAA’s privacy and security rules get in the way of other worthwhile initiatives, such as the development of electronic medical records and portability devices like smart chips. Providers are now afraid to switch to electronic medical records because they can’t be sure the new systems will not expose them to liability in the event of a security breach. Providers are hesitant to discuss their patients’ health information with family members and loved ones because the rules are confusing and physicians just want to play it safe. And while the worst fears that HIPAA would stifle medical research did not come true, it is still likely that some potential research participants have opted not to get involved due to HIPAA concerns.

Fortunately, the U.S. health care industry has always adapted quickly to heavy regulation, first because it had been accustomed to, and accepted the need for, government oversight since long before HIPAA, and second, because government is the single largest payer for health care services. Since HIPAA, and particularly since the AdminSimps kicked in five years ago, the industry has done a fairly good job of meeting the objectives of the law, despite the constant aggravations and hippopotamus-scale costs. Full compliance with the data security provisions is way behind schedule, but most health care providers are basically in compliance with the privacy provisions. But their success still adds up to failure: The vast sums of money wasted on compliance with a well-intentioned but misdirected and ultimately unnecessary law certainly could have been put to better use. We are hemorrhaging billions of scarce medical dollars each year for no good reason. With more than two full years left in this Administration’s tenure, fixing this problem is an achievement well within reach.

Jeff Drummond is a partner in the health care practice group of Jackson Walker LLP, representing health care providers. He is a frequent speaker on health information technology issues and is the author of the original HIPAA blog.