
Monday, April 16, 2012

Privacy and Security Considerations for Emerging Health Information Exchanges: Notes from Utah and New York

Earlier this month, the Utah Department of Health issued a press release describing a cyber attack on one of its servers, in which hackers removed information for approximately 780,000 individuals. According to the Department of Health, the stolen data included personal records of individuals within the state, including Medicaid and Children's Health Insurance Plan recipients.

Permutations of this scenario, whether a hacked computer server, a lost USB key, or a stolen laptop, are familiar news headlines announcing a security breach of individuals' health and personal information. Human error and human opportunism make it likely that such breaches will continue in the future, despite steps to mitigate potential security threats.

As states begin to develop legislation and promulgate rules to govern their electronic health information exchanges (HIEs), they should carefully balance residual security and privacy risks against the potential promise of a functional HIE. That balance should inform policies on how an individual's electronic health record (EHR) enters the system and what portion of the EHR the state makes available through the HIE.

Last month, the New York Civil Liberties Union (NYCLU) issued a report, Protecting Patient Privacy: Strategies for Regulating Electronic Health Records Exchange, which articulated numerous privacy, security, and functional concerns with the state’s emerging HIE. Currently, New York employs a blanket consent procedure for record access and enrolls patients of participating providers into the state's regional health information organizations (RHIOs). 

Among numerous concerns, the NYCLU report highlights two distinct issues with this approach:

(1) New York provides no mechanism for patients who consent to participate in the exchange to limit the sharing of stigmatizing or sensitive information, such as substance abuse or mental health treatment records; and

(2) Although physicians must obtain consent to view patient information in the exchange, participating providers enter patient medical information into the exchange without patient consent, and patients cannot opt out of the record locator system.

The Office of the National Coordinator for Health Information Technology's HIT Policy Committee has asserted that a form of granular control over health data can protect the confidentiality of narrow categories of sensitive health information while fostering patient autonomy, promoting trust in medical providers, and building confidence in the growing use of HIT. Although too much data segmentation or too many exclusion options could confuse patients and undermine the purpose of the HIE as a comprehensive record system, some groups, such as the NYCLU, argue that existing state law requires the capacity for granular control over statutorily identified categories of sensitive medical information. This assertion serves as a reminder that each state protects different categories of sensitive medical information and applies different standards governing how that information may be shared and accessed. Earlier this month, the New York Department of Health and the New York eHealth Collaborative established the State Health Information Network of New York Policy Committee to examine these and numerous other concerns about the state's current policies and procedures governing the exchange.

Patients may also be wary of the security of their identifying records available in the HIE registry system, as a breach could reveal both personal information and the entirety of the patient’s medical records that providers have entered into the HIE. A breach of the HIE would not only invade the patient’s abstract notion of privacy over sensitive information, but could also expose the patient to quantifiable concrete harms such as identity theft, fraud, and the costs associated with investigation and mitigation.

Some victims of major medical security breaches have asserted that once information such as social security numbers, patient demographics, and medical records becomes accessible in a breach, victims face an imminent and continuing risk arising from the breach itself, regardless of whether an outside party has actually used the information. Some courts, however, have ruled that even where a third party steals media containing patient information, claims for future financial harm are insufficient to constitute an actionable injury unless the victims can prove that the third party actually accessed or used the information. To address these legitimate concerns, jurisprudence should evolve to recognize that potential third-party use of this information may be difficult to identify and costly to monitor. Further, months may pass after the initial breach before victims notice fraudulent activity, as in the substantial TRICARE data breach.

When initiating or expanding a state HIE, legislatures should remain cognizant of patients' desire for privacy, their corresponding wish to limit access to sensitive medical information, and the security concerns arising from both accidental and intentional breaches of patient information.
--Katherine Drabiak-Syed

Wednesday, June 17, 2009

In the Literature: Predictive Health 2.0

The recent double issue of The American Journal of Bioethics (Vol 9 6&7) includes two target articles (followed by open peer commentaries) on the ethical issues of direct-to-consumer (DTC) genomics and social networking.

The issue opens with an editorial by 23andMe's Andro R. Hsu, Joanna L. Mountain, Anne Wojcicki, and Linda Avey: "A pragmatic consideration of ethical issues relating to personal genomics." The editorial offers five points that the authors find relevant to the ethical discussion. Facebook users might be surprised to discover that the service is offered as an example of innovative data sharing policies; see point five: "A single data sharing policy cannot fit the needs of all".

The first "target article" reports the results of an attitudes survey about DTC genomics; see: McGuire AL, Diaz CM, Wang T, Hilsenbeck SG. Social networkers' attitudes toward direct-to-consumer personal genome testing. Although the title suggests that "social networkers" are a focus of the article, in reality they are a convenient (or experimental?) survey population--the authors used Zoomerang and Facebook to reach the 1,080 respondents. Of the respondents, 47% reported a pre-existing knowledge of DTC genomics companies like 23andMe, Navigenics, and deCODEme; 6% reported having used one of these services and 64% reported a willingness to use one of the services in the future.

The second "target article" focuses on where all this might be leading; see: Lee SS, Crawley L. Research 2.0: social networking and direct-to-consumer (DTC) genomics. In addition to proposing that social network analysis could be used to explore the impact of these DTC genomics ventures on research, data sharing, and subject recruitment, the authors also ask: "What are the ethical and social implications of new social formations created through the sharing of personal genomic information?" In other words, how will the convergence of Web 2.0 and personal genomic information (PGI) change our social structures?

Commentaries on these articles include a few authored by friends of the PredictER program; see, for example:

Esposito K, Goodman K. Genethics 2.0: phenotypes, genotypes, and the challenge of databases generated by personal genome testing. pp. 19-21.

Caulfield T. Direct-to-consumer genetics and health policy: a worst-case scenario? pp. 48-50.

Other articles and publications of interest:

Genetic privacy and piracy. Nat Cell Biol. 2009 May;11(5):509. PMID: 19404329.

Avard D, Silverstein T, Sillon G, Joly Y. Researchers' perceptions of the ethical implications of pharmacogenomics research with children. Public Health Genomics. 2009;12(3):191-201. PMID: 19204423.

Bombard Y, Veenstra G, Friedman JM, Creighton S, Currie L, Paulsen JS, Bottorff JL, Hayden MR; Canadian Respond-HD Collaborative Research Group. Perceptions of genetic discrimination among people at risk for Huntington's disease: a cross sectional survey. BMJ. 2009 Jun 9;338:b2175. PMID: 19509425.

Borry P, Howard HC, Sénécal K, Avard D. Health-related direct-to-consumer genetic testing: a review of companies' policies with regard to genetic testing in minors. Fam Cancer. 2009 Jun 2. PMID: 19488835.

Dokholyan RS, Muhlbaier LH, Falletta JM, Jacobs JP, Shahian D, Haan CK, Peterson ED. Regulatory and ethical considerations for linking clinical and administrative databases. Am Heart J. 2009 Jun;157(6):971-82. PMID: 19464406.

Forsberg JS, Hansson MG, Eriksson S. Changing perspectives in biobank research: from individual rights to concerns about public health regarding the return of results. Eur J Hum Genet. 2009 May 27. PMID: 19471310.

Goddard KA, Duquette D, Zlot A, Johnson J, Annis-Emeott A, Lee PW, Bland MP, Edwards KL, Oehlke K, Giles RT, Rafferty A, Cook ML, Khoury MJ. Public awareness and use of direct-to-consumer genetic tests: results from 3 state population-based surveys, 2006. Am J Public Health. 2009 Mar;99(3):442-5. PMID: 19106425.

Henrikson NB, Bowen D, Burke W. Does genomic risk information motivate people to change their behavior? Genome Med. 2009 Apr 2;1(4):37. PMID: 19341508.

Maliapen M. Clinical genomics data use: protecting patients privacy rights. Studies in Ethics, Law, and Technology. 2009;3(1):Article 1. Available at: http://www.bepress.com/selt/vol3/iss1/art1

Manion FJ, Robbins RJ, Weems WA, Crowley RS. Security and privacy requirements for a multi-institutional cancer research data grid: an interview-based study. BMC Med Inform Decis Mak. 2009 Jun 15;9(1):31. PMID: 19527521.

Mascalzoni D, Hicks A, Pramstaller PP. Consenting in population genomics as an open communication process. Studies in Ethics, Law, and Technology. 2009;3(1):Article 2. Available at: http://www.bepress.com/selt/vol3/iss1/art2

Rogowski WH, Grosse SD, Khoury MJ. Challenges of translating genetic tests into clinical and public health practice. Nat Rev Genet. 2009 Jun 9. PMID: 19506575.

Wilkinson RH. The single equality bill: a missed opportunity to legislate on genetic discrimination? Studies in Ethics, Law, and Technology. 2009;3(1):Article 3. Available at: http://www.bepress.com/selt/vol3/iss1/art3

Saturday, February 7, 2009

Will electronic medical records threaten my privacy? No, but…

I’ve been thinking a lot about privacy lately. For example, among the ways President Obama has indicated his commitment to a 21st century health care system is “by computerizing medical records … saving countless lives and billions of dollars.”

His proposal is already underway in many communities around the country, including Indianapolis, whose Regenstrief Institute is a nationally recognized leader in the development and diffusion of electronic medical records [EMRs]. The conversion of millions of paper records to electronic records, and the organization of hospitals and physician groups to agree on how best to access and share these records, presents a number of logistical and technical challenges. None of these is insurmountable. Moreover, given sufficient resources and political will, it is likely that the President’s vision can be translated into reality sooner rather than later – so long as we can figure out how to handle the elephant in the room (and no, this is not the Republican caucus).

The elephant is privacy – the idea that access to personal health information is something that we as individuals should be able to completely control, and that access by others (especially unauthorized third parties) constitutes a serious breach of personal space, to say nothing of the negative repercussions of malicious use. But does the move to EMRs require a dramatic change in the ethics of privacy? Should people be more worried once their records are accessible to more health providers? How can they be sure that errors will be quickly corrected?

I thought I had completely settled views on these questions: namely, that the risks of privacy invasion are potentially serious and that people are entitled to be frightened. In the case of my own personal health information, I have confidence that the experts working on the architecture for the system – the checks and balances, the encryption techniques, gateways, passwords, algorithms, and who knows what else – will construct it with exactly those worries in mind. Interestingly, I’m more upset right now that in the past few days someone with plenty of time on their hands has figured out a way to upload a picture of me from the internet and create a brand new Facebook page using my name. This is creepy and it’s wrong. Should I be more worried about a breach in my electronic medical record that accidentally discloses to the world that Eric Meslin suffers from migraines (true, by the way), or about the Facebook hacker who convinces unsuspecting people to become “friends of Eric Meslin” in order to expose them to “wall-to-wall” postings that attribute opinions about privacy to me which aren’t my own?

--Eric M. Meslin