
Sen. Mark Warner Urges Google to Protect Patients and Ensure Ethical Deployment of Health Care AI

"Amid reports of inaccuracies in a new medical chatbot, Warner pushes for answers"


From Sen. Mark Warner’s office:

WARNER URGES GOOGLE TO PROTECT PATIENTS AND ENSURE ETHICAL DEPLOYMENT OF HEALTH CARE AI

~ Amid reports of inaccuracies in a new medical chatbot, Warner pushes for answers ~  

WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged Google CEO Sundar Pichai to provide more clarity on his company’s deployment of Med-PaLM 2, an artificial intelligence (AI) chatbot currently being tested in health care settings. In a letter, Sen. Warner expressed concerns about reports of inaccuracies in the technology and called on Google to increase transparency, protect patient privacy, and ensure ethical guardrails.

In April, Google began testing Med-PaLM 2 with customers, including the Mayo Clinic. Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While the technology has shown some promising results, there are also concerning reports of repeated inaccuracies and of Google’s own senior researchers expressing reservations about the readiness of the technology. Additionally, much remains unknown about where Med-PaLM 2 is being tested, what data sources it learns from, to what extent patients are aware of and can object to the use of AI in their treatment, and what steps Google has taken to protect against bias.

“While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors,” Sen. Warner wrote.  

The letter raises concerns over AI companies prioritizing the race to establish market share over patient well-being. Sen. Warner also emphasizes his previous efforts to raise the alarm about Google skirting health privacy as it trained diagnostic models on sensitive health data without patients’ knowledge or consent. 

“It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI,” Sen. Warner continued. 

The letter poses a broad range of questions for Google to answer, requesting more transparency into exactly how Med-PaLM 2 is being rolled out, what data sources Med-PaLM 2 learns from, how much information and agency patients have over how AI is involved in their care, and more. 

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In April, Sen. Warner directly expressed concerns to several AI CEOs – including Sundar Pichai – about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure. Last month, he called on the Biden administration to work with AI companies to develop additional guardrails around the responsible deployment of AI. He has also introduced several pieces of legislation aimed at making tech more secure, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads. 

A copy of the letter can be found below.

Dear Mr. Pichai,

I write to express my concern regarding reports that Google began providing Med-PaLM 2 to hospitals for testing early this year. While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions, the exacerbation of existing racial disparities in health outcomes, and an increased risk of diagnostic and care-delivery errors.

Over the past year, large technology companies, including Google, have been rushing to develop and deploy AI models and capture market share as the technology has received increased attention following OpenAI’s launch of ChatGPT. Numerous media outlets have reported that companies like Google and Microsoft have been willing to take bigger risks and release more nascent technology in an effort to gain a first mover advantage. In 2019, I raised concerns that Google was skirting health privacy laws through secretive partnerships with leading hospital systems, under which it trained diagnostic models on sensitive health data without patients’ knowledge or consent. This race to establish market share is readily apparent and especially concerning in the health care industry, given the life-and-death consequences of mistakes in the clinical setting, declines of trust in health care institutions in recent years, and the sensitivity of health information. One need look no further than AI pioneer Joseph Weizenbaum’s experiments involving chatbots in psychotherapy to see how users can put premature faith in even basic AI solutions.

According to Google, Med-PaLM 2 can answer medical questions, summarize documents, and organize health data. While AI models have previously been used in medical settings, the use of generative AI tools presents complex new questions and risks. According to the Wall Street Journal, a senior research director at Google who worked on Med-PaLM 2 said, “I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey.” Indeed, Google’s own research, released in May, showed that Med-PaLM 2’s answers contained more inaccurate or irrelevant information than answers provided by physicians. It is clear more work is needed to improve this technology as well as to ensure the health care community develops appropriate standards governing the deployment and use of AI.

Given these serious concerns and the fact that VHC Health, based in Arlington, Virginia, is a member of the Mayo Clinic Care Network, I request that you provide answers to the following questions. 

  1. Researchers have found that large language models display a phenomenon described as “sycophancy,” wherein the model generates responses that confirm or cater to a user’s (tacit or explicit) preferred answers, which could produce risks of misdiagnosis in the medical context. Have you tested Med-PaLM 2 for this failure mode?
  2. Large language models frequently demonstrate the tendency to memorize contents of their training data, which can risk patient privacy in the context of models trained on sensitive health information. How has Google evaluated Med-PaLM 2 for this risk and what steps has Google taken to mitigate inadvertent privacy leaks of sensitive health information?
  3. What documentation did Google provide hospitals, such as the Mayo Clinic, about Med-PaLM 2? Did it share model or system cards, datasheets, data statements, and/or test and evaluation results?
  4. Google’s own research acknowledges that its clinical models reflect scientific knowledge only as of the time the model is trained, necessitating “continual learning.” How frequently does Google fully or partially retrain Med-PaLM 2? Does Google ensure that licensees use only the most up-to-date model version?
  5. Google has not publicly provided documentation on Med-PaLM 2, and it has declined to disclose the contents of the model’s training data. Does Med-PaLM 2’s training corpus include protected health information?
  6. Does Google ensure that patients are informed when Med-PaLM 2, or other AI models offered or licensed by Google, are used in their care by health care licensees? If so, how is the disclosure presented? Is it part of a longer disclosure, or is it presented more prominently?
  7. Do patients have the option to opt-out of having AI used to facilitate their care? If so, how is this option communicated to patients?
  8. Does Google retain prompt information from health care licensees, including protected health information contained therein? Please list each purpose Google has for retaining that information.
  9. What license terms exist in any product license to use Med-PaLM 2 to protect patients, ensure ethical guardrails, and prevent misuse or inappropriate use of Med-PaLM 2? How does Google ensure compliance with those terms in the post-deployment context? 
  10. At how many hospitals is Med-PaLM 2 currently in use? Please provide a list of all hospitals and health care systems with which Google has licensed or otherwise shared Med-PaLM 2.
  11. Does Google use protected health information from hospitals using Med-PaLM 2 to retrain or fine-tune Med-PaLM 2 or any other models? If so, does Google require that hospitals inform patients that their protected health information may be used in this manner?
  12. In Google’s own research publication announcing Med-PaLM 2, researchers cautioned about the need to adopt “guardrails to mitigate against over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2, and to clarify when it should and should not be used? What guardrails has Google incorporated through product license terms to prevent over-reliance on the output?


