The Mayo Clinic research hospital, among others, has been testing Google’s Med-PaLM 2 since April, according to an article published in The Wall Street Journal this morning.
Med-PaLM 2 is a medical variant of PaLM 2, the language model that was unveiled at Google I/O in May of this year and that also powers Google’s Bard.
According to an internal email the WSJ obtained, Google believes the adapted model can be especially useful in countries with “more limited access to doctors.”
The AI tool was trained on a curated collection of demonstrations from medical experts, which Google says makes it more adept at healthcare-related conversations than chatbots like Bard, Bing, and ChatGPT.
Google’s Medical AI Tool Med-PaLM 2 Has Accuracy Issues
The WSJ article also cites research Google published in May (pdf) showing that Med-PaLM 2 still suffers from some of the accuracy problems we have grown accustomed to finding in large language models.
In the study, doctors judged the answers of both of Google’s medical AI tools to be less accurate and to contain more irrelevant material than answers from other doctors.
However, Med-PaLM 2 performed roughly as well as the real doctors on practically every other criterion, including showing evidence of reasoning, giving answers supported by consensus, and showing no signs of incorrect comprehension.
Customers testing Med-PaLM 2 will have control over their encrypted data, according to the WSJ, and Google won’t have access to it.
Greg Corrado, a senior research director at Google, said the new AI tool is still in its infancy. While he would not want it to be part of his own family’s “healthcare journey,” Corrado said he believes the AI tool “takes the places in healthcare where AI can be beneficial and expands them by 10-fold.”