In a classic case of balancing the costs and benefits of science, researchers are grappling with how artificial intelligence in medicine can and should be applied to clinical patient care, knowing there are examples where patients’ lives have been put at risk.
The question took center stage at a recent University of Adelaide seminar, part of the Research Tuesdays lecture series, entitled ‘Antidote AI’.
As artificial intelligence becomes more sophisticated and useful, we see it more and more in everyday life. From AI traffic control and ecological studies, to machine learning that traces the origin of a Martian meteorite and reads Arnhem Land petroglyphs, the possibilities for AI research seem endless.
Perhaps some of the most promising and controversial uses for artificial intelligence lie in the medical field.
The genuine excitement of clinicians and artificial intelligence researchers at the prospect of AI helping patient care is palpable and honorable. After all, medicine is about helping people and the ethical basis is “do no harm”. AI is certainly part of the equation to improve our ability to treat patients in the future.
Khalia Primer, a PhD student at the Adelaide Medical School, points to the many medical areas where AI is already causing a stir. “AI systems uncover critical health risks, detect lung cancer, diagnose diabetes, classify skin conditions and determine the best drugs to fight neurological disorders.
“We may not need to worry about the rise of the radiology machines, but what safety issues do need to be considered as machine learning and medical science converge? What risks and potential harms should health professionals be aware of, and what solutions can we develop to ensure that this exciting field continues to develop?” asks Primer.
These challenges are compounded, Primer says, by the fact that “the regulatory environment is struggling to keep up” and “AI training for health professionals is virtually nonexistent.”
As both a clinician by training and an AI researcher, Dr. Lauren Oakden-Rayner, Senior Research Fellow at the Australian Institute for Machine Learning (AIML) at the University of Adelaide and Director of Medical Imaging Research at the Royal Adelaide Hospital, balances the pros and cons of AI in medicine.
“How do we talk about AI?” she asks. One way is to emphasize that AI systems perform as well as, or even better than, humans. The other is to say that AI is not intelligent at all.
“You could call these the AI ‘hype’ position and the AI ‘contrarian’ position,” Oakden-Rayner says. “People have now made a career out of being in one of these positions.”
Oakden-Rayner explains that both views are true. But how can both be correct?
The problem, according to Oakden-Rayner, is the way we compare AI to humans. It is an understandable baseline, given that we are human, but she stresses that it only confuses the AI landscape by anthropomorphizing AI.
Oakden-Rayner points to a 2015 study in comparative psychology – the study of non-human intelligences. That research showed that pigeons could be trained, with a tasty treat as the reward, to detect breast cancer on mammograms. In fact, the pigeons needed only two to three days to reach expert-level performance.
Of course, no one would claim that pigeons are as smart as a trained radiologist. The birds have no idea what cancer is or what they are looking at. “Morgan’s Canon” – the principle that the behavior of a non-human animal should not be interpreted in complex psychological terms if it can instead be interpreted with simpler concepts – says that we should not assume a non-human intelligence is doing something clever if there is a simpler explanation. This certainly applies to AI.
Oakden-Rayner also tells of an AI that looked at a photo of a cat and correctly identified it as a cat – and then, after a tiny tweak to the image that no human would notice, declared with complete confidence that it was a photo of guacamole. That is how sensitive AI pattern recognition is. The hilarious mix-up of cat and guacamole becomes much less funny when recreated in a medical setting.
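How can an imperceptible tweak flip “cat” into “guacamole”? Published demonstrations of this failure rely on what researchers call an adversarial perturbation: every pixel is nudged in precisely the direction that increases the classifier’s error. The following is a minimal sketch of the idea, assuming Python with PyTorch and a stock ImageNet ResNet as a stand-in; the model, image and label are illustrative placeholders, not the actual system or photo from the demonstration.

```python
# Minimal sketch of a fast-gradient-sign (FGSM) adversarial perturbation.
# The model, image and label are placeholder assumptions for illustration;
# they are not the system or photo from the cat/guacamole demonstration.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in "photo"
label = torch.tensor([281])  # ImageNet class index 281 = "tabby cat"

# Compute the loss for the correct label, then step *uphill*: nudge every
# pixel in the direction that most increases the classifier's error.
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.01  # small enough that a human would not notice the change
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    prediction = model(adversarial).argmax(dim=1)
# On a real, correctly classified photo, the prediction often flips to an
# unrelated class even though the two images look identical to a human.
```

The unsettling point is not the specific recipe but how little it takes: a change bounded by a tiny epsilon, invisible to a human looking at the image, can completely change the model’s answer.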
This fragility leads Oakden-Rayner to ask: “Does that endanger patients? Does that pose safety issues?”
The answer is yes.
One early AI tool in medicine looked at mammograms, just like the pigeons. In the early 1990s, the tool was given the green light for use in detecting breast cancer, and it went on to be applied to hundreds of thousands of women. The decision was based on lab experiments showing that radiologists improved their detection rates when using the AI. Amazing, right?
Twenty-five years later, a 2015 study looked at the application of the program in practice and the results were not as good. In fact, women were worse off where the tool was in use. The conclusion for Oakden-Rayner is that “often these technologies do not work as we expect”.
In addition, Oakden-Rayner notes that there are around 350 AI systems on the market, but only about five have been subjected to clinical trials. And AI tends to perform worst for the patients most at risk – in other words, the patients who need the most care.
AI has also proved problematic when it comes to different demographics. Commercially available facial recognition systems have been shown to perform poorly for Black people. “The companies that actually took that on board went back and fixed their systems by training on more diverse data sets,” Oakden-Rayner notes. “And these systems are now much more equal in their output. No one thought to even try that when they originally built and marketed the systems.”
Far more concerning is an algorithm used by judges in the US to inform sentencing, bail and parole decisions by predicting the likelihood that an individual will reoffend. The system is still in use despite media reports from 2016 showing that it was more likely to incorrectly predict that a Black person would reoffend.
So, where does this leave things for Oakden-Rayner?
“I’m an AI researcher,” she says. “I’m not just someone who pokes holes in AI. I really like artificial intelligence. And I know that the vast majority of my talk is about the damage and the risks. But the reason I’m like this is because I’m a clinician, so we need to understand what can go wrong so we can prevent it.”
The key to making AI safer, according to Oakden-Rayner, is establishing standards of practice and guidelines for publishing clinical trials involving artificial intelligence. And, she believes, this is all very doable.
Lyle Palmer, Professor of Genetic Epidemiology at the University of Adelaide and also a Senior Research Fellow at AIML, highlights South Australia’s role as a center for AI research and development.
If there’s one thing you need for good artificial intelligence, he says, it’s data. Diverse data. And a lot of it. South Australia is an excellent location for large population studies, given the wealth of historical medical records in the state, Palmer says. But he also echoes Oakden-Rayner’s point that these studies must include diverse samples to capture the differences between demographic groups.
“It would be cool if everyone in South Australia had their own web page where all their medical results were posted, and we could involve them in medical research and a whole range of other activities around things like health promotion,” Palmer says excitedly. “All this is possible. We have had the technology to do this for decades.”
Palmer says this technology is particularly advanced in Australia, especially in South Australia.
This historical data can help researchers track, for example, the long-term course of a disease, to better understand what causes disease in different individuals.
For Palmer, AI will be critical in medicine given the “tough times” in healthcare, including in the drug delivery pipeline, where many treatments never reach the people who need them.
AI can do great things. But, as Oakden-Rayner warns, comparing it to humans is a mistake. The tools are only as good as the data we give them and even then they can make a lot of bizarre mistakes because of their sensitivity to patterns.
Artificial intelligence is sure to transform medicine, though perhaps more slowly than some have suggested in the past. But just as the technology is intended to care for patients, its human makers must ensure that it is safe and does more good than harm.