The risk of “medical error” takes on a new and more worrying meaning when the errors aren’t human, but the motives are.
In an article published in the journal Science, US researchers highlight the increasing potential for adversarial attacks to be made on medical machine-learning systems in an attempt to influence or manipulate them.
Due to the nature of these systems, and their unique vulnerabilities, small but carefully designed changes in how inputs are presented can completely alter output, subverting otherwise reliable algorithms, the authors say.
And they present a stark example – their own success using “adversarial noise” to coax algorithms to diagnose benign moles as malignant with 100% confidence.
The Boston-based team, which was led by Samuel Finlayson from Harvard Medical School, brought together specialists in health, law and technology.
In their article, the authors note that adversarial manipulations can come in the form of imperceptibly small perturbations to input data, such as making “a human-invisible change” to every pixel in an image.
“Researchers have demonstrated the existence of adversarial examples for essentially every type of machine-learning model ever studied and across a wide range of data types, including images, audio, text, and other inputs,” they write.
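The idea described above can be illustrated with a toy sketch. The snippet below is not the authors' method or any real medical model; it is a minimal, hypothetical stand-in (a logistic classifier over "pixel" intensities with made-up weights) showing how a tiny, bounded change to every pixel can flip a confident prediction, in the spirit of the fast-gradient-sign style of attack the literature describes.

```python
import numpy as np

# Toy stand-in for a "mole classifier": logistic regression over pixel
# intensities. The weights are illustrative, not from any real model.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # one weight per "pixel"
b = 0.0

def predict_prob(x):
    """Probability the toy model assigns to the 'malignant' class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign-looking input: the model scores it well below the 0.5 threshold.
x = -0.05 * np.sign(w) + rng.normal(scale=0.01, size=64)

# FGSM-style perturbation: nudge every pixel by at most eps in the
# direction that most increases the 'malignant' score (here, sign(w)).
eps = 0.1   # the "human-invisible change" to each pixel
x_adv = x + eps * np.sign(w)

print(predict_prob(x))      # low: classified benign
print(predict_prob(x_adv))  # high: now classified malignant
```

Each pixel moves by only `eps`, yet because every small nudge is chosen to push the score the same way, the classifier's output swings from one side of the decision boundary to the other.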
To date, they say, no adversarial attacks have been reported in the healthcare sector. The potential exists, however, particularly in medical billing and insurance, where machine-learning systems are already well established: adversarial attacks could be used to generate false medical claims and other fraudulent behaviour.
To address these emerging concerns, they call for an interdisciplinary approach to machine-learning and artificial intelligence policymaking, which should include the active engagement of medical, technical, legal and ethical experts throughout the healthcare community.
“Adversarial attacks constitute one of many possible failure modes for medical machine-learning systems, all of which represent essential considerations for the developers and users of models alike,” they write.
“From the perspective of policy, however, adversarial attacks represent an intriguing new challenge, because they afford users of an algorithm the ability to influence its behaviour in subtle, impactful, and sometimes ethically ambiguous ways.”
Originally published by Cosmos as Researchers warn medical AI is vulnerable to attack
Nick Carne
Nick Carne is the editor of Cosmos Online and editorial manager for The Royal Institution of Australia.