What do you consider to be the real risks of artificial intelligence?
I can understand why people are scared of strong AI, the kind we know from the movies. I don’t feel that it’s a threat, however, because so much more would have to happen before a system became a danger to humanity. Of course, weak AI also poses risks – they lie in the possibility of errors. No system is perfect. Even a good system will provide the right answer in perhaps 99 percent of cases.
Another point concerns the ability to trace the computer’s decision. This is not just a problem in medicine; it applies in every setting where decision-making systems are used. A few studies have investigated how systems can explain themselves. We also want our systems to be able to assess themselves and show how reliable a result is. Especially in medicine, where we have a large number of special cases, it’s important to have software that can say: “Okay, I’ve never seen a case like this patient before.” With that in mind, we optimize our AI systems for safety and reliability, and for minimizing the cognitive effort needed to understand how the AI arrives at its results.[2]
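The idea of a system flagging a case it has never seen before can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the actual product logic: a prediction is only reported when the model is confident *and* the input lies close to the training data; otherwise the case is referred to a human expert.

```python
import math

def softmax_confidence(logits):
    """Top-class probability from raw model scores (toy stand-in for a real model)."""
    exps = [math.exp(x - max(logits)) for x in logits]
    return max(exps) / sum(exps)

def novelty_distance(features, training_features):
    """Distance to the nearest training example - a naive proxy for 'never seen a case like this'."""
    return min(math.dist(features, ref) for ref in training_features)

def assess(logits, features, training_features, min_conf=0.9, max_dist=1.0):
    """Decide whether a prediction is trustworthy enough to report.

    min_conf and max_dist are hypothetical thresholds chosen for illustration.
    """
    if novelty_distance(features, training_features) > max_dist:
        return "unseen case - refer to a human expert"
    if softmax_confidence(logits) < min_conf:
        return "low confidence - refer to a human expert"
    return "prediction can be reported"
```

In practice, production systems use far more sophisticated uncertainty estimates, but the two gates shown here (confidence and novelty) capture the self-assessment behavior described above.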
What qualifies a medical technology manufacturer to become involved in AI?
We have been involved in machine learning since the 1990s and already have 45 AI-based products on the market. As a medical technology manufacturer, we have in-depth expertise in departmental and clinical workflows, and we possess medical knowledge linking our imaging and laboratory technology. In addition, we can rely on long-standing close collaborations with leading healthcare facilities around the world. They support our research with anonymized data and are also keen to help us develop and test our AI-based products – because they already see the advantages that AI brings to their daily routine. We only launch new solutions once we have tested them very, very carefully, and in collaboration with customers as part of scientific studies and publications.
How exactly do these AI systems learn?
We have over a billion anonymized clinical images, findings, and reports from different body regions and imaging modalities. That’s millions of images, plus meta-information such as the examination date or acquisition parameters. This also includes information about where the data come from, who the contact person is, and how the data may be used and for what purpose. As an example, we’ve used our AI-Rad Companion Chest CT to examine thorax CT scans. This required thousands of thorax CT images from different patients who had been treated in different hospitals around the world. We acquired these from our customer network and from certified brokers. Experienced clinical radiologists and technicians annotated them by hand. This means that multiple experts – from our clinical collaboration partners as well as our own employees – used the mouse to click on the images and define, for instance, what pulmonary emphysema or a lung nodule looks like. The computer can then learn from these examples and apply the extracted concepts to new data. The quality of these annotations and data has a major impact on the quality of the final product.
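The annotate-then-learn loop described above can be sketched in a few lines. This is a deliberately simplified stand-in, assuming toy feature vectors instead of real imaging data and a nearest-centroid rule instead of a deep network; the labels and numbers are placeholders, not actual clinical values.

```python
def train_centroids(annotated):
    """Learn one centroid per label from expert-annotated (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in annotated:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Apply the extracted concept to new data: pick the nearest centroid."""
    return min(
        centroids,
        key=lambda label: sum((a - b) ** 2 for a, b in zip(centroids[label], features)),
    )

# Hypothetical expert annotations (feature vector, label clicked by the expert):
annotated = [
    ([0.9, 0.1], "lung nodule"),
    ([0.8, 0.2], "lung nodule"),
    ([0.1, 0.9], "emphysema"),
    ([0.2, 0.8], "emphysema"),
]
model = train_centroids(annotated)
```

The sketch also makes the closing point concrete: if the expert labels in `annotated` were noisy or wrong, the learned centroids and every subsequent classification would inherit that error, which is why annotation quality drives product quality.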
Your AI systems must require an enormous amount of computing power. How are you dealing with this?
We have built our own supercomputing infrastructure, based on NVIDIA GPUs, to develop our AI software. Our Sherlock AI supercomputer provides 24 petaflops of performance and runs over 600 deep-learning experiments daily.
What are the greatest challenges facing AI in healthcare?
We have already discussed one challenge – the initial skepticism from physicians and patients. Another is that the medical technology market as a whole is already very strictly regulated. And in addition to complying with existing legislation, manufacturers and hospitals must now jointly find adequate responses to rules governing privacy and cybersecurity. However, I personally think the main problem is that data in the healthcare system are scattered across so many different locations and systems. Connecting them requires an enormous amount of technical effort. Initial standards for this are being developed, and hopefully they will see increasing use and help to simplify the process.