The latest developments in artificial-intelligence hardware and software are revolutionizing medical device development: the technology is now widely used to process, predict, and visualize medical data in real time. AI features in more than 350 FDA-approved medical devices, along with countless more applications that deliver operational savings in the health-care environment.
“The fact that we can now put artificial intelligence in instruments [means] magic starts to happen,” said Kimberly Powell, vice president of health care at Nvidia Corp.
For example, Caption Health has developed an ultrasound system that uses AI to provide guidance and interpretation right on the ultrasound image for the operator. The additional guidance means that a broad range of health-care workers can perform ultrasounds, not just specialist sonographers. After selecting the type of scan to be performed, the system walks the operator through getting a high-quality picture, including showing the direction to move the probe with arrows.
“You’re doing computer vision constantly on the probe data that’s coming in, so you can direct them where to go,” said Powell. “Our anatomy is similar enough that you can get them in the zone, and then when it detects whatever they are trying to measure, it can stop and automatically take the measurement.”
It is the first system of its type to gain FDA approval. Crucially, it is a physically small system that can be moved around a hospital as needed; because it also eases the requirement for a trained professional, it can be used more readily in the field or in parts of the world where trained sonographers are scarce.
“We now have the computing architecture that can go inside that device and do a lot of AI right in that device to provide the user experience and the fidelity of information insights that you can capture from the data,” Powell said.
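To make that guidance loop concrete, here is a minimal sketch in Python of the pattern Powell describes: a vision model scores each incoming frame, an on-screen arrow suggests where to move the probe, and the measurement is captured automatically once the target view is detected. The function and model names (quality threshold, assess, measure) are hypothetical placeholders, not Caption Health's actual implementation.

```python
# Illustrative sketch of an AI-guided ultrasound acquisition loop.
# All model and helper names are hypothetical; this is not Caption Health's code.

QUALITY_THRESHOLD = 0.9  # assumed score above which a view counts as diagnostic quality

def acquire_guided_scan(probe, view_model, measure_model, max_frames=1000):
    """Run computer vision on each incoming frame, guide the operator,
    and auto-capture the measurement once the target view is found."""
    for _ in range(max_frames):
        frame = probe.read_frame()                   # raw ultrasound frame
        score, direction = view_model.assess(frame)  # quality score + suggested probe motion
        if score < QUALITY_THRESHOLD:
            probe.display_arrow(direction)           # e.g. "tilt left", shown as an on-screen arrow
            continue
        # Target view detected: stop guiding and take the measurement automatically.
        measurement = measure_model.measure(frame)   # e.g. a cardiac measurement
        return frame, measurement
    raise RuntimeError("No diagnostic-quality view found within the frame budget")
```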
AI is also helping to miniaturize MRI and CT scanners. MRI machines today typically require infrastructure that fills a whole room. Hyperfine has built an AI-enabled MRI machine that is portable — it can be wheeled around the hospital to the patient’s bedside or into the operating room. The system, named Swoop, has been FDA-approved and is already in use, serving patients in remote parts of Canada and further afield.
In this case, AI enables good results from the portable MRI scanner by correcting for noisier images. This means that lower field strength and lower-quality sensors can be used, with AI making up the difference in image fidelity. Powell compares the technology to smartphone filters, which can fake smooth skin in selfies: Because we know what the end result should look like, it is easier to correct for noise, she said.
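As a rough illustration of the denoising idea, the sketch below runs a small image-to-image network over a noisy low-field slice and subtracts the predicted noise as a residual. Hyperfine has not published its pipeline, so the architecture and inputs here are assumptions that only show the general pattern of learning what a clean image should look like.

```python
# Hypothetical sketch of AI noise correction for a low-field MRI slice (PyTorch).
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Tiny residual CNN: predicts the noise component and subtracts it from the input."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, noisy):
        return noisy - self.body(noisy)  # residual learning: output = input minus predicted noise

# Inference on a single noisy slice (random tensor stands in for a real acquisition).
model = ResidualDenoiser().eval()
noisy_slice = torch.randn(1, 1, 256, 256)
with torch.no_grad():
    denoised = model(noisy_slice)
```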
Powell said that AI is also revolutionizing surgery, especially for modern, minimally invasive surgery techniques in which the surgeon has only a camera view into the body.
“You can add a lot of really powerful information on that camera view — don’t cut this vessel, what is this anatomy over here — and you can really help orient the doctor,” she said. “And [surgeons] can now train in these simulation environments to know exactly the procedure they’re going to do and the trajectory they’re going to take.”
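A minimal sketch of that kind of overlay, assuming a pre-trained segmentation model and OpenCV for blending; the model interface, label map, and colours are illustrative, not any vendor's actual surgical system.

```python
# Illustrative overlay of anatomy segmentation on a laparoscopic camera frame.
# The segmentation model and label map are assumptions for this sketch.
import cv2

LABEL_COLORS = {1: (0, 0, 255)}  # e.g. class 1 = "vessel: do not cut", drawn in red (BGR)

def overlay_guidance(frame_bgr, seg_model):
    """Run segmentation on one video frame and blend warnings onto the surgeon's view."""
    mask = seg_model.predict(frame_bgr)          # HxW array of class IDs (hypothetical API)
    overlay = frame_bgr.copy()
    for class_id, color in LABEL_COLORS.items():
        overlay[mask == class_id] = color        # paint the flagged structure
    return cv2.addWeighted(frame_bgr, 0.6, overlay, 0.4, 0)  # translucent blend
```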
Patient privacy
Is there a worry that AI models working from noisy images will start seeing things that aren’t really there?
Powell said that while false positives are possible, the new breed of AI-powered devices goes through the same rigorous regulatory process as any other medical devices, including clinical trials: “From a quality assurance and regulatory perspective, they go through the same due diligence they have to go through in the clinical trial phase; they have to provide the evidence. The FDA and regulatory bodies have the same very strict ruling of how you define whether this is performing the way it’s supposed to perform.”
There are more than 350 FDA-approved medical devices with functionality based on AI. Powell pointed out that many more applications never need to go down the regulatory pathway, including the several thousand AI applications in use today that aid operational efficiency.
“The amount of work that humans have to do from start to finish to acquire a medical image is a lot,” she said. That work introduces potential errors and consumes time, so there is a lot of scope to improve operational efficiency, she added. “You’re still presenting the information to a physician who is going to accept or decline the recommendation.”
As with existing computer-vision technologies, wherever images of patients are captured, there are privacy concerns to be addressed. Running the AI on the device itself in real time, without having to transmit images to the cloud for processing, helps.
“The fact of the matter is that we’ve been digitizing patient data for two decades; it’s just [a question of] how that data is flowing and who’s getting access to it,” Powell said. “The appropriate regulation has to be set in place, and business data agreements and all of that have to be very carefully looked after, both for the purveyor of the data and anybody who’s receiving it. That’s just fact.”
Powell points out that for AI-assisted robotic surgery, images taken within the body have nothing to identify the patient. Clara Holoscan, Nvidia’s AI computing platform for medical devices, also includes de-identification steps such as automatically deleting images that contain skin and hair (which could potentially be used to identify the patient) — these images are useless to the robot surgeon anyway.
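A sketch of how such a de-identification gate might sit at the front of the pipeline: frames are only passed downstream if a classifier finds no externally identifying content such as skin or hair. The classifier and threshold are assumptions, not the actual Clara Holoscan implementation.

```python
# Hypothetical de-identification gate: drop frames that could identify the patient.
SKIN_HAIR_THRESHOLD = 0.5  # assumed probability above which a frame is discarded

def deidentify_stream(frames, identifiable_content_model):
    """Yield only frames with no detectable skin or hair; discard the rest immediately."""
    for frame in frames:
        p_identifiable = identifiable_content_model.score(frame)  # hypothetical classifier
        if p_identifiable >= SKIN_HAIR_THRESHOLD:
            continue   # frame is dropped: never stored, transmitted, or yielded
        yield frame    # internal-anatomy-only frame continues to the AI pipeline
```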
“It will take trust from patients, but I think in the end, we’re all getting more and more used to [being caught on camera],” Powell said, adding that even doorbells today use cloud-based AI processing on video footage. “But for all the companies I’ve worked with, de-identification is top of the list of AIs that they develop first.”
Development limitations
What are the limitations on using AI in medical devices today? Powell lists three key areas: processing performance, training data, and the development of AI algorithms.
The first is suitably powerful compute platforms that can handle complex AI in real time without sending data to the cloud, which is why Nvidia has developed Clara Holoscan.
Clara Holoscan is one of Nvidia’s three robotics platforms (the others are Drive, for self-driving vehicles, and Isaac, for robots that function in the human environment). It includes hardware (based on the Nvidia Jetson AGX Orin) and software tailored to the development of medical devices.
“We call it a robotics platform because it’s really meant to create real-time intelligent instruments,” said Powell, adding that while robots may perform surgery unaided in the future, X-ray machines or medical microscopes may also come to be classified as robots if they have, in effect, a robotic radiologist or scientist inside, looking at the pictures for anomalies.
Clara Holoscan enables any kind of medical sensor — be it an ultrasound, endoscope, or CT machine — to be connected to powerful AI compute running in real time. Other medical-specific features include high-speed I/O, AI processing for medical physics, medical images, and data, and acceleration for 3D graphics rendering.
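Conceptually, such an instrument is a loop that pulls frames from a sensor, runs inference on the device, and renders results within a tight frame budget. The sketch below shows that pattern in plain Python; it is not the Holoscan SDK's actual API, just an illustration of the sensor-to-inference-to-display flow the platform accelerates.

```python
# Generic real-time sensor -> AI -> display loop; not the Holoscan SDK API.
import time

FRAME_BUDGET_S = 1 / 30  # assumed 30 fps target for "real time"

def run_instrument(sensor, model, display):
    """Process each frame within the frame budget so the clinician sees live AI results."""
    while sensor.is_streaming():
        start = time.monotonic()
        frame = sensor.read()          # ultrasound, endoscope, or CT data (hypothetical API)
        result = model.infer(frame)    # on-device inference, no cloud round trip
        display.render(frame, result)  # overlay insights on the live view
        elapsed = time.monotonic() - start
        if elapsed > FRAME_BUDGET_S:
            print(f"warning: frame took {elapsed * 1000:.1f} ms, over budget")
```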
“The nature of real time in this environment is that you’re literally helping humans in the loop become better at their job,” Powell said. “Clara Holoscan is not only that … it’s also creating a Tesla moment for medical devices.”
Powell described how medical devices used to have a shelf life of perhaps 10 years. With Nvidia’s platform, new AI algorithms can be created and uploaded over the air as needed to make the machines smarter. The result is that medical-device makers are shifting their business models toward software-as-a-service.
“Now they have a computing platform that is not only very AI-capable and can run these real-time applications but can be remotely updated,” Powell said. “It’s almost like bringing cloud capability to your medical device so that new applications can be deployed, and these sensors can get better and better and better, every few weeks that go by. That’s tremendously exciting for them from an economical perspective.”
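The remote-update pattern Powell describes can be sketched as a periodic check against a model registry, with the device swapping in a newer model only after the download verifies. The endpoint, metadata fields, and checksum scheme here are assumptions for illustration; real vendors will use their own signed-update mechanisms.

```python
# Hypothetical over-the-air model update check for a connected medical device.
import hashlib
import requests

REGISTRY_URL = "https://example.com/models/latest"  # placeholder endpoint

def fetch_updated_model(current_version):
    """Ask the registry for a newer model; verify its checksum before installing it."""
    meta = requests.get(f"{REGISTRY_URL}/metadata", timeout=10).json()
    if meta["version"] <= current_version:
        return None                                  # already up to date
    blob = requests.get(meta["weights_url"], timeout=60).content
    if hashlib.sha256(blob).hexdigest() != meta["sha256"]:
        raise ValueError("Checksum mismatch: refusing to install update")
    with open("model_weights.bin", "wb") as f:
        f.write(blob)                                # device loads this at the next safe point
    return meta["version"]
```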
Training data
Another factor holding back AI in medical devices is the limited amount of data available for training medical AIs, particularly those that look for rare diseases or conditions. The solution here is more AI: using AI itself to create synthetic training data for specific diseases.
For example, Nvidia and King’s College London recently announced that they used Nvidia’s Cambridge-1 supercomputer to create a dataset of 100,000 synthetic images of brains that can be used to build AIs to accelerate the understanding of dementia, Parkinson’s disease, and other brain diseases. AI can create images of brains for specific segments of the population that may be under-represented in real datasets, such as women, young people, or people with particular diseases. The same team hopes to expand coverage to any part of the human anatomy in any mode of medical imaging (MRIs, PET scans, X-rays, etc.).
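In outline, the approach is to condition a trained generative model on attributes that are under-represented and sample new images until the dataset is balanced. The sketch below shows that sampling loop with a hypothetical generator interface; it is not the Cambridge-1 pipeline itself.

```python
# Hypothetical sampling loop for balancing a training set with synthetic brain images.
def balance_with_synthetic(real_counts, generator, target_per_group=1000):
    """Generate extra images for under-represented groups (e.g. young patients, women)."""
    synthetic = []
    for group, count in real_counts.items():
        needed = max(0, target_per_group - count)
        for _ in range(needed):
            # Condition the generative model on the group's attributes (age, sex, diagnosis).
            image = generator.sample(condition=group)   # hypothetical generator API
            synthetic.append((image, group))
    return synthetic

# Example usage: real data has few scans of young patients.
# counts = {"age<30": 120, "age 30-60": 4000, "age>60": 6000}
# extra = balance_with_synthetic(counts, trained_generator)
```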
The final limiting factor is the development of medical AI algorithms. To tackle this, Nvidia has built a medical-specific AI framework called MONAI. This AI framework contains all the tools needed to label data, create synthetic data, train models, validate models in the real world, and then connect them to the Clara Holoscan platform for deployment. This open-source platform is optimized for the unique formats, resolutions, and metadata of medical imaging.
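As a flavour of what MONAI provides, a minimal and heavily simplified segmentation-training sketch might look like the following; the data here is a random placeholder volume, and real pipelines add MONAI's data loading, transforms, validation metrics, and deployment steps.

```python
# Minimal MONAI training sketch (placeholder data; real pipelines are more involved).
import torch
from monai.losses import DiceLoss
from monai.networks.nets import UNet

device = "cuda" if torch.cuda.is_available() else "cpu"
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2)).to(device)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch: one 3D volume and its label map (replace with a real MONAI DataLoader).
image = torch.randn(1, 1, 64, 64, 64, device=device)
label = torch.randint(0, 2, (1, 1, 64, 64, 64), device=device)

optimizer.zero_grad()
loss = loss_fn(model(image), label)  # Dice loss between prediction and ground-truth mask
loss.backward()
optimizer.step()
```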
“It used to be that only the very rich and famous in their AI research labs could do this stuff — we try to make it very available,” Powell said, adding that when new medical papers are published, they are added into MONAI as quickly as possible. “We put it right into this open-source science framework so the world can rapidly reproduce it and build upon it. This is why the pace of innovation has gotten so quick — it’s because of this open-science, open-innovation world that the AI community really instilled upon the world, which is good.
“I think AI should be democratized in order for it to remain safe and productive,” she added.