Source: Emerj | April 24, 2019
Author: Niccolo Mejia
AI applications in healthcare are becoming more common for white-collar automation and diagnostics. Medical robotics, however, remains comparatively underdeveloped, likely because of regulations concerning automated surgery.
In this article, we cover how AI software is finding its way into medical robotics today, and how it might in the future as investment grows and the density of AI talent at medical robotics companies increases. Specifically, we explore:
- AI for Medical Robotics – What’s Possible and What’s Being Used by healthcare clients right now. We found few, if any, case studies showing a health network’s or hospital’s success with AI-based medical robotics.
- The State of AI at Medical Robotics Vendors, including the AI talent at these companies and a discussion of how to vet a vendor on whether its software truly leverages AI.
We begin our exploration of AI-based medical robots with an overview of how they’re being used now.
AI for Medical Robotics – What’s Possible and What’s Being Used
Theoretically, multiple approaches to developing AI software could work for automating medical robotics. For example, one could use machine vision to guide the robot to problem areas and make it aware of mistakes or patient bodily reactions.
Currently, the medical robotics sector does not have many visible use cases in terms of fully automated surgery or other medical procedures. This is because regulations dictate that a recognized professional administer these procedures. Issues such as liability are harder to resolve with AI because it is usually unclear exactly how an AI application came to its conclusion.
Most medical robots are used for precision operations during minimally invasive surgery. This use case all but precludes full automation with AI, as few would want to “let loose” AI software on the human body. Additionally, a machine learning model built to operate a medical robot with dozens of moving arms and tools would need to be trained on thousands of digitally labeled surgical videos before implementation.
A healthcare company may take months to acquire enough data to properly train a machine learning model to perform robotic surgery well enough that it would not be considered a liability. Even if a company did collect all that data, regulations may still need to change before the software can be used to fully automate surgeries.
That said, medical robots are still being used to automate other healthcare processes, such as diagnostics. For example, Indian software company Sigtuple purportedly created an AI-based telepathology system that automates its smart microscopes to photograph samples and send the images to the cloud.
Sigtuple’s software is called Shonit. It consists of smart microscopes: microscopes fitted to a movable robotic base and connected to a smartphone camera. The software runs from an app on the smartphone, which also connects it to the cloud. The microscope slides around on its robotic base, allowing the lens to hover over an area of a sample dish and take multiple pictures.
Those pictures are then saved to the smartphone and sent to the cloud for labeling. The cloud service that receives them uses machine vision to label them according to blood cell count and any anomalies within the blood. The labeled, high-resolution images are then sent to a remote pathologist, who can make a diagnosis from them. Healthcare workers using the software then only need to wait for the pathologist to respond with a diagnosis.
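Sigtuple has not published implementation details, so purely as an illustration of the “label, then forward to a pathologist” pattern described above, here is a minimal Python sketch. It stands in for the machine-vision step with simple thresholding and connected-component counting on a grayscale grid; a production system would use trained models, and every name here is hypothetical rather than Shonit’s actual API.

```python
from collections import deque


def count_cells(image, threshold=128):
    """Count connected bright regions ("cells") in a grayscale image.

    A toy stand-in for the machine-vision labeling step: pixels at or
    above `threshold` are treated as cell tissue, and each connected
    bright region is counted as one cell.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    region_sizes = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected bright region with BFS.
                size = 0
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                region_sizes.append(size)
    return {"cell_count": len(region_sizes), "region_sizes": region_sizes}


def label_for_pathologist(image):
    """Bundle the raw image with machine-generated labels, as a
    telepathology pipeline might do before forwarding the package
    to a remote pathologist for diagnosis."""
    return {
        "image": image,
        "labels": count_cells(image),
        "status": "awaiting_diagnosis",
    }


# Tiny 3x4 "smear" with two bright regions of two pixels each.
smear = [
    [0, 200, 0, 0],
    [0, 200, 0, 255],
    [0, 0, 0, 255],
]
package = label_for_pathologist(smear)
print(package["labels"]["cell_count"])  # prints 2
```

The pre-labeling shown here is the point of the design: the pathologist receives images with counts and flagged regions already attached, so their time goes into diagnosis rather than manual counting.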
The 3-minute video below explains how the Shonit software can scan blood smears, send them to the cloud for analysis, and then to a pathologist so they may diagnose any illnesses found: