New AI Research Foreshadows Autonomous Robotic Surgery

A robot commonly used and manually manipulated by surgeons for routine operations can now autonomously perform key surgical tasks as precisely as humans.

Researchers at Johns Hopkins and Stanford universities revealed they have integrated a vision-language model (VLM), trained on hours of surgical videos, with the widely used da Vinci robotic surgical system.

Once connected with the VLM, da Vinci’s tiny grippers, or “hands,” can autonomously perform three critical surgical tasks: carefully lifting body tissue, using a surgical needle, and suturing a wound.

Unlike traditional robot training methods, which require detailed programming of every component of a robot's movement, the retrofitted da Vinci robots performed the surgical tasks zero-shot using imitation learning alone: relying solely on the vision-language model, the robot imitated what doctors in the surgical videos had done.
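To make the imitation-learning idea concrete, a behavior-cloning policy can be sketched in a few lines of PyTorch. The network below is purely illustrative, not the team's actual architecture: it simply maps a camera frame to a low-level robot action.

```python
import torch
import torch.nn as nn

# Minimal behavior-cloning policy sketch (hypothetical, not the
# authors' model): maps one camera frame to a low-level robot
# action, e.g., a gripper pose delta.
class SurgicalPolicy(nn.Module):
    def __init__(self, action_dim: int = 7):
        super().__init__()
        # Small convolutional encoder for the video frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regression head that predicts the demonstrated action.
        self.head = nn.Linear(64, action_dim)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frame))
```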

The results offer a glimpse into what possible future surgeries conducted entirely by autonomous robots could look like.

“It’s amazing that these robots can now autonomously perform these very complex tasks,” said Ji Woong “Brian” Kim, a postdoctoral researcher at Johns Hopkins. “Coding robots so they can literally operate from imitation learning is a major paradigm shift in robotics and is, I think, where the future lies for autonomous surgical robots.”

To train their model, the researchers used NVIDIA GeForce RTX 4090 GPUs, PyTorch, and NVIDIA CUDA-X libraries for AI.
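A behavior-cloning training loop built with those tools might look like the following sketch, which regresses the policy's predicted actions onto the recorded ones with a mean-squared-error loss. The `demo_loader` here is an assumed DataLoader yielding (frame, action) pairs, not part of any published code.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

policy = SurgicalPolicy().to(device)  # policy sketch from above
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

# `demo_loader` is assumed to yield batches of (frame, action)
# pairs extracted from the demonstration videos and kinematic logs.
for epoch in range(10):
    for frames, actions in demo_loader:
        frames, actions = frames.to(device), actions.to(device)
        loss = loss_fn(policy(frames), actions)  # behavior-cloning loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```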

The researchers unveiled their findings in November at the Conference on Robot Learning in Munich. To conduct their study, the roboticists used the da Vinci robotic surgical system, which can feature up to four robotic arms and is used by surgeons globally for a variety of laparoscopic surgeries.

To train their VLM, Kim and his colleagues attached miniature video cameras to the arms of three da Vinci robots that Johns Hopkins University owns and lent to the researchers for their experiment.

Using the small silicone pads doctors typically use to practice surgical techniques, Kim and his colleagues manipulated the robots as surgeons do during laparoscopic surgeries.

Kim recorded around 20 hours of video of himself manipulating the da Vinci's grippers, which together are about the size of a penny, to perform three procedures: lifting a facsimile of human tissue, manipulating a surgical needle, and tying knots with surgical thread.

He also recorded the kinematic data correlated with his manual manipulation of the grippers, including precise information about the angles and pressure Kim applied when manipulating the robot during each surgical step.
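One plausible way to pair each video frame with the kinematic reading captured at the same moment is a simple dataset wrapper like the one below. The field layout is hypothetical and for illustration only; the study's actual data format is not described in this article.

```python
import torch
from torch.utils.data import Dataset

class DemonstrationDataset(Dataset):
    """Pairs each recorded video frame with the kinematic reading
    (joint angles, gripper pressure, etc.) captured at the same
    timestep. Hypothetical layout for illustration only."""

    def __init__(self, frames: torch.Tensor, kinematics: torch.Tensor):
        # frames:     (N, 3, H, W) demonstration video frames
        # kinematics: (N, action_dim) angles, pressure, and so on
        assert len(frames) == len(kinematics)
        self.frames = frames
        self.kinematics = kinematics

    def __len__(self) -> int:
        return len(self.frames)

    def __getitem__(self, idx: int):
        return self.frames[idx], self.kinematics[idx]
```

A standard `torch.utils.data.DataLoader` over such a dataset could serve as the `demo_loader` assumed in the training sketch earlier.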

After training their VLM with the surgical video and kinematic data, the researchers connected their model with the da Vinci robots and instructed the robots to perform the three surgical tasks.
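At execution time, a trained policy of this kind runs in a closed loop: capture a frame, predict the next action, send it to the robot, and repeat. The `camera` and `robot` objects below are placeholders; integrating with the real da Vinci system is, of course, far more involved.

```python
import torch

max_steps = 500  # hypothetical episode length

policy.eval()
with torch.no_grad():
    for _ in range(max_steps):
        frame = camera.read()                  # placeholder camera API
        frame = frame.unsqueeze(0).to(device)  # add a batch dimension
        action = policy(frame).squeeze(0).cpu()
        robot.apply_action(action)             # placeholder robot API
```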

The researchers ran their experiments on pieces of chicken and pork, animal flesh the robot had never encountered that mimics the look and feel of human tissue.

To their delight, the robot performed the surgical procedures nearly flawlessly in this zero-shot setting.

One of the surprises, according to Kim, was how the robot autonomously problem-solved unanticipated challenges.

At one point, the grippers accidentally dropped a surgical needle and, despite the model never being explicitly trained to do so, picked it up and continued with the surgical task.

“We never trained the model on pig or chicken tissue, or to pick up a needle if it’s dropped,” said Kim. “We were thrilled it worked in this completely novel environment outside of its training distribution and could operate autonomously.”

Kim is already working on a new paper outlining the results of more recent experiments deploying the robots on animal cadavers. He is also developing additional training data that can be used to expand the capabilities of the da Vinci robots.

Source: NVIDIA
