Upscaling robot-assisted interventional tool manipulations based on multimodal endovascular data analysis

Student thesis: PhD Thesis

Abstract

The integration of robotic technology has advanced endovascular intervention towards a new paradigm. Unlike traditional endovascular interventions, which require operators to wear heavy protective suits and expose themselves to prolonged X-ray radiation, innovative master-slave robotic systems are being developed for endovascular intraluminal procedures. Surgeons supervise instrument positioning from outside the operating room, using a master robot to control a slave robot for operations such as guidewire delivery and stent release. This robotic approach offers enhanced safety and precision by eliminating human-induced errors such as hand tremor, while also reducing radiation exposure and operating time, thereby improving surgical efficiency and reducing complications. Interventional robots currently in development are designed to treat endovascular diseases by manoeuvring instruments through stenoses along endovascular paths. These designs have led to the creation of intuitive manipulation models for robot-assisted surgery. However, the lack of haptic feedback significantly affects task performance in anatomical spaces. To counteract this, visual information is used to improve intuitive manipulation, combining multi-sensor data modelling and visual perception to ensure accurate tool manipulation. Statistical analysis of multi-sensor data helps identify manipulation patterns, reaching an accuracy of 93.96% in distinguishing between successful and unsuccessful robot-assisted tasks across fourteen patterns and revealing the internal relevance between tool manipulation and the robotic system for specific robot-assisted surgical tasks.
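
As a loose illustration of this kind of multi-sensor pattern analysis, the sketch below extracts simple windowed statistics from synthetic sensor streams and trains a generic classifier over fourteen pattern labels. The feature set, window length, sensor channels and classifier choice are assumptions made for illustration, not the pipeline used in the thesis.

```python
# Hypothetical sketch: windowed statistical features from multi-sensor
# streams (e.g. force, displacement, velocity) fed to a generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signals: np.ndarray) -> np.ndarray:
    """signals: (n_windows, window_len, n_sensors) -> (n_windows, n_features)."""
    feats = [signals.mean(axis=1), signals.std(axis=1),
             signals.max(axis=1) - signals.min(axis=1)]
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(0)
X = window_features(rng.normal(size=(200, 100, 3)))  # 3 synthetic sensor channels
y = rng.integers(0, 14, size=200)                    # 14 manipulation-pattern labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print("training accuracy:", clf.score(X, y))
```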
The effectiveness and safety of tool manipulation rely heavily on seamless collaboration between the surgeon and the robotic system. Intuitive manipulation plays an important role in improving the performance of robot-assisted surgical tasks, influencing both the force and speed of manipulation and the degree of cooperation between the operator and the robot. This process applies machine learning to manipulation patterns to assess operator-robot synergy, aiming to calculate the synergy ratio between the operator's actions and the robot's real-time response. This study used a convolutional neural network, considering three delay conditions (no delay, constant delay and variable delay), to calculate the synergy ratio and precisely predict the operator's manipulation pattern associated with the movement of the controlled robot. Simulations with a vascular interventional robot indicate that the model performs excellently in recognizing manipulation patterns and calculating the synergy ratio. In addition, operators experienced in manual percutaneous coronary interventions show significantly better cooperative performance with the robotic system than inexperienced operators, achieving synergy ratios of 89.66%, 90.28% and 91.12% under the three delay conditions. Experiments involving animals and simulations with multi-sensor data-driven modelling demonstrate that intuitive manipulation significantly affects robot-assisted surgical task performance and operator-robot synergy. Thus, improving intuitive manipulation provides significant aid for accurate and safe instrument delivery in robot-assisted interventional surgery.
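
The following minimal sketch shows one way a 1D convolutional network over master-side manipulation signals could predict a manipulation pattern, together with a simple agreement-based synergy ratio between predicted operator intent and the observed robot response. The architecture, channel count and this particular synergy-ratio definition are illustrative assumptions rather than the thesis design.

```python
# Illustrative 1D CNN over manipulation-signal windows of shape
# (channels, samples); not the network architecture used in the thesis.
import torch
import torch.nn as nn

class PatternCNN(nn.Module):
    def __init__(self, in_channels: int = 3, n_patterns: int = 14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_patterns)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).squeeze(-1))

def synergy_ratio(pred_patterns: torch.Tensor, slave_patterns: torch.Tensor) -> float:
    """Assumed definition: fraction of windows where operator intent and robot response agree."""
    return (pred_patterns == slave_patterns).float().mean().item()

model = PatternCNN()
x = torch.randn(8, 3, 200)  # 8 windows, 3 channels, 200 samples
pred = model(x).argmax(dim=1)
print(synergy_ratio(pred, torch.randint(0, 14, (8,))))
```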
Surgery assisted by master-slave vascular interventional robots depends primarily on the surgeon's use of real-time two- or three-dimensional medical imaging to match the images with the patient's anatomy, ensure precise tool positioning, and refine manipulation to reduce contact with non-target tissue. Accurate visual perception of the endovascular instrument's trajectory, which provides guidewire position and direction details to offset the absence of tactile feedback, is therefore crucial for instrument navigation and for reducing vessel-wall injury. To achieve this, an eight-neighbourhood-based deep neural network was designed to detect the guidewire endpoint and its region of maximum bending. The method operates in two phases. The first phase uses an improved U-Net to segment the guidewire and identify the regions containing endpoints, limiting interference from other anatomical structures and imaging noise. The second phase extracts the skeleton, removes bifurcation points, and repairs breaks using pixel correlations within eight-neighbourhood zones.
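
The core eight-neighbourhood idea can be sketched on a binary guidewire skeleton: a skeleton pixel with exactly one neighbour is a candidate endpoint, while a pixel with three or more neighbours is a bifurcation to prune. The segmentation and break-repair stages are omitted here; this sketch only illustrates the neighbour-counting step under those simplifying assumptions.

```python
# Count eight-neighbourhood connections on a binary skeleton and classify
# pixels as endpoints (1 neighbour) or bifurcations (>= 3 neighbours).
import numpy as np
from scipy.ndimage import convolve

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def classify_skeleton(skeleton: np.ndarray):
    """skeleton: binary array. Returns (endpoint_mask, bifurcation_mask)."""
    neighbours = convolve(skeleton.astype(int), KERNEL, mode="constant")
    endpoints = (skeleton == 1) & (neighbours == 1)
    bifurcations = (skeleton == 1) & (neighbours >= 3)
    return endpoints, bifurcations

skel = np.zeros((7, 7), dtype=int)
skel[3, 1:6] = 1                  # a short horizontal skeleton segment
ends, forks = classify_skeleton(skel)
print(np.argwhere(ends))          # expected endpoints at (3, 1) and (3, 5)
```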
Initial results demonstrate that the eight-neighbourhood strategy achieves a mean pixel error of 2.02 pixels on a rabbit dataset and 2.13 pixels on a porcine dataset, outperforming state-of-the-art approaches. Because the approach relies on pixel adjacency relationships, its performance depends on segmentation quality: it performs best when segmentation is strong, with few false negatives and false positives, but detection degrades when segmentation is poor. To further improve visual perception of surgical instruments, a multi-branch feature fusion network with a triple-pyramid decoder was designed to refine surgical instrument segmentation, aiding surgical decision-making, determining procedural stages, and identifying critical surgical zones. This model uses an encoder-decoder architecture with a Visual Geometry Group 13 (VGG13) encoder for improved edge and texture detection, along with a triple-pyramid decoder that enhances feature maps. The method attains a mean intersection-over-union of 95.54% on a multimodal fusion dataset, delivering crucial visual input for robotic endovascular interventions.
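
For reference, the mean intersection-over-union metric quoted above can be computed per class and averaged, as in the small sketch below; the two-class (background versus instrument) setup and variable names are illustrative.

```python
# Per-class IoU averaged into mIoU for integer-labelled masks.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, n_classes: int = 2) -> float:
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, gt))  # IoU(bg)=0.5, IoU(instrument)=2/3 -> mIoU ~= 0.583
```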
In conclusion, this thesis focuses on developing multi-sensor-based modelling of surgeons' intuitive manipulation behaviour and visual perception techniques to enhance tool manipulation in an underactuated master-slave vascular interventional robotic system with spatial flexibility. Key areas of emphasis include the creation of efficient models for highly accurate robot tool manipulation along spatially flexible paths, particularly for accessing complex and narrow endovascular pathways. The studies are grounded in a developed vascular interventional robotic system and aim to establish an intuitive manipulation model that clarifies the relationship between the surgeon's manipulation and the performance of robot-assisted surgical tasks. Additionally, visual-based modelling is employed to enhance the visual perception of interventional instruments. By enhancing tool manipulation, this approach facilitates the safe and precise delivery of catheters and guidewires through a single minimally invasive port, enabling access to lesion sites along various endovascular pathways.
Date of Award: 13 Jul 2025
Original language: English
Awarding Institution
  • University of Nottingham
Supervisors: Boon Giin Lee, Lei Wang & Jiang Liu

Keywords

  • Master-slave robotic system
  • robot-assisted endovascular interventional surgery
  • sensory force feedback
  • intuitive manipulation modelling
  • pattern recognition
  • synergy performance
  • visual perception
  • guidewire endpoint detection
  • interventional instrument segmentation
