Original Article

Automatic tool segmentation and tracking during robotic intravascular catheterization for cardiac interventions

Olatunji Mumini Omisore1,2#^, Wenke Duan1,2#, Wenjing Du1,2, Yuhong Zheng1, Toluwanimi Akinyemi1,3, Yousef Al-Handerish1,3, Wanghongbo Li1, Yong Liu1, Jing Xiong1, Lei Wang1,2

1Research Centre for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; 2CAS Key Laboratory for Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; 3University of Chinese Academy of Sciences, Beijing, China

#These authors contributed equally to this work.

^ORCID: 0000-0002-9740-5471.

Correspondence to: Professor Lei Wang, Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China. Email: wang.lei@siat.ac.cn.

Background: Cardiovascular diseases resulting from aneurysm, thrombosis, and atherosclerosis in the cardiovascular system are major causes of global mortality. Recent treatment methods have been based on catheterization of flexible endovascular tools with imaging guidance. While advances in robotic intravascular catheterization have led to modeling tool navigation approaches with data sensing and feedback, proper adaptation of image-based guidance for robotic navigation requires the development of sensitive segmentation and tracking models without specificity loss. Several methods have been developed to tackle non-uniform illumination and low contrast; however, the presence of untargeted body organs commonly found in X-ray frames taken during angiography procedures still presents major issues to be solved.

Methods: In this study, a segmentation method was developed for automatic detection and tracking of guidewire pixels in X-ray angiograms. Image frames were acquired during robotic intravascular catheterization for cardiac interventions. For segmentation, multiscale enhancement filtering was applied on preprocessed X-ray angiograms, while morphological operations and filters were applied to refine the frames for pixel intensity adjustment and vesselness measurement. Minima and maxima extrema of the pixels were obtained to detect guidewire pixels in the X-ray frames. Lastly, morphological operation was applied for guidewire pixel connectivity and tracking in segmented pixels. Method validation was performed on 12 X-ray angiogram sequences which were acquired during in vivo intravascular catheterization trials in rabbits.

Results: The study outcomes showed that an overall accuracy of 0.995±0.001 was achieved for segmentation. Tracking performance was characterized with displacement and orientation errors observed as 1.938±2.429 mm and 0.039±0.040°, respectively. Evaluation studies performed against 9 existing methods revealed that this proposed method provides more accurate segmentation with an area under the curve of 0.753±0.074. Simultaneously, a high tracking accuracy of 0.995±0.001 with low displacement and orientation errors of 1.938±2.429 mm and 0.039±0.040°, respectively, was achieved. Also, the method demonstrated higher sensitivity and specificity values compared to the 9 existing methods, with a relatively faster execution time.

Conclusions: The proposed method has the capability to enhance robotic intravascular catheterization during percutaneous coronary interventions (PCIs). Thus, interventionists can be provided with better tool tracking and visualization systems while also reducing their exposure to operational hazards during intravascular catheterization for cardiac interventions.

Keywords: Guidewire segmentation; intravascular catheterization; cardiac interventions; robotic catheter systems; pixel tracking; minimally invasive surgery (MIS)


Submitted Oct 06, 2020. Accepted for publication Jan 29, 2021.

doi: 10.21037/qims-20-1119


Introduction

The discovery of X-rays in 1895 led to a paradigm shift in surgical navigation. This has motivated the development of different techniques for segmentation and tracking of surgical tools in the fields of computer-assisted interventions. Recent advances in X-ray imaging such as computed tomography (CT) provide high quality three-dimensional (3D) imaging with views of the functional anatomy and pathology of a patient’s internal organs during interventional procedures (1). While attempting to work around the loss of vision that is inherent to minimally invasive surgery (MIS), interventionists can be overexposed to different hazards in interventional rooms (2). Thus, minimizing the procedural times is a vital requirement for cardiac interventions. While the use of ultrasound and magnetic resonance imaging techniques for intraoperative visualization during intravascular interventions has been established (3), concerted efforts are needed to improve the suitability of the fluoroscopic imaging approach.

Cardiovascular diseases from thrombosis, atherosclerosis, and aneurysm in the cardiovascular system are major causes of global mortality, accounting for nearly 31% of deaths annually (4). Unlike open-heart surgery, intravascular catheterization has become a routine method applied for cardiovascular interventions (5). The procedure requires navigation of flexible endovascular tools, mainly guidewires and catheters, from peripheral points such as the radial and femoral vessels into the mediastinum, while visualization of the procedure is achieved using fluoroscopy. Extensive studies have been undertaken to develop robotic catheter systems with multi-axial degrees of freedom (DoFs) for safe and skillful intravascular catheterization (6). Similarly, X-ray imaging has been a primary imaging technique used during cardiac diagnosis. Research efforts have focused on vessel analysis, morphological interpretation, and tool-vessel segmentation during manual cardiac interventions. In robotic catheterization, issues include nonlinear proximal-to-distal motions transmitted along an endovascular tool, non-uniform illumination during X-ray angiography, tool-vessel structural similarities, and ineffective distal force sensing, which make catheterization time-consuming, difficult, and unsafe (6-9).

Despite evidence supporting the use of medical imaging in percutaneous coronary interventions (PCIs), global adoption of robot-assisted PCIs is limited, although the number of procedures performed annually continues to increase. This exposes both surgeons and patients to radiation and orthopedic hazards in the intervention room (2). Retrieving the shape and motion of the endovascular tools in sequences of X-ray angiograms is vital for tool visualization and characterization of the catheterization procedures during an intervention. An in-depth review of segmentation methods and evaluation metrics has been presented (9). At first, image sequences are preprocessed with data normalization, contrast enhancement, and noise suppression to improve frame quality. Normalization is mostly addressed with techniques like global contrast normalization, zero-phase whitening, and gamma corrections. A multi-scale filter for reducing noise artifacts in angiographic images has previously been proposed (10). With this approach, vessels and endovascular tools can be detected at variant scales using eigenvalues and vectors of the Hessian matrix. Similarly, a multi-scale segmentation method for line-like structures has been adopted for tool segmentation in 2D and 3D medical images (11). These approaches are sensitive to artifacts from blob-like components that have similarities with the endovascular tools and vessels in an angiogram. A novel image filter for edge-preserving operation was proposed by some authors and has been adopted for catheter segmentation in many studies. Being a guided filtering approach, the content of a guiding image is considered to form local linear models for image smoothing; however, it can simultaneously cause edge blurring in filtered images (12).

Regarding noise suppression, methods based on directional filter banks have been proposed for image enhancement and proper segmentation in recent studies. For instance, a probability map with local directional geometry was proposed for automatic vessel segmentation and catheterization in X-ray angiograms (13). The use of directional filter bank was presented for fast and accurate vessel extraction without time-consuming down-sampling and re-sampling procedures (14). Directional filter banks provide better segmentation results compared to the conventional Hessian-based methods (15). Some authors have tackled the drawbacks of edge blurring and intensity heterogeneity in X-ray images by combining a vessel enhancement method with directional filter banks and vessel similarity (10,13). An issue of automatic segmentation is the removal of non-vessel artifacts after vessel enhancement. Morphological methods can be used to address this problem. A vesselness measure based on Hessian-based guided filtering was adopted for detection and enhancement of vital regions within X-ray angiograms (16). A border detection method has been proposed to avoid confusion of diaphragm borders that are found in angiograms (17).

Numerous studies have focused on tool segmentation such that tool navigation can be processed and visualized in X-ray angiograms with little effort. Nevertheless, studies on automatic endovascular tool segmentation and tracking during robotic intravascular catheterization are still limited. In recent years, available works include estimation of centerline pixels and caliber pixels in segmented vessels of an angiogram (18). For catheter segmentation, an object classifier was proposed for the suitable features of catheters when around the centerline in 2D angiogram images. A fast and automatic graphics processing unit (GPU)-based method was presented for tracking guidewire pixels in fluoroscopic image sequences (19). Also, in a related study, graph-based optimization modeling was approached by using a sequence of pixel segments to formulate and track guidewires in X-ray angiograms (20). To reduce X-ray exposure and improve surgical safety, a visual-based scale-adaptive algorithm was developed in a previous study for guidewire tracking (21). The tool segmentation and tracking methods reviewed were categorized as model-based and learning-based approaches. While both utilized pattern recognition, learning-based segmentation and tracking approaches have yet to be studied for robot-assisted intravascular interventions. A recurrent residual network was applied for segmentation and catheter tracking during endovascular aneurysm repair (22). Relatedly, a model based on deep neural networks for guidewire tracking during robotic navigation was proposed (23). Successive localized frames were utilized for tracking; however, the study did not involve the use of X-ray image frames.

Some researchers developed a real-time catheter segmentation method for robotic endovascular intervention using optical flow-guided warping (24). The model could segment and track catheters in 2D X-ray sequences using only raw ground-truth for training. Evaluation of the study was carried out with phantom-based angiograms. While learning-based systems can learn essential features from input data, the effective application of endovascular tool segmentation and tracking requires images from mammalian subjects and undertaking the tedious manual ground-truth labeling. Despite the above-mentioned studies, automatic guidewire segmentation and tracking is still a challenge during robotic intravascular interventions. Thus, in this paper, we propose a robust method for automatic segmentation and tracking of guidewire during robotic intravascular catheterization. This study is part of an incremental project on the development of a data-guided interventional robotic system for cyborg autonomy during PCIs. The robotic system follows an iterative prototyping from early generations proposed for intravascular cardiac interventions (5,25,26).

Robotic motion control was based on modeling direct navigation of a master device and mapping its scaled displacement to operate a bed-side slave robot for intravascular catheterization of guidewire. The presented segmentation method adopts morphological operators and a multi-scale enhancement filter for processing X-ray frames and vesselness measurement, while minima and maxima extrema were obtained for valley or ridge classification to delineate guidewire pixels from the background. Also, morphological dilation was applied with structuring elements to connect and track the guidewire pixels in segmented images. The robot was integrated to initiate cyborg catheterization with closed-loop multimodal control. The rest of this paper is organized as follows: the design of the robotic catheter system developed for intravascular interventions and the proposed segmentation and tracking method are introduced in the Methods section; the experiments carried out for data acquisition, method validation, results, and evaluation studies are analyzed and discussed in the Results section; lastly, the conclusion and outline of future works are presented in the Conclusions section.


Methods

Design of the robotic catheter system

The robotic system designed for intravascular interventions in this study is displayed in Figure 1. This system was aimed at reducing interventionists’ exposure to operational hazards during interventions. The platform was designed for seamless remote catheterization of flexible endovascular tools during cardiac interventions. As part of the iterative prototyping, the current platform includes both master and slave mechanisms for teleoperated navigation of endovascular tools (i.e., guidewire and catheter) during intravascular interventions. The mechanisms are robotic devices capable of motion control of 2 or more DoFs for axial and radial navigation of endovascular tools. The devices were designed with isomorphic similarities to how interventionists use their hands and fingers for tool manipulation during intravascular interventions. The core components are presented below.

Figure 1 Computer-aided design (CAD) model of the robotic catheter system.

Master control device

The master device is a portable 2-DoF robotic mechanism 45×19×13 cm3 in size and located at the shielded remote cockpit where operators sit to issue manipulation control. The device has smart knob and clamp controllers for guiding axial and radial catheterization control, maintaining a fixed guidewire position, and changing orientation of the flexible tool when clamped during interventions. A proximal force sensor is fixed for sensing and feedback of the robot-tool grasp force. The knob has a magnetostrictive electromagnetic bar to provide motion (position and orientation) control data. Control signals are decoded from the potentiometer sensor bar and transmitted to the slave device. The latter was designed to drive endovascular tools with axial feed accuracy ≤0.5 mm and radial feed accuracy ≤0.5° during cardiac catheterization. A dedicated minimal-delay (less than 100 ms) control protocol based on sampled communication signals was employed. Axial, radial, and hybrid motions were implemented to mimic natural operations in the catheterization labs. Interventionists can use the master device, which is coupled with the display unit from a control station, to guide the bed-side (i.e. slave) robotic device during intravascular cardiac interventions.

Slave control device

The slave control device is a 4-DoF robotic device 57×22×16 cm3 in size located nearby inside the operation room. Aside from the 2-DoF motion coordination that the “main knob” in the master device provides, a tool clamp knob is also included in the design, as shown in Figure 1. The former, regarded as the main DoF manipulator knob, can be used to issue axial and radial control signals for operations of the slave robot, whereas the latter, a tool clamp knob, is for grasping and setting vertical orientation of the flexible endovascular tool in the slave robot. The design mechanism of the slave robot is an improvement over its initial prototypes (5,25,26). Unlike the earlier generations, the current prototype has an isomorphic design with in-built sensing units for unified surgeon–robot cohort. Also, the control system utilizes a distributed minimal delay (≈0.1 s) protocol based on motion variables sampled from TCP/IP communication. Axial translation, radial rotation, and their hybrid were implemented to mimic natural operational methods in the interventional rooms. Similar to power distribution in the second generation (5), the robotic mechanism operates on an in-built 8,000 mAh lithium battery pack. As shown in Figure 1, the slave control device is operated under X-ray fluoroscopy to produce angiograms for real-time visualization of the procedures. During catheterization with the robotic system, contrast dye can be injected to aid acquisition of angiograms with navigation views of the endovascular tools and possibly the blood vessels. Either 2D or 3D X-ray images, which allow interventionist visualization, can be acquired at low or high resolutions and with or without angiography subtraction.

Guidewire segmentation and tracking

Unlike the existing works presented in Section 1, a novelty of the proposed segmentation and tracking method stems from the use of intensity-based pixel analysis for image preprocessing, vesselness measurement, and segmentation and tracking of endovascular tools in X-ray angiograms. The edge-preserving method was developed for automatic segmentation and tracking of guidewire during intravascular cardiac catheterization. This omits the computation costs required for centerline estimation and optimization procedures present in previous works (15-21). By adopting the process illustrated in Figure 2, the proposed segmentation and tracking method started with in-vessel extraction of guidewire pixels in X-ray angiograms based on multi-scale differential geometric approaches. The method consists of 4 sequential phases, and details of the procedures at each phase are discussed herein.

Figure 2 Schematic view of proposed tool segmentation and tracking method in X-ray angiograms.

Sequential frame preprocessing

Sequences of X-ray angiograms captured with the robotic system can be employed for visualizing and analyzing movement of the guidewire in patient vessels. However, the frames mostly have low contrast intensities with non-uniform pixel representation which makes it hard to differentiate pixels of guidewire from those of background and/or blood vessels.

To address this challenge, multi-scale top-hat transformation was adopted to distinctly enhance pixel contrast in each frame. Pixel details are extracted as differences between an image frame and its morphological opening with respect to structured neighboring elements. The latter is a binary valued neighborhood obtained with the morphological dilation and erosion operations briefly introduced in this subsection. Suppose X(yY,zZ) is a grayscale image obtained after adjusting the intensity values of an input angiogram, grayscale expansion (27) is performed with the local maximum operator in Eq. 1 on a structuring element, H(y,z).

$X \oplus H(y,z) = \max\{\, X(y - y',\, z - z') + H(y', z') \mid (y', z') \in D_H \,\}$ [1]

where y and z are the coordinates of a pixel in the grayscale image (X), while y' and z' are the coordinates of the structuring element at which it fits the input image. DH is the domain of a flat structuring element, and the local maximum operator acts over the pixel neighborhood determined by the shape of DH. The structuring element was rotated and traversed across all locations in the image, and values of the rotated structuring element were added to the pixel values of the image to determine the maxima of the translated positions. Similarly, grayscale erosion was performed on X(y,z) by replacing the addition operation on the right-hand side of Eq. [1] with subtraction (13) and taking the local minimum operator over the neighborhood of a given pixel. Hence, X(y,z) was taken in the range −∞:+∞ outside the domain of the grayscale image. Morphological opening and closing were then performed to sequester pixel values of the foreground and background. These regions were obtained by performing the contrast enhancement operation given in Eq. [2] on the frames, with Xfg and Xbg defined in Eqs. [3,4]. This produced angiograms with distinct bright and dark regions obtained from the morphological opening (∘) and closing (•) operators at different scales of the structuring elements.

$X_{en}(y,z) = X + X_{fg} - X_{bg}$ [2]

$X_{fg}(y,z) = X(y,z) - X \circ H(y,z)$ [3]

$X_{bg}(y,z) = X \bullet H(y,z) - X(y,z)$ [4]
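To make this step concrete, a minimal sketch of the multi-scale top-hat contrast enhancement of Eqs. [1-4] is shown below. It assumes a grayscale angiogram loaded as a NumPy array; the disk-shaped structuring elements, their radii, and the use of scikit-image morphology routines are illustrative assumptions rather than the exact implementation used in this study.

```python
import numpy as np
from skimage.morphology import opening, closing, disk

def tophat_contrast_enhance(X, scales=(3, 5, 7)):
    """Multi-scale top-hat contrast enhancement (Eqs. [1-4]).

    X : 2D float array in [0, 1] (grayscale angiogram).
    scales : radii of the disk-shaped structuring elements (assumed values).
    """
    X = X.astype(np.float64)
    X_fg = np.zeros_like(X)   # bright (foreground) details, Eq. [3]
    X_bg = np.zeros_like(X)   # dark (background) details, Eq. [4]
    for r in scales:
        H = disk(r)
        # white top-hat: image minus its morphological opening
        X_fg = np.maximum(X_fg, X - opening(X, H))
        # black top-hat: morphological closing minus the image
        X_bg = np.maximum(X_bg, closing(X, H) - X)
    # Eq. [2]: add bright details, subtract dark details
    return np.clip(X + X_fg - X_bg, 0.0, 1.0)
```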

Pixels representing the guidewire were sought by subtracting the first image frame from subsequent ones. However, an angiogram sequence shares similar background pixel values with slight inter-frame variations. These changes are due to non-uniform lighting and the subjects’ cardiac movements during fluoroscopy. Thus, a preprocessing stage was included to eliminate artifacts introduced by fluoroscopy. Since the noise features are dissimilar and cause abnormal morphology with varying statistics across the resulting frame, a denoising step was incorporated. For this, a global pixel normalization operation was performed to convert angiogram pixel values into a common space. This partially dealt with pixel variations caused by diverse illumination exposures and enhanced segmentation of anatomical and pathological structures in an angiogram. Intensity normalization, shown in Eq. [5], was adopted for dynamic range expansion by stretching the frame’s histogram to have a proper grayscale distribution.

$\tilde{X}_{en} = (N_{max} - N_{min})\,\dfrac{1}{1 + e^{-\frac{X_{en} - \beta}{\alpha}}} + N_{min}$ [5]

where α defines the width of the input intensity range and β the intensity around which the range is centered. Furthermore, median filtering was adopted for nonlinear noise elimination; this improved the signal-to-noise ratio of the enhanced image frames and dimmed out the remaining artifacts.
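The normalization of Eq. [5] followed by median filtering can be sketched as follows; the values of α and β, the output range, and the 3×3 median window are assumed for illustration only.

```python
import numpy as np
from scipy.ndimage import median_filter

def normalize_and_denoise(X_en, n_min=0.0, n_max=1.0, alpha=0.1, beta=0.5):
    """Sigmoid intensity normalization (Eq. [5]) followed by median filtering.

    alpha controls the width of the mapped intensity range and beta the
    intensity around which it is centred (values here are assumptions).
    """
    X_norm = (n_max - n_min) / (1.0 + np.exp(-(X_en - beta) / alpha)) + n_min
    # nonlinear noise suppression to dim residual fluoroscopy artifacts
    return median_filter(X_norm, size=3)
```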

Vesselness measurement and analysis

After preprocessing the image frames, measurement of the pixels’ vesselness was staged to further process the angiograms. These procedures were designed to filter the image frames so as to explore the geometric features of anatomical and pathological structures in an enhanced angiogram image. A measurement scale varying within a certain range can be used to detect tubular structures even at different sizes. To determine a vesselness map, we used multi-scale enhancement filtering. The anisotropic filtering method was employed to enhance line structures in the enhanced angiographic images, starting with the Taylor series expansion in the neighborhood of point xo, as in Eq. [6]. This approximated the geometric structures in the enhanced images up to second order using the gradient vector ∇o,σ and the Hessian matrix computed at point xo at a scale σ. Furthermore, the matrix Ho,σ was defined using Eq. [7], where its entries were obtained by convolving the derivatives of the Gaussian function along the vertical and horizontal directions of image Xen˜.

$\tilde{X}_{v}(x_o + \delta x_o, \sigma) \approx \tilde{X}_{en}(x_o, \sigma) + \delta x_o^{T}\,\nabla_{o,\sigma} + \delta x_o^{T}\, H_{o,\sigma}\, \delta x_o$ [6]

$H = \begin{bmatrix} X_{yy} & X_{yz} \\ X_{zy} & X_{zz} \end{bmatrix}$ [7]

The second derivative of the Gaussian kernel was used at a given scale to measure the pixel contrast values of the kernel probes within the scale range −σ:+σ along the direction of the derivative. The local behavior of the image Xen˜ obtained in Eq. [5] was derived as a convolution (*) of point xo in the image with derivatives of its Gaussians at a scale σ {Eq. [8]}. The γ factor was introduced to ensure that the response of the differential operators induced normalized derivatives for fair comparison at multiple scales. Analyzing the Hessian information gives a rational classification that can enrich vessel segmentation in the enhanced angiograms. Thus, pixel vesselness was computed with respect to the eigenvalues and eigenvectors of Ho,σ at multiple scales.

$\dfrac{\partial}{\partial x}\tilde{X}_{en}(x,\sigma) = \sigma^{\gamma}\, \tilde{X}_{en}(x) * \dfrac{\partial}{\partial x}\,\dfrac{1}{\sqrt{2\pi\sigma^{2}}}\, e^{-\frac{x^{2}}{2\sigma^{2}}}$ [8]

Eigenvalues that correspond to the kth normalized eigenvector u^σ,k of Ho,σ computed at σ are displayed in Eq. [9]. The eigenvalues are combined into a discriminant function that has a maximum response for structures behaving as a tube at scale σ. Thus, the vesselness index acquired at the different values of σ with Eq. [10] can be analyzed to obtain a final estimate of vesselness with Eq. [11].

$\lambda_{\sigma,k} = \hat{u}_{\sigma,k}^{T}\, H_{o,\sigma}\, \hat{u}_{\sigma,k}$ [9]

$\tilde{X}_{en}(S) = \begin{cases} 0 & \text{if } \lambda_{2} > 0 \\ \exp\!\left(-\dfrac{R_{B}^{2}}{2\beta^{2}}\right) \times \left(1 - \exp\!\left(-\dfrac{S^{2}}{2c^{2}}\right)\right) & \text{otherwise} \end{cases}$ [10]

$X_{V}(\gamma) = \max_{S_{min} \le S \le S_{max}} \tilde{X}_{en}(S, \gamma)$ [11]

where RB is a ratio, denoting the largest cross-sectional area, that accounts for deviation from a blob-like structure and simultaneously distinguishes between line and plate patterns in the second-order structure of the image Xen˜; S is a local measure of the cross-sectional blobness of the image structure. The factors are parameterized as RB = |λ1|/|λ2| and S = √(Σk λk²), while β and c regulate the sensitivity of structure contrast values with respect to noise artifacts and background, respectively. Also, λ1 and λ2 are the eigenvalues of the Hessian matrix of the Gaussian-filtered image, sorted in decreasing absolute order. The maximum filtering response is obtained at a scale that approximately matches the size of the vessels to be detected. This can be used to extract the principal directions along which the local second-order structure of the image decomposes. In general, increasing the values of β and c increases the response to features of the tubular structures in the image being processed and simultaneously suppresses noise and texture in the background. Thus, the parameters should be tuned to values that keep the guidewire detectable in the angiogram image frames. Intensity values of the pixels are further remapped by adjusting the contrast of the images. This heuristic step yielded contrast-enhanced frames, with the geometry of the vessel-like structures in the angiograms becoming more visible afterwards.
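A compact sketch of the multi-scale vesselness measurement of Eqs. [6-11] is given below, built on the Hessian-based Frangi filter available in scikit-image rather than a from-scratch implementation. The scale set approximating 7 scale layers and the mapping of the parameter c onto the filter's gamma argument are assumptions; β =0.5 and c =15 mirror the values reported later in the Results section.

```python
from skimage.filters import frangi
from skimage import exposure

def vesselness_map(X_norm, sigmas=(1, 3, 5, 7, 9, 11, 13), beta=0.5, c=15):
    """Multi-scale Hessian vesselness (Eqs. [6-11]) with contrast remapping.

    sigmas approximates 7 scale layers; beta and c regulate sensitivity to
    blob-like deviation and background noise, respectively.
    """
    # black_ridges selects dark tubular structures; flip if the guidewire
    # appears brighter than the surrounding background in the angiogram
    V = frangi(X_norm, sigmas=sigmas, beta=beta, gamma=c, black_ridges=True)
    # heuristic contrast remapping so vessel-like structures stand out
    return exposure.rescale_intensity(V, out_range=(0.0, 1.0))
```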

Guidewire detection

Segmentation of guidewire pixels was performed by filtering the intensity values of tool pixels and other structures at neighboring pixels at unique thresholds. It was observed that intensities of the guidewire pixels are lighter and appear as valley points in a neighborhood. Results from the vesselness geometrical discrimination yield processed image frames with valley-like and ridge-like hallmarks. The general extrema detection method, as discussed by Rasche (28), was adopted to find positions where adjacent values are simultaneously maximum and minimum. This localization technique was applied along the four major principal orientations (i.e., axial and diagonal directions) of an image’s pixels in order to observe the extrema. For simplicity, it was assumed that the ridge or river components of an image were the pixels that traverse the image with maximum and minimum extrema.

The vessels were taken as valleys that enclose the endovascular tool at the highest extrema. Processing only the tubular structures in the image bypasses the time-consuming edge-linking process used in conventional edge detection methods. Thus, existing methods and criteria (28,29) that had been defined for evaluating ridges and valleys were adopted. For instance, the presence of a maximum extremum can be observed with Eq. [12] by comparing the center pixel p, taken as a point (p0) in a given n×n neighbourhood, whose value X˜v(y,z) should be higher than its 2 neighbouring axial and diagonal pixel values, respectively. Equally, the 2 signs in Eq. [12] are inverted for the minimum extremum case. The count per pixel is determined as ΣdX˜vd¯ to find the valleys and their ridges, where d represents the direction (axial or diagonal) of the adjacent pixels being compared.

$\tilde{X}_{v}^{\bar{d}} = \begin{cases} 1 & \text{if } \left(p_{1} < \tilde{X}_{v}(y,z)\right) \wedge \left(\tilde{X}_{v}(y,z) > p_{2}\right) \\ 0 & \text{otherwise} \end{cases}$ [12]

After the maximum and minimum extrema are obtained, the probability that a pixel belongs to the background or the guidewire is computed by finding the ridges and valleys in the processed image. For this, the 2D crease condition, in which the directional derivative of X˜v along the principal eigenvector vanishes at the point (i.e., ∇X˜v·u^(p0)=0), is adopted (30). Valleys in the image are also determined as the loci of the extrema using Eq. [13]. This can be used for detecting the endovascular tool, which moves along as the vessel ridge is being cannulated. Parameters Lx|yn are the local extrema in a spatial domain of the 2D function f(x,y) for a point p0 in the processed image, μv is the threshold value used to detect the valley pixels, and λn|n=1,2 and u^ are the eigenvalues and eigenvectors of the processed image (31).

$\tilde{X}_{v}^{d} = \begin{cases} \lambda_{n-1}(p_{0}) > \mu_{v} & \text{if } p_{0} \text{ is a valley} \\ \lambda_{n-1}(p_{0}) < \mu_{v} & \text{if } p_{0} \text{ is a ridge} \end{cases}$ [13]
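A simplified sketch of the directional extrema test of Eqs. [12,13] follows: each pixel is compared with its two opposite neighbours along the four principal orientations, and pixels whose response exceeds the threshold μv along at least one orientation are kept as candidate guidewire pixels. The neighbourhood handling and the default threshold are illustrative assumptions.

```python
import numpy as np

# four principal orientations: horizontal, vertical, and the two diagonals
DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def detect_extrema(Xv, mu_v=0.05):
    """Directional extrema detection (Eqs. [12-13]), simplified sketch.

    A pixel is kept when its vesselness response is a local maximum along
    at least one principal orientation and exceeds the threshold mu_v.
    """
    H, W = Xv.shape
    mask = np.zeros((H, W), dtype=bool)
    center = Xv[1:-1, 1:-1]
    for dy, dx in DIRECTIONS:
        p1 = Xv[1 - dy:H - 1 - dy, 1 - dx:W - 1 - dx]  # neighbour on one side
        p2 = Xv[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]  # neighbour on the other side
        is_max = (center > p1) & (center > p2) & (center > mu_v)
        mask[1:-1, 1:-1] |= is_max
    return mask
```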

Tool tracking

Following the guidewire detection approach discussed above, pixels of the tool are merged to track the endovascular tool during catheterization. For this, the processed image with marked ridge pixels is dilated. This shift-invariant operation {Eq. [14]} expands the disparate pixels in the image with morphological operations and structuring elements. The value X˜v(p0)e is a translation of the pixel at point (p0) in X˜v by an element e ∈ SE, where SE is the set of structuring elements applied. Thus, the ridge pixels become bigger with a unique intensity, while broken and disjointed areas separated by gaps smaller than the structuring element are filled and connected.

$\tilde{X}_{v}(p_{0}) \oplus SE = \bigcup_{e \in SE} \tilde{X}_{v}(p_{0})_{e}$ [14]

A connectivity step is applied to validate if guidewire pixels in the segmented image are properly connected, while the unconnected pixels are left as noise artifacts. Tool movement is determined as displacement and orientation between consecutive frames and highlighted at the tip to notify interventionists about the navigation process. The tool tip is initialized with coordinates of an introducer sheath used for path creation and guiding the endovascular tool at the vascular entry point during intravascular interventions. Since vascular cannulation actually starts from a rear position of the introducer, this point will be constant across all frames obtained during a catheterization procedure. Thus, the point’s coordinates are taken as the tip of the guidewire in the first frame while the tool displacement is computed as a vector defined on the 2 points {X˜v (pi),X˜v (pj)}, as shown in Eq. [15]. Also, orientation is taken as the vector direction using the dot product in Eq. [16]. Thus, positions of the tool tip in every consecutive n frame can be analyzed and visualized by the interventionists with lesser burden while tracking the tool’s movement in specific frames.

$p_{ij} = \sqrt{\tilde{X}_{v}(p_{i})^{2} + \tilde{X}_{v}(p_{j})^{2}}$ [15]

$r_{ij} = \cos^{-1}\!\left(\dfrac{\tilde{X}_{v}(p_{i}) \cdot \tilde{X}_{v}(p_{j})}{\left|\tilde{X}_{v}(p_{i})\right| \cdot \left|\tilde{X}_{v}(p_{j})\right|}\right)$ [16]
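The tracking stage of Eqs. [14-16] can be sketched as follows: the detection mask is dilated to reconnect broken guidewire pixels, the largest connected component is retained, the tip is taken as the connected pixel farthest from the introducer-sheath entry point, and inter-frame displacement and orientation are derived from the tip coordinates. The dilation radius, the pixel spacing, the tip-selection rule, and the use of the Euclidean distance between consecutive tip positions are simplifying assumptions rather than the exact formulation of Eqs. [15,16].

```python
import numpy as np
from skimage.morphology import dilation, disk
from skimage.measure import label

def track_tip(mask, introducer_yx, prev_tip=None, pixel_mm=0.18):
    """Connect guidewire pixels (Eq. [14]) and estimate tip motion (Eqs. [15,16])."""
    # morphological dilation to bridge small gaps between detected pixels
    connected = dilation(mask, disk(3))
    labels = label(connected)
    if labels.max() == 0:
        return None, 0.0, 0.0
    # keep the largest connected component as the guidewire
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    ys, xs = np.nonzero(labels == largest)
    # tip = connected pixel farthest from the introducer-sheath entry point
    d2 = (ys - introducer_yx[0]) ** 2 + (xs - introducer_yx[1]) ** 2
    tip = np.array([ys[np.argmax(d2)], xs[np.argmax(d2)]], dtype=float)
    if prev_tip is None:
        return tip, 0.0, 0.0
    # displacement (mm) and orientation (deg) between consecutive frames
    disp = np.linalg.norm(tip - prev_tip) * pixel_mm
    v1 = prev_tip - np.asarray(introducer_yx, dtype=float)
    v2 = tip - np.asarray(introducer_yx, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
    orient = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return tip, disp, orient
```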


Results

To validate the proposed guidewire segmentation and tracking method, 12 in vivo robotic intravascular catheterizations were performed, and X-ray angiograms acquired during the trials were saved. The experimental details are provided herein.

In vivo experiment setup

Several in vivo trials were carried out with the master–slave robotic system used for PCI by cannulating a vascular pathway in a 22-week-old rabbit (weight: 2.5 kg). In these trials, robotic catheterization was performed by navigating a 0.014” guidewire (Abbott Vascular, Diegem, Belgium) along the auricle-to-coronary arterial path, from the auricle entry up into the vessels of the heart of the rabbit. Guidewire selection was based on properties of the targeted vascular pathway, such as diameter and tortuousness. Animal care followed the procedures advocated and approved by the Ethical and Use Committee of the Institution (SIAT-IACUC-200528-A1289). The rabbits were pre-medicated with 40 µg/kg of anticholinergics administered intramuscularly and 2.5 mg/kg of isoflurane administered intravenously. A total of 5 operators with varying intravascular tool manipulation skills from the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, participated in the robotic catheterization tasks.

The robotic system was successfully used to catheterize the guidewire along the auricle-to-coronary vessel of the rabbit while X-ray angiograms were acquired. Details of the master–slave system are discussed in the Introduction section. A control program was developed for teleoperation of the master–slave robot and implemented in the Python programming language. A cone-beam computed tomography (CBCT) system was used for X-ray fluoroscopy during the robotic catheterization. The CBCT includes two-dimensional imaging with circular sliding axis movement. In this study, the CBCT was equipped with a 70 kV (and 5 mA) generator and a flat detector for generating 2D X-ray angiograms. The imaging system was developed using C++ with the Microsoft Foundation Class (MFC) library in Microsoft Visual Studio 2013 (Microsoft Corp., Redmond, WA, USA), and ran on a workstation with an Intel Core i3-3420 dual-core processor (Intel Corp., Santa Clara, CA, USA) at 3.10 GHz and 16 GB RAM. The angiograms generated were used to validate the proposed method for segmentation and tracking of guidewire during the catheterization procedures.

Experimental results

This section is focused on the validation outcomes obtained when the proposed method was used for automatic guidewire segmentation in an in vivo robotic catheterization dataset. The method consists of two major procedures: segmentation and tracking. The results of each step in the two procedures are presented and discussed in the following subsections.

Segmentation results

The segmentation procedure involves detection and delineation of guidewire in the X-ray angiograms. Thus, the environment in which the model was implemented is discussed along with the appropriate parameters chosen for the implementation. The method was implemented in Matlab version R2019a (The MathWorks, Natick, MA, USA) installed on a Lenovo desktop computer with an Intel Core i7-6700 processor (Intel Corp.) at 3.40 GHz and 32 GB RAM. In the validation study, we applied the proposed method on the dataset, which consists of 12 X-ray sequences with a total of 13,689 image frames. The angiographic images were projected with 1,440×1,560 pixels and 0.18×0.18 mm2 resolution. The multiscale top-hat transformation was done with diamond-shaped structuring elements with a size of 7, while line structuring elements with a size of 17 were used for the morphological dilation applied for tracking the guidewire during the catheterization process. To differentiate background pixels from those of the guidewire, heuristic pixel threshold adjustment was performed, while threshold values of 0.05 and 1 were applied to map out the low and high contrast pixels for ridge detection, respectively. The multi-scale vessel enhancement was done with scale layers =7, scale ratio =2, β =0.5, and c =15. These values were defined as maximal filters over all pixel scales of the images.
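For reference, the implementation parameters listed above can be grouped into a single configuration object; the dictionary below is purely illustrative, and the expansion of the 7 scale layers and the scale ratio of 2 into concrete filter scales is an assumption about how these values were applied.

```python
# Illustrative grouping of the segmentation parameters reported above.
SEG_CONFIG = {
    "tophat_structuring_element": ("diamond", 7),  # multiscale top-hat transform
    "dilation_structuring_element": ("line", 17),  # tracking-stage dilation
    "ridge_thresholds": (0.05, 1.0),               # low/high contrast mapping
    "vesselness": {"scale_layers": 7, "scale_ratio": 2, "beta": 0.5, "c": 15},
}

# Assumed expansion of the scale layers and ratio into concrete filter scales,
# e.g., an arithmetic progression 1, 3, 5, ..., 13 (7 layers, step = scale ratio).
vessel_scales = [1 + k * SEG_CONFIG["vesselness"]["scale_ratio"]
                 for k in range(SEG_CONFIG["vesselness"]["scale_layers"])]
```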

The segmented frames obtained with the proposed method were used for guidewire detection and tracking in the 12 X-ray sequences. The results obtained for two sequences are shown in Figure 3. Each row presents the chosen frames in a sequence while the 4 columns display the original X-ray angiogram along with the results achieved at different stages of the procedure. The segmented guidewires are represented by white colored pixels appearing within the background pixels (black color) in each plot. Results from cases of the first and last trials’ sequences in Figure 3 are given. These are sample trials in which focused movements of pull and push (the main catheterization tasks) were used for tool delivery during intravascular interventions. The 2 catheterization operations in the figure are a push operation from the first data sequence and a pull operation from the last data sequence in Table 1. In both trials, the guidewire was catheterized along different in vivo paths by different subjects. It can be observed in the guidewire segmentation results that the proposed method could successfully detect the majority of guidewire pixels in each frame of the 2 sequences. Further, the results show that the majority of the background pixels were identified when the inter-frame subtraction operation, shown in Figure 3B, was performed as the first procedure. This left limited speckles and noise which were identified and removed during vesselness measurement (Figure 3C). By applying the extrema detection mechanism on intensity values of the processed images, pixels of the guidewire were separated from other line-like objects in the angiograms. Results of this step are shown in Figure 3D (the fourth and eighth columns of Figure 3). Similar results were achieved for the remaining 10 sequences, but these are not displayed due to space limitation. A total of 6 binary pixel classification metrics, namely accuracy (ACC), sensitivity (SEN), specificity (SPE), negative predictive value (NPV), false discovery value (FDV), and Matthews correlation coefficient (MCC), were used as defined in Eq. [17] to validate the segmentation performance. The parameters used in the computation were derived from the 4 outcome metrics: true positive (TP), the number of guidewire pixels that were correctly detected; false positive (FP), the number of background pixels that were classified as guidewire pixels; true negative (TN), the number of background pixels that were correctly classified; and false negative (FN), the number of guidewire pixels that were incorrectly classified. Mean values of the evaluation metrics obtained for the 12 sequences are presented in Table 1. Simultaneously having high SEN and SPE values indicated that the method could detect the guidewire and background pixels distinctively in the angiograms. The false discovery predictive rate was defined as a likelihood value to account for incorrectly detected pixels.

Figure 3 Segmentation of guidewire in X-ray sequences acquired during push and pull auricle-to-coronary intravascular catheterization in rabbits. The “L” captions indicate image segmentation results for the push operation in the first X-ray sequence, while “R” captions are the segmentation results for the pull operation in the last X-ray sequence. (A) Original angiogram; (B) initial preprocessing results; (C) vesselness measurement results; (D) guidewire tool detection results.
Table 1 Mean segmentation performance

Furthermore, MCC was used to provide a balanced validation measure in the presence of class imbalance. Class imbalance in the dataset can explain the simultaneously very high accuracy values and lower sensitivity: the high ratio of background pixels in the angiograms caused high true-negative counts, which made the accuracy values very high. The specificity values should therefore be considered to avoid misinterpreting the proposed method as an imperfect detector of guidewire pixels when its segmentation performance is evaluated on sensitivity alone. Performance of the method can be further interpreted from the relatively high NPV. While the high accuracy and specificity are partly due to the considerably high NPVs, the moderately high FDV shows that the method could reveal misclassifications arising from type I error in null hypothesis testing. The MCC values indicate that the segmentation method is nearly stable, as MCC reflects the magnitude of correlation between the binary classes, that is, between the ground-truth pixels and the pixels detected in the segmented frames.

$\text{Perf}_{s} \begin{cases} ACC = \dfrac{TP + TN}{TP + FP + TN + FN} \\[4pt] SEN = \dfrac{TP}{TP + FN} \\[4pt] SPE = \dfrac{TN}{TN + FP} \\[4pt] NPV = 1 - \dfrac{FN}{FN + TN} \\[4pt] FDV = \dfrac{FP}{TP + FP} \\[4pt] MCC = \dfrac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \end{cases}$ [17]
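The metrics of Eq. [17] can be computed directly from the binary segmentation and ground-truth masks; a minimal sketch is given below, with counts cast to floating point and a small epsilon added as safeguards that are not part of the original definitions.

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-12):
    """Binary pixel-classification metrics of Eq. [17].

    pred, gt : boolean arrays in which True marks guidewire pixels.
    """
    tp = float(np.sum(pred & gt))     # guidewire pixels correctly detected
    fp = float(np.sum(pred & ~gt))    # background pixels labelled as guidewire
    tn = float(np.sum(~pred & ~gt))   # background pixels correctly classified
    fn = float(np.sum(~pred & gt))    # guidewire pixels that were missed
    return {
        "ACC": (tp + tn) / (tp + fp + tn + fn + eps),
        "SEN": tp / (tp + fn + eps),
        "SPE": tn / (tn + fp + eps),
        "NPV": 1.0 - fn / (fn + tn + eps),
        "FDV": fp / (tp + fp + eps),
        "MCC": (tp * tn - fp * fn)
               / (np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps),
    }
```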

It is important to state that the performance varied with the number of guidewire pixels in each angiogram during the push and pull operations. The poorer validation results obtained for some metrics, such as SEN and MCC, agree with the outcomes of a few previous studies (32).

Guidewire tracking results

The validation results obtained when the proposed method was used for tool tracking are discussed in this section. Subsequent to guidewire segmentation, tracking results from the proposed method were also validated. For this, the motion of the guidewire structure was realized by tracing the tip displacement and orientation of the segmented guidewire in each angiogram. Tracking results were obtained at a frame interval of 50 in the 12 sequences, including the 2 sequences reported in Figure 3. As presented in Figure 4, tracking was indicated with a cyan mark along the guidewire pixel border in each angiogram. Samples from the first and last sequences in the catheterization trials are shown in Figure 4 (LB) and Figure 4 (RB), respectively.

Figure 4 Tracking of guidewire pixels in X-ray angiograms during push (L) and pull (R) of auricle-to-coronary catheterization sequences in Figure 3. (A) Original angiogram. (B) Tracked guidewire pixels are marked in cyan color.

The plots show that the proposed method successfully tracked the guidewire tip upon sufficient tool segmentation from the image sequences. Nevertheless, pixel connectivity was observed as a major issue in many frames; thus, performance of the tracking procedure was considered. Inter-frame displacement and orientation were used to compute the cannulation translation length and rotation degrees, which are overlaid in each plot of Figure 4 (LB and RB).

Tracking performance was validated with 3 different metrics: tracking error (ERR), connectivity (CON), and area (Area), as defined in Eq. [18]. The δ(∙) operator was used to find the variance of guidewire displacement and orientation between segmented and ground-truth images. The connectivity index has been considered a major metric for validating tracking performance in some applications (9,33). We modified the definition to avoid error summation in cases when guidewire pixels were not found in ground-truth frames. The connectivity index is the percentage ratio of the number of intersections at corresponding locations of guidewire pixels (Fi) with respect to ground-truth pixels (G) between the ith segmented frame and the manually labelled image. Area is a percentage metric used to validate the guidewire pixel tracking overlap between Fi and G, where ⊕ denotes morphological dilation. It was realized that the outcome of the tracking metrics depends on the performance of the segmentation. The mean tracking accuracy, connectivity, and tracking area obtained for the 12 sequences are presented in Table 2. The tracking error includes both the average displacement and orientation errors.

Table 2 Mean tracking performance

Although the proposed method shows good tracking ability, high tracking error values were obtained in some angiograms, such as the 12th sequence. This could be attributed to the presence of noise speckles at points where guidewire pixels intersect in the segmented and ground-truth images. The catheterization trials involved clockwise and counter-clockwise radial motions, which can be observed from the guidewire’s tip pose data, typically its orientation.

Validating the guidewire’s response to radial operations directed by the robot may not be reflected with tracking error. Thus, more metrics are required to further validate the tool tracking function of the method. The proposed method was further validated based on the error and connectivity indices defined in Eq. [18]. As presented in Table 2, the connectivity index indicated that over 74% of the actual guidewire pixels in the segmented frames were linked, while the highest mean connectivity value was obtained in the 9th sequence. Likewise, the proposed method showed that more than 60% of the segmented guidewire areas can be tracked with respect to the reference frames. Thus, both images were understood to have minimal mean differences along the ridges and edges of the tool. Details of the tracking performances presented in Table 2 show that the proposed method could track the guidewire pixels in each angiogram frame (Fi) with respect to pixels that are at the corresponding locations of the ground-truth image (G). For accessibility, the motion details are included in Figure 4B. After analysis of the segmentation and tracking results, it could be observed that the proposed method was able to classify the majority of the pixels in the angiogram and properly segment the pixels belonging to the guidewire or cannulated vessel away from the background pixels.

$Err = \left(\left|\delta\!\left(p_{G-F_{i}}\right)\right|,\ \left|\delta\!\left(r_{G-F_{i}}\right)\right|\right)$ [18a]
$Con = \max\!\left(0,\ 1 - \dfrac{\left|card(G) - card(F_{i})\right|}{card(G)}\right)$ [18b]
$Area = \dfrac{card\!\left(\left(\oplus(G) \cap F_{i}\right) \cup \left(\oplus(F_{i}) \cap G\right)\right)}{card\!\left(G \cup F_{i}\right)}$ [18c]
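A sketch of the tracking metrics of Eq. [18] is given below, assuming binary guidewire masks for a segmented frame Fi and its ground truth G, with the tip displacement and orientation differences already extracted for the error term; the dilation radius used for the area overlap is an assumed value.

```python
import numpy as np
from skimage.morphology import binary_dilation, disk

def tracking_metrics(F, G, disp_diff_mm, orient_diff_deg, dilate_radius=3):
    """Tracking metrics of Eq. [18]: error [a], connectivity [b], and area [c]."""
    # [a] tracking error: absolute displacement and orientation differences
    err = (abs(disp_diff_mm), abs(orient_diff_deg))
    # [b] connectivity index based on the pixel counts of G and Fi
    card_G, card_F = float(G.sum()), float(F.sum())
    con = max(0.0, 1.0 - abs(card_G - card_F) / (card_G + 1e-12))
    # [c] overlap between the dilated masks and their counterparts
    se = disk(dilate_radius)
    num = np.sum((binary_dilation(G, se) & F) | (binary_dilation(F, se) & G))
    area = float(num) / (float(np.sum(G | F)) + 1e-12)
    return err, con, area
```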

Figure 4(LB and RB) shows that the proposed method presented high false-positive rate responses for pixels that had very close intensity values to the background image. This led to a low tracking area index in the segmentation results. Also, this reflected low positive predictive rates. Lower connectivity of guidewire pixels was observed in some frames (Figure 4, LA2, LA3, and RB7); however, the tracking was able to detect the tip of the guidewire in most cases. Effect of artifacts presenting as noise was observed in some cases, as shown in Figure 4 (RB7).

Furthermore, inter-frame motion variables such as displacement and orientation of the guidewire were computed using Eqs. [15] and [16], respectively, as displayed in the top-right corner of the plots in Figure 4 (LB1-7, RB1-7). In this section, we have demonstrated that tool detection and tracking are two different steps that are needed for robotic intravascular catheterization when the motion control is driven by image-based approaches.

Comparison with existing methods

An evaluation study was performed to compare the results of the proposed method with those obtained from 9 existing methods developed for segmentation of blood vessels and endovascular tools in angiographic images. The 9 methods, previously reported on this subject, were selected, implemented, and validated on our dataset.

Details of the implementation and validation performed for the existing methods are as described in the evaluation study. Table 3 presents some details of the results for the 12 sequences. A blood vessel enhancement strategy was proposed based on differential evolution optimization of a Boltzmann univariate distribution to tune Gabor filters (34). Optimal values of the single-scale filtering parameters in that study, that is, filter width, kernel elongation, and orientation, reported as 12, 2.548, and 45, respectively, were adopted to segment guidewire pixels in the 12 sequences during our evaluation. Another method employed for evaluation was the trainable COSFIRE (combination of shifted filter responses) filter, which has been proposed for delineation of blood vessels in retinal images (32). The Gaussian-based method was chosen, as it is versatile with proven capability in defining any vessel-like pattern. The vessel points were selected in an automatic process with a tuple of Gaussian standard deviation (σ) and polar coordinates (ρ) with respect to the center of the filter. Values of the parameters used for evaluation were σ =4.4 and ϕ =8. The factors α =0.7 and σo =30 were constants used to adjust the filter’s configuration. A third approach used to evaluate our method was the multiscale retinal vessel segmentation based on the line tracking method (35). The tracking-based method starts with a group of pixels derived under an intensity selection rule, maps all pixels that are assumed to belong to a vessel at different scales, and terminates when a cross-sectional profile condition becomes invalid. This approach shares some similarities with ours. For instance, morphological operations and median filtering are applied in the initial stages to restore disconnected vessel lines and eliminate noise.

Table 3 Performance evaluation of the proposed and existing segmentation methods

For the multi-scale tracking performed in the present study, we used initial and final scales of 3 and 11, respectively, along with a step-size of 2. A constant threshold of 15 was used, while other parameters were adopted as reported in another study (35). The general technique proposed for segmenting and characterizing blood vessels in Heneghan et al. was also used to evaluate this study (36). We adopted a publicly available Matlab implementation of this method and Coye’s method (37,42). The default parameters provided in the studies were adopted.

The methods used for comparison included the single-scale matched filter response (38). This was combined with length filtering for vessel-like segmentation, and involves selecting a threshold value that maximizes the local entropy. Also, the approach proposed by Kang et al. introduces a segmentation procedure based on a similarity degree between pixels of the gray-scale detection response (39). Furthermore, the approach of Nguyen et al., in which vessel pixels are segmented using multiline detector response with a fixed threshold, was also employed for evaluation (40). A threshold value of 0.56 was selected to maximize the classification accuracy using a training set. The detector scale was set as 15 and the step-size was set as 2. Lastly, in the method by Qian et al., multiscale response of a top-hat operator was used to enhance vessel-like structures (41). A total of 3 different morphological operators with disk of sizes 1, 3, and 7 were applied. The methods were developed with capabilities to delineate the cardiac and retinal vascular structures. For fair comparison with our method, the existing segmentation methods were implemented in a Matlab environment with the same dataset used in this study (1).


Discussion

In the comparative study, mean accuracy (mean-ACC), Dice similarity score (DSC), and positive predictive value (PPV) were metrics added for clarity on the performance evaluation (32,43). It was observed that the proposed method had a better performance than all the other existing methods except in terms of FDV. In contrast, the multiscale top-hat approach by Qian et al. provided the highest incorrect pixel detection rate, which was reflected in its sensitivity (41). On the other hand, most of the other classification methods presented moderately high sensitivity and very high specificity, and this came at the cost of the significantly higher number of background pixels relative to guidewire pixels. Moreover, this caused all the methods to yield lower DSC, PPV, and MCC values with respect to the ground-truth. The results obtained from each of the methods are plotted in Figure 5 and were further analyzed to validate the performance by applying each of the existing segmentation methods to the 12 trials. The cases of specific frames in the last sequence are presented in the figure. These include the results obtained from the proposed and existing methods chosen for this evaluation study. The original angiograms and actual ground-truth frames of the respective images are displayed in successive rows for ease of visualization. The segmentation results are plotted as the contrast between the guidewire and background pixels. It can be seen that the segmentation results obtained by the newly proposed method were better than those of the existing methods. For instance, detection and delineation of the guidewire achieved with the previous methods were seriously affected by the occurrence of irregular edges, noise speckles, and holes. Also, the methods demonstrated low connectivity of the vessel-like structure (i.e., the guidewire) in the angiograms. However, the method of Qian et al. used repairing processes to fill holes in the disconnected guidewire pixels (41). Nevertheless, the methods could not accurately delineate the guidewire pixels from the background pixels for appropriate segmentation in all cases. Comparable challenges were encountered with the methods of Heneghan et al. and Nguyen et al., but to lower degrees (36,40).

For an in-depth comparison, the 4 binary indicators were determined for statistical hypothesis testing while both type I and type II errors were analyzed for the segmentation results, as presented in the first row of Figure 5. Details of the mean values obtained for the 4 binary indicators based on the 12 trials are presented in Table 4. It is possible to perform additional comparative analysis of the proposed and existing methods. For instance, true-positive and false-negative values of the methods were obtained to illustrate their segmentation abilities. The data in Table 4 show that the proposed method yielded the highest hit rates (i.e., TP and TN values) along with the lowest type I (FP) and type II (FN) errors, which in turn conferred the highest AUC. The method in Azzopardi et al. provided the lowest AUC, which can be attributed to high type I (FP) and type II (FN) errors from the null hypothesis evaluation (32). The receiver operating characteristic (ROC) curves of these methods were plotted for the first frame in the sequence. As shown in Figure 6, the proposed method had the widest AUC. Despite class imbalance in the dataset, analysis of the case study shows that the proposed method can rank a randomly chosen guidewire pixel more positively than a randomly chosen background pixel at an estimation probability of 0.285. On the contrary, the Gaussian-based method in Azzopardi et al. had a relatively low incidence of true-positive and false-positive rates, which affected the accuracy, sensitivity, and AUC of the segmentation method (32). Hence, these are essential reasons for the high FP and FN rates at the edges of image pixels, which resulted in a reduced PPV for the methods in Table 3.

Figure 5 Evaluation of the proposed and existing segmentation methods. The comparative analysis is presented with the angiogram sequence in the last trial displayed as (A) the original angiogram frames, #105, #205, #305, #405, #505, #605, and #705, and (B) their respective ground-truth images. This is followed by the segmentation results obtained with (C) the proposed method; and the methods presented in (D) Cervantes-Sanchez et al. (34), (E) Azzopardi et al. (32), (F) Vlachos & Dermatas (35), (G) Heneghan et al. (36), (H) Coye (37), (I) Chanwimaluang & Fan (38), (J) Kang et al. (39), (K) Nguyen et al. (40), (L) Qian et al. (41), and (M) Li et al. (44). All the methods, apart from our proposed method and that of Heneghan et al. (36), could not achieve total delineation of the foreground and background pixels. Actually, the study by Heneghan et al. (36) also employed image preprocessing with morphological operations performed to emphasize linear structures and differentiate tangible vascular structures in the angiograms. However, these caused the guidewire pixels to be severely affected in our dataset. The method of Nguyen et al. (40) also performed better on the dataset in terms of guidewire delineation. The tool pixels are segmented perfectly by using the multiline detector response with a fixed threshold of 0.56; however, it had a problem similar to that observed with the method of Heneghan et al. (36), but to an even higher degree. Thus, the results attest to the versatility of our method. The ground-truth images were manually marked with the LabelMe toolbox, which is a graphical image annotation tool written in Python and publicly available via https://github.com/wkentaro/labelme. The mask pixels were exported in red color from the toolbox and converted to white for fast matrix matching during validation and evaluation studies.
Table 4 Performance evaluation based on binary indicators*
Figure 6 Comparative analysis of the proposed and existing methods based on ROC curves using the first frame in the last catheterization trial.

Furthermore, the proposed model was also evaluated with a dataset obtained from a conventional intravascular catheterization in addition to the robot-assisted interventions; both were in vivo angiograms obtained in rabbits. In this case, guidewire navigation was performed manually without using the master–slave robotic system. Sequences of the X-ray frames obtained were saved and processed as explained in the Methods section. The proposed model was employed without any modification, and the results obtained are presented in Figure 7. These include eight image frames that were chosen at incremental intervals: frames #55, #155, #255, #455, #505, #805, #905, and #1055. The original angiograms are presented in Figure 7A, while the guidewire pixels segmented from the angiograms are displayed in the middle panel (Figure 7B), and the guidewire pixel tracking results obtained with the method are presented in the right panel (Figure 7C). We can conclude that the robotic system does not negatively affect tool catheterization under X-ray. Rather, it saved the interventionists from exposure to radiation.

Figure 7 Evaluation of the proposed segmentation and tracking methods with an in vivo angiogram dataset obtained during conventional catheterization. (A) Original angiogram frames, (B) segmentation results, (C) guidewire tracking with the tracked pixels marked in cyan. The tool displacement and orientation computed for the inter-frame motion are overlaid on each plot. The elapsed time reported between consecutive frames shows that the segmentation time was slightly longer for each successive frame.
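The inter-frame displacement and orientation overlaid in Figure 7 can be computed from the tracked guidewire pixels along the lines of the following sketch; the choice of the tip point and of a two-point distal direction, the helper name `interframe_motion`, and the `mm_per_pixel` scaling are assumptions rather than the authors' exact formulation.

```python
import numpy as np

def interframe_motion(pts_prev, pts_curr, mm_per_pixel=1.0):
    """Estimate guidewire displacement (mm) and orientation change (degrees)
    between two consecutive frames.

    pts_prev, pts_curr: (N, 2) arrays of tracked guidewire pixel coordinates,
    ordered from the proximal to the distal end (N >= 2).
    """
    tip_prev, tip_curr = pts_prev[-1], pts_curr[-1]
    displacement = np.linalg.norm(tip_curr - tip_prev) * mm_per_pixel

    # Orientation of the distal segment (last two tracked points) in each frame.
    def distal_angle(pts):
        d = pts[-1] - pts[-2]
        return np.degrees(np.arctan2(d[1], d[0]))

    orientation_change = abs(distal_angle(pts_curr) - distal_angle(pts_prev))
    # Wrap into [0, 180] degrees so direction sign flips are not overcounted.
    orientation_change = min(orientation_change, 360.0 - orientation_change)
    return displacement, orientation_change
```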

Further studies are needed to validate how significantly the robotic system can reduce radiation exposure, and whether it can shorten intravascular catheterization procedural time during cardiac interventions. As the procedure follows the general convention in which segmentation models classify pixels as guidewire or background, we conducted an intra-observer model variability analysis by comparing the pixels in the ground truth with the model results. The reliability and variance measures of the segmentation models were defined as functions of the binary indicators presented in Table 3. As shown in Figure 8, the proposed method yielded an intra-class correlation coefficient with a model agreement of 45.18% and ±0.10 variability. These values represent the average reliability and consistency of the models across all frames in the 12 angiogram sequences. The rationale of intra-class correlation analysis is that a reliable model should exhibit relatively little cumulative variance between the observed and actual segmentations, irrespective of the source of that variance. This reflects the difficulty of excluding the variability introduced by non-uniform illumination across the frames of an angiogram sequence and by labelling each frame independently. The results in Figure 8 can be interpreted as showing that the proposed segmentation model characterized the frame pixels in the segmented images closer to the ground-truth data than the other existing models. Nevertheless, sources of variance between the ground truth and the results of the segmentation models can be attributed to the performance of the models and the presence of random or unfiltered image artifacts. Similarly, the computation time required for the segmentation procedure was also analyzed to evaluate the methods.
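The paper does not state which intra-class correlation form was used; as one plausible choice, the sketch below computes a one-way random-effects ICC(1,1) over per-frame measures derived from the ground truth and from a segmentation model, with `icc_1_1` being a hypothetical helper name.

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects intra-class correlation, ICC(1,1).

    ratings: (n_frames, n_raters) array, e.g., column 0 holds a per-frame
    measure derived from the ground truth and column 1 the same measure
    derived from the segmentation model.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    target_means = ratings.mean(axis=1)

    # Between-frame and within-frame mean squares.
    ms_between = k * np.sum((target_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - target_means[:, None]) ** 2) / (n * (k - 1))

    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```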

Figure 8 Intraobserver model variability analysis between ground-truth and segmentation results.

For this purpose, the average execution time taken to segment the guidewire pixels in all the angiograms was obtained with the timing function in MATLAB. As presented in Table 5, the proposed method took an average execution time of about 1 second to segment the guidewire pixels in each angiogram. The multiscale response of the top-hat operator was the fastest approach, followed by the methods of Coye and of Chanwimaluang and Fan, which also had low execution times (37,38,41). One reason our proposed approach had a longer execution time could be its use of multiscale operations during the frame preprocessing and vesselness detection steps. Nonetheless, the proposed method was faster than the remaining approaches. These findings suggest that the proposed method has good potential for segmenting blood vessels and tools with similar structures, including guidewires and catheters, while supporting motion visualization and intraoperative tracking during robotic catheterization.
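The timing in Table 5 was measured in MATLAB; purely as an illustration of the procedure, an equivalent per-frame timing loop is sketched below in Python, where `segment_guidewire` stands in for any of the compared segmentation routines.

```python
import time
import numpy as np

def average_execution_time(frames, segment_guidewire):
    """Average per-frame wall-clock segmentation time over an angiogram sequence.

    frames: iterable of image arrays; segment_guidewire: callable that takes a
    frame and returns a binary guidewire mask (a stand-in for each method).
    """
    per_frame = []
    for frame in frames:
        start = time.perf_counter()
        segment_guidewire(frame)
        per_frame.append(time.perf_counter() - start)
    return float(np.mean(per_frame)), float(np.std(per_frame))
```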

Table 5 Comparison based on average execution time

Conclusions

Cardiovascular diseases are a major cause of death worldwide. Recently, robotic catheter systems have been developed for intravascular interventions; however, this modern approach has required more catheterization time due to several challenges. For instance, X-ray angiograms produced under non-uniform illumination, tool-vessel structural similarities, and the lack of distal data sensing make robotic intravascular catheterization a difficult and potentially unsafe method for PCIs. Additionally, surgeons are still exposed to operational hazards during these procedures. Thus, in this study, a model-based approach was proposed for automatic segmentation and tracking of guidewire pixels in X-ray angiograms acquired during robotic intravascular catheterization for cardiovascular interventions. The proposed segmentation method was based on multiscale enhancement filtering of the preprocessed X-ray frames in the sequences. The angiograms were refined with morphological operations and filters for pixel smoothing and vesselness measurement, while minima and maxima extrema were obtained to classify the pixels either as valleys, representing part of the guidewire, or as ridges, denoting the background. Then, morphological dilation was applied with structuring elements to connect and track the guidewire pixels in the segmented images. Validation of the proposed method on an in vivo X-ray dataset acquired during 12 trials showed a segmentation accuracy of 0.995±0.001, a tracking displacement error of 1.938±2.429 mm, and a tracking orientation error of 0.039±0.040°.
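To make the summarized pipeline concrete, the following sketch strings together plausible open-source counterparts of each step (contrast adjustment and top-hat preprocessing, multiscale Frangi vesselness, threshold-based classification of guidewire pixels, and dilation for connectivity) using scikit-image; the specific filters, scales, and thresholds are assumptions and do not reproduce the authors' MATLAB implementation.

```python
import numpy as np
from skimage import exposure, filters, morphology

def segment_and_track_guidewire(frame, sigmas=(1, 2, 3), dilation_radius=2):
    """Illustrative pipeline: preprocessing, multiscale vesselness enhancement,
    classification of guidewire (valley) pixels versus background (ridge)
    pixels, and dilation for pixel connectivity before tracking.

    Scales and thresholds are placeholders, not the values used in the paper.
    """
    # Preprocessing: contrast adjustment and suppression of large bright structures.
    img = exposure.equalize_adapthist(frame.astype(float) / frame.max())
    img = morphology.black_tophat(img, morphology.disk(7))  # dark tubular tool becomes bright

    # Multiscale enhancement filtering (Frangi vesselness over several scales).
    vesselness = filters.frangi(img, sigmas=sigmas, black_ridges=False)

    # Pixels with strong responses are treated as guidewire (valley) pixels,
    # the remainder as background (ridge) pixels.
    threshold = filters.threshold_otsu(vesselness)
    guidewire = vesselness > threshold

    # Morphological dilation with a structuring element to connect fragmented
    # guidewire pixels before tracking.
    tracked = morphology.dilation(guidewire, morphology.disk(dilation_radius))
    return tracked
```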

Furthermore, comparative studies were performed to evaluate the proposed approach against 9 existing vessel segmentation methods. The newly developed method showed the best performance across the evaluated metrics, including a relatively low FDR. The results of the comparative study showed several advantages of the proposed approach over the other methods. In addition, the proposed method could enhance robotic catheterization for faster interventions while providing surgeons with better tracking and visualization systems. Currently, the proposed method only achieves an average processing time of 1.048±0.066 seconds per image, which corresponds to about 1 FPS, whereas our CBCT machine operates at an average frame rate of 10 frames per second. Thus, the method needs to be optimized to enable more accurate segmentation and real-time tracking.

In the future, methods based on deep convolutional neural networks will be examined for improved performance. Furthermore, the in vivo study only included guidewire motions in the simple vessels of rabbits, which are small animals; further studies using large animals, such as pigs or monkeys, are expected to assist in validating the proposed method. Also, adapting the segmentation and tracking method to an online robotic catheter system may enable increased cybernetic and autonomous tool manipulation during robotic intravascular catheterization.


Acknowledgments

Funding: This work was supported in part by the National Key Research and Development Program of China (#2019YFB1311700); the National Natural Science Foundation of China (#U1713219, #61950410618); the Shenzhen Natural Science Foundation (#JCYJ20190812173205538); and the CAS President's International Fellowship Initiative.


Footnote

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/qims-20-1119). The authors have no conflicts of interest to declare.

Ethical Statement: This study was approved by the Ethics Committee of Shenzhen Institutes of Advanced Technology (no. SIAT-IACUC-200528-A1289). The catheterization trials were performed in accordance with the ethical standards of the 1964 Declaration of Helsinki and its later amendments. Informed consent was provided by all participants.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Grignon B, Mainard L, Delion M, Hodez C, Oldrini G. Recent advances in medical imaging: anatomical and clinical applications. Surg Radiol Anat 2012;34:675-86. [Crossref] [PubMed]
  2. Santos JS, Uusi-Simola J, Kaasalainen T, Aho P, Venermo M. Radiation Doses to Staff in a Hybrid Operating Room: An anthropomorphic phantom study with active electronic dosimeters. Eur J Vasc Endovasc Surg 2020;59:654-60. [Crossref] [PubMed]
  3. Kato S, Kitagawa K, Ishida N, Ishida M, Nagata M, Ichikawa Y, Katahira K, Matsumoto Y, Seo K, Ochiai R, Kobayashi Y, Sakuma H. Assessment of coronary artery disease using magnetic resonance coronary angiography: a national multicenter trial. J Am Coll Cardiol 2010;56:983-91. [Crossref] [PubMed]
  4. Omisore OM, Han S, Zhou T, Al-Handarish Y, Du W, Ivanov K, Wang L. Learning-based Parameter Estimation for Hysteresis Modeling in Robotic Catheterization. Annu Int Conf IEEE Eng Med Biol Soc 2019;2019:5399-402. [Crossref] [PubMed]
  5. Omisore OM, Han SP, Ren LX, Wang GS, Ou FL, Li H, Wang L. Towards Characterization and Adaptive Compensation of Backlash in a Novel Robotic Catheter System for Cardiovascular Intervention. IEEE Trans Biomed Circuits Syst 2018;12:824-38. [Crossref] [PubMed]
  6. Mahmud E, Naghi J, Ang L, Harrison J, Behnamfar O, Pourdjabbar A, Reeves R, Patel M. Demonstration of the Safety and Feasibility of Robotically Assisted Percutaneous Coronary Intervention in Complex Coronary Lesions Results of the CORA-PCI. JACC Cardiovasc Interv 2017;10:1320-7. [Crossref] [PubMed]
  7. Mansour M, Lakkireddy D, Packer D, Day JD, Mahapatra S, Brunner K, Reddy V, Natale A. Safety of catheter ablation of atrial fibrillation using fibre optic-based contact force sensing. Heart Rhythm 2017;14:1631-6. [Crossref] [PubMed]
  8. Omisore OM, Han SP, Jing X, Li H, Li Z, Wang L. A Review on Flexible Robotic Systems for Minimally Invasive Surgery. IEEE Transactions Systems Man Cybernetics-Systems 2020;1-14. [Crossref]
  9. Moccia S, De Momi E, El-Hadji S, Mattos LS. Blood vessel segmentation algorithms —Review of methods, datasets and evaluation metrics. Comput Methods Programs Biomed 2018;158:71-91. [Crossref] [PubMed]
  10. Frangi AF, Niessen WJ, Vincken KL, Viergever MA. Multiscale vessel enhancement filtering. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer Verlag, 1998:130-7.
  11. Lorenz C, Carlsen IC, Buzug TM, Fassnacht C, Weese J. Multi-scale line segmentation with automatic estimation of width, contrast and tangential direction in 2D and 3D medical images. CVRMed-MRCAS'97. Berlin Heidelberg: Springer, 1997.
  12. He K, Sun J, Tang X. Guided image filtering. IEEE Trans Pattern Anal Mach Intell 2013;35:1397-409. [Crossref] [PubMed]
  13. Xu X, Liu B, Zhou F. Hessian-based vessel enhancement combined with directional filter banks and vessel similarity. 2013 ICME International Conference on Complex Medical Engineering. IEEE, 2013:80-4.
  14. Vakilkandi MJ, Ahmadzadeh MR, Amirfattahi R, Sadri S. Accurate coronary vessel extraction using fast directional filter bank. In: 2011 7th Iranian Conference on Machine Vision and Image Processing, MVIP 2011 - Proceedings. 2011.
  15. Schneider M, Sundar H. Automatic global vessel segmentation and catheter removal using local geometry information and vector field integration. In: 2010 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Proceedings. 2010:45-8.
  16. Fazlali HR, Karimi N, Soroushmehr SR, Shirani S, Nallamothu BK, Ward KR, Samav S, Najarian K. Vessel segmentation and catheter detection in X-ray angiograms using superpixels. Med Biol Eng Comput 2018;56:1515-30. [Crossref] [PubMed]
  17. Petkov S, Carrillo X, Radeva P, Gatta C. Diaphragm border detection in coronary X-ray angiographies: new method and applications. Comput Med Imaging Graph 2014;38:296-305. [Crossref] [PubMed]
  18. Hernandez-Vela A, Gatta C, Escalera S, Igual L, Martin-Yuste V, Sabate M, Radeva P. Accurate Coronary Centerline Extraction, Caliber Estimation, and Catheter Detection in Angiographies. IEEE Trans Inf Technol Biomed 2012;16:1332-40. [Crossref] [PubMed]
  19. Chen BJ, Wu Z, Sun S, Zhang D, Chen T. Guidewire tracking using a novel sequential segment optimization method in interventional X-ray videos. In: Proceedings - International Symposium on Biomedical Imaging. IEEE Computer Society; 2016:103-6.
  20. Chen K, Wang C, Xie Y, Zhou S. A GPU-Based Automatic Approach for Guide Wire Tracking in Fluoroscopic Sequences. Intern J Pattern Recognit Artif Intell 2019;33:1954025 [Crossref]
  21. Shi P, Guo SX, Zhang LS, Jin XL, Song DP, Wang WH. Guidewire Tracking based on Visual Algorithm for Endovascular Interventional Robotic System. In: Proceedings of 2019 IEEE International Conference on Mechatronics and Automation, ICMA, 2019:2235-9.
  22. Zhou YJ, Xie XL, Hou ZG, Bian GB, Liu SQ, Zhou XH. FRR-NET: Fast Recurrent Residual Networks for Real-Time Catheter Segmentation and Tracking in Endovascular Aneurysm Repair. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, 2020:961-4.
  23. Ullah I, Chikontwe P, Park SH. Real-Time Tracking of Guidewire Robot Tips Using Deep Convolutional Neural Networks on Successive Localized Frames. IEEE Access 2019;7:159743-53.
  24. Nguyen A, Kundrat D, Dagnino G, Chi W, Abdelaziz MEMK, Guo Y, Ma Y, Kwok T, Riga C, Yang G. End-to-End Real-time Catheter Segmentation with Optical Flow-Guided Warping during Endovascular Intervention. In: Proceedings - IEEE International Conference on Robotics and Automation. 2020:9967-73.
  25. Omisore OM, Duan W, Akinyemi T, Han S, Du W, Alhanderish Y, Wang L. Design of a master-slave Robotic System for Intravascular Catheterization during Cardiac Interventions. In: 16th IEEE International Conference on Control, Automation, Robotics and Vision. 2021:996-1000.
  26. Omisore OM, Han SP, Ren LX, Wang L. A teleoperated robotic catheter system with motion and force feedback for vascular surgery. 18th International Conference on Control, Automation and Systems. 2018:172-7.
  27. Bai XZ, Zhou FG, Xue BD. Image enhancement using multi-scale image features extracted by top-hat transform. Opt Laser Technol 2012;44:328-36. [Crossref]
  28. Rasche C. Rapid contour detection for image classification. IET Image Processing 2017;12:532-8. [Crossref]
  29. Lopez AM, Lumbreras F, Serrat J, Villanueva JJ. Evaluation of Methods for Ridge and Valley Detection. IEEE Trans Pattern Anal Mach Intell 1999;21:327-35. [Crossref]
  30. Haralick RM, Shanmugam K, Dinstein IH. Textural features for image classification. IEEE Trans Syst Man Cybern B Cybern 1973;6:610-21. [Crossref]
  31. Haralick RM. Ridges and valleys on digital images. Computer Vision, Graphics, and Image Processing 1983;22:28-38. [Crossref]
  32. Azzopardi G, Strisciuglio N, Vento M, Petkov N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med Image Anal 2015;19:46-57. [Crossref] [PubMed]
  33. Rodrigues P, Guimaraes P, Santos T, Simao S, Miranda T, Serranho P, Bernardes RC. Two-dimensional segmentation of the retinal vascular network from optical coherence tomography. J Biomed Opt 2013;18:126011 [Crossref] [PubMed]
  34. Cervantes-Sanchez F, Cruz-Aceves I, Hernandez-Aguirre A, Solorio-Meza S, Cordova-Fraga T, Aviña-Cervantes JG. Coronary artery segmentation in X-ray angiograms using gabor filters and differential evolution. Appl Radiat Isot 2018;138:18-24. [Crossref] [PubMed]
  35. Vlachos M, Dermatas E. Multi-scale retinal vessel segmentation using line tracking. Comput Med Imaging Graph 2010;34:213-27. [Crossref] [PubMed]
  36. Heneghan C, Flynn J, O’Keefe M, Cahill M. Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis. Med Image Anal 2002;6:407-29. [Crossref] [PubMed]
  37. Coye T. A novel retinal blood vessel segmentation algorithm for fundus images. MATLAB Central File Exchange. 2015.
  38. Chanwimaluang T, Fan G. An efficient blood vessel detection algorithm for retinal images using local entropy thresholding. In: Proceedings - IEEE International Symposium on Circuits and Systems. 2003.
  39. Kang W, Chen W, Liu B, Wu W. Segmentation method of degree-based transition region extraction for coronary angiograms. In: Proceedings - 2nd IEEE International Conference on Advanced Computer Control, ICACC, 2010:466-70.
  40. Nguyen UTV, Bhuiyan A, Park LAF, Ramamohanarao K. An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern Recognit 2013;46:703-15. [Crossref]
  41. Qian Y, Eiho S, Sugimoto N, Fujita M. Automatic extraction of coronary artery tree on coronary angiograms by morphological operators. Computers Cardiol 1998;25:765-8. [Crossref]
  42. Abbasi A. Blood vessel segmentation in retinal images using mathematical morphology. Available online: www.mathworks.com/matlabcentral/
  43. Omisore OM, Ojokoh BA, Babalola AE, Igbe T, Folajimi Y, Nie Z, Wang L. An affective learning-based system for diagnosis and personalized management of diabetes mellitus. Future Gener Comput Syst 2021;117:273-90. [Crossref]
  44. Li Y, Zhou S, Wu J, Ma X, Peng K. A novel method of vessel segmentation for X-ray coronary angiography images. In: Proceedings - 4th International Conference on Computational and Information Sciences, ICCIS, 2012:468-71.
Cite this article as: Omisore OM, Duan W, Du W, Zheng Y, Akinyemi T, Al-Handerish Y, Li W, Liu Y, Xiong J, Wang L. Automatic tool segmentation and tracking during robotic intravascular catheterization for cardiac interventions. Quant Imaging Med Surg 2021;11(6):2688-2710. doi: 10.21037/qims-20-1119
