
Fusion of augmented reality imaging with the endoscopic view for endonasal skull base surgery; a novel application for surgical navigation based on intraoperative cone beam computed tomography and optical tracking


Authors: Marco Lai (1); Simon Skyrman (3); Caifeng Shan (1); Drazenko Babic (1); Robert Homan (4); Erik Edström (3); Oscar Persson (3); Gustav Burström (3); Adrian Elmi-Terander (3); Benno H. W. Hendriks (1); Peter H. N. de With (2)
Authors' place of work: (1) Philips Research, Eindhoven, The Netherlands; (2) Eindhoven University of Technology (TU/e), Eindhoven, The Netherlands; (3) Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; (4) Philips Healthcare, Best, The Netherlands; (5) Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
Published in the journal: PLoS ONE 15(1)
Category: Research Article
doi: https://doi.org/10.1371/journal.pone.0227312

Summary

Objective

Surgical navigation is a well-established tool in endoscopic skull base surgery. However, navigational and endoscopic views are usually displayed on separate monitors, forcing the surgeon to focus on one or the other. Aiming to provide real-time integration of endoscopic and diagnostic imaging information, we present a new navigation technique based on augmented reality, with fusion of intraoperative cone beam computed tomography (CBCT) images onto the endoscopic view. The aim of this study was to evaluate the accuracy of the method.

Material and methods

An augmented reality surgical navigation system (ARSN) with 3D CBCT capability was used. The navigation system incorporates an optical tracking system (OTS) with four video cameras embedded in the flat detector of the motorized C-arm. Intraoperative CBCT images were fused with the view of the surgical field obtained by the endoscope's camera. The accuracy of the CBCT image co-registration was tested using a custom-made grid with incorporated 3D spheres.

Results

Co-registration of the CBCT image with the endoscopic view was performed. The accuracy of the overlay, measured as mean target registration error (TRE), was 0.55 mm with a standard deviation of 0.24 mm, with a median value of 0.51 mm and an interquartile range of 0.39–0.68 mm.

Conclusion

We present a novel augmented reality surgical navigation system, with fusion of intraoperative CBCT with the endoscopic view. The system shows sub-millimeter accuracy.

Keywords:

Cameras – Magnetic resonance imaging – Surgical and invasive medical procedures – Computed axial tomography – Endoscopy – Endoscopic surgery – Skull – Endoscopic plastic surgery

Introduction

Endoscopic endonasal skull base surgery offers a minimally invasive approach to skull base pathologies, including tumors, infectious diseases, CSF leaks, and vascular and compressive conditions affecting the cranial fossae and sinuses. This technique has several potential advantages, such as shortened hospitalization, reduced postoperative pain and lower complication rates compared to open surgery [1, 2]. However, approaching the skull base from the nasal cavity implies that the surgical target and adjacent risk organs, such as the carotid arteries and cranial nerves, are covered by bone and not in direct view. Successful use of endoscopy requires surgical experience and in-depth knowledge of anatomical landmarks. To further increase safety, surgical navigation has been implemented and is today a well-established tool in endoscopic skull base surgery [3–6]. While some studies have failed to demonstrate a positive impact of navigation in endonasal surgery [7–9], others have shown reduced complication rates and improved patient outcomes [10–15].

Available navigation systems in clinical use are based on co-registration of preoperative CT and MR images to a coordinate system with a fixed relation to the patient's head. This allows tracking and visualization of a pointer tool, or other instrument, in relation to the patient and the preoperative imaging. The navigational feedback, showing the instrument in relation to the patient's imaged anatomy, is displayed on a dedicated screen [16]. Since endoscopy relies on line of sight, there is no real-time information on sub-surface structures. The use of a pointer tool also means that the surgery must be paused during navigation. Indeed, navigation has been shown to increase operating-room time in endoscopic endonasal procedures [10, 15, 17–19].

In the past few decades, augmented reality (AR) has been investigated as a method to improve endoscopic navigation. AR is a technique in which real-world objects are enhanced by an overlay of computer-generated perceptual information. In the case of endoscopic surgery, AR can be used to augment the live video stream from the endoscope with overlaid image data from pre- or intraoperative radiological exams, such as MRI or CT scans. Thus, a computer-generated image of a pre-planned surgical target, path or risk organ can be integrated into the endoscope's real-world view. In this way, sub-surface structures can be visualized and a pointer tool is not needed [20–22]. AR navigation systems have been successfully applied in several surgical fields, including microsurgery and spine surgery [23–26].

The AR systems proposed for endoscopic surgery have thus far mostly relied on preoperative imaging and contour-based registration protocols, which may result in surgically insufficient accuracy [27]. A commercially available system with direct navigational feedback in the endoscopic view, allowing the overlay of annotations and models, is the Target Guided Surgery system. This system supports electromagnetic, optical and simultaneous hybrid tracking and, as in other AR navigation systems, uses a contour-based protocol for registration of preoperative CT images to the endoscopic view. Surgical targets and pathways are depicted as geometric figures overlaid on the endoscopic view [28]. Since this protocol differs substantially from the approach followed in our system, a detailed comparison is not meaningful. Alternatively, navigation systems based on intraoperative cone beam computed tomography (CBCT) have been shown to reach sub-millimeter accuracy in skull base surgery [29]. In addition, intraoperative CBCT allows acquisition of updated images during surgery [26, 30–33].

In this study, we present a novel navigation technique for endoscopic endonasal skull base surgery, based on an augmented reality surgical navigation (ARSN) system presented previously [20]. Integrating an endoscope into the system allows us to augment the endoscopic view with intraoperative CBCT imaging data during surgery. The aim of the study was to test the accuracy of the system.

Material and methods

The endoscope

A rigid endoscope (model 28132AA, straightforward telescope 0°, Karl Storz GmbH & Co. KG, Tuttlingen, Germany) was attached to a 5-Mpixel camera (model acA2500-14uc, Basler Beteiligungs-GmbH & Co. KG, Ahrensburg, Germany) via a 35-mm focal-length endoscope-camera coupler. Images from the endoscope camera were acquired at 15 fps (frames per second) and at a resolution of 2590 × 1942 pixels.

Skull phantom

A skull phantom was used to simulate the workflow in a surgical scenario. The skull phantom model was downloaded from the Internet and 3D printed in-house in PLA plastic. In addition, two inserts mimicking the internal carotid arteries (cylinders with a diameter of 3 mm), one insert mimicking the optic nerve (a cylinder with a diameter of 2 mm) and one insert mimicking the pituitary gland (a sphere with a diameter of 10 mm) were 3D printed in a resin material and glued inside the skull. Afterwards, the head was fixed on a stable plastic base.

The augmented reality surgical navigation system

We present a new method for endoscope tracking and image augmentation, based on a previously presented augmented reality surgical navigation system (Philips Healthcare, Best, The Netherlands; Fig 1) [34]. The ARSN system has its own proprietary software for planning, segmentation and image processing. The system is composed of two parts: a C-arm for CBCT image acquisition and an optical tracking system (OTS), which makes use of four small high-resolution cameras in the flat-panel X-ray detector of the C-arm [20]. The use of four cameras increases robustness, since only two cameras are needed for marker detection and tracking. The OTS runs at 15 fps and tracks optical markers, each consisting of a 7-mm diameter white disk on a black background. The optical markers are automatically identified in the same coordinate system as the CBCT images. To allow this, a simple calibration procedure is performed when the system is set up, using several markers that, for this initial procedure, are both optical and radiopaque and are therefore seen by the OTS and recognized on the CBCT images. This calibration creates a rigid integration of the two parts of the ARSN system and does not need to be repeated. For endoscope tracking and image augmentation (Fig 2), however, the following steps are performed for every surgical procedure:

Fig. 1. Augmented reality surgical navigation system for endoscopy.
Fig. 2. Experimental setup for the study on the skull phantom.
  • Endoscope calibration

  • CBCT acquisition and co-registration with OTS

  • Image fusion on the endoscopic view.

1. Endoscope calibration

First, an endoscope marker (EM), a 5-cm diameter aluminium disc with a printed pattern of optical markers, was attached to the collar of the endoscope for detection and tracking by the OTS. Second, the intrinsic endoscope camera parameters were computed with Zhang's algorithm [35], using 15 images of a checkerboard from multiple perspectives. Third, the extrinsic parameters were computed using a hand-eye camera calibration algorithm [36], which defined the rigid transformation T_CM between the EM mounted on the endoscope, with pose T_MO tracked via the OTS, and the camera pose T_CO (Fig 3). For this, a calibration plate (CP) with a pattern of 25 optical markers for the OTS was used. The endoscope was fixed in position by a surgical arm, while the CP was moved manually. Twenty views of the EM and the CP were acquired with the OTS while the CP was photographed with the endoscope. For each of the twenty views, the camera pose T_CO was calculated with the P3P (perspective-three-point) algorithm [37], combining the 3D marker locations of the CP and their corresponding 2D endoscopic image projections with the calibrated intrinsic endoscope-camera parameters. From the resulting dataset of EM poses detected by the OTS and the corresponding camera poses, the rigid transformation T_CM was computed by least-squares minimization [36]. A sketch of these steps is given below.

Fig. 3. Hand-eye calibration with a moving calibration plate.
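The following minimal sketch illustrates these three calibration steps with OpenCV in Python. It is an illustration under assumed conventions, not the system's proprietary implementation: the checkerboard geometry and all helper names are hypothetical, plain solvePnP over all plate markers stands in for the P3P solver of [37], and each 4 × 4 matrix maps the named frame into the OTS frame.

```python
# Hypothetical sketch of the endoscope calibration pipeline described above.
import numpy as np
import cv2

def intrinsics_from_checkerboard(images, pattern=(9, 6), square_mm=5.0):
    """Zhang calibration [35] from checkerboard views (pattern/size assumed)."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm
    obj_pts, img_pts = [], []
    img_size = images[0].shape[1::-1]  # (width, height)
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)
    return K, dist

def camera_pose_from_plate(plate_pts_3d, plate_pts_2d, K, dist):
    """Per-view camera pose w.r.t. the calibration plate. The paper solves
    this with P3P; plain solvePnP over all 25 markers is used here."""
    _, rvec, tvec = cv2.solvePnP(plate_pts_3d, plate_pts_2d, K, dist)
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)
    T[:3, 3] = tvec.ravel()
    return T  # maps plate coordinates into the camera frame

def estimate_T_CM(T_MO_list, T_CO_list):
    """Least-squares T_CM from per-view estimates T_CM_i = T_MO_i^-1 @ T_CO_i.
    T_CO_list: camera poses in the OTS frame, obtained by chaining the plate's
    OTS pose with the inverse of camera_pose_from_plate()."""
    Ts = [np.linalg.inv(T_MO) @ T_CO for T_MO, T_CO in zip(T_MO_list, T_CO_list)]
    # Average the per-view estimates; project the mean rotation back onto SO(3).
    U, _, Vt = np.linalg.svd(np.mean([T[:3, :3] for T in Ts], axis=0))
    R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    T_CM = np.eye(4)
    T_CM[:3, :3] = R
    T_CM[:3, 3] = np.mean([T[:3, 3] for T in Ts], axis=0)
    return T_CM
```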

2. CBCT acquisition and co-registration with OTS

A skull phantom was positioned on the surgical table and 5–10 optical markers were placed on its surface and tracked by the cameras. The detected optical markers generated a virtual reference grid (VRG) on the skull surface that was constantly tracked by the OTS. A CBCT image of the skull phantom was acquired, during which the VRG was co-registered with the CBCT image. At this point, any movement recognized by the OTS could be compensated for in the CBCT 3D volume.
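Underlying both this co-registration and the one-time OTS–CBCT calibration described earlier is rigid paired-point registration. A minimal sketch, assuming (N, 3) arrays of corresponding marker coordinates and hypothetical function names, is:

```python
# Kabsch/Umeyama rigid registration and a VRG-based motion update (sketch).
import numpy as np

def rigid_register(P, Q):
    """Find R, t minimizing sum ||R @ P_i + t - Q_i||^2 for paired markers,
    e.g. P in the OTS frame and Q localized in the CBCT volume."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

def motion_update(T_vrg_scan, T_vrg_now):
    """Rigid motion of the phantom since CBCT acquisition (4x4 VRG poses);
    applying this to the CBCT volume keeps the overlay aligned when it moves."""
    return T_vrg_now @ np.linalg.inv(T_vrg_scan)
```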

3. Image fusion on the endoscopic view

The CBCT image could be overlaid on the endoscopic image, as shown in Fig 4, by defining the transformation T_CP from the patient model T_PO to the camera position and orientation (i.e. pose) T_CO, such that:

T_CP = T_CO⁻¹ · T_PO

Fig. 4. Relationship of the frame transformations.

The patient model T_PO was defined based on the optical markers placed on the surface of the skull phantom and the resulting VRG. Using the VRG, the CBCT image could be adjusted according to the motion of the skull phantom T_PO. The camera pose T_CO was defined as:

T_CO = T_MO · T_CM

with T_MO the pose of the EM tracked via the OTS and T_CM the rigid transformation computed during the hand-eye calibration step. The complete transformation, which expresses the skull phantom in the camera reference system, can then be written as:

T_CP = (T_MO · T_CM)⁻¹ · T_PO = T_CM⁻¹ · T_MO⁻¹ · T_PO

This series of transformations leads to the co-registration of the CBCT and the endoscopic image (Figs 5 and 6).
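In code, the overlay step amounts to chaining these 4 × 4 transforms and projecting the CBCT-segmented points through the calibrated camera model. The following sketch (hypothetical helper, OpenCV conventions as above) illustrates the transformation chain, not the system's actual renderer:

```python
# Project CBCT-segmented 3D points into the live endoscopic image using
# T_CP = (T_MO @ T_CM)^-1 @ T_PO.
import numpy as np
import cv2

def overlay_points(pts_cbct, T_PO, T_MO, T_CM, K, dist):
    """pts_cbct: (N, 3) structure points in the patient/CBCT frame.
    T_PO, T_MO: phantom and endoscope-marker poses from the OTS (frame -> OTS);
    T_CM: hand-eye result (camera -> marker); K, dist: camera intrinsics."""
    T_CO = T_MO @ T_CM                 # camera pose in the OTS frame
    T_CP = np.linalg.inv(T_CO) @ T_PO  # patient frame -> camera frame
    rvec, _ = cv2.Rodrigues(T_CP[:3, :3])
    tvec = T_CP[:3, 3].reshape(3, 1)
    px, _ = cv2.projectPoints(np.asarray(pts_cbct, np.float64), rvec, tvec, K, dist)
    return px.reshape(-1, 2)           # 2D pixel coordinates to draw
```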

Fig. 5. The workflow in a surgical scenario.
Overall performance of the image fusion system was evaluated on a plastic skull phantom with a realistic representation of the nasal cavity and adjacent skull base anatomy, including vessels, nerves and the pituitary gland. 1. The skull phantom with optical markers on its surface was positioned on the surgical table. The 3D position of the optical markers was detected by the OTS of the navigation system, to create a VRG for tracking of the phantom's motion. 2. A CBCT image, co-registered with the 3D position of the optical markers (VRG), was acquired. 3. Anatomical structures of interest were manually segmented from the CBCT image. 4. The endoscope, automatically recognized and tracked by the OTS, was placed in the nasal cavity of the phantom. 5. Segmented structures at the base of the skull were augmented onto the live endoscopic image. The augmented endoscopic view, together with anatomical views to guide the surgeon inside the nasal cavity, was displayed.
Fig. 6. Example of image fusion on the endoscopic view.

Test of accuracy

A custom-made grid was designed to test the accuracy of the image overlay on the endoscopic view (Fig 7A). Thirteen stainless steel spheres, with a diameter of 2 mm and a tolerance of 5 μm, were incorporated in the central 20 × 20 mm of a 60 × 60 mm grid. A CBCT image of the grid was acquired, and the spheres, manually segmented from the CBCT, were overlaid on the endoscopic view. Eleven optical markers were placed on the sides of the grid, allowing the motion of the grid to be tracked and the CBCT position to be adjusted accordingly.

Fig. 7.
a) Custom-made grid designed for studying the accuracy of the image overlay on the endoscopic view. b) The endoscope was held in a perpendicular position with respect to the grid by means of a surgical arm.

The accuracy of the CBCT overlay was tested at distances of 5, 10, 15, 20, 25 and 30 mm from the grid, covering common working distances of the endoscope in neurosurgical skull base procedures. At each distance, the grid was repeatedly photographed with the endoscope while being moved manually, to obtain at least 100 positions covering the entire endoscopic field of view. The grid was kept perpendicular to the straight line of sight of the endoscope, which, in turn, was held in position by a surgical arm (Fig 7B). Endoscopic images were segmented to detect the centres and radii, measured in pixels, of the real spheres and of the overlaid spheres segmented from the CBCT. The distance between corresponding centres defined the target registration error (TRE). For the conversion from pixels to millimeters, the ratio between the known diameter of the spheres (in mm) and their diameter in the endoscopic image (in pixels) was used:

TRE [mm] = TRE [px] · d_sphere [mm] / d_sphere [px]

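A sketch of this per-sphere measurement is given below; the Hough-based circle detector is an assumed stand-in for the study's actual segmentation step, and all names are illustrative:

```python
# Hypothetical per-sphere TRE measurement: detect the real and the overlaid
# sphere, take the centre distance in pixels, convert to mm via the known
# 2 mm sphere diameter.
import numpy as np
import cv2

SPHERE_DIAMETER_MM = 2.0

def detect_circle(gray):
    """Return (cx, cy, r) in pixels of the most salient circle in an 8-bit
    grayscale image (returns None if no circle is found)."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=200)
    return None if circles is None else circles[0, 0]

def tre_mm(real_circle, overlay_circle):
    (x1, y1, r1), (x2, y2, _) = real_circle, overlay_circle
    err_px = np.hypot(x1 - x2, y1 - y2)          # centre distance in pixels
    mm_per_px = SPHERE_DIAMETER_MM / (2.0 * r1)  # radius -> diameter
    return err_px * mm_per_px
```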
Statistical analysis

One-way ANOVA with Tukey-Kramer post-hoc analysis was used to compare TRE distributions. Results are presented as means with corresponding standard deviations and as medians with interquartile ranges.
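The reported statistics can be reproduced along the following lines with SciPy and statsmodels; the file names and data layout are hypothetical:

```python
# Sketch of the statistical analysis: one-way ANOVA across working distances,
# then Tukey's HSD (Tukey-Kramer for unequal group sizes) pairwise comparisons.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# tre_by_distance: working distance (mm) -> array of per-sphere TREs (mm)
tre_by_distance = {d: np.loadtxt(f"tre_{d}mm.csv") for d in (5, 10, 15, 20, 25, 30)}

f_stat, p_value = stats.f_oneway(*tre_by_distance.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_value:.2f}")

values = np.concatenate(list(tre_by_distance.values()))
groups = np.concatenate([[d] * len(v) for d, v in tre_by_distance.items()])
print(pairwise_tukeyhsd(values, groups))
```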

Results

Overall TRE was 0.55±0.24 mm, with a median of 0.51 mm and an interquartile range of 0.39–0.68 mm (Fig 8). Mean and standard deviation, along with median, minimum and maximum values for each distance, were calculated using 100 data points. The mean and median values were all notably close to 0.5 mm. The spread of the error for each distance increased slightly as the endoscope moved closer to the grid, but no significant difference in mean or median TRE between the tested distances was found (p = 0.37). The maximum measured error was 1.43 mm (an outlier). Fig 9A shows heat maps of the TRE distribution on the endoscopic view at several distances between the endoscope and the grid. The maps show a lower TRE in the central area of the endoscopic image and a higher TRE towards the image edges. Note that no image overlay was tested in the corners of the image, since the endoscopic field of view is circular (as shown in Fig 9B).

Fig. 8. Boxplots of the errors in the image overlay as a function of the distance of the endoscope from the grid.
Fig. 9.
a) Error distribution, expressed in mm, of the image overlay on the endoscopic view at several distances between the endoscope and the grid. Blue represents areas with lower TRE, and red indicates areas with larger TRE. b) Steel spheres segmented from the CBCT and overlaid on the endoscopic view at several distances between the endoscope and the grid.

Discussion

In this study, we present a novel application for a previously described ARSN system [34]. It has been adapted and developed for endoscopic endonasal skull base surgery, with overlay of intraoperatively acquired CBCT images to create an augmented reality endoscopic view. Sub-millimeter accuracy of the CBCT image overlay on the endoscopic view was achieved.

The utility of surgical navigation in endoscopic endonasal skull base surgery is well established [1–6, 10–15]. Most commercially available navigation systems employ a contour-based registration protocol, where a laser pointer is used to identify the skin surface, which is then co-registered with the preoperative CT or MR images [38]. The general consensus is that accuracy, defined as target registration error (TRE), must be less than 2 mm for accurate navigation [27, 39]. However, this is not consistently achieved with existing navigation systems [39–41]. Moreover, even if mean TRE values are below 2 mm, it is still likely that part of the range will be > 2 mm, resulting in insufficient accuracy in the surgical setting. Therefore, as proposed by Citardi et al., "the next immediate goal for a next-generation surgical navigation platform would be to move TRE to 1.0–1.5 mm or, ideally, to 0.6–1.0 mm" [27]. Surgical navigation system prototypes with image fusion on the endoscopic view have so far not reached TRE values at such low levels [21, 22, 42, 43]. The use of intraoperative CBCT has been suggested as a solution to this problem, as it allows higher registration accuracy on the endoscopic view [42, 44–47]. Compared with other proposed solutions for image fusion on the endoscopic view, our system presents several advantages. Tracking accuracy always depends on a combination of the distances between the tracked markers, the distances between the markers and the cameras, and the resolution of the cameras. Since the cameras integrated in the flat-panel detector are at a close distance to the markers, have a high resolution and have a fixed relation to each other, the endoscope can be tracked with high accuracy [29]. The accuracy of the co-registration between the OTS and CBCT depends on the distance between them, and since the OTS is rigidly integrated in the C-arm, a high co-registration accuracy can be achieved. Moreover, as long as their relative position is unchanged, the co-registration procedure does not need to be repeated before each surgical procedure.

The method presented here achieves a TRE of 0.55±0.24 mm in CBCT image projection, with a median of 0.51 mm and an interquartile range of 0.39–0.68 mm, independent of the working distance. The maximum error of 1.43 mm (an outlier) is well below the currently accepted 2.0 mm [27, 39]. Since there is no standardized method for measuring TRE, the results presented here should be interpreted cautiously in relation to previous publications. Bong et al. achieved an accuracy of about 1 mm in their experiments on image overlay on the endoscopic view [21]. Li et al. found a TRE of 1.28±0.45 mm [22]. Mirota et al. reported a registration accuracy with a mean TRE of 1.28 mm [42]. Citardi et al. estimated a target registration accuracy for surgical navigation of 1.5 mm or better [28]. To the best of our knowledge, the TRE presented in this study is the lowest reported error. However, surgical simulations with printed models and cadavers, tests of inter-user variability and clinical studies are needed to confirm these results, since the accuracy achieved in a laboratory setting may decline as a navigation system is translated into clinical practice.

Using augmented reality for surgical navigation has several potential benefits compared to conventional navigation with display of 2D medical imaging on a separate screen. Overlaying anatomical structures segmented from CT or MRI on the endoscopic video stream enables navigation without the use of dedicated instruments, and thereby improves workflow while visualizing sub-surface anatomy [48]. However, it has been shown that although users of AR navigation were able to identify a target more accurately, they were at the same time at risk of inattentional blindness, i.e. failing to identify unexpected targets such as foreign bodies or critical complications [49, 50]. This aspect should be incorporated in the further development of this ARSN system and in the design of the associated user interface; it is important that the interface provides only the relevant information to the surgeon. Furthermore, in this experimental setup, the skull phantom was fixed. However, the registration and accuracy of the method do not depend on fixation of the head, since the position of the head is tracked by optical markers with real-time updating of its position. The placement of the optical markers must nevertheless be carefully investigated to avoid interference with the surgical workflow.

The intraoperative CBCT in the ARSN system is primarily performed for registration purposes, and algorithms for fusion with preoperative MRI images must be developed to enable a priori planning and segmentation of anatomical structures. At the same time, acquisition and post-processing of intraoperative CBCT images offer several potential advantages of their own. A contrast-enhanced CBCT could potentially be used for segmentation of the carotid arteries or a contrast-enhancing tumor. There is also the possibility of updating the imaging during surgery, e.g. to evaluate the grade of tumor resection or intraoperative changes of anatomy. Fast and accurate intraoperative segmentation of CBCT images has already been performed successfully in the system's spine surgery application [48].

Limitations

In this first study of the ARSN endoscope tracking application, our aim was to set up the system and develop algorithms for high-accuracy tracking of the endoscope. However, the study design has several limitations with respect to evaluating the clinical applicability of the results. The use of a flat grid simplified changes of the distance between endoscope and target, and provided measurable targets throughout the endoscopic field of view. Nevertheless, to prove the clinical value of the system, further testing and simulations on anatomical models as well as cadavers are needed.

Conclusion

In this study, we present a novel application for an augmented reality navigation system in endoscopic surgery, with fusion of intraoperative CBCT with the endoscopic view. A mean TRE of 0.55±0.24 mm was achieved, with a median of 0.51 mm and an interquartile range of 0.39–0.68 mm. The system shows great potential for clinical use in endoscopic skull base surgery, and further development is warranted.


References

1. Lipski SM, Digonnet A, Dolhen PJE. Modern indications for endoscopic endonasal surgery. 2016;4(1):96–102.

2. Zwagerman NT, Zenonos G, Lieber S, Wang W-H, Wang EW, Fernandez-Miranda JC, et al. Endoscopic transnasal skull base surgery: pushing the boundaries. 2016;130(2):319–30. doi: 10.1007/s11060-016-2274-y 27766473

3. American Academy of Otolaryngology-Head & Neck Surgery. Position Statement: intra-operative use of computer aided surgery. 2014. Available at: http://www.entnet.org/content/intra-operative-use-computer-aided-surgery. Accessed August 20, 2016. 2016.

4. Hepworth EJ, Bucknor M, Patel A, Vaughan WCJOH, Surgery N. Nationwide survey on the use of image-guided functional endoscopic sinus surgery. 2006;135(1):68–75. doi: 10.1016/j.otohns.2006.01.025 16815185

5. Justice JM, Orlandi RR, editors. An update on attitudes and use of image‐guided surgery. International forum of allergy & rhinology; 2012: Wiley Online Library.

6. Orlandi RR, Petersen EJAjor. Image guidance: a survey of attitudes and use. 2006;20(4):406–11. doi: 10.2500/ajr.2006.20.2884 16955769

7. Tabaee A, Kassenoff TL, Kacker A, Anand VKJOH, Surgery N. The efficacy of computer assisted surgery in the endoscopic management of cerebrospinal fluid rhinorrhea. 2005;133(6):936–43. doi: 10.1016/j.otohns.2005.07.028 16360517

8. Dubin MR, Tabaee A, Scruggs JT, Kazim M, Close LGJAoO, Rhinology, Laryngology. Image-guided endoscopic orbital decompression for Graves' orbitopathy. 2008;117(3):177–85. doi: 10.1177/000348940811700304 18444477

9. Tschopp KP, Thomaser EGJR. Outcome of functional endonasal sinus surgery with and without CT-navigation. 2008;46(2):116–20. 18575012

10. Al-Swiahb JN, Al Dousary SHJAoSm. Computer-aided endoscopic sinus surgery: a retrospective comparative study. 2010;30(2):149. doi: 10.4103/0256-4947.60522 20220266

11. Dalgorf DM, Sacks R, Wormald P-J, Naidoo Y, Panizza B, Uren B, et al. Image-guided surgery influences perioperative morbidity from endoscopic sinus surgery: a systematic review and meta-analysis. 2013;149(1):17–29. doi: 10.1177/0194599813488519 23678278

12. Fried MP, Moharir VM, Shin J, Taylor-Becker M, Morrison P, Kennedy DWJAjor. Comparison of endoscopic sinus surgery with and without image guidance. 2002;16(4):193–7. 12222943

13. Javer AR, Genoway KAJJoo. Patient quality of life improvements with and without computer assistance in sinus surgery: outcomes study. 2006;35(6). doi: 10.2310/7070.2006.0083 17380830

14. Masterson L, Agalato E, Pearson CJTJoL, Otology. Image-guided sinus surgery: practical and financial experiences from a UK centre 2001–2009. 2012;126(12):1224–30. doi: 10.1017/S002221511200223X 23067580

15. Metson R, Cosenza M, Gliklich RE, Montgomery WWJAoOH, Surgery N. The role of image-guidance systems for head and neck surgery. 1999;125(10):1100–4. doi: 10.1001/archotol.125.10.1100 10522501

16. Schulze F, Bühler K, Neubauer A, Kanitsar A, Holton L, Wolfsberger SJIjocar, et al. Intra-operative virtual endoscopy for image guided endonasal transsphenoidal pituitary surgery. 2010;5(2):143–54. doi: 10.1007/s11548-009-0397-8 20033497

17. Reardon EJJTL. Navigational risks associated with sinus surgery and the clinical effects of implementing a navigational system for sinus surgery. 2002;112(S99):1–19.

18. Rombaux P, Ledeghen S, Hamoir M, Bertrand B, Eloy P, Coche E, et al. Computer assisted surgery and endoscopic endonasal approach in 32 procedures. 2003;57(2):131–7. 12836470

19. Eliashar R, Sichel J, Gross M, Hocwald E, Dano I, Biron A, et al. Image guided navigation system—a new technology for complex endoscopic endonasal surgery. 2003;79(938):686–90. 14707243

20. Burström G, Nachabe R, Persson O, Edström E, Terander AEJS. Augmented and Virtual Reality Instrument Tracking for Minimally Invasive Spine Surgery: A Feasibility and Accuracy Study. 2019. doi: 10.1097/BRS.0000000000003006 30830046

21. Bong JH, Song Hj, Oh Y, Park N, Kim H, Park SJTIJoMR, et al. Endoscopic navigation system with extended field of view using augmented reality technology. 2018;14(2):e1886.

22. Li L, Yang J, Chu Y, Wu W, Xue J, Liang P, et al. A novel augmented reality navigation system for endoscopic sinus and skull base surgery: a feasibility study. 2016;11(1):e0146996. doi: 10.1371/journal.pone.0146996 26757365

23. Salehahmadi F, Hajialiasgari FJWjops. Grand Adventure of Augmented Reality in Landscape of Surgery. 2019;8(2):135. doi: 10.29252/wjps.8.2.135 31309050

24. Eckert M, Volmerg JS, Friedrich CMJJm, uHealth. Augmented reality in medicine: systematic and bibliographic review. 2019;7(4):e10967. doi: 10.2196/10967 31025950

25. Mikhail M, Mithani K, Ibrahim GMJWn. Presurgical and Intraoperative Augmented Reality in Neuro-oncologic Surgery: Clinical Experiences and Limitations. 2019. doi: 10.1016/j.wneu.2019.04.256 31103764

26. Elmi-Terander A, Burström G, Nachabe R, Skulason H, Pedersen K, Fagerlund M, et al. Pedicle Screw Placement Using Augmented Reality Surgical Navigation with Intraoperative 3D Imaging: A First In-Human Prospective Cohort Study. 2019;44(7):517–25. doi: 10.1097/BRS.0000000000002876 30234816

27. Citardi MJ, Yao W, Luong AJOCoNA. Next-Generation Surgical Navigation Systems in Sinus and Skull Base Surgery. 2017;50(3):617–32. doi: 10.1016/j.otc.2017.01.012 28392037

28. Citardi MJ, Agbetoba A, Bigcas JL, Luong A, editors. Augmented reality for endoscopic sinus surgery with surgical navigation: a cadaver study. International forum of allergy & rhinology; 2016: Wiley Online Library.

29. Mirota DJ, Wang H, Taylor RH, Ishii M, Gallia GL, Hager GDJItomi. A system for video-based navigation for endoscopic endonasal skull base surgery. 2011;31(4):963–76. doi: 10.1109/TMI.2011.2176500 22113772

30. Batra PS, Kanowitz SJ, Citardi MJJAjor. Clinical utility of intraoperative volume computed tomography scanner for endoscopic sinonasal and skull base procedures. 2008;22(5):511–5. doi: 10.2500/ajr.2008.22.3216 18954511

31. Elmi-Terander A, Nachabe R, Skulason H, Pedersen K, Söderman M, Racadio J, et al. Feasibility and accuracy of thoracolumbar minimally invasive pedicle screw placement with augmented reality navigation technology. 2018;43(14):1018. doi: 10.1097/BRS.0000000000002502 29215500

32. Elmi-Terander A, Skulason H, Söderman M, Racadio J, Homan R, Babic D, et al. Surgical navigation technology based on augmented reality and integrated 3D intraoperative imaging: a spine cadaveric feasibility and accuracy study. 2016;41(21):E1303. doi: 10.1097/BRS.0000000000001830 27513166

33. Jackman AH, Palmer JN, Chiu AG, Kennedy DWJAjor. Use of intraoperative CT scanning in endoscopic sinus surgery: a preliminary report. 2008;22(2):170–4. doi: 10.2500/ajr.2008.22.3153 18416975

34. Edström E, Burström G, Nachabe R, Gerdhem P, Elmi-Terander A. A Novel Augmented-Reality-Based Surgical Navigation System for Spine Surgery in a Hybrid Operating Room: Design, Workflow, and Clinical Applications. Epub ahead of print, available at: https://doi.org/10.1093/ons/opz236 Accessed August 27, 2019. doi: 10.1093/ons/opz236

35. Zhang Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22(11):1330–4.

36. Lai M, Shan C, editors. Hand-eye camera calibration with an optical tracking system. Proceedings of the 12th International Conference on Distributed Smart Cameras; 2018: ACM.

37. Gao X-S, Hou X-R, Tang J, Cheng H-F. Complete solution classification for the perspective-three-point problem. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003;25(8):930–43.

38. Gumprecht HK, Widenka DC, Lumenta CB. Brain Lab VectorVision neuronavigation system: technology and clinical experiences in 131 cases. Neurosurgery. 1999;44(1):97–104. doi: 10.1097/00006123-199901000-00056 9894969

39. Labadie RF, Davis BM, Fitzpatrick JMJCoio, head, surgery n. Image-guided surgery: what is the accuracy? 2005;13(1):27–31. doi: 10.1097/00020840-200502000-00008 15654212

40. Schlaier J, Warnat J, Brawanski AJCAS. Registration accuracy and practicability of laser-directed surface matching. 2002;7(5):284–90. doi: 10.1002/igs.10053 12582981

41. Snyderman C, Zimmer LA, Kassam AJOH, Surgery N. Sources of registration error with image guidance systems during endoscopic anterior cranial base surgery. 2004;131(3):145–9. doi: 10.1016/j.otohns.2004.03.002 15365528

42. Mirota DJ, Uneri A, Schafer S, Nithiananthan S, Reh DD, Ishii M, et al. Evaluation of a system for high-accuracy 3d image-based registration of endoscopic video to c-arm cone-beam ct for image-guided skull base surgery. 2013;32(7):1215–26. doi: 10.1109/TMI.2013.2243464 23372078

43. Winne C, Khan M, Stopp F, Jank E, Keeve EJIjocar, surgery. Overlay visualization in endoscopic ENT surgery. 2011;6(3):401–6. doi: 10.1007/s11548-010-0507-7 20577827

44. Daly MJ, Chan H, Nithiananthan S, Qiu J, Barker E, Bachar G, et al., editors. Clinical implementation of intraoperative cone-beam CT in head and neck surgery. Medical Imaging 2011: Visualization, Image-Guided Procedures, and Modeling; 2011: International Society for Optics and Photonics.

45. Daly MJ, Chan H, Prisman E, Vescan A, Nithiananthan S, Qiu J, et al., editors. Fusion of intraoperative cone-beam CT and endoscopic video for image-guided procedures. Medical Imaging 2010: Visualization, Image-Guided Procedures, and Modeling; 2010: International Society for Optics and Photonics.

46. Hamming NM, Daly MJ, Irish JC, Siewerdsen JH. Automatic image-to-world registration based on x-ray projections in cone-beam CT-guided interventions. Med Phys. 2009;36(5):1800–12. Epub 2009/06/24. doi: 10.1118/1.3117609 19544799; PubMed Central PMCID: PMC2832033.

47. Prisman E, Daly MJ, Chan H, Siewerdsen JH, Vescan A, Irish JC, editors. Real‐time tracking and virtual endoscopy in cone‐beam CT‐guided surgery of the sinuses and skull base in a cadaver model. International forum of allergy & rhinology; 2011: Wiley Online Library.

48. Burström G, Buerger C, Hoppenbrouwers J, Nachabe R, Lorenz C, Babic D, et al. Machine learning for automated 3-dimensional segmentation of the spine and suggested placement of pedicle screws based on intraoperative cone-beam computer tomography. 2019;1(aop):1–8.

49. Dixon BJ, Daly MJ, Chan HH, Vescan A, Witterick IJ, Irish JCJAjor, et al. Inattentional blindness increased with augmented reality surgical navigation. 2014;28(5):433–7. doi: 10.2500/ajra.2014.28.4067 25198032

50. Yeh M, Wickens CDJHF. Display signaling in augmented reality: Effects of cue reliability and image realism on attention allocation and trust calibration. 2001;43(3):355–65. doi: 10.1518/001872001775898269 11866192

