Augmented reality during laparoscopic liver resection: a work in progress that is terribly necessary
In their article, Cheung et al. (1) report a laparoscopic liver resection (LLR) technique based on augmented reality (AR) using preoperative administration of indocyanine green for fluorescence imaging.
This technique has already been reported in LLR (2,3), but never specifically in patients with cirrhosis and hepatocellular carcinoma.
Let us leave aside the classic criticisms of retrospective studies (small sample size, single-center design, propensity score built on variables of limited relevance that should have been reviewed) and focus only on the fact that this study indirectly highlights that parenchymal transection during LLR remains difficult, even for expert surgeons. This difficulty is the cornerstone on which the development of LLR rests; the development of complexity scores exists precisely to highlight it (4,5).
It is interesting to note that we now have such effective equipment (CUSA, LigaSure) that, conceptually, we could consider cutting the liver in half in the horizontal plane without bleeding problems. This crazy idea, which would produce experimental hepatocellular insufficiency, illustrates that the major difficulty in LLR is no longer hemorrhage control but rather the control of intraparenchymal navigation (IPN).
This IPN, even in open surgery, remains blind. Indeed, to state a truism: “anatomical elements only become visible during the progression of IPN once they are exposed”; the only way to visualize them beforehand is therefore to use AR. Currently, this AR is performed by the surgeon as a mental construction, relying on his or her ability to merge preoperative imaging data, intraoperative ultrasound data, intraoperative findings, and a personal capacity for 3D visualization. Those who have achieved this mental skill keep reminding us that the learning curve is long; moreover, at least 10% of surgeons will never be able to claim to achieve it, since 10% of surgeons lack 3D perception (6). Thus, all efforts to develop AR should be encouraged and supported.
Let us return to the article by Cheung et al. In theory, any navigation rests on two components: detecting and progressing. Navigation in liver surgery likewise rests on two components: detecting the tumor and detecting the anatomical elements (veins and Glissonean pedicles), in order to bypass the former and to avoid or control the latter. A large part of the IPN is anticipated when the “flight plan” is drawn up, i.e., during preoperative work at the imaging console, now increasingly supported by three-dimensional virtual reality (3D-VR). When the planned resection is anatomical, the programmed IPN is in many cases linear and could free the surgeon from having to detect the lesion, since the planned IPN passes at a distance from it. But “prudence is the mother of safety”: because the IPN may have to change course due to intraoperative turbulence, particularly in laparoscopy, it is essential to do everything possible to ensure that the tumor is precisely located in all cases.
It is therefore precisely in the population chosen by Cheung et al. that this IPN is probably the most complex. Indeed, as shown by the Iwate classification (4), LLR is even more difficult in the cirrhotic liver: progression is complex because the control of anatomical elements is more difficult, and ultrasound detection of lesions is also more difficult or even impossible. This choice of population is yet another reason to congratulate Cheung et al. on their effort.
Acknowledgments
Funding: None.
Footnote
Provenance and Peer Review: This article was commissioned by the editorial office, Laparoscopic Surgery. The article did not undergo external peer review.
Conflicts of Interest: The author has completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/ls.2018.10.08). The author has no conflicts of interest to declare.
Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Cheung TT, Ma KW, She WH, et al. Pure laparoscopic hepatectomy with augmented reality-assisted indocyanine green fluorescence versus open hepatectomy for hepatocellular carcinoma with liver cirrhosis: A propensity analysis at a single center. Asian J Endosc Surg 2018;11:104-11.
- Terasawa M, Ishizawa T, Mise Y, et al. Applications of fusion-fluorescence imaging using indocyanine green in laparoscopic hepatectomy. Surg Endosc 2017;31:5111-8.
- Kawaguchi Y, Velayutham V, Fuks D, et al. Usefulness of Indocyanine Green-Fluorescence Imaging for Visualization of the Bile Duct During Laparoscopic Liver Resection. J Am Coll Surg 2015;221:e113-7.
- Ban D, Tanabe M, Ito H, et al. A novel difficulty scoring system for laparoscopic liver resection. J Hepatobiliary Pancreat Sci 2014;21:745-53.
- Kawaguchi Y, Fuks D, Kokudo N, et al. Difficulty of Laparoscopic Liver Resection: Proposal for a New Classification. Ann Surg 2018;267:13-7.
- Bogdanova R, Boulanger P, Zheng B. Depth Perception of Surgeons in Minimally Invasive Surgery. Surg Innov 2016;23:515-24.
Cite this article as: Laurent A. Augmented reality during laparoscopic liver resection: a work in progress that is terribly necessary. Laparosc Surg 2018;2:52.