Open Access

Deep Learning for Automated Boundary Detection and Segmentation in Organ Donation Photography

Lookup NU author(s): Dr George Kourounis, Robin Nandi, Dr Sam Tingle, Dr Emily Glover, Emily Thompson, Balaji Mahendran, Chloe Connelly, Dr Beth Gibson, Lucy Bates, Professor Neil Sheerin, Professor Colin Wilson


Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

Background: Medical photography is ubiquitous and plays an increasingly important role in medicine and surgery. Any assessment of these photographs by computer vision algorithms first requires that the area of interest be accurately delineated from the background. We aimed to develop deep learning segmentation models for kidney and liver retrieval photographs, for which accurate automated segmentation has not yet been described.

Methods: Two novel deep learning models (Detectron2 and YOLOv8) were developed using transfer learning and compared against existing background removal tools (macBGRemoval, remBGisnet, remBGu2net). Anonymized photograph datasets comprised training/internal validation sets (821 kidney and 400 liver images) and external validation sets (203 kidney and 208 liver images). Each image had two segmentation labels: whole organ and clear view (parenchyma only). Intersection over Union (IoU), the recommended metric for assessing segmentation performance, was the primary outcome.

Results: In whole kidney segmentation, Detectron2 and YOLOv8 outperformed the other models, with internal validation IoU of 0.93 and 0.94 and external validation IoU of 0.92 and 0.94, respectively. The other methods (macBGRemoval, remBGisnet, and remBGu2net) scored lower, with a highest internal validation IoU of 0.54 and a highest external validation IoU of 0.59. Results were similar in liver segmentation, where Detectron2 and YOLOv8 both achieved an internal validation IoU of 0.97, with external validation IoU of 0.92 and 0.91, respectively. The other models reached a maximum internal and external validation IoU of 0.89 and 0.59, respectively. All segmentation tasks with Detectron2 and YOLOv8 completed in 0.13 to 1.5 seconds per image.

Conclusions: Accurate, rapid, and automated image segmentation of surgical photographs is possible with open-source deep learning software. These models outperform existing methods and could impact the field of surgery, enabling advancements similar to those seen in other areas of medical computer vision.
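The primary outcome metric is straightforward to reproduce. Below is a minimal Python sketch of the standard IoU computation for a pair of binary segmentation masks; the function name, the toy arrays, and the convention of scoring two empty masks as a perfect match are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def iou(pred_mask, true_mask):
        """Intersection over Union between two binary segmentation masks."""
        pred = np.asarray(pred_mask, dtype=bool)
        true = np.asarray(true_mask, dtype=bool)
        intersection = np.logical_and(pred, true).sum()
        union = np.logical_or(pred, true).sum()
        # Assumed convention: two empty masks count as a perfect match.
        return float(intersection) / float(union) if union > 0 else 1.0

    # Toy example: the masks agree on 1 of the 2 marked pixels, so IoU = 0.5.
    pred = np.array([[1, 1], [0, 0]])
    true = np.array([[1, 0], [0, 0]])
    print(iou(pred, true))  # 0.5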
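For readers unfamiliar with the tooling, the sketch below shows the typical inference pattern for YOLOv8 segmentation using the open-source Ultralytics library. The checkpoint name and image path are placeholders; the authors' trained organ-retrieval weights are not distributed with the paper, so this illustrates only the general usage, not their exact pipeline.

    from ultralytics import YOLO

    # Load a generic pretrained YOLOv8 segmentation checkpoint
    # (placeholder; not the authors' organ-retrieval model).
    model = YOLO("yolov8n-seg.pt")

    # Run inference on a photograph; the path is hypothetical.
    results = model.predict("kidney_photo.jpg")

    # Each result exposes per-instance pixel masks when objects are found.
    masks = results[0].masks
    if masks is not None:
        print(masks.data.shape)  # (num_instances, height, width)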


Publication metadata

Author(s): Kourounis G, Elmahmudi AA, Thomson B, Nandi R, Tingle S, Glover E, Thompson E, Mahendran B, Connelly C, Gibson B, Bates L, Sheerin NS, Hunter J, Ugail H, Wilson C

Publication type: Article

Publication status: Published

Journal: Innovative Surgical Sciences

Year: 2024

Pages: epub ahead of print

Online publication date: 20/08/2024

Acceptance date: 27/07/2024

Date deposited: 30/07/2024

ISSN (electronic): 2364-7485

Publisher: De Gruyter

URL: https://doi.org/10.1515/iss-2024-0022

DOI: 10.1515/iss-2024-0022

Data Access Statement: Organ photographs have been shared under specific data-sharing agreements and cannot be further distributed. Raw data on segmentation results are available upon request from the corresponding author. All software used is open source and referenced, allowing readers to replicate our methodology.



Funding

NIHR203332
Northern Counties Kidney Research Fund
Wellcome Trust (R120782)
