Surgical Videos
Recently Published Documents

TOTAL DOCUMENTS: 77 (FIVE YEARS: 51)
H-INDEX: 9 (FIVE YEARS: 3)

2021 ◽  
pp. 014556132110624
Author(s):  
Amy B. De La Torre ◽  
Stephanie Joe ◽  
Victoria S. Lee

Objectives Online surgical videos are an increasingly popular resource for surgical trainees, especially in the context of the COVID-19 pandemic. Our objective was to assess the instructional quality of YouTube videos of the transsphenoidal surgical approach (TSA) using the LAParoscopic surgery Video Educational Guidelines (LAP-VEGaS). Methods YouTube TSA videos were searched using 5 keywords. Video characteristics were recorded. Two fellowship-trained rhinologists evaluated the videos using LAP-VEGaS (scale 0 [worst] to 18 [best]). Results The searches produced 43 unique videos for analysis. Mean video length was 7 minutes (standard deviation [SD] = 13), mean viewership was 16 017 views (SD = 29 415), and mean total LAP-VEGaS score was 9 (SD = 3). The LAP-VEGaS criteria with the lowest mean scores were presentation of the positioning of the patient/surgical team (mean = 0.2; SD = 0.6) and of the procedure outcomes (mean = 0.4; SD = 0.6). There was substantial interrater agreement (κ = 0.71). Conclusions LAP-VEGaS, initially developed for laparoscopic procedures, is useful for evaluating TSA instructional videos. There is an opportunity to improve the quality of these videos.
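For reference, the interrater-agreement figure (κ = 0.71) can be computed with Cohen's kappa over the two raters' total LAP-VEGaS scores. The minimal sketch below uses scikit-learn on hypothetical scores; the quadratic weighting is an assumption, since the abstract does not state which kappa variant was used.

```python
# Minimal sketch: Cohen's kappa between two raters' LAP-VEGaS totals.
# The scores below are hypothetical; quadratic weighting is an assumed
# choice for ordinal 0-18 scores, not stated in the abstract.
from sklearn.metrics import cohen_kappa_score

rater_a = [9, 12, 7, 10, 14, 8, 11, 6]   # total scores (0-18), rater 1
rater_b = [10, 12, 8, 10, 13, 8, 12, 6]  # total scores (0-18), rater 2

kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Cohen's kappa: {kappa:.2f}")
```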


2021 ◽  
Author(s):  
Serena Yeung ◽  
Emmett Goodman ◽  
Krishna Patel ◽  
Yilun Zhang ◽  
William Locke ◽  
...  

Abstract Open procedures represent the dominant form of surgery worldwide. Artificial intelligence (AI) has the potential to optimize surgical practice and improve patient outcomes, but efforts have focused primarily on minimally invasive techniques. Our work overcomes existing data limitations for training AI models by curating, from YouTube, the largest dataset of open surgical videos to date: 1997 videos from 23 surgical procedures uploaded from 50 countries. Using this dataset, we developed a multi-task AI model capable of real-time understanding of surgical behaviors, hands, and tools—the building blocks of procedural flow and surgeon skill—across both space and time. We show that our model generalizes across diverse surgery types and environments. Illustrating this generalizability, we directly applied our YouTube-trained model to analyze open surgeries prospectively collected at an academic medical center and identified kinematic descriptors of surgical skill related to efficiency of hand motion. Our Annotated Videos of Open Surgery (AVOS) dataset and trained model will be made available for further development of surgical AI.
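To illustrate what a kinematic descriptor of hand-motion efficiency might look like, the sketch below computes path length and mean speed from a per-frame hand-centroid trajectory. The descriptors, trajectory, and frame rate are illustrative assumptions, not the exact measures used with the AVOS-trained model.

```python
# Minimal sketch: hand-motion kinematics from a detected hand trajectory.
import numpy as np

def motion_descriptors(xy: np.ndarray, fps: float = 30.0) -> dict:
    """xy: (T, 2) array of per-frame hand-centroid coordinates in pixels."""
    steps = np.diff(xy, axis=0)               # frame-to-frame displacement
    step_len = np.linalg.norm(steps, axis=1)  # distance moved per frame
    return {
        "path_length_px": float(step_len.sum()),           # total distance travelled
        "mean_speed_px_per_s": float(step_len.mean() * fps),
    }

# Hypothetical trajectory for demonstration.
trajectory = np.cumsum(np.random.randn(300, 2), axis=0)
print(motion_descriptors(trajectory))
```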


2021 ◽  
Vol 74 ◽  
pp. 102240
Author(s):  
Zixu Zhao ◽  
Yueming Jin ◽  
Junming Chen ◽  
Bo Lu ◽  
Chi-Fai Ng ◽  
...  

Author(s):  
James R. Mullen ◽  
Ramesh C. Srinivasan ◽  
David A. Tuckman ◽  
Warren C. Hammert

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yoshiko Bamba ◽  
Shimpei Ogawa ◽  
Michio Itabashi ◽  
Shingo Kameoka ◽  
Takahiro Okamoto ◽  
...  

Abstract Analysis of operative data with convolutional neural networks (CNNs) is expected to improve the knowledge and professional skills of surgeons. Identification of objects in videos recorded during surgery can be used for surgical skill assessment and surgical navigation. The objectives of this study were to recognize objects and types of forceps in surgical videos acquired during colorectal surgeries and evaluate detection accuracy. Images (n = 1818) were extracted from 11 surgical videos for model training, and another 500 images were extracted from 6 additional videos for validation. The following 5 types of forceps were selected for annotation: ultrasonic scalpel, grasping, clip, angled (Maryland and right-angled), and spatula. IBM Visual Insights software was used, which incorporates the most popular open-source deep-learning CNN frameworks. In total, 1039/1062 (97.8%) forceps were correctly identified among 500 test images. Calculated recall and precision values were as follows: grasping forceps, 98.1% and 98.0%; ultrasonic scalpel, 99.4% and 93.9%; clip forceps, 96.2% and 92.7%; angled forceps, 94.9% and 100%; and spatula forceps, 98.1% and 94.5%, respectively. Forceps recognition can be achieved with high accuracy using deep-learning models, providing the opportunity to evaluate how forceps are used in various operations.
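The per-class recall and precision values above follow directly from ground-truth and predicted labels for each detected instrument. The sketch below shows the computation with scikit-learn on hypothetical labels; the study itself used IBM Visual Insights rather than this code.

```python
# Minimal sketch: per-class recall and precision for the five forceps classes.
# Labels are hypothetical; only the metric computation is illustrated.
from sklearn.metrics import classification_report

classes = ["ultrasonic scalpel", "grasping", "clip", "angled", "spatula"]

y_true = ["grasping", "clip", "ultrasonic scalpel", "angled", "spatula", "grasping"]
y_pred = ["grasping", "clip", "ultrasonic scalpel", "angled", "spatula", "clip"]

print(classification_report(y_true, y_pred, labels=classes, zero_division=0))
```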


2021 ◽  
Vol 10 (13) ◽  
pp. 23
Author(s):  
Hsu-Hang Yeh ◽  
Anjal M. Jain ◽  
Olivia Fox ◽  
Sophia Y. Wang

2021 ◽  
Author(s):  
Tianyu Liu ◽  
Chongyu Wang ◽  
Junyu Chang ◽  
Liangjing Yang

Specular reflections have always been undesirable when processing endoscopic vision for clinical purposes. A scene afflicted with strong specular reflection can cause visual confusion during the operation of a surgical robot. In this paper, we propose a novel deep-learning model, the Surgical Fix Deep Neural Network (SFDNN), which can effectively detect and fix reflection points in different surgical videos, opening up a new approach to handling undesirable specular reflections.
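For context, the sketch below illustrates the detect-and-fix task on a single endoscopic frame using a classical baseline: thresholding bright, weakly saturated pixels in HSV and inpainting them. This is not the SFDNN model; the thresholds and file names are assumptions for illustration only.

```python
# Minimal sketch: classical specular-highlight removal (not SFDNN).
import cv2
import numpy as np

def remove_specular(frame_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    sat, val = hsv[:, :, 1], hsv[:, :, 2]
    # Specular highlights are typically very bright and weakly saturated.
    mask = ((val > 230) & (sat < 40)).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    return cv2.inpaint(frame_bgr, mask, 3, cv2.INPAINT_TELEA)

frame = cv2.imread("frame.png")  # hypothetical frame from a surgical video
if frame is not None:
    cv2.imwrite("frame_fixed.png", remove_specular(frame))
```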


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Jiaying You ◽  
Shangdi Wu ◽  
Xin Wang ◽  
Bing Peng
