Artificial Neural Network for Laparoscopic Skills Classification Using Motion Signals from Apple Watch

Rubbermaid Laverde, Claudia Rueda, Lusvin Amado, David Rojas, Miguel Altuve

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

The acquisition of laparoscopic technical skills is constrained by limited training opportunities and by the need for staff physicians to be on site to provide feedback to trainees. In addition, the assessment tools used to measure trainees' performance are not always sensitive enough to discriminate between different levels of expertise. To address this problem, two Apple Watches worn by subjects inexperienced in laparoscopy were used to record their motion signals (attitude, rotation rate, and acceleration) during multiple practices of the peg transfer task in a fundamentals of laparoscopic surgery (FLS) trainer box. Training followed a massed-practice methodology (two hours of training), in which subjects were assessed according to the guidelines of the FLS program. Subsequently, a series of metrics was estimated from the acquired motion signals, and Spearman's rank correlation coefficient was used to select the most statistically significant attributes. A classification model based on artificial neural networks was then trained, using these attributes as inputs, to classify trainees by level of expertise into three classes: low, intermediate, and high. With this approach, an average classification performance of F1 = 86.11% was achieved on a test subset. This suggests that new technologies such as smartwatches can complement surgical training by contributing motion-based metrics that improve current clinical education and offer a new source of feedback through objective assessment.
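The attribute-selection step described above — ranking candidate motion metrics by the strength of their Spearman correlation with task performance — can be sketched as follows. This is a minimal illustration, not the study's code: the metric names (`path_length`, `mean_rot_rate`) and the toy data are hypothetical, and the actual study also applied a significance threshold before feeding the selected attributes to the neural network.

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend the group while consecutive sorted values are tied
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Toy example: correlate hypothetical motion metrics with an FLS-style score.
scores = [12, 25, 31, 44, 58, 63, 77, 85]  # made-up task scores, low to high
metrics = {
    "path_length":   [9.1, 8.4, 7.9, 7.1, 6.0, 5.5, 4.8, 4.0],  # shorter = better
    "mean_rot_rate": [2.0, 2.2, 1.9, 2.1, 2.0, 2.1, 1.9, 2.2],  # ~uninformative
}
selected = {name: spearman(vals, scores) for name, vals in metrics.items()}
```

In this toy run, `path_length` is perfectly (negatively) rank-correlated with the score and would be retained as a model input, while `mean_rot_rate` shows near-zero correlation and would be discarded; the rank-based coefficient is used because it captures monotonic rather than strictly linear relationships.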
