
A Review of Training and Guidance Systems in Medical Surgery

1 Department of Surgery & Cancer, Faculty of Medicine, Imperial College London, London SW7 2AZ, UK
2 Tecnologico de Monterrey, School of Engineering and Science, Ave. Eugenio Garza Sada 2501, Monterrey 64849, Mexico
3 Purdue Polytechnic Institute, Purdue University, West Lafayette, IN 47907, USA
4 Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Submission received: 31 July 2020 / Accepted: 4 August 2020 / Published: 20 August 2020
(This article belongs to the Special Issue Haptics: Technology and Applications)

Abstract: In this paper, the state of the art of recent medical simulators that provide evaluation and guidance for surgical procedures is mapped. The systems are reviewed and compared from the viewpoint of the technology used, force feedback, learning evaluation, didactic and visual aid, guidance, data collection and storage, and type of solution (commercial or non-commercial). The works were assessed to determine whether (1) current applications can provide assistance and track performance in training, and (2) virtual environments are more suitable for practicing than physical applications. An automatic analysis of the papers was performed to minimize subjective bias. It was found that some works limit themselves to recording session data for internal evaluation, while others assess it and provide immediate feedback to the user. However, few works currently implement guidance, aid during sessions, and assessment together. Current trends suggest that automating the evaluation process could reduce the workload of experts and let them focus on improving the curriculum covered in medical education. Lastly, this paper also draws several conclusions, observations per area, and suggestions for future work.

1. Introduction

Learning is difficult in life-critical areas (e.g., medicine), where the required level of expertise is high and students need to spend a considerable amount of time in operating rooms or at training stations. Training methods have been evolving thanks to advances in technology, and current trends focus on e-learning courses and technology-enhanced learning methodologies to improve education processes. Previous works in the area state that users rely on virtual environments to understand concepts or acquire skills. However, these virtual environments often do not provide feedback or help during task execution. One of the current goals of technology applied in education is to assess the skills and knowledge obtained in simulators.
Educational models have been characterized as approaches where an expert teaches a user how to complete a task correctly, which has been considered an effective technique to acquire new knowledge [1]. Nevertheless, training methods are shaped by technological progress. Human-computer interaction (HCI) is a multidisciplinary field that studies the interaction between humans (the users) and computers. HCI has been a research topic since the creation of computers and learning environments, or computer-assisted instruction (CAI) systems. Current approaches in learning environments, such as e-learning courses and Technology-Enhanced Learning (TEL) methodologies, have been gaining the attention of higher-education institutions as feasible ways to enhance and reinforce the knowledge acquired in classes. These can be used as auxiliary elements for courses or as rich educational resources on specific concepts. Additionally, artificial intelligence (AI) in education has been an active topic over the past two decades [2,3,4], where a wide range of methods have been used to assist users [5].
One of the main areas where these technology-oriented practices have been used is the medical field. Surgical procedures require a high level of skill and mastery due to their complex nature. The dissection of human corpses has been a traditional method to study anatomy and practice different surgical interventions since ancient times [6]. Among the difficulties of this method are the legal problems in acquiring cadavers and their subsequent preservation in appropriate containers with adequate solutions to maintain organoleptic characteristics. Moreover, the use of live animals for dissection and surgery training faces increasing ethical concerns and has been discouraged [7]. In addition, cadavers and animals only approximate the real characteristics of living human anatomy, their use is limited to specific procedures, and they cannot be reused. Therefore, TEL approaches, such as Interactive Learning Environments (ILEs), can provide students with new ways to practice their skills before they perform real surgeries.
Virtual environments are used to simulate a wide range of situations that doctors could face during surgeries. They can be easily reconfigured to reflect different scenarios or to vary the difficulty of the performed tasks. Moreover, while earlier virtual environments only exploited the senses of sight and hearing, haptic devices have gained ground in the development of these environments. This technology enables users to manipulate objects in a virtual scene and feel force feedback. Haptic solutions offer tangible interfaces with a high level of usability. This has led several companies and authors to create e-learning environments for medical education (Figure 1).

2. Theoretical Framework

This section reviews related concepts used in e-learning and TEL, as well as existing artificial intelligence (AI) and guidance techniques.

2.1. Surgical Simulators

A more recent and growing practice in hospitals and universities is the use of surgical simulations. These have evolved to include more realistic features, both in their physical appearance and in their behavior [7]. Currently, there are several approaches for training practitioners in surgical methods [8,9,10]. They can be classified into two major groups—(i) practices that include the use of human bodies, natural or synthetic, such as cadavers, animals, or manikins [11], and (ii) practices that use virtual simulations, which draw on current technology trends to provide better interaction and realism [12]. The latter can be classified into three groups—(i) immersive, (ii) augmented, and (iii) visuo-haptic simulators.

2.1.1. Immersive Simulators

Virtual reality (VR) simulators are artificial environments provided by a computer in which one's actions partially determine what happens in the environment. They use a graphic representation of the body parts and tools present in surgery. The user can operate in the virtual environment and see the results of the performed manipulations. Moreover, by using a headset, the user can have an immersive experience through sensory stimuli [13,14].

2.1.2. Augmented Simulators

While VR is fully immersive, augmented reality (AR) expands the real world by using projection displays. AR can provide additional information on top of real-world experiences, such as visual cues for navigation. To make the experience more realistic, simulators can take advantage of AR technology, which allows objects in the simulators to be perceived in three dimensions [15,16,17].

2.1.3. Visuo-Haptic Simulators

Visuo-haptic simulators typically involve virtual simulation that is combined with end-effectors, such as haptic devices. These environments allow the reproduction of the sense of touch. In this way, the user interacts with the simulation exerting and feeling the actual forces that they would apply to perform a real intervention [8]. In computer-assisted or teleoperated surgery, some stations provide haptic feedback while performing the surgery, such as the da Vinci system (Intuitive Surgical, Inc., CA, USA). This system was designed to facilitate operation using a minimally invasive approach controlled by a surgeon from a console.

2.2. Intelligent Systems

Artificial intelligence (AI) has contributed to the development of virtual learning environments [18]. AI is used in the parts of intelligent systems that require decision making, pattern classification, learning from behavior or actions, and so forth. Such a system often collaborates with the user to perform a task with skill and quality [19]. This type of approach focuses on the capacity of systems to contribute to the job and on the ability of users to learn and adapt themselves to their use. AI technologies can be classified into two categories: (1) intelligent assistants (IAs), which are embedded knowledge systems that provide users with intelligent resources, such as help and solutions, during a task or a process; and (2) intelligent tutors (ITs), which are environments that adapt themselves to the different ways that users acquire knowledge. ITs are capable of giving lessons and support to users, and they can provide instant feedback on users' performance.
AI provides a set of powerful tools to build applications that support teaching, training, and learning, expanding new horizons in transmitting information and skills. However, because it is a rather new technology, only a few applications in the medical area use it. In the design of learning environments supported by AI, it should be ensured that students do not perceive the help these technologies provide as a perturbation in their learning process [20]. Some of the AI technologies used in virtual surgery environments are Hidden Markov Models (HMMs) [21,22,23], Support Vector Machines (SVMs) [24,25], Fuzzy Logic (FL) [26], and Bayesian Networks (BNs) [27,28,29]. Recent advances focus on deep learning and derived methods [30].

2.3. Guidance Techniques

A device that provides force feedback is usually implemented to guide users inside virtual tangible environments. In 1998, Gillespie et al. [31] proposed a virtual professor system. They characterized three ways to guide students in the acquisition process of physical skills:
  • Indirect contact, where an expert and a student grasp a tool in different places, and they do the action together.
  • Double contact, where a user holds the tool and an expert grasps his/her hand, and together they proceed to do the task.
  • Individual contact, where an expert grasps the tool, and a student holds his/her hand, and they work together in the activity.
A wide spectrum of guidance aid exists in virtual learning environments and robot-assisted systems. Powell and O’Malley proposed a classification of the techniques used in the development of virtual training environments that require force feedback [32]. Five groups were identified:
  • Gross Assistance (GA) uses virtual fixtures (VFs), spring-damper systems, or attraction models. These are used to "guide" the user to a defined goal by restricting movement. In Gillespie et al.'s terms, GA falls into the indirect and individual contact approaches. However, some authors have considered that GA is not very useful in education because it slows the brain's immediate retention [20,33]. A minimal sketch of a GA-style guidance force appears after this list.
  • Temporally Separated Assistance (TAS) systems temporally separate orientation and task forces [34]. The two forces are rapidly alternated on the same haptic device to let users feel that they control the action. To achieve this, an update rate of 1 Hz is required, which lets students feel direction cues during the activity.
  • Spatially Separated Assistance (SSA) uses two haptic devices. One is used to show the task force, and the other one displays the orientation force. Therefore, according to Gillespie et al., SSA can be considered a double contact technique. Only the work of Gillespie et al. can be classified as an SSA system.
  • Gross Resistance (GR) is based on the over-training concept, where users train on a task in the presence of an opposing force. Consequently, once this force is removed, they perceive the real environment and can perform the task efficiently [35]. Random disturbances, in the form of viscous forces or force fields, are classified as GR guidance.
  • Shared-Control Proxy (SCP) techniques are based on a modification of the work proposed by Zilles and Salisbury [36], where a second proxy and biased spring-dampers are added. This approach makes the shared proxy's position equally influenced by the expert and the beginner. Recently, authors have started to explore the usefulness of SCP [37].
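As an illustration of the GA category above (not taken from any of the reviewed systems), the following Python sketch computes a spring-damper guidance force that pulls a haptic device toward a goal while clamping the output; the gains k and b and the force cap f_max are assumed values.

```python
import numpy as np

def gross_assistance_force(pos, vel, goal, k=150.0, b=5.0, f_max=3.0):
    """Illustrative GA-style guidance force (spring-damper virtual fixture).

    pos, vel, goal: 3-element arrays (device position [m], velocity [m/s],
    target [m]); k [N/m], b [N*s/m], and f_max [N] are assumed gains.
    """
    force = k * (np.asarray(goal) - np.asarray(pos)) - b * np.asarray(vel)
    norm = np.linalg.norm(force)
    if norm > f_max:  # clamp so the rendered force stays in a safe range
        force *= f_max / norm
    return force

# Device at rest, 2 cm away from the goal: the force pulls toward the goal.
print(gross_assistance_force([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.02, 0.0, 0.0]))
```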

3. Methods

The present review was planned and structured according to the systematic review process described by the staff of Keele University [38].

3.1. Review Protocol

The following review protocol was created to establish the methodology to perform the review and avoid bias. The search strategy was conducted using the following steps: (1) Background, (2) Research questions, (3) Search Method for the Identification of Studies, (4) Selection Criteria, (5) Data Extraction and Characteristics of the Studies, (6) Analysis, and (7) Synthesis of the selected works.

3.2. Background

Previous works in the field state that several e-learning environments for medical education have been created in recent years [39,40]. However, current training environments do not provide the level of visual realism needed for these kinds of tasks [41]. Current reviews in the area cover advances in medical training [8,42,43,44]. These reviews have described virtual applications used to practice surgical skills, either as simulation stations, as serious games to validate the acquisition of skills, or in terms of how technology (e.g., haptic technologies and virtual or augmented reality) has been applied to them. However, it has been identified that current virtual environments often do not provide sufficient feedback during or after the simulation, or they do not assess the obtained skills and knowledge. Consequently, this review analyzes recent advances in medical simulators with a focus on evaluation and guidance technologies.

3.3. Research Questions

The initial search started with identifying which solutions for training and guidance in medical surgery have been successfully developed in research and on a commercial level. This made it easier to focus the review efforts and define the research questions. The initial studies found were taken into account and, subsequently, the following research questions were chosen for the review:
(RQ 1)
What kind of virtual applications can provide training and guidance for the different areas of medical surgery?
(RQ 2)
What are the evaluation aspects to compare and find relationships between the training and guidance medical surgery applications?
(RQ 3)
What is the trend for the design of medical training environments between virtual and physical applications?

3.4. Search Method for the Identification of Studies

A bibliographic search using the Purdue Library, which comprises 676 databases, and the Google Scholar platform was carried out to identify the most relevant studies available up to May 2020. The purpose was to identify studies in which solutions for training and guidance in medical surgery have been successfully developed in research and on a commercial level.
The search was performed using the following keywords—(1) guidance, (2) evaluation, (3) assessment, (4) training, (5) medical education, (6) medical training, (7) surgery, (8) simulation training, (9) computer-assisted instruction, and (10) e-learning. Regarding items 4 and 7, it should be noted that, for the purpose of this review, training is considered the process a trainee has to complete to become competent enough to operate on their own, and surgery is conceived as the action required of surgeons to perform incisions that permit unrestricted visibility and direct access to organs in order to treat a disease.

3.5. Selection Criteria

The selection criteria were established to extract studies that satisfy the following list: (1) articles published between 2005 and 2020; (2) articles in English; (3) articles that present guidance or assistance techniques used in medical education, training, or surgery; (4) articles that discuss an evaluation method or statistical approach; (5) articles that assess performance in training or education; (6) articles that appeared in conference proceedings or scientific journals; and (7) exclusion of review papers to avoid duplicated studies.

3.6. Data Extraction and Characteristics of the Studies

A total of 3530 studies were gathered and analyzed. First, two reviewers read all the titles and abstracts, and those that did not relate to the research questions were removed, leaving 2857. From these, 2184 more were removed as duplicates or works not related to the study. Subsequently, the aforementioned reviewers conducted an independent evaluation of the remaining 673 studies. The reviewers excluded 606 works according to the selection criteria until reaching a consensus on the studies to be included in the review. Finally, a total of 67 studies were included, and the full texts of the selected studies were obtained for review (Figure 2).

3.7. Analysis of the Selected Works

The previous works were assessed according to the following criteria—(1) are current applications able to provide assistance and track performance in training? and (2) are virtual environments more suitable for practicing than physical applications? Below, a comparison and summary of the technical aspects used in the simulators are provided, and an evaluation is made to find whether there are any relationships between the applications. An automatic analysis of the papers was carried out to minimize subjective bias. To do so, a high-dimensional metric is proposed that assigns a value to each paper according to the categories to which it belongs (e.g., commercial/non-commercial, virtual solution, physical application, etc.). Moreover, the analysis relies on the assumption that two similar papers will be close to each other in this space; a minimal sketch of such an encoding is given below.
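A minimal sketch of this encoding (the feature values shown are hypothetical, not taken from Table 1): each paper becomes a binary vector over the eight criteria, so that the Euclidean distance between two papers is the square root of the number of criteria on which they disagree.

```python
import numpy as np

# Hypothetical binary encoding over the eight Table 1 criteria
# (CS, FF, SM, DS, AI, G, A, E); 1 = the paper exhibits the feature.
papers = {
    "paper_a": [1, 1, 1, 0, 0, 1, 1, 1],  # illustrative values only
    "paper_b": [0, 0, 1, 0, 1, 1, 0, 1],
    "paper_c": [1, 1, 1, 0, 0, 0, 0, 1],
}
X = np.array(list(papers.values()), dtype=float)

# Similar papers are close: here the distance is sqrt(4) = 2.0, because
# paper_a and paper_b disagree on four of the eight criteria.
print(np.linalg.norm(X[0] - X[1]))
```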

3.7.1. Quantitative Comparison per Area

Figure 3 shows the overall distribution of the papers: 75% focus on training, 19% on surgery, and 6% on planning. Of the planning works, 25% are purely in the area of planning, 50% were developed for preparation but also have training approaches, and 25% have planning and surgery as their goal.
We found that arthroscopy comprises 33.33% training, 50% surgery, and 16.66% planning. Endoscopy comprises 62.5% training and 37.5% surgery. Areas that focus only on training were laparoscopy; ophthalmology; ear, nose, and throat (ENT) procedures; radiology (where only Reference [45] considers the planning area); open surgery (except Reference [46]); neurosurgery; endovascular procedures; urology; and colorectal procedures. All works in orthopedics were developed for surgery, except for Reference [47]. In dentistry, only one work focuses on surgery [48], and the rest were produced for training purposes. Finally, pediatrics is the area with equal numbers of training and surgery papers. It is worth mentioning that these latter areas account for 80% of the data set focused on planning. This could mean that researchers working in these areas have identified a need to add a planning module to these kinds of simulators.

3.7.2. Comparison Analysis

Table 1 provides an implementation summary of the analyzed works, describing their set-up, the type of application, and the type of use. The aspects considered to evaluate the applications were—CS: Commercial Solution, FF: Force Feedback, SM: Stored Metadata, DS: Database Storage, AI: Artificial Intelligence, G: Guidance, A: Assistance, and E: Evaluation. PA: Physical Application and VS: Virtual Simulation are included to test our second hypothesis. Using the information from this table, an evaluation of the presented works was made. First, the eight-dimensional metric of each paper was used to conduct the cluster analysis. Hierarchical clustering with complete linkage was performed in this study to detect similar works. The clustering was carried out employing the hclust command of R on the Euclidean distances between the analyzed works; a scipy-based sketch of this step is given below. Then, once the dendrogram was obtained, the best height was selected to obtain the appropriate number of clusters. In this analysis, this occurred at h = 1.7, and the resulting clusters are shown in Figure 4.
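The paper reports using R's hclust; an equivalent sketch in Python with scipy (with random placeholder vectors in place of the real Table 1 encoding) would be:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Placeholder data: 67 papers encoded as binary vectors over 8 criteria.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(67, 8)).astype(float)

# Complete-linkage clustering on Euclidean distances, the scipy analogue
# of R's hclust(dist(X), method = "complete").
Z = linkage(pdist(X, metric="euclidean"), method="complete")

# Cutting the dendrogram at height h = 1.7 assigns each paper to a cluster;
# with the real data this yielded the five clusters discussed below.
labels = fcluster(Z, t=1.7, criterion="distance")
print(len(set(labels)), "clusters")
```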
Table 2 shows how the analyzed works are grouped according to their cluster and the order presented in Figure 4. Each cluster has its specific characteristics. The first cluster includes the simulators described in References [61,71,80]. The main characteristic of this group is the use of AI techniques to assess user performance. This cluster covers 60% of the AI applications; the complement can be found in cluster two. Like cluster two, this cluster has applications that store metadata. It is worth noting that most of the applications in this cluster provide guidance, except for Reference [71], and only the work described in Reference [80] implements force feedback. Simulators in this cluster could be considered the next generation of training and guidance applications in medical education.
The second cluster was formed by the works discussing surgical environments [49,58,59,60,75,76,77,84,98,109,111]. The main characteristic of this cluster is the ability to store metadata, similar to cluster one. The metadata in these simulators is mostly used for evaluation. Assistance and guidance are only covered in References [49,58,59,60,109]; the rest of the works cover only one or neither, like Reference [75]. Training environments in this cluster could be considered the most complete solutions that do not use artificial intelligence techniques, and only 27.77% of the applications do not provide force feedback. Finally, it has to be noted that most of the simulators in this cluster are virtual solutions, except for Reference [111].
The third cluster includes the papers in References [45,51,63,65,67,70,74,78,79,81,82,87,91,92,93,96,99,100,102,104,105,106,108]. In this cluster, most of the simulators use haptic devices for force feedback, and these works also assess students' performance. This cluster represents 43.24% of all the studies that implement force feedback in their design. The most commonly used haptic device is the Geomagic Touch, due to its six degrees of freedom. The simulators above also assess user performance in the virtual environment and provide assistance during the task. However, only a few of them provide guidance [74,93,100]. This cluster could represent early-stage simulations that have started the validation process. The applications included in this cluster should consider, as a next step, adding a metadata storage module to start evaluating the task. Additionally, to provide guidance, they should implement visual cues, direct messages, or markers as visual aids, or orientation forces to improve performance during the task.
The fourth cluster includes the simulators covered in References [56,72,97,110]. The common group features are that these applications focus on assistance and store metadata. They also guide users in the form of visual cues or markers [56,72,110]. Nevertheless, force feedback implementation [97,110] should be considered in these environments, where interaction and user performance are usually related to the way the practitioner maneuvers in the virtual environment. An improvement to this cluster could be the implementation of a database: by storing sessions, simulators could also provide performance tracking over time.
The fifth cluster characterizes the works in References [46,47,48,50,52,53,54,55,57,62,64,66,68,69,73,83,85,86,88,89,90,94,95,101,103,107]. The main characteristic of this cluster is the use of guidance. It represents 65% of the works that use guidance and 36.67% of the works that are commercial solutions. Authors represented in this cluster should focus future research on providing full evaluation and assistance solutions. In-depth feedback for users, together with individual learning assessment over time, could allow students to enhance dexterity and improve performance. These modules should measure users and guide them to perform the task properly, providing an enhanced learning process. Finally, it has to be noted that only the works in References [68,69] apply AI techniques.
Lastly, an interesting fact is that 45% of the research presented here uses commercial solutions. Of this percentage, 53% are physical workstations, composed of physical models and virtual environments, used to carry out a surgical task; 30% are CA solutions for planning or for use during surgeries; and 17% are virtual environments that can be found on the market. The other 55% are solutions developed by research groups and laboratories across the world (Figure 5).

3.8. Synthesis of the Selected Works

In this section, the relevant studies selected are presented. As mentioned in Section 3.6, this section reviews the applications from the viewpoint of the technology used, force feedback, learning evaluation, didactic and visual aid, guidance, data collection and storage, and type of solution (commercial or non-commercial). The selected studies are organized by the kind of medical practice they cover.

3.8.1. Arthroscopy

Arthroscopy is a type of keyhole surgery used to diagnose and treat problems with joints. It offers benefits such as less trauma, reduced pain, and fast recovery times, but arthroscopic procedures demand that doctors possess high psycho-motor skills. Commercial stations, such as the ARTHRO Mentor manufactured by Simbionix, can be found on the market. Jacobsen et al. tested the ARTHRO Mentor to evaluate its usefulness as a basic competency training simulator [49]. The system consists of a stand with a fiberglass model of a right knee, connected to a computer and two Geomagic Touch haptic devices, sold by Geomagic. ARTHRO Mentor uses visual cues during simulation to guide students during a task. It has integrated medical curricula, which include procedures and full surgical operation cases. Finally, it provides user management capabilities, and it generates statistics and graphics related to students' learning process. Therefore, Jacobsen et al. consider that these types of solutions offer complete, high-quality, and secure stations that use virtual environments to cover the medical curriculum.
The development of computer-assisted (CA) systems and training stations has been considered to help residents and surgeons perform surgeries correctly and with skill. One study in this area was conducted by Facca and Liverneaux [50]. They applied CA techniques in the operation of a trapeziometacarpal prosthesis on a corpse. The bone morphing system Leibinfer and the Knee 2.0 software, both produced by Stryker, were used in this approach. Two tracers and a calibration probe were used during the surgery to provide aid during the procedure. The system focuses on helping surgeons during the following steps—incision, bone morphing preparation and initialization, trapezoidal and metacarpal recordings, first and second kinematic analysis, and prosthesis insertion. Facca and Liverneaux recorded data from the surgery to evaluate it. No dislocation during the task was found. This study helped prove that CA techniques help surgeons obtain and use biomechanical measurements during a surgery.
Total knee arthroplasty (TKA) is one of the most challenging procedures; over 500,000 TKAs are performed annually in North America [112]. A study by Hernandez-Vaquero et al. evaluated whether a CA system could facilitate this type of surgery [51]. To assess CA techniques, the study used two groups: one performed the TKA conventionally, and the other used CA software. Both groups used the Triathlon arthroplasty implant model manufactured by Stryker, and the CA group used the Stryker Knee software. This navigation system allows users to track their instruments' movements and send the data to the camera's localization system. The results state that both the CA and the mechanical technique improved femorotibial alignment, but when preexisting deformities were present, the CA group obtained better results.
Kim et al. used CA software to optimize coronal stability and parallel component positions during TKA surgeries [52]. They evaluated the OrthoPilot navigation system developed by Aesculap. Pre- and post-operation scores were obtained using the Knee Society (KSS) and Hospital for Special Surgery (HSS) systems. OrthoPilot enabled guidance through real-time feedback during operations. Consequently, it provides a useful way to achieve correction of the varus deformity of the coronal alignment from zero to two degrees. Studies such as those of Hernandez-Vaquero et al. and Kim et al. validate that CA techniques let surgeons reach better results than those obtained with standard instrumentation, where values closer to three degrees of external rotation were collected during the positioning of the femoral component.
Another notable work in TKA is the one by Myden et al., who designed an educational intervention for this type of procedure [53]. Myden et al. developed a course to teach TKA. The experiment involved nine residents, six juniors and three seniors. Before using a CA TKA approach, test subjects had to take the Objective Structured Assessment of Technical Skills (OSATS), a validated skill test for TKA [113]. After the OSATS test, the residents used two tibiofemoral (TF) CA systems and one in-house patellar CA system. The TF systems were—the Orthosoft Universal Knee software on a Sesamoid Plasty system, both developed by Zimmer CAS, and the Knee Unlimited software on a Kolibri system, both created by BrainLAB. These CA solutions enable users to plan the desired femoral and tibial component positions on a virtual model, and they then guide the user during cuts to obtain accurate results. Myden et al. thus enabled an environment for developing cognitive flexibility and self-assessment. Results showed that the multiple points of view (conventional and CA) benefited the residents during their learning process, with junior scores even surpassing those of the seniors in the post-tests. As a result, the creation of educational courses with CA systems should be considered by universities and hospitals to reinforce their educational models.
In general arthroscopy procedures, Tashiro et al. focused their work on evaluating skills in this area [54]. The tests were made using a knee simulator developed by Sawbones. The recorded parameters were surgery time, instrument trajectory, and force applied in the simulator. The Aurora measurement system, manufactured by NDI, was used to collect the instruments' path length and velocity. To evaluate the surgical force, a six-degrees-of-freedom sensor manufactured by Nitta Corporation was implemented. Users had an orientation and practice session before starting scored trials. The tasks to be evaluated were—joint inspection and probing, and meniscectomy. Joint inspection and probing let users interact with a defined number of figures within the time limit. On the other hand, meniscectomy was programmed as a guided simulation, where the user has to perform it by following a line in the simulation.
By creating evaluation systems such as the one described by Tashiro et al. in Reference [54], skill levels can be assessed. This is an essential feature in current commercial environments. Such systems allow students and surgeons to understand their degree of mastery and skill in tasks, and help them focus on and improve their less developed techniques in a procedure.

3.8.2. Dentistry

TEL techniques have been gaining ground in all fields of science, where simulators and collaborative environments can be identified as their most representative approaches. In dentistry, treating dental cavities is one of the most common procedures that dentists face. Therefore, Kosuki and Okada implemented a collaborative simulator with guidance [55]. The simulator was created using IntelligentBox, a framework previously developed by the authors [114]. In this dental simulator, actions such as drilling and touching were programmed. The simulator uses a Geomagic Touch haptic device to provide force feedback. Collaborative training is deployed through modules called RoomBoxes, which share user operations between simulators. In this mode, the simulator continuously collects and propagates actions to other RoomBoxes. This feature lets the simulator apply all activities at the moment they occur, enabling real-time guidance and collaboration during a procedure. Even though collaborative techniques guide training, they still need another user's presence, who can range from a resident to an expert. Students still lack expertise during tasks, and for doctors, this type of approach is time-consuming. Consequently, research should focus on developing automated or computer-assisted resources to provide high-quality and feasible training in education.
Another simulator found in this area is the one described by Medellin-Castillo et al. [56]. They created a haptic-enabled virtual simulator called the Orthognathic Surgery System (OSSys). OSSys can be used for training, evaluation, or even planning of orthognathic procedures. The system displays three types of visualization—2D, 2.5D, and 3D. The simulator was developed using the Microsoft Foundation Classes (MFC) of Visual Studio 2010, the Visualization Toolkit libraries (VTK) for graphical rendering, and the H3DAPI for haptic rendering. In this study, a Falcon, manufactured by Novint Technologies, or a Geomagic Touch haptic device can be used. For planning, OSSys calculates the distance between user-defined markers, which helps surgeons generate a possible layout of the procedure. It also provides orthogonal views to consider all possible angles in the surgery. On the other hand, a simulation of six procedures was implemented for training, and evaluation reports are given as feedback. The reports are generated using the completed task times and the Steiner methodology [115].
In dentistry, CA techniques are also found in the area of implantology [48]. Albiero et al. used a CA system for edentulous maxilla implantology. The test subject in this study was a 78-year-old man. The CA system Simplant, developed by DENTSPLY Implants, was used. Impressions of the maxilla and mandible were made. Implant planning, positions, and angles were calculated by the software considering the final position of the titanium bar. This data was transferred to a master model, and the authors proceeded with the implantology. During the operation, only minimal corrections were needed to achieve complete contact. Post-implantology results showed a mean apical deviation of 2.31 mm and a mean angular deviation of 3.72 degrees. This approach also provided immediate rehabilitation of the mandible and maxilla on the same day as the surgery.
On the virtual simulation side, Liu et al. evaluated a ceramic crown preparation training system, the Virtual Learning Network Platform, developed by the Affiliated Stomatological Hospital of Nanjing Medical University [57]. The simulator provides visual aids for the user to understand the procedure (operational essentials, pre-defined criteria, and videos). Moreover, the system provides guidance using the Real-time Dental Training and Evaluation System (RDTES) and collects learning data to assess users' performance [116]. Fifty-seven dental students participated in this study, and the results showed a significant improvement in their clinical skills.
Lastly, Al-Saud et al. performed a study to validate the commercial dental simulator Simodont Dental Trainer (Moog Inc., New York, USA) [58]. The simulator provides feedback, and sixty-three participants tested the environment. Participants performed five different tasks using geometric shapes; they had to remove a target zone and avoid restricted zones during the task. Al-Saud et al. found that the presence of VR devices alone is not sufficient for optimal motor-skills training and must be coupled with expert guidance. Moreover, users who relied on feedback from the device alone obtained the lowest performance throughout the experiment. This could mean that the system has room for improvement in the pedagogical resources of its platform.

3.8.3. Endoscopy

As new technologies arise, endoscopy has benefited from them, given its demands on sensorimotor skills. By applying technology during surgeries, safety can be enhanced, and harmful or dangerous events can be prevented. This type of technology is usually provided by commercial systems, hardware, and software. Fried et al. evaluated the performance of the Endoscopic Sinus Surgery Simulator (ES3) [59]. ES3 records overall and task-specific scores. For evaluation, it calculates scores using procedure time and accuracy. Penalties are applied to the rating if it detects surgical hazards during the process. ES3 provides guidance and aid in the form of a virtual instructor, which points out mistakes, errors, and misses during the simulation. Moreover, the system supplies users with predefined navigation cues, and it uses target markers during injection and dissection tasks.
Another study that analyzes a commercial system for evaluation and guidance during training is the one by Tanoue et al. [60]. In this study, the authors tested the LapSim system manufactured by Surgical Science. This endoscopy simulator was used to create a training course. The LapSim exercises used in this course were lifting and grasping, which let the authors evaluate the coordination of both hands, a crucial skill during endoscopy tasks. LapSim lets professors manage classes and students' profiles, and it enables them to generate performance reports. In the area of aid, LapSim gives real-time on-screen cues and off-site reviews based on the user's actions, and the system also uses a virtual helper to provide interactive learning in the simulation.
Although commercial systems can provide full tracking, guidance, and assessment tools, they are costly, which makes them not easily affordable for most educational or medical institutions worldwide. Consequently, studies such as the one by Surangsirat et al. focus on providing a low-cost solution [61]. The authors created an upper endoscopy training system that stores data from users' sessions. The system provides two modes: training and exam. In training mode, the simulator offers an interactive training experience by enabling a helper in the scene. In exam mode, students are asked to take snapshots of the scene, which are evaluated using an SVM. The simulator provides adaptive questions related to the task, generated according to the response to the previous one. The system scores users based on the pictures, the results of the questionnaire, and the procedure time.
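Reference [61] does not detail its SVM pipeline; purely as an illustrative sketch, a snapshot classifier of this kind could be assembled with scikit-learn, assuming each snapshot has already been reduced to a fixed-length feature vector:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training data: 200 labeled snapshots, each reduced to a
# 32-dimensional feature vector; label 1 = the required view was captured.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 32))
y_train = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Score the snapshots taken during one exam session.
X_exam = rng.normal(size=(5, 32))
print(clf.predict(X_exam))  # per-snapshot pass/fail judgments
```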
Adaptation of commercial stations is another topic researched in endoscopy. It enables institutions that have already invested in market systems to expand their usefulness by replacing the preinstalled software on the stations. Jiang et al. modified the NeuroVR system (CAE Inc., QC, Canada) to perform endoscopic third ventriculostomy [62]. The authors replaced NeuroVR's original two mirrors and two screens, mounting four mirrors and a screen instead. New simulation software developed using the Blade framework was installed. The authors modeled two tasks—burr-hole position and entry orientation selection, and navigation inside the ventricular system. Jiang et al. worked closely with end-users, and they noticed that guidance would be a beneficial resource in their simulator. Therefore, they implemented an on-screen advisor, which tells the user the actions to be done during the procedure by displaying a message window.
Another research area in endoscopy focuses on the information provided to doctors during surgery. Visual information in endoscopy is usually shown on a monitor during the operation; however, some studies state that a direct view of the scene yields better performance [117]. Heuer et al. conducted a recent study centered on this matter [63]. Sixteen participants were divided into two groups, an endoscopic-view group and a direct-view group, and had to ablate three layers of simulated tissue. The system consisted of a box with a hole to insert a rigid A 2614 resectoscope model manufactured by Olympus. The simulator was equipped with a CMS 50 acoustic motion capture system, from zebris Medical GmbH, to record the 3D positions of the resectoscope. The authors provided feedback by showing the removed residuals and explaining the quality of the result. Through this experiment, the authors found that a direct-view approach could help surgeons calibrate the monitor's information before performing the surgery in endoscopic view.
In the area of the paranasal sinuses, researchers such as Mueller and Caversaccio have directed their interest to evaluating and comparing the outcomes of CA and non-CA surgeries [64,118]. The study focused on chronic rhinitis or nasal polyp tasks. The proposed set-up consisted of the SurgiGATE ORL navigation system developed by Synthes Incorporation. SurgiGATE ORL keeps track of the localization of the instruments' tips and axes. The study showed no difference in outcome between CA and non-CA procedures. Therefore, studies like this warn the research community to avoid depending on CA systems alone. CA systems provide valuable aid during surgery; consequently, proper training without this type of assistance should also be considered.
Advances in robot-assisted procedures also appear in endoscopy. Ryu et al. developed a vision-based instrument tracking algorithm to assist surgeons during surgeries [65]. Since endoscopy falls into the category of minimally invasive surgeries (MIS), the workspace is minimal, and collisions between surgical tools can occur. Ryu et al. applied CA methods in robot-assisted operations, developing an algorithm to monitor instruments and collisions. They used a visual feedback approach: if the tools are about to collide with each other, the system issues a warning. The warning is generated using Euclidean distance calculations, and a Kalman filter was applied to this value to avoid errors during the instrument tracking process. The algorithm was programmed in MATLAB and tested using a recording of an endoscopic surgery. The results were promising and support this software-only approach to CA-related surgeries, which could considerably reduce the cost of operations.
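Reference [65] implemented this in MATLAB; the following Python sketch illustrates the same idea under assumed parameters: a one-dimensional Kalman filter smooths the noisy Euclidean distance between the two tracked tool tips, and a warning fires when the smoothed distance drops below a hypothetical threshold.

```python
import numpy as np

def kalman_1d(z, q=1e-4, r=1e-2):
    """Smooth a noisy scalar series with a random-walk Kalman filter.

    q and r are assumed process/measurement noise variances.
    """
    x, p, out = z[0], 1.0, []
    for zk in z:
        p += q                    # predict step (state assumed constant)
        k = p / (p + r)           # Kalman gain
        x += k * (zk - x)         # update with the new measurement
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

# Simulated tip-to-tip distances [m]: the tools steadily approach each other.
rng = np.random.default_rng(1)
true_d = np.linspace(0.05, 0.005, 100)
measured = true_d + rng.normal(0.0, 0.003, size=100)

smoothed = kalman_1d(measured)
WARN_AT = 0.01  # hypothetical 1 cm warning threshold
print("collision warning at frame", int(np.argmax(smoothed < WARN_AT)))
```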
Lastly, Korzeniowski et al. developed a simulator, NOViSE, to perform a transgastric hybrid cholecystectomy [66]. The authors developed an in-house force-feedback endoscope haptic device and coupled it to a virtual simulation developed in Unity and their in-house deformable object framework. The simulator lets the user navigate the endoscope through the esophagus to reach the gallbladder and perform its removal. Moreover, NOViSE provides visual guidance through checkpoints to help the user reach the target. Fourteen clinicians validated the NOViSE simulator, with good overall results.

3.8.4. Laparoscopy

Laparoscopy, alongside endoscopy, is one of the medical areas that has benefited most from advances in technology. Some studies have focused on recreating commercial solutions in virtual environments [67]. Zhang et al. created the Virtual Basic Laparoscopic Skill Trainer (VBLaST). It recreates the Fundamentals of Laparoscopic Surgery (FLS) simulator, developed by the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES). The tasks simulated by VBLaST are peg transfer, pattern cutting, placement of a ligating loop, and suturing with intracorporeal or extracorporeal knot tying. The user performs the tasks using two Geomagic Touch haptic devices that have graspers as end-effectors. To measure user performance, the simulator analyzes video captures of the session to extract procedure timing and errors. This experiment was made to check whether VBLaST can be as effective as FLS for laparoscopy training. Students used FLS and VBLaST, and the results showed that faster learning could be obtained with VBLaST than with FLS due to its haptic feedback approach.
On the side of teleoperated surgery training, Ovur et al. created a training station to practice surgical skills using a Geomagic Touch haptic device [68]. The simulation was developed using the Robot Operating System (ROS) environment and VTK, and it was considered to have augmented reality functionality. The application lets the user practice different tasks: exploring a kidney tumor in an AR scenario, interacting with the contour of the renal artery during a nephrectomy procedure in a virtual scenario, and additional basic tests, such as exploring primitive shapes. Guidance is provided during the exploration of the virtual anatomy; a visual aid lets the user know the distance between the organs and the interaction tool, serving as path guidance. Moreover, the authors used a Fuzzy-PD controller to provide force assistance during the tasks. To validate their work, twenty-seven non-medical participants tested the simulator. The results state that there is a meaningful difference for users who tested the environment with force assistance.
Low-cost laparoscopy simulators are another development area that has been considered by researchers [69]. Park et al. developed a low-cost cholecystectomy simulator. The authors built their system using laptops, a Microsoft Kinect camera, and two Nintendo Wiimote controllers. The Kinect was used to capture the user's motor actions and display them in the virtual scene, while the Wiimotes provided tactile feedback during the process. The simulator was constructed using video-laparoscopic recordings from real surgeries. Park et al. built an expert knowledge database to make their simulator an interactive video tutorial. For evaluation, HMMs are used to compare the actions recorded by the Kinect with those stored in the database. Even though Wiimotes do not provide strictly haptic feedback per se, the simulator created by Park et al. can be considered a novel in-home training application due to its low cost, around 180 USD.
In the area of evaluation systems in laparoscopy, works such as those of Lamata et al. and Liang and Shi can be mentioned [70,71]. Lamata et al. created the SINERGIA simulator to provide a virtual solution with an educational approach. SINERGIA lets users control the camera, grasp, pull, cut, dissect, and suture in the virtual environment. It was created using the Blender framework, and its workstation consists of a monitor and two haptic devices that have surgical grippers as handlers. In the area of appendectomy, Liang and Shi created a virtual system to assess trainees' skill levels. The simulator provides a virtual advisor, which instructs users when improper actions happen in the simulation. The system records training data during the virtual procedure to keep track of the user's learning process. This data is used to determine the expertise level of students by applying machine learning algorithms, in this case, HMMs and FL.
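Reference [71] does not publish its implementation; as an illustrative sketch of the general HMM approach, one model can be fitted per skill level on quantized motion tokens, and a new session is labeled by the model with the highest log-likelihood. This assumes the hmmlearn library and synthetic token sequences:

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

rng = np.random.default_rng(0)
# Hypothetical data: sessions quantized into motion tokens
# (0 = idle, 1 = approach, 2 = grasp, 3 = cut), 20 sessions per level.
novice_seqs = rng.integers(0, 4, size=(20, 50))
expert_seqs = rng.integers(0, 4, size=(20, 50))

def fit_level_model(seqs, n_states=3):
    """Fit one discrete-observation HMM on all sessions of a skill level."""
    X = np.concatenate(seqs).reshape(-1, 1)
    lengths = [len(s) for s in seqs]
    model = hmm.CategoricalHMM(n_components=n_states, n_iter=50,
                               random_state=0)
    model.fit(X, lengths)
    return model

models = {"novice": fit_level_model(novice_seqs),
          "expert": fit_level_model(expert_seqs)}

# Classify a new session by the model giving the highest log-likelihood.
session = rng.integers(0, 4, size=(50, 1))
scores = {level: m.score(session) for level, m in models.items()}
print(max(scores, key=scores.get))
```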
Other simulators focus only on guidance and aid [72,73]. Munro et al. developed a resectoscope simulator, which aims to let users understand the manipulation of tools and organs during myoma resection. The system directs the user during the simulation: users have to interact with four targets, one at a time, and each target point stays red until the user successfully touches it. The simulator stores operation time, outcomes for the objectives, and errors; however, this data is not shown to the student. By adding further assessment parameters to the system, the authors could provide a proper evaluation and learning feedback after sessions, which could enhance the user's understanding and knowledge of this type of operation.
Gaudina et al. designed video laparoscopic training exercises and developed the eLaparo4D system. eLaparo4D was constructed as a node.js application server, and it uses HTML5 with the Unity3D engine plugin to give users an online platform. The authors used the Blender framework for graphic rendering. For haptic interactions, three Geomagic Touch haptic devices are used: two as tool handlers (grasper, hook, or scissors), and the third to move the camera within the virtual abdomen. This approach tries to simulate realistic interactions with laparoscopic equipment. Gaudina et al. used an Arduino board connected to a vibrating motor to enhance haptic feedback in the form of vibration. Additionally, eLaparo4D has a module for remote guidance; this guidance can be provided via the web to show students the proper way to execute a critical task, although another eLaparo4D simulator has to be used. Scores are calculated per exercise from allowed and disallowed actions, which add or subtract points. Finally, user profiles are implemented in the simulator to check user progress over time.
One of the most frequently mentioned commercial stations is the da Vinci station manufactured by Intuitive Surgical, which costs around 1.5 to 2 million USD. Tillou et al. evaluated whether the system can be applied in a training program [74]. They used the da Vinci Skills Simulator, which provides task performance assessment through quizzes and evaluation metrics during the simulation. Moreover, the simulator guides users by providing on-screen instructions, visual prompts, and markers for aiming during the simulation. Tillou et al. considered that the following tasks had to be assessed for this course: camera control, clutching, handling of the EndoWrist, needle driving, energy coagulation, and dissection. Performance results showed that participants obtained scores of 80 or higher in the tasks, and users' post-tests highlighted the acceptance of simulators in training courses.
Another commercial laparoscopy station that can be found on the market is SimSurgery, developed by Nissen. Jungmann et al. evaluated this system to test its usability for training courses [75]. SimSurgery consists of a screen and two laparoscopic tools that provide force feedback to give as much realism as possible. During the virtual procedure, on-screen guidance is provided to the user. SimSurgery includes collision detection techniques that are used to evaluate whether collisions between instruments occur; if such a collision happens during a session, the simulator lights up a red warning on the screen. This type of visual help was implemented to teach users how to operate correctly during real operations. At the end of the simulation, time, instrument collisions, and tip trajectory are used to assess the procedure and provide a score. Studies such as those by Tillou et al. and Jungmann et al. reinforce that users who perform surgeries, via conventional, CA, or robot-assisted approaches, should perform runs in virtual environments to secure their level of prior experience in the task. This statement is supported by Tillou et al.'s results, where experienced laparoscopy users slowed down their performance to adapt themselves to new methods or technologies in surgery.
Among commercial stations used to teach novice surgeons the specific skills needed to manipulate surgery stations, the RoSS, sold by Simulated Surgical Systems, enables the training of the motor and cognitive skills required for operating the da Vinci surgical robot. Rehman et al. used this robot-assisted training system to evaluate its capacity to develop cognitive-motor skills [76]. RoSS provides an integrated management system that stores the metrics for all users and tasks performed on the platform, and sixteen modules are programmed into the simulator. For guidance, the simulator provides visual tips on-screen to help students during the task. To evaluate users' performance, bi-manual dexterity, critical errors, safety in the operative field, and task time are considered to provide a score. Scores are generated using a six-level scale (retake, poor, average, good, expert, and superior). The benefits that RoSS provides to users are customization and a portable, stand-alone console.
Works based on modifications to commercial systems can also be found in laparoscopy. Ayodeji et al. adapted an LS500 laparoscopy surgery simulator with haptic feedback sold by Xitac [77]. The modification focused on mounting the Lap Mentor training software, developed by Simbionix, onto the LS500 system. Lap Mentor offers two operation modes: skill training and procedural training. The skill training mode lets users practice fundamental skills in a non-anatomic environment, while the second allows trainees to practice laparoscopic cholecystectomy on a virtual patient. In both modes, the users' level of competence is assessed, and Lap Mentor provides scores for each task. The simulator lets users check videos, 3D maps, and tutorials to enhance their educational process. Lastly, Lap Mentor saves user sessions to provide graphs and performance timelines of students' learning curves.
Other work based on modifications to existing environments was conducted by Herbert et al. [78]. The authors adapted the FLS simulator for pediatric purposes. The new approach is called Pediatric Laparoscopic Surgery (PLS) [119]. PLS is a multi-port box trainer, which Herbert et al. modified to be used with a SILS Port, made by Covidien. Four tasks were performed by testers—peg transfer, pattern cutting, ligating loop, and intracorporeal suturing. Using the SILS Port, the PLS can be adapted for low-cost, low-barrier single-port laparoscopic surgery training.
Moreover, in FLS-related research, Bahrami et al. tested its usefulness by adapting it to and coupling it with a magnetic resonance imaging (MRI) environment [79]. Tasks in the FLS are evaluated based on speed and accuracy, and penalty points are deducted if errors occur in the simulation. In this experiment, Bahrami et al. replaced all metal parts of the FLS to create an MRI-compatible version. In this test, depth perception, peg transfer, and knot tying on a string were performed to assess the user's mental activity. The results pointed to brain-behavior relationships during the tasks, indicating that dynamic changes in cortical networks occur as a consequence of training. This newly implemented FLS application could help educational institutes understand how the training of residents or novices evolves, and modify existing courses on laparoscopy or create new ones.
Hong et al. developed a skill assessment and guidance module for a computer-assisted laparoscopic procedures simulator [80]. The authors used the Computer Assisted Surgical Trainer (CAST) developed by Rozenblit et al. [120]. The simulator provides visual and haptic guidance thanks to their previously developed modules [121], which implement virtual fixtures and visual aids. In this study, the authors used an adaptive neuro-fuzzy inference system to assess the user, with six evaluation metrics for performance assessment: average speed, completion time, path length, idle time, deviation ratio, and direction profile ratio. Forty-two participants tested the proposed module; the task was to move along a recommended trajectory and touch multiple targets. The authors found that the module is feasible for assessing laparoscopic skills; however, they plan to investigate a better method to score the performed task.
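The six CAST metrics are not specified in implementation terms here; the sketch below shows how a few of them could be computed from a sampled tool trajectory. The idle-speed cutoff and the deviation-ratio definition (actual path length over recommended path length) are assumptions.

```python
import numpy as np

def trajectory_metrics(points, timestamps, ref_path_len, idle_speed=1e-3):
    """Compute several CAST-style metrics from a sampled tool-tip path.

    points: (N, 3) positions [m]; timestamps: (N,) seconds;
    ref_path_len: length of the recommended trajectory [m] (assumed known).
    """
    seg_len = np.linalg.norm(np.diff(points, axis=0), axis=1)
    dt = np.diff(timestamps)
    speed = seg_len / dt

    completion_time = timestamps[-1] - timestamps[0]
    path_length = seg_len.sum()
    return {
        "completion_time": completion_time,
        "path_length": path_length,
        "avg_speed": path_length / completion_time,
        "idle_time": dt[speed < idle_speed].sum(),      # time spent nearly still
        "deviation_ratio": path_length / ref_path_len,  # 1.0 = fully efficient
    }

# Example with a short synthetic path (5 samples over 2 seconds).
t = np.linspace(0.0, 2.0, 5)
p = np.array([[0, 0, 0], [0.01, 0, 0], [0.02, 0, 0],
              [0.02, 0, 0], [0.03, 0.01, 0]], dtype=float)
print(trajectory_metrics(p, t, ref_path_len=0.03))
```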

3.8.5. Ophthalmology

Ophthalmology simulators have been previously analyzed by the authors [42]. However, some of these training environments provide neither evaluation nor guidance modules. Cataract surgery involves replacing the cloudy lens inside the eye with an artificial one. As people age, the eye's lens gradually becomes clouded, which impairs sight. Choi investigated the feasibility of a virtual surgery simulator with evaluation capacities [81]. He programmed an application for phacoemulsification in cataract surgery. The solution consists of a monitor for visual orientation, and it uses a Geomagic Touch haptic device for interaction purposes. This application can be considered a tool to monitor learning progress and assess skill competency. To measure the operation, it collects data such as the completion time of cross-shaped trench creation, the number of tissue pieces removed from the lens, the width, depth, and crossing angle of the cut, and the path of the phaco tip and the forces applied during sculpting.
In a study carried out by Kim et al., an extracapsular cataract extraction simulator was developed [82]. The authors developed an in-house pen-shaped haptic device. This device is used with a head-mounted display, the Aurora electromagnetic tracking device, a foot pedal, and a physical model of a fake lens to navigate a 3D virtual environment implemented in Unity. In the simulator, the user observes scenes through the head-mounted display and controls the view using the foot pedal. The eye model was created using 3ds Max, and a 3D virtual model of a syringe with a bent-tip needle was used to interact with the virtual eye.
On the other hand, Henderson et al. designed and developed a 2D environment for cataract surgery [83]. The Virtual Mentor Training Program software provides an interactive cognitive simulator for teaching hydrodissection. The software was used in seven academic departments of ophthalmology at Harvard Medical School. Decision-making and error recognition are features that a virtual mentor gives as feedback to students. Additionally, it provides help and references during the lesson. Pre-test, post-test, and satisfaction surveys were applied to students of the course. The data showed that residents enjoyed the experience and commented that this type of didactic resource encourages self-learning and could enhance their educational performance.
Feasible training and assessment tools are also found in this area. The Eyesi surgical simulator manufactured by VRMagic is one of them. Le et al. used this simulator to measure the performance of trainees in intraocular operations [84]. A practice trial followed by three scored trials was performed. The time limit for each task was twenty minutes, which is considered normal in this type of surgery. Eyesi features a tracking system that captures the movement of the instruments and the eye, as well as the biomechanical reaction of the tissue. Eyesi gives users the ability to manipulate instrument settings, which lets them experience a real operation set-up. It has an instructor station that allows real-time performance monitoring. The system provides session recording and video playback to review the virtual process. Finally, Eyesi can also generate reports to evaluate individual performance data.
Physical applications that allow guidance are described in studies such as the one conducted by Nasseri et al. [85]. In the system developed by these authors, a robot-assisted technique was applied in ophthalmic surgery. The system uses VFs to pivot around the remote center of motion (RCM), which is the incision point in this type of surgery. The VFs are provided by five sub-micron-precision piezo actuators. The system's novelty is that the VFs can be adjusted before or during the procedure, which enhances orientation and safety during surgeries. However, in systems with such adjustability, the adjustment should ideally be performed automatically during the procedure to avoid distracting the surgeon.

3.8.6. Orthopedics

Simulations in this area cover standard procedures. For example, Rambani et al. used real patient computed tomography (CT) scans for pedicle screw fixation in spinal surgery [86]. In-house CA software developed by the authors in their previous research was used [122]. The software uses these CT scans to produce real-time fluoroscopic images of the lumbar spine, which are applied to generate guidance in the virtual operation. Results in the Rambani et al. simulator are based on surgery time, the accuracy of pedicle screw insertion, and the number of exposures made by the user to complete the procedure.
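Rambani et al. do not detail their rendering pipeline; however, virtual fluoroscopy from CT is conventionally implemented as a digitally reconstructed radiograph (DRR), in which X-ray attenuation is integrated along rays through the CT volume. A deliberately simplified parallel-projection sketch (real systems use perspective ray casting, calibrated source geometry, and voxel-spacing scaling, all omitted here):

```python
import numpy as np

def simple_drr(ct_volume, axis=1, mu_water=0.02):
    """Toy digitally reconstructed radiograph: integrate attenuation along
    parallel rays through a CT volume stored as Hounsfield units (z, y, x).
    """
    # Convert Hounsfield units to linear attenuation coefficients
    mu = mu_water * (1.0 + ct_volume / 1000.0)
    mu = np.clip(mu, 0.0, None)            # air/padding contributes nothing

    # Beer-Lambert law: transmitted intensity decays with the line integral
    line_integral = mu.sum(axis=axis)      # parallel rays along one axis
    drr = np.exp(-line_integral)

    # Normalise to [0, 1] for display
    return (drr - drr.min()) / (np.ptp(drr) + 1e-9)
```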
In the physical application area, Facca et al. performed a shoulder girdle and brachial plexus surgery using robot-assisted procedures [87]. They used the da Vinci Si from Intuitive Surgical. Their goal was to create a minimally invasive technique that could allow earlier exploration and possible repair within eight days of the trauma. The experiment was done at the Intuitive Surgical laboratory. Four skin incisions were made during this operation. Two of them were used to introduce robotic arms that carried suitable surgical instruments. The other two incisions were created to add a dual endoscopic 3D HD vision camera. Dissections were made with the da Vinci Si robot, and the results obtained support that endoscopic supraclavicular surgery for brachial plexus palsy is feasible using robot-assisted procedures.
Another approach for CA surgeries is the one proposed by Gebhard et al. [88]. They evaluated whether a CA system could act as an intraoperative ruler in high tibial osteotomy (HTO). Fifty-one patients with arthritis were treated. They used the Tomofix Osteotomy System guide technique, developed by Synthes, in collaboration with the VectorVision Osteotomy module created by BrainLAB. In this type of surgery, minimally invasive units were placed in the medial cortex of the tibia and the ventral medial or ventral lateral side of the femur. Using anatomical landmarks, the system calculates all the information that surgeons need for this operation. A screen shows the leg and the alignment parameters. Patients treated using this approach showed precise intraoperative limb alignment. Therefore, the authors concluded that CA HTO provides better results than current pre-operative planning.
Grossterlinder et al. also tested the feasibility and accuracy of CA systems in pelvic screw placement [89]. They used the BrainLAB VectorVision navigation system and the ARCADIS Orbic intraoperative 3D imaging system created by Siemens Medical Solutions. Tests were done on five human cadavers preserved using Jores' solution. Both systems were used to help surgeons place forty screws into the first and second vertebrae. Accuracy tests were done after the operation via standardized CT protocols by two independent radiologists. Malpositioning was defined as any screw penetration of the bony cortices, regardless of the direction. The CA system obtained positive results; in other words, complete and successful screw positioning in all the different pelvic regions was achieved with a zero percent misplacement rate.
The approaches covered by Facca et al. [87], Gebhard et al. [88], and Grossterlinder et al. [89] support the need for CA and robot-assisted technologies in surgeries. These systems should be considered high-value resources because they can enhance performance by reducing malpositioning or errors during medical operations. Additionally, they can provide an added margin of safety when less experienced surgeons perform tasks in operating rooms.
Cecil et al. developed an orthopedic training simulator to perform Less Invasive Stabilization System (LISS) plating surgery [47]. The simulator offers both virtual reality and haptic feedback features. It can be used with an HTC Vive set-up for virtual immersion, where the user performs the procedure with its handheld controllers, or with a Geomagic Touch haptic device. The simulator was built using Unity, and the VR capabilities were implemented with the SteamVR toolkit. Moreover, the simulator provides web-based features, and for guidance, the authors implemented text-based and audio-visual cues. The simulator offers a training station where the user can practice six training environments based on the steps of LISS plating surgery.
Lastly, in the area of tumor removal, Wong et al. focused on the generation of a CA procedure for malignant bone tumors [90]. It uses the CT spine navigation system developed by Stryker. CT and MRI images in axial, coronal, and sagittal views are input into the software, which fuses them to generate operation-related data for planning purposes. 3D models, such as tumor-free ones, can be generated using the CT and MRI images. Twenty patients with 21 malignant musculoskeletal tumors were treated using the CA approach. By analyzing the obtained data, the authors concluded that accurate tumor resection in affected bones was achieved.

3.8.7. ENT Procedures

ENT (ear, nose, and throat) procedures, also known as otolaryngology, constitute the surgical area that evaluates and manages a wide range of diseases of the head and neck, including the ear, nose, and throat regions. Sewell et al. focused their work on providing metrics and performance feedback in a mastoidectomy simulator [91]. This simulator teaches haptic, anatomic, and cognitive aspects found in this surgery. Haptic devices are used to interact and feel force feedback in the simulation. The authors decided to use the Chai3D framework, a C++ solution that includes haptic libraries. Aids and visualizations, such as color markers, are used to highlight pertinent parts in the session. Sessions are assessed to provide an evaluation, and they are recorded into a database. Session data is used to create a video of the procedure. With this video support, users can navigate the recording, identify where they obtained low scores, and review those segments to improve their performance.
Other solutions focus on temporal bone surgery. A commercial simulator for this surgery is Voxel-Man Tempo, created by the University Medical Center Hamburg-Eppendorf (UKE). Arora et al. evaluated this system's suitability for rehearsing activities [92]. Voxel-Man Tempo provides self-study facilities, and it offers seven pre-defined training cases of the middle ear, with different anatomy and pathology. A notable feature of the software is its upload module, which lets users load CT data to expand the surgery database. For interaction, two Geomagic Touch haptic devices are used to enhance the tactile experience. On the visualization side, the software provides camera movement features, such as different view windows, which let users correctly orient themselves during the virtual operation. It includes automatic skills assessment, where the process and the trainee's knowledge can be objectively evaluated. The evaluation is based on predefined tasks, and a report is generated at the end of each session. The authors stated that since Voxel-Man offers a problem-based learning (PBL) approach, it is suitable for creating new realistic scenarios.
Another study centered on temporal bone surgery is the one conducted by Fang et al. [93]. They evaluated another commercial simulator, Visible Ear, created by the Alexandra Institute. It provides a 3D simulation that uses one Geomagic Touch haptic device to train dissection of the temporal bone area. It gives an objective evaluation, where performance scores are calculated considering the volume of bone removed, operation time, and collisions. Collisions with the dura, facial nerve, inner ear, stapes, malleus, and incus reduce the score obtained in the task. Visible Ear provides guidance through a tutor shown in a window, who explains to users the actions they are performing. As an aid, pop-up warning messages during drilling teach students the actions that must be avoided during real operations.
Even though commercial simulators provide feasible environments, many still lack the pedagogical features required in proper medical curricula. Both the Voxel-Man Tempo and Visible Ear simulators were developed by medical institutions and modeled by taking into account the educational needs of students during simulations. Therefore, researchers should focus their attention not only on creating simulators, but also on implementing pedagogical approaches to provide integral solutions in medical science.
VFs are also applied in ENT studies. Wilkening et al. used them to create an algorithm for robot-assisted cochlear implant insertion [94]. VFs are used in this work as constraints during the procedure. The system can be adapted to the patient's anatomy by inputting pre- and intraoperative optical coherence tomography (OCT). OCT is obtained using a bulk-volume scanner and a side-viewing probe; with this hardware, imaging and registration of the cochlear lumen are obtained. The robot-assisted system they used is the one designed by He et al. [123]. This set-up is simultaneously controlled by the operator and the robot arm to decide where the instruments will be placed. For visualization, the operator uses a surgical microscope during the operation. To test this solution, Wilkening et al. used a Nucleus 24 Contour Advance Practice Electrode implant, sold by Cochlear. The procedure was compared with manual operations and with robot-assisted approaches that do not use VFs. The results of Wilkening et al. suggest that by using VFs, users could enhance their accuracy and precision.

3.8.8. Pediatrics

Liver tumor removal, principally hepatoblastoma, in children is not a standard procedure; it represents only 0.2% of operations in pediatric malignancies [124]. Doctors in the area are currently discussing the best approach for its surgical treatment. Consequently, researchers have focused their attention on developing technology-based solutions [125]. One of the works in this area is that of Warmann et al., who used CA technology to plan complex liver tumor operations [95]. They used commercial assistant software called LiverAnalyzer, created by Fraunhofer MEVIS. LiverAnalyzer processes data from cross-sectional CT images. This analysis provides individualized anatomical information on the liver, including tumors, vascular structures, and liver remnants after the procedure. Sixty-three children were evaluated for this CA surgery, and results pointed out that CA systems could play an essential role in the decision-making process of hepatoblastoma surgery.
Another area in pediatrics that presents advances in simulators is fetal therapy. One of the most severe conditions treated in this area is twin-twin transfusion syndrome (TTTS). TTTS is a severe complication that is usually treated using fetoscopic laser surgery, whose success rate is 60% to 70% for saving both twins and 80% to 90% for saving at least one twin [126]. Peters et al. developed a highly realistic simulator to let surgeons practice fetoscopic laser surgery [96]. The system was built by adapting the simulator described in the work of Pittini et al. [127]. A realistic monochorionic twin placenta model and twin fetus models were inserted into this simulator. Additionally, a silicone interface at the top of the model was used to imitate the abdominal wall. All these modifications give the simulator a realistic architecture. Evaluation metrics were defined through Delphi consensus, and two independent observers assessed the process. The results of this research suggest that this life-like environment improves the performance of the procedure in a standardized model.

3.8.9. Radiology

Virtual environments provide new training alternatives in radiotherapy (RT). Bridge et al. created MITIE, an immersive imaging training environment [97]. The application allows RT students to practice radiographic procedures outside the usual radiography laboratory. MITIE uses the Quest3D library to render the virtual scene. By using a tutor-determined gold standard, MITIE enables students to compare their performance and knowledge using a PBL pedagogy. The system provides automated feedback, such as playback. Moreover, a video is used to highlight errors and point out potential improvements. On the other hand, Phillips et al. programmed a virtual treatment room, RTStar, that provides a range of simulations and visualizations found in radiology [45]. RTStar was built using C++ and OpenGL for graphics rendering. RTStar was initially used for education and interactive, immersive radiology; however, the authors extended the system to support RT planning. Currently, RTStar provides a visual warning when a collision between the gantry of the linear accelerator, the couch, and the patient is about to happen in the simulation.

3.8.10. Open Surgery

Suturing is one of the fundamental skills that all doctors should perform adequately. Authors such as Kazemi et al. have started to generate in-house solutions to help trainees practice suturing [98]. Their platform uses a Geomagic Touch haptic device, a pair of CrystalEyes 3D glasses sold by StereoGraphics, a stereo synchronization emitter, a semitransparent mirror, and a metal frame. To provide a realistic approach, different needle shapes can be used during the simulation. The software was developed using the Reachin API with custom classes and objects. To evaluate the user, the time to complete the task, entry and exit tear area, penetration and exit angles, tool movements, average grasp number, and motion smoothness are combined into a task score, which is stored in a performance database. The system also gives immediate feedback and objective measurement of performance during the simulation to let trainees compare and improve their techniques.
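Kazemi et al. do not publish their scoring formula, so the following is only a hedged illustration of how such heterogeneous metrics can be folded into a single task score (metric names, targets, and weights are hypothetical):

```python
def suture_task_score(metrics, targets, weights):
    """Hypothetical composite score: 100 minus weighted penalties for each
    metric's relative deviation from its target value."""
    score = 100.0
    for name, value in metrics.items():
        deviation = abs(value - targets[name]) / max(abs(targets[name]), 1e-9)
        score -= weights[name] * deviation
    return max(0.0, min(100.0, score))

# Example usage with made-up targets and weights
metrics = {'task_time_s': 95.0, 'tear_area_mm2': 1.8, 'exit_angle_deg': 82.0}
targets = {'task_time_s': 60.0, 'tear_area_mm2': 1.0, 'exit_angle_deg': 90.0}
weights = {'task_time_s': 20.0, 'tear_area_mm2': 30.0, 'exit_angle_deg': 25.0}
print(suture_task_score(metrics, targets, weights))   # approx. 62.1
```

Storing the raw metrics alongside the composite score, as Kazemi et al. do with their performance database, keeps the evaluation auditable and allows the weighting to be retuned later.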
Venipuncture is another typical procedure. It is the process of obtaining intravenous access for venous blood sampling or intravenous therapy. Incorrect technique increases patient discomfort and can delay therapeutics or testing. Therefore, solutions have arisen in the market to cover venipuncture training. Smith and Todd evaluated a commercial haptic-based clinical training system made by UK Haptics Limited [99]. The system consists of a Reachin Display, a pair of StereoGraphics CrystalEyes 3D glasses, and one Geomagic Touch haptic device. The simulator comprises two modules—training management (Clinical Skills Trainer) and a training solution (Virtual Veins). Each user has a personalized account, which lets the simulator monitor their progress over time. After a practice or test session, a report is generated. The report enables users to reflect on their performance and allows them, and teaching staff, to review students' performance after testing.
Innovations in the area of mobile learning usually take the form of applications or courses. One new entry in the market is Touch Surgery [100]. Kinosis LTD developed this application for iOS and Android. It is freeware that allows users to simulate over 30 common operations on a mobile device. It provides orientation and instructions while the user is performing a simulated operation. It enables trainees to engage with each task cognitively; thus, students build awareness of potential complications and gain procedural experience. The application includes a progress module that provides scores and evaluates users' knowledge as they work.
Low-cost training solutions have been pursued in all areas. In aortic root surgery, Hossien et al. created a portable, inexpensive, reusable simulator [101]. The total cost of the simulator is a little more than 1 USD. The simulator consists of a thin box, 9.5 cm in diameter and 4 cm in height, with holes in it to place an aortic root silicone replica. The simulator also guides training through a circular ring. This ring was designed using cardboard, and it has fifteen to twenty cuts to help the user during the process. Hossien et al. developed the solution to cover every kind of aortic valve root repair specified in El Khoury's classification. Procedures that can be performed are—excision of a diseased aortic valve, sizing the valve, and replacement of different aortic valve types (interrupted, semi-continuous, sutureless, and stentless) using circular or oblique aortotomy.
A common area in surgery is tumor removal. In the liver, it is usually treated using hepatic resection; nevertheless, radiofrequency ablation is another good alternative. It requires accurate needle insertion and precise hand-eye coordination. To address this issue, Wen et al. built a cooperative robot station that uses hand gesture controls and AR [46]. AR is used as a guidance mechanism: a projector, sold by ProCam, creates an overlay on the patient's body, where pre- and intraoperative information is displayed. Graphics were rendered using OpenGL. To enable hand gesture support, the system uses a Kinect camera. The simulator also uses an in-house robot to help the surgeon during the needle insertion task. This is a novel approach to the information provided to the user; however, an evaluation of AR overlays should be conducted to assess whether this kind of aid disrupts surgeons' perception during surgeries.
Guo et al. developed a surgical training system for percutaneous renal biopsy [102]. The system incorporates two Geomagic Touch haptic devices and a Microsoft HoloLens to display holographic images based on CT scans. The application shows a patient model and simulates tactile puncture. The system can also record the user session to assess surgical skills. Eight experts and twenty-four students participated in the study. The parameters assessed were the time of the procedure, the length of the injection path, the cumulative angle, and the number of hits. Finally, the authors administered a perception questionnaire. Results from the study indicate that the system provides higher immersion than traditional approaches, and that the novice group improved after a period of training.
Lastly, on the side of path guidance, Licona R. et al. developed a collaborative, hands-on training system [103]. The authors developed a simulator where a trainer can guide a trainee using the double contact guidance approach to control a slave tool. The study used three Geomagic Touch haptic devices, and the aim was to guide the trainee along a path and teach how to interact with organs. To provide a fully n-degrees-of-freedom interconnection, Intrinsically Passive Controllers were used. These provide better transparency during force interactions and energy exchange [128]. The environment was implemented in Matlab, and it uses the Open Haptics library to connect with the haptic devices. Results from this study show that precision and force feedback were adequate to guide trainees during surgical navigation tasks.
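The Intrinsically Passive Controller formulation itself is beyond a short example, but the elementary building block of hands-on guidance, a virtual spring-damper coupling between two haptic end effectors, can be sketched as follows (gains and the saturation limit are illustrative assumptions, not values from the paper):

```python
import numpy as np

def coupling_force(x_a, v_a, x_b, v_b, k=200.0, b=5.0, f_max=3.0):
    """Virtual spring-damper coupling between two haptic end effectors.

    Returns the force rendered on device A, pulling it toward device B;
    applying the opposite force on B yields an action-reaction pair, the
    basic mechanism behind trainer/trainee hands-on guidance.
    k [N/m], b [N*s/m]; output is saturated at f_max [N] to stay within
    typical desktop-haptics limits.
    """
    f = k * (x_b - x_a) + b * (v_b - v_a)   # spring + damper terms
    norm = np.linalg.norm(f)
    if norm > f_max:
        f *= f_max / norm                   # saturate to protect device/user
    return f

# Inside a ~1 kHz haptic loop (pseudo-usage):
# f_trainee = coupling_force(x_trainee, v_trainee, x_trainer, v_trainer)
# f_trainer = -f_trainee
```

A plain spring-damper coupling like this can lose passivity under discretization and delay; the Intrinsically Passive Controllers cited by Licona R. et al. are precisely the more principled alternative for guaranteeing stable energy exchange.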

3.8.11. Neurosurgery

Neurosurgery is one of the surgical specialties with the highest operation risks [129]. Delorme et al. developed NeuroTouch, a virtual simulator for cranial microsurgery training [104]; it is currently known as CAE NeuroVR. The neurosurgical simulator was developed using a physics-based engine, a 3D graphics engine for rendering, a pair of Geomagic Touch haptic devices, procedure controls, and two screens. NeuroTouch was designed to enable residents to practice their skills in tumor-debulking and tumor cauterization tasks. The system computes tissue deformation and topology changes according to tissue rupture, cutting, or removal. Assessment of the technical skills involved in craniotomy procedures is calculated using operation time, errors, instrument forces, and the amount of virtual tissue removed.
Si et al. created an augmented reality neurosurgical training simulator with haptic feedback [105]. The simulation provides holographic guidance for pre-operative training using Microsoft HoloLens during brain tumor resection. The organs were reconstructed using MRI and a Voxelwise Residual Network, VoxResNet [130]. The simulator also provides cutting, deformation, and bleeding features. The Microsoft HoloLens provides an overlay with information about the hidden target organs using a benchtop 3D-printed skull model. The authors used two Geomagic Touch devices, and the simulation was implemented in Unity with the UNET module. Finally, Si et al. performed a user study with ten participants and a perception questionnaire to validate the system. Results from this study state that the simulation was realistic, provided an immersive training environment, and offered accurate tool interaction.
Another work in the area of neurology was conducted by Vite et al. [106]. They designed a neurosurgical training application for repairing cerebral aneurysms in the Sylvian fissure region. The authors used the SOFA library, NVIDIA Flex, and the Open Haptics Toolkit to develop their simulator. One Geomagic Touch haptic device is used to provide haptic feedback during the procedure. The anatomy models were constructed using angio-CT scans. In their study, users are asked to perform aneurysm clipping in the carotid region. The simulator was validated by an expert neurosurgeon, who stated that the realism of the deformation and visualization was correct; however, additional validation tests are needed to demonstrate that the simulator helps students or residents improve their surgical skills.

3.8.12. Endovascular Procedures

Endovascular surgery has been the focus of many recent studies due to its advantages over traditional surgery, such as small incisions, less blood loss, and quicker recovery. In this area, Halabi et al. developed a VF-based training simulation for heart artery navigation [107]. The simulation was developed using Chai3D, where a Novint Falcon haptic device is used to navigate the tip of a catheter inside the heart arteries. The catheter is simulated as a navigating sphere, and the arteries were constructed using cylindrical and corner elements. To provide guidance, the VF is based on a midpoint-generated trajectory, using force-field guidance and forbidden-region constraints with visual cues. Twelve participants tested the environment, and results show that their performance improved: task accuracy increased, and short task completion times were achieved in path-following procedures.
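A minimal sketch of this style of guidance force, combining an attractive fixture toward a reference path with a forbidden-region penalty (all stiffness values and radii are illustrative, not taken from Halabi et al.):

```python
import numpy as np

def guidance_force(tip, path, r_free=0.002, k_guide=150.0,
                   wall=None, r_wall=0.001, k_wall=800.0):
    """Guidance force for a probe tip (illustrative virtual fixture).

    tip:  (3,) current probe position
    path: (P, 3) polyline of the recommended (e.g. midpoint) trajectory
    wall: optional (3,) closest point on a forbidden surface
    """
    # Attractive fixture: spring toward the nearest path point, with a
    # dead zone (r_free) so motion along the corridor feels free.
    d = path - tip
    nearest = d[np.argmin(np.linalg.norm(d, axis=1))]
    dist = np.linalg.norm(nearest)
    f = np.zeros(3)
    if dist > r_free:
        f += k_guide * (dist - r_free) * nearest / dist

    # Forbidden-region constraint: stiff repulsion when closer than r_wall.
    if wall is not None:
        away = tip - wall
        gap = np.linalg.norm(away)
        if 0.0 < gap < r_wall:
            f += k_wall * (r_wall - gap) * away / gap
    return f
```

The dead zone is what distinguishes a guidance corridor from a rigid rail: inside the corridor the user moves freely, and the spring only engages as the tip drifts away from the recommended trajectory.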
Guo et al. designed and implemented an endovascular simulator system in Reference [108]. The authors used a Geomagic Touch haptic device to develop catheter navigation training. Guo et al. used Unity to develop the simulation, and they provide visual assistance during training via a notification module. Moreover, the authors added a telecommunication system to provide teleoperating or teletraining features. Guo et al. tested their training simulator internally; therefore, no expert validation or user perception results were reported.

3.8.13. Urology

Zhu et al. validated a commercial simulator for transurethral resection of the prostate (TURP) [109]. The TURP Mentor system, developed by Simbionix, was used in this research. TURP Mentor provides educational content and movies that exemplify core procedural techniques. It generates feedback reports and gives an objective performance assessment using optional expert-defined scores. Like other commercial solutions, the software can provide playback of sessions for further discussion and review. Finally, for user management, it lets professors visualize user or group performance statistics to understand how users or classes are performing, so they can plan lessons that cover specific topics to improve individual or group performance.

3.8.14. Colorectal Procedures

Colorectal cancer is a type of cancer that starts in the colon or the rectum; these cancers can also be named colon cancer or rectal cancer, depending on where they start. Most colorectal cancers begin as a growth on the inner lining of the colon or rectum. To improve the treatment of colorectal cancer, a rectal examination should be performed. In this area, Muangpoon et al. developed an augmented reality system for digital rectal examination visualization [110]. The application uses sensors to track the finger's maneuvers and applied pressure while using a benchtop model and a Microsoft HoloLens. The system can display performance and essential metrics on the real benchtop model using the AR device. The HoloLens is used to visualize the benchtop model's internal components and overlay the relevant virtual anatomy.
For colorectal surgery, the market offers the PROMIS simulator sold by CAE Healthcare. It is a virtual simulator that provides accurate and comprehensive performance feedback. PROMIS's hardware model is a replication of the peritoneal cavity. It can be modified with disposable and reusable components that are placed inside the model. It has remote access features, so collaborative training can be done in PROMIS. Leblanc et al. used the PROMIS station to evaluate hand-assisted laparoscopy (HAL) versus the typical approach (TA) [111]. Conventional laparoscopic surgery was performed using PROMIS in its usual configuration; however, for HAL, it was adapted with cameras mounted inside the peritoneal cavity. Results show that HAL tests were completed faster than TA tests, with no differences in intraoperative errors.

4. Discussion

Using the results obtained in Section 3.7.1 and Section 3.7.2, several reflections and considerations in the areas of evaluation and guidance are discussed next. Seven aspects considered relevant to the development of training and surgery applications are covered and described: (i) the incorporation of AI-DL, (ii) the addition of AR-VR, (iii) the inclusion of force feedback, (iv) the suitability of session recording, (v) the use of guidance systems, (vi) the monitoring of the learning process, and (vii) the pertinence of using real models and/or virtual environments.
  • Artificial Intelligence and Deep Learning (AI-DL) applied in simulators was covered by References [61,68,69,71,80]. These techniques are used to predict and categorize users' performance using inference and knowledge databases. Intelligent systems could provide powerful reasoning capabilities to help experts with knowledge acquisition. They can also give users metacognitive prompts as assistance and learning feedback according to their decisions. Systems that use AI can help classify expertise levels and analyze patterns, which could give institutions a basis for choosing students prepared to face real operations (a minimal sketch of such a classifier is given after this list). Moreover, in recent years, DL, a subset of AI, has received considerable attention [131]. In DL, models are trained using large sets of labeled data and neural network architectures that contain many layers. Therefore, DL approaches are achieving results that were not possible before [132].
  • Augmented Reality and Virtual Reality (AR-VR) approaches in simulation were applied by References [47,68,102,105,110]. Alongside advances in data processing and artificial intelligence, solutions that let users dive into and explore immersive computer-generated environments, or applications that overlay computer graphics on the user's field of view, have appeared and attracted the attention of the research community and various industries. AR-VR systems have become more powerful and can provide high-end visualizations. These approaches could enable researchers to create new ways of interaction and enhance the understanding of surgical tasks [133].
  • In the area of interaction, force feedback is used in 55.22% of the applications covered in this paper. Haptic technologies have been gaining ground over the last decade [8,42]. Companies are manufacturing haptic devices to help authors develop new learning environments or to enhance current solutions that do not provide the tactile realism needed in surgeries. An affordable haptic solution is the Novint Falcon, which costs around $250–$500 (2020). However, most surgical procedures also require torque feedback, which is not available with the Falcon. Of the 37 simulators that implement force feedback, 45.94% use Geomagic Touch devices.
  • Session recording is usually provided when applications save current session data to be analyzed later. Applications that record session data are covered by only 34.32% of the papers, of which only 65.21% provide full storage of all sessions that occur in the system. Moreover, this feature is usually offered by commercial solutions (only 33.33% are non-commercial solutions). Even though this percentage is relatively low, it also indicates that authors have started to consider implementing databases to record the whole user experience. Authors have begun to notice that session databases are a great resource for accurately tracking learning during training.
  • On the other hand, guidance systems are currently not very common. A recent meta-analysis found that, over a wide array of conditions, learning from ITs was associated with higher outcome scores [134]. However, it is recognized that these systems have not lived up to their potential regarding wider adoption, which could be due to clinicians not having experience with automated processes. This is likely to change with the introduction of AI-DL-oriented stations. The recent rapid development and introduction of more generic, flexible, accessible, and adaptive IAs could fulfill the potential of AI to revolutionize how learning takes place, by supporting students and performing instructional functions normally reserved for teachers or tutors.
  • One of the principal concerns in education is the measurement of the learning process. In our study, 50.74% of the works cover task evaluation; nevertheless, the acquisition of dexterity is a topic that has not been included. Only the work of Bahrami et al. [79] has focused on the mental activity that occurs during the acquisition of concepts. Brain activity and user behavior have a close relationship during the tasks due to dynamic changes in cortical networks. However, authors should try to design environments that assess this process without necessarily requiring an MRI environment. Additionally, by creating applications that consider this new approach, the generation of lectures and content could be improved.
  • In the second hypothesis of this paper, we asked about real models versus virtual environments. Since 64.18% of the works are virtual environments, this hypothesis can also be complemented by considering commercial solutions versus in-house applications. In the past, physical models were the most feasible options for practicing procedures; current advances in technology have reduced them to 35.82% of the works analyzed. Even though there are still physical models for surgery, this finding could guide virtual solutions to broaden their scope and find new opportunities in the development of simulators. As discussed above, commercial solutions are currently considered complete resources; however, authors have started to create novel solutions in areas like pediatrics, radiology, endovascular procedures, urology, and rectal examination. These areas can be considered new development branches, since current commercial workstations have not considered adaptability in their design.
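As an illustration of the AI-DL point above, expertise-level classification from recorded instrument motion is commonly posed as a supervised learning problem. The following sketch (architecture, channel count, and class labels are our own assumptions, not taken from any reviewed system) shows a small 1D convolutional network over windows of kinematic data:

```python
import torch
import torch.nn as nn

class SkillClassifier(nn.Module):
    """Illustrative network mapping a window of tool motion (position,
    orientation, and velocity channels over time) to an expertise level
    such as novice / intermediate / expert."""

    def __init__(self, channels=13, classes=3):
        super().__init__()
        self.features = nn.Sequential(          # temporal convolutions
            nn.Conv1d(channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),             # pool over the whole window
        )
        self.head = nn.Linear(64, classes)

    def forward(self, x):                        # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

# model = SkillClassifier()
# logits = model(torch.randn(8, 13, 500))        # 8 clips, 500 samples each
```

Trained on clips labeled by expert raters, such a model could pre-screen trainees automatically and reserve expert review for borderline cases, in line with the workload-reduction argument made throughout this paper.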

5. Conclusions

We have presented a survey of training and guidance systems in medical surgery and procedures. These environments aim to train students to acquire proper dexterity in the surgical procedures needed in diverse medical areas, providing guidance, feedback, and assistance during task execution. Additionally, such environments should incorporate tools for assessing acquired skills and knowledge. Sixty-seven systems were retrieved through a search using the following keywords—guidance, evaluation, assessment, training, medical education, medical training, surgery, simulation training, computer-assisted instruction, and e-learning. We found that most of these systems are focused on training purposes (75%), as compared to surgery procedures (19%), and only 6% are aimed at planning (including planning/training and planning/surgery). The medical disciplines covered were, in decreasing order of number of systems surveyed—laparoscopy (14), endoscopy (8), open surgery (7), arthroscopy (6), orthopedics (6), dentistry (5), ophthalmology (5), ENT procedures (4), neurosurgery (3), pediatrics (2), radiology (2), endovascular procedures (2), colorectal procedures (2), and urology (1). From these figures, a first conclusion of this work is the clear potential to create novel solutions and new development branches in the areas with the fewest systems, whether for future academic research or commercial use.
The works and simulators studied in this research have been reviewed and compared by their set-up, type of application, and type of use. The criteria considered for this comparison were the modeled surgical techniques, the technology used, force feedback during simulation, students' learning evaluation, educational and visual aids, guidance techniques, user data and its storage, and whether the solution is commercial or not. A clustering analysis was performed to associate the systems according to the following specific features—Commercial Solution, Force Feedback, Stored Metadata, Database Storage, Artificial Intelligence, Guidance, Assistance, and Evaluation. Five clusters were obtained, in which the main discriminant factors were the capability of storing and managing data and metadata in databases, as well as the provision of tools for guiding, assisting, and evaluating medical procedures.
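For readers who wish to reproduce this kind of analysis, the clustering described above can be sketched as follows. The specific algorithm is not named in this paper, so the sketch applies standard agglomerative clustering with a Hamming distance over binary feature vectors (the example rows are made up):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Each row: one surveyed system; each column: one binary feature, e.g.
# [Commercial, ForceFeedback, Metadata, Database, AI, Guidance, Assist, Eval]
systems = np.array([
    [1, 1, 1, 1, 0, 1, 1, 1],   # e.g. full-featured commercial station
    [0, 1, 0, 0, 0, 1, 0, 0],   # e.g. in-house haptic prototype
    [0, 0, 1, 1, 1, 0, 0, 1],   # e.g. AI-based assessment tool
    # ... one row per reviewed system
])

# Hamming distance suits binary feature vectors; average linkage is used
# here because Ward linkage assumes Euclidean, real-valued data.
dist = pdist(systems, metric='hamming')
tree = linkage(dist, method='average')
labels = fcluster(tree, t=5, criterion='maxclust')   # cut into 5 clusters
print(labels)
```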
From the clustering analysis, several opportunities for the development of environments for surgery training have been identified as future research. First is the inclusion of AI-DL in the solutions, to register and analyze the actual procedures students follow during the task, providing guidance and feedback during task execution. In this way, students can study their movements and correct them whenever necessary. Although the advent of new advances in these technologies is promising in the forthcoming years, only four works in our sample incorporated AI-DL in their developments. This is an up-and-coming trend for future developments of training and guidance systems for surgery and medical procedures. On the other hand, the incorporation of extensive data storage capabilities, to record procedures and analyze them in detail, also becomes critical. Nevertheless, it was identified that less than 40% of the works reported this capability. This issue has to be addressed by system developers in the future.
The addition of force feedback constitutes another important opportunity area in the development of these systems. This technology provides the user with tactile perception and force feedback similar to those of real surgeries. It can be coupled with other visual technologies in a system to provide a realistic training environment. For simple tasks, the use of haptic devices with 3 degrees of freedom could be appropriate; nevertheless, for more complex procedures, the incorporation of 6 degrees of freedom is necessary. Although these systems can be relatively expensive, their cost may be reduced as haptic technology becomes more accessible. Interestingly, almost 60% of the works surveyed already incorporate force feedback in their developments.
A key factor for medical and surgical training systems identified in the present research is the incorporation of guidance, assistance, and evaluation features. It is encouraging to notice that significant advances have been achieved in this regard: 60% of the surveyed systems already include guidance during the task, 51% of the developments feature assistance capabilities, and an equal percentage (51%) present evaluation possibilities. Nevertheless, the incorporation of these affordances still has to be increased in future developments to provide students with closer monitoring and assessment of their task performance.
Finally, another major element and opportunity area in the future development of training and guidance systems in medical surgery, identified in this review, will be the inclusion of AR-VR technologies. They have a high potential for enriching simulations, providing a complete immersion experience and visual aids or cues that can better guide students when performing a surgery or medical procedure.
Regarding the third research question of this paper, about the trend of virtual environments versus physical applications, it was found that virtual environments comprise almost two-thirds of the systems considered in this study (64%) as compared to real models. While physical models were the most feasible options for practicing procedures in the past, current advances in computational and visualization technologies have increased the presence of virtual solutions. This trend has broadened the scope of training and guidance systems for medical procedures and has provided new areas of opportunity in simulator development. Additionally, an interesting finding of this study is that 45% of the works presented here use commercial solutions, which in general are the most complete. Of this percentage, 53% are physical workstations composed of physical models and virtual environments for performing a surgical task, 30% are CA solutions for planning or for use during surgeries, and 17% are virtual environments available on the market. The remaining 55% are solutions developed by research groups and laboratories across the world.
Haptic and visual channels have been privileged in the evolution of the interaction between humans and computers. Advances in the development of haptic devices have allowed companies and researchers to enhance the sense of touch and interaction in surgical simulators. Virtual training environments provide an important alternative for training and gaining hand-operated skills. On the other hand, CA and robot-assisted systems provide proper feedback to surgeons, which lets them plan surgeries or execute them with high success rates. Some works limit themselves to recording session data to evaluate it internally; others evaluate it and provide immediate user feedback. However, simulators should be able to save the entire training process to evaluate students' performance and learning curves. Additionally, AI techniques, in the form of ITs or IAs, can provide guidance and aid during sessions and assess the whole learning process to give detailed feedback to students. Automating the evaluation process could reduce the workload of experts and professors and let them focus on improving the systems and designing new content. This would undoubtedly increase the potential of training and guidance systems in medical practice.

Author Contributions

Conceptualization, D.E.-C., J.N., F.B., and B.B.; methodology, J.N. and F.B.; software, D.E.-C.; validation, F.B., L.N., and A.J.M.; formal analysis, J.N. and A.J.M.; investigation, D.E.-C.; resources, J.N., F.B., and B.B.; data curation, D.E.-C., L.N. and J.N.; writing—original draft preparation, D.E.-C.; writing—review and editing, D.E.-C., J.N., F.B., B.B. and L.N.; visualization, D.E.-C.; supervision, J.N. and F.B.; project administration, J.N., F.B., and B.B.; funding acquisition, J.N. and F.B.; All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by Vicerrectoría de Investigación y Posgrado and the Research Group of Product Innovation of Tecnologico de Monterrey, by Newton Fund, through the Newton International Fellowship schemes (NIF004-1018), by a scholarship provided by Tecnologico de Monterrey to graduate student A01170737 - David Escobar-Castillejos, and a national scholarship granted by the Consejo Nacional de Ciencia y Tecnologia (CONACYT) to study graduate programs in institutions enrolled in the Padron Nacional de Posgrados de Calidad (PNPC) to CVU 559247 - David Escobar-Castillejos.

Acknowledgments

We would like to thank the Simulation and Modelling in Medicine and Surgery (SiMMS) Research Group of Imperial College London, Vicerrectoría de Investigación y Posgrado, the Research Group of Product Innovation, and the Cyber Learning and Data Science Laboratory of Tecnologico de Monterrey.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Vanlehn, K. The Behavior of Tutoring Systems. Int. J. Artif. Intell. Educ. 2006, 16, 227–265.
  2. Roll, I.; Wylie, R. Evolution and Revolution in Artificial Intelligence in Education. Int. J. Artif. Intell. Educ. 2016, 26, 582–599.
  3. Perrotta, C.; Selwyn, N. Deep learning goes to school: Toward a relational understanding of AI in education. Learn. Media Technol. 2019, 1–19.
  4. Wartman, S.A.; Combs, C.D. Reimagining Medical Education in the Age of AI. AMA J. Ethics 2019, 21, 146–152.
  5. Dillenbourg, P. The Evolution of Research on Digital Education. Int. J. Artif. Intell. Educ. 2016, 26, 544–560.
  6. Reed, A.B.; Crafton, C.; Giglia, J.S.; Hutto, J.D. The Behavior of Tutoring Systems. Surgery 2006, 146, 757–763.
  7. Balcombe, J. Medical training using simulation: Toward fewer animals and safer patients. Altern. Lab. Anim. 2004, 32, 553–560.
  8. Coles, T.R.; Meglan, D.; John, N.W. The Role of Haptics in Medical Training Simulators: A Survey of the State of the Art. IEEE Trans. Haptics 2011, 4, 51–66.
  9. The Royal Academy of Engineering. Simulation and Medical Training. 2017. Available online: http://www.raeng.org.uk/publications/reports/simulation-and-medical-training-briefing (accessed on 15 June 2020).
  10. Sachdeva, A.; Buyske, J.; Dunnington, G.; Sanfey, H.; Mellinger, J.; Scott, D.; Satava, R.; Fried, G.; Jacobs, L.; Burns, K. A New Paradigm for Surgical Procedural Training. Curr. Probl. Surg. 2011, 48, 854–968.
  11. Medical Corps International Forum. Use of Simulation and Military Medical Training: 2014. 2017. Available online: http://www.mci-forum.com/use-of-simulation-and-military-medical-training-2014/ (accessed on 15 June 2020).
  12. Passiment, M.; Sacks, H.; Huang, G. Medical Simulation in Medical Education: Results of an AAMC Survey. 2017. Available online: https://www.aamc.org/download/259760/dat (accessed on 15 June 2020).
  13. Izard, S.G.; Juanes, J.A.; García Peñalvo, F.J.; Estella, J.M.G.; Ledesma, M.J.S.; Ruisoto, P. Virtual Reality as an Educational and Training Tool for Medicine. J. Med. Syst. 2018, 42.
  14. Smutny, P.; Babiuch, M.; Foltynek, P. A Review of the Virtual Reality Applications in Education and Training. In Proceedings of the 2019 20th International Carpathian Control Conference (ICCC), Kraków-Wieliczka, Poland, 26–29 May 2019; pp. 1–4.
  15. Leblanc, F.; Champagne, B.; Augestad, K.; Neary, P.; Senagore, A.; Ellis, C.; Delaney, C.; Group, C. A Comparison of Human Cadaver and Augmented Reality Simulator Models for Straight Laparoscopic Colorectal Skills Acquisition Training. J. Am. Coll. Surg. 2006, 211, 757–763.
  16. Teber, D.; Guven, S.; Simpfendörfer, T.; Baumhauer, M.; Güven, E.O.; Yencilek, F.; Gözen, A.S.; Rassweiler, J. Augmented Reality: A New Tool To Improve Surgical Accuracy during Laparoscopic Partial Nephrectomy? Preliminary In Vitro and In Vivo Results. Eur. Urol. 2009, 56, 332–338.
  17. Rankin, T.M.; Slepian, M.J.; Armstrong, D.G. Augmented Reality in Surgery. In Technological Advances in Surgery, Trauma and Critical Care; Latifi, R., Rhee, P., Gruessner, W.R., Eds.; Springer: New York, NY, USA, 2015; pp. 59–71.
  18. Noguez, J. Columna de Inteligencia Artificial en Educación. Komput. Sapiens Rev. Divulg. Soc. Mex. Intel. Artif. 2014, 2, 4–6.
  19. Kaschek, R.H. Intelligent Assistant Systems: Concepts, Techniques and Technologies; IGI Global: Hershey, PA, USA, 2006; p. 326.
  20. Crespo, L.M.; Reinkensmeyer, D.J. Effect of robotic guidance on motor learning of a timing task. In Proceedings of the 2008 2nd IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, Scottsdale, AZ, USA, 19–22 October 2008; pp. 199–204.
  21. Harvey, L. Hidden Markov models and learning in authentic situations. Tutor. Quant. Methods Psychol. 2011, 7, 32–41.
  22. Doleck, T.; Basnet, R.B.; Poitras, E.; Lajoie, S. Towards examining learner behaviors in a medical intelligent tutoring system: A Hidden Markov Model approach. In Proceedings of the 2015 IEEE International Advance Computing Conference (IACC), Bangalore, India, 12–13 June 2015; pp. 329–332.
  23. Zia, A.; Essa, I. Automated surgical skill assessment in RMIS training. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 731–739.
  24. Kose, U. Artificial Intelligence Applications in Distance Education; IGI Global: Hershey, PA, USA, 2014; p. 329.
  25. Chen, H.L.; Yang, B.; Wang, G.; Wang, S.J.; Liu, J.; Liu, D.Y. Support Vector Machine Based Diagnostic System for Breast Cancer Using Swarm Intelligence. J. Med. Syst. 2012, 36, 2505–2519.
  26. Ballester, L.; Colom, A. Lógica difusa: Una nueva epistemología para las Ciencias de la Educación. Rev. Educ. 2006, 340, 995–1008.
  27. Sucar, L.; Noguez, J. Student Modeling. In Bayesian Networks: A Practical Guide to Applications; Pourret, O., Naïm, P., Marcot, B., Eds.; Wiley: Hoboken, NJ, USA, 2008; Chapter 10; pp. 173–185.
  28. Brusilovsky, P.; Millán, E. User Models for Adaptive Hypermedia and Adaptive Educational Systems. In The Adaptive Web: Methods and Strategies of Web Personalization; Brusilovsky, P., Kobsa, A., Nejdl, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 3–53.
  29. Le, N.T.; Pinkwart, N. Bayesian Networks for Competence-based Student Modeling. In Proceedings of the 11th International Conference on Knowledge Management, Osaka, Japan, 4–6 November 2015; pp. 129–138.
  30. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  31. Gillespie, R.B.; OModhrain, M.S.; Tang, P.; Zaretzky, D.; Pham, C. The Virtual Teacher. Proc. ASME Dyn. Syst. Control Div. 1998, 64, 171–178.
  32. Powell, D.; O'Malley, M.K. Efficacy of shared-control guidance paradigms for robot-mediated training. In Proceedings of the 2011 IEEE World Haptics Conference, Istanbul, Turkey, 21–24 June 2011; pp. 427–432.
  33. Reinkensmeyer, D.J. How to retrain movement after neurologic injury: A computational rationale for incorporating robot (or therapist) assistance. In Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Cancun, Mexico, 17–21 September 2003; Volume 2, pp. 1479–1482.
  34. Endo, T.; Kawasaki, H.; Kigaku, K.; Mouri, T. Transfer method of Force Information using Five-Fingered Haptic Interface Robot. In Proceedings of the Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC'07), Tsukuba, Japan, 22–24 March 2007; pp. 599–600.
  35. Lee, J.; Choi, S. Effects of haptic guidance and disturbance on motor learning: Potential advantage of haptic disturbance. In Proceedings of the 2010 IEEE Haptics Symposium, Waltham, MA, USA, 25–26 March 2010; pp. 335–342.
  36. Zilles, C.B.; Salisbury, J.K. A constraint-based god-object method for haptic display. In Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, 'Human Robot Interaction and Cooperative Robots', Pittsburgh, PA, USA, 5–9 August 1995; Volume 3, pp. 146–151.
  37. Nudehi, S.S.; Mukherjee, R.; Ghodoussi, M. A shared-control approach to haptic interface design for minimally invasive telesurgical training. IEEE Trans. Control Syst. Technol. 2005, 13, 588–592.
  38. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering. 2007. Available online: https://0-www-elsevier-com.brum.beds.ac.uk/ (accessed on 15 June 2020).
  39. Lau, F.; Bates, J. A review of e-learning practices for undergraduate medical education. J. Med. Syst. 2004, 28, 71–87.
  40. Juanes, J.A.; Ruisoto, P. Computer Applications in Health Science Education. J. Med. Syst. 2015, 39, 1–5.
  41. Secin, F.P.; Savage, C.; Abbou, C.; de La Taille, A.; Salomon, L.; Rassweiler, J.; Hruza, M.; Rozet, F.; Cathelineau, X.; Janetschek, G.; et al. The learning curve for laparoscopic radical prostatectomy: An international multicenter study. J. Urol. 2010, 184, 2291–2296.
  42. Escobar-Castillejos, D.; Noguez, J.; Neri, L.; Magana, A.; Benes, B. A Review of Simulators with Haptic Devices for Medical Training. J. Med. Syst. 2016, 40, 1–22.
  43. Graafland, M.; Schraagen, J.M.C.; Schijven, M.P. Systematic review of validity of serious games for medical education and surgical skills training. Br. J. Surg. 2012, 99, 1322–1330.
  44. Barsom, E.; Graafland, M.; Schijven, M. Systematic review on the effectiveness of augmented reality applications in medical training. Surg. Endosc. 2016, 30, 4174–4183.
  45. Phillips, R.; Ward, J.W.; Bridge, P.; Appleyard, R.M.; Beavis, A.W. A hybrid virtual environment for training of radiotherapy treatment of cancer. In Proceedings of the SPIE 6055: Stereoscopic Displays and Virtual Reality Systems XIII, San Jose, CA, USA, 15–19 January 2006; SPIE: Bellingham, WA, USA, 2006; pp. 1–12.
  46. Wen, R.; Tay, W.L.; Nguyen, B.P.; Chng, C.B.; Chui, C.K. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface. Comput. Methods Programs Biomed. 2014, 116, 68–80.
  47. Cecil, J.; Gupta, A.; Pirela-Cruz, M.; Ramanathan, P. A Network-Based Virtual Reality Simulation Training Approach for Orthopedic Surgery. ACM Trans. Multimed. Comput. Commun. Appl. 2018, 14, 1–21.
  48. Albiero, A.M.; Benato, R. Computer-assisted surgery and intraoral welding technique for immediate implant-supported rehabilitation of the edentulous maxilla: Case report and technical description. Int. J. Med. Robot. Comput. Assist. Surg. 2015, 12, 1–8.
  49. Jacobsen, M.E.; Andersen, M.J.; Hansen, C.O.; Konge, L. Testing Basic Competency in Knee Arthroscopy Using a Virtual Reality Simulator. J. Bone Jt. Surg. 2015, 97, 775–781.
  50. Facca, S.; Liverneaux, P.A. Feasibility of computer-assisted surgery for trapeziometacarpal prosthesis: A preliminary experimental study. Surg. Radiol. Anat. 2012, 34, 857–864.
  51. Hernandez-Vaquero, D.; Noriega-Fernandez, A.; Fernandez-Carreira, J.M.; Fernandez-Simon, J.M.; Llorens de los Rios, J. Computer-assisted surgery improves rotational positioning of the femoral component but not the tibial component in total knee arthroplasty. Knee Surg. Sports Traumatol. Arthrosc. 2014, 22, 3127–3134.
  52. Kim, S.H.; Lee, H.J.; Jung, H.J.; Lee, J.S.; Kim, K.S. Less femoral lift-off and better femoral alignment in TKA using computer-assisted surgery. Knee Surg. Sports Traumatol. Arthrosc. 2013, 21, 2255–2262.
  53. Myden, C.A.; Anglin, C.; Kopp, G.D.; Hutchison, C. Computer-assisted surgery simulations and directed practice of total knee arthroplasty: Educational benefits to the trainee. Comput. Aided Surg. 2012, 17, 113–127.
  54. Tashiro, Y.; Miura, H.; Nakanishi, Y.; Okazaki, K.; Iwamoto, Y. Evaluation of Skills in Arthroscopic Training Based on Trajectory and Force Data. Clin. Orthop. Relat. Res. 2009, 467, 546–552.
  55. Kosuki, Y.; Okada, Y. 3D Visual Component Based Development System for Medical Training Systems Supporting Haptic Devices and Their Collaborative Environments. In Proceedings of the 2012 Sixth International Conference on Complex, Intelligent and Software Intensive Systems (CISIS), Palermo, Italy, 4–6 July 2012; pp. 687–692.
  56. Medellín-Castillo, H.; Govea-Valladares, E.; Pérez-Guerrero, C.; Gil-Valladares, J.; Lim, T.; Ritchie, J.M. The evaluation of a novel haptic-enabled virtual reality approach for computer-aided cephalometry. Comput. Methods Programs Biomed. 2016, 130, 46–53.
  57. Liu, L.; Zhou, R.; Yuan, S.; Sun, Z.; Lu, X.; Li, J.; Chu, F.; Walmsley, A.D.; Yan, B.; Wang, L. Simulation training for ceramic crown preparation in the dental setting using a virtual educational system. Eur. J. Dent. Educ. 2020, 24, 199–206.
  58. Al-Saud, L.M.; Mushtaq, F.; Allsop, M.J.; Culmer, P.C.; Mirghani, I.; Yates, E.; Keeling, A.; Mon-Williams, M.A.; Manogue, M. Feedback and motor skill acquisition using a haptic dental simulator. Eur. J. Dent. Educ. 2017, 21, 240–247.
  59. Fried, M.P.; Sadoughi, B.; Gibber, M.J.; Jacobs, J.B.; Lebowitz, R.A.; Ross, D.A.; Bent, J.P.; Parikh, S.R.; Sasaki, C.T.; Schaefer, S.D. From virtual reality to the operating room: The endoscopic sinus surgery simulator experiment. Otolaryngol. Head Neck Surg. 2010, 142, 202–207.
  60. Tanoue, K.; Uemura, M.; Kenmotsu, H.; Ieiri, S.; Konishi, K.; Ohuchida, K.; Onimaru, M.; Nagao, Y.; Kumashiro, R.; Tomikawa, M.; et al. Skills assessment using a virtual reality simulator, LapSim™, after training to develop fundamental skills for endoscopic surgery. Minim. Invasive Ther. Allied Technol. 2010, 19, 24–29.
  61. Surangsrirat, D.; Deshpande, A.R.; Surangsrirat, S.; Tapia, M.A.; Zhao, W. A customized simulation system with computer integrated auto-evaluation function for upper endoscopy training. Technol. Health Care 2011, 19, 79–90.
  62. Jiang, D.; Hovdebo, J.; Cabral, A.; Mora, V.; Delorme, S. Endoscopic third ventriculostomy on a microneurosurgery simulator. Simulation 2013, 89, 1442–1449.
  63. Heuer, H.; Klimmer, F.; Luttmann, A.; Bolbach, U. Specificity of motor learning in simulator training of endoscopic-surgery skills. Ergonomics 2012, 55, 1157–1165.
  64. Mueller, S.A.; Caversaccio, M. Outcome of computer-assisted surgery in patients with chronic rhinosinusitis. J. Laryngol. Otol. 2010, 124, 500–504.
  65. Ryu, J.; Choi, J.; Kim, H.C. Endoscopic Vision-Based Tracking of Multiple Surgical Instruments During Robot-Assisted Surgery. Artif. Organs 2013, 37, 107–112.
  66. Korzeniowski, P.; Brown, D.C.; Sodergren, M.H.; Barrow, A.; Bello, F. Validation of NOViSE: A Novel Natural Orifice Virtual Surgery Simulator. Surg. Innov. 2017, 24, 55–65.
  66. Korzeniowski, P.; Brown, D.C.; Sodergren, M.H.; Barrow, A.; Bello, F. Validation of NOViSE: A Novel Natural Orifice Virtual Surgery Simulator. Surg. Innov. 2017, 24, 55–65. [Google Scholar] [CrossRef] [Green Version]
  67. Zhang, L.; Grosdemouge, C.; Arikatla, V.S.; Ahn, W.; Sankaranarayanan, G.; De, S.; Jones, D.; Schwaitzberg, S.; Cao, C.G.L. The added value of virtual reality technology and force feedback for surgical training simulators. Work 2012, 41, 2288–2292. [Google Scholar] [CrossRef] [Green Version]
  68. Ovur, S.E.; Cobanaj, M.; Vantadori, L.; De Momi, E.; Ferrigno, G. Surgeon Training with Haptic Devices for Computer and Robot Assisted Surgery: An Experimental Study. In XV Mediterranean Conference on Medical and Biological Engineering and Computing—MEDICON 2019; Henriques, J., Neves, N., de Carvalho, P., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 1526–1535. [Google Scholar]
  69. Park, C.H.; Wilson, K.L.; Howard, A.M. Examining the learning effects of a low-cost haptic-based virtual reality simulator on laparoscopic cholecystectomy. In Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems, Porto, Portugal, 20–22 June 2013; pp. 233–238. [Google Scholar]
  70. Lamata, P.; Gómez, E.J.; Sánchez-Margallo, F.M.; López, O.; Monserrat, C.; García, V.; Alberola, C.; Florido, M.A.R.; Ruiz, J.; Usón, J. {SINERGIA} laparoscopic virtual reality simulator: Didactic design and technical development. Comput. Methods Programs Biomed. 2007, 85, 273–283. [Google Scholar] [CrossRef]
  71. Liang, H.; Shi, M.Y. Surgical Skill Evaluation Model for Virtual Surgical Training. Appl. Mech. Mater. 2011, 40, 812–819. [Google Scholar] [CrossRef]
  72. Munro, M.G.; Behling, D.P. Virtual Reality Uterine Resectoscopic Simulator: Face and Construct Validation and Comparative Evaluation in an Educational Environment. J. Soc. Laparoendosc. Surg. 2011, 15, 142–146. [Google Scholar] [CrossRef] [Green Version]
  73. Gaudina, M.; Zappi, V.; Bellanti, E.; Vercelli, G. eLaparo4D: A Step Towards a Physical Training Space for Virtual Video Laparoscopic Surgery. In Proceedings of the 2013 Seventh International Conference on Complex, Intelligent, and Software Intensive Systems, Taichung, Taiwan, 3–5 July 2013; pp. 611–616. [Google Scholar]
  74. Tillou, X.; Collon, S.; Martin-Francois, S.; Doerfler, A. Robotic Surgery Simulator: Elements to Build a Training Program. J. Surg. Educ. 2016, 73, 1–9. [Google Scholar] [CrossRef] [PubMed]
  75. Jungmann, F.; Gockel, I.; Hecht, H.; Kuhr, K.; Räsänen, J.; Sihvo, E.; Lang, H. Impact of perceptual ability and mental imagery training on simulated laparoscopic knot-tying in surgical novices using a Nissen fundoplication model. Scand. J. Surg. 2011, 100, 78–85. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  76. Rehman, S.; Raza, S.J.; Stegemann, A.P.; Zeeck, K.; Din, R.; Llewellyn, A.; Dio, L.; Trznadel, M.; Seo, Y.W.; Chowriappa, A.J.; et al. Simulation-based robot-assisted surgical training: A health economic evaluation. Int. J. Surg. 2013, 11, 841–846. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  77. Ayodeji, I.D.; Schijven, M.; Jakimowicz, J.; Greve, J.W. Face validation of the Simbionix LAP Mentor virtual reality training module and its applicability in the surgical curriculum. Surg. Endosc. 2007, 21, 1641–1649. [Google Scholar] [CrossRef] [Green Version]
  78. Herbert, G.L.; Cundy, T.P.; Singh, P.; Retrosi, G.; Sodergren, M.H.; Azzie, G.; Darzi, A. Validation of a pediatric single-port laparoscopic surgery simulator. J. Pediatr. Surg. 2015, 50, 1762–1766. [Google Scholar] [CrossRef]
  79. Bahrami, P.; Schweizer, T.A.; Tam, F.; Grantcharov, T.P.; Cusimano, M.D.; Graham, S.J. Functional MRI-compatible laparoscopic surgery training simulator. Magn. Reson. Med. 2011, 65, 873–881. [Google Scholar] [CrossRef]
  80. Hong, M.; Rozenblit, J.W.; Hamilton, A.J. A Simulation-Based Assessment System for Computer Assisted Surgical Trainer. In Proceedings of the Symposium on Modeling and Simulation in Medicine (MSM’17), Virginia Beach, VA, USA, 23–26 April 2017; pp. 1–11. [Google Scholar]
  81. Choi, K.S. A Virtual Reality Simulator Prototype for Learning and Assessing Phaco-sculpting Skills. In Transactions on Edutainment IV; Springer: Berlin/Heidelberg, Germany, 2010; pp. 145–156. [Google Scholar]
  82. Kim, Y.; Jeong, H.; Park, H.; Kim, J.; Kim, T.; Kim, J. Virtual-reality Cataract Surgery Simulator Using Haptic Sensory Substitution in Continuous Circular Capsulorhexis. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 17–21 July 2018; pp. 1887–1890. [Google Scholar]
  83. Henderson, B.A.; Kim, J.Y.; Golnik, K.C.; Oetting, T.A.; Lee, A.G.; Volpe, N.J.; Aaron, M.; Uhler, T.A.; Arnold, A.; Dunn, J.P.; et al. Evaluation of the Virtual Mentor Cataract Training Program. Ophthalmology 2010, 117, 253–258. [Google Scholar] [CrossRef]
  84. Le, T.D.; Adatia, F.A.; Lam, W.C. Virtual reality ophthalmic surgical simulation as a feasible training and assessment tool: Results of a multicentre study. Can. J. Ophthalmol. 2011, 46, 56–60. [Google Scholar] [CrossRef]
  85. Nasseri, M.A.; Gschirr, P.; Eder, M.; Nair, S.; Kobuch, K.; Maier, M.; Zapp, D.; Lohmann, C.; Knoll, A. Virtual fixture control of a hybrid parallel-serial robot for assisting ophthalmic surgery: An experimental study. In Proceedings of the 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Sao Paulo, Brazil, 12–15 August 2014; pp. 732–738. [Google Scholar]
  86. Rambani, R.; Ward, J.; Viant, W. Desktop-Based Computer-Assisted Orthopedic Training System for Spinal Surgery. J. Surg. Educ. 2014, 71, 805–809. [Google Scholar] [CrossRef]
  87. Facca, S.; Hendriks, S.; Mantovani, G.; Selber, J.C.; Liverneaux, P. Robot-Assisted Surgery of the Shoulder Girdle and Brachial Plexus. Semin. Plast. Surg. 2014, 28, 39–44. [Google Scholar] [CrossRef] [Green Version]
  88. Gebhard, F.; Krettek, C.; Hüfner, T.; Grützner, P.A.; Stöckle, U.; Imhoff, A.B.; Lorenz, S.; Ljungqvist, J.; Keppler, P. Reliability of computer-assisted surgery as an intraoperative ruler in navigated high tibial osteotomy. Arch. Orthop. Trauma Surg. 2011, 131, 297–302. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  89. Grossterlinden, L.; Nuechtern, J.; Begemann, P.G.C.; Fuhrhop, I.; Petersen, J.P.; Ruecker, A.; Rupprecht, M.; Lehmann, W.; Schumacher, U.; Rueger, J.M.; et al. Computer-Assisted Surgery and Intraoperative Three-Dimensional Imaging for Screw Placement in Different Pelvic Regions. J. Trauma Inj. Infect. Crit. Care 2011, 71, 926–932. [Google Scholar] [CrossRef] [PubMed]
  90. Wong, K.C.; Kumta, S.M. Computer-assisted Tumor Surgery in Malignant Bone Tumors. Clin. Orthop. Relat. Res. 2013, 471, 750–761. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  91. Sewell, C.; Morris, D.; Blevins, N.H.; Dutta, S.; Agrawal, S.; Barbagli, F.; Salisbury, K. Providing metrics and performance feedback in a surgical simulator. Comput. Aided Surg. 2008, 13, 63–81. [Google Scholar] [CrossRef]
  92. Arora, A.; Swords, C.; Khemani, S.; Awad, Z.; Darzi, A.; Singh, A.; Tolley, N. Virtual reality case-specific rehearsal in temporal bone surgery: A preliminary evaluation. Int. J. Surg. 2014, 12, 141–145. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  93. Fang, T.Y.; Wang, P.C.; Liu, C.H.; Su, M.C.; Yeh, S.C. Evaluation of a haptics-based virtual reality temporal bone simulator for anatomy and surgery training. Comput. Methods Programs Biomed. 2014, 113, 674–681. [Google Scholar] [CrossRef]
  94. Wilkening, P.; Chien, W.; Gonenc, B.; Niparko, J.; Kang, J.U.; Iordachita, I.; Taylor, R.H. Evaluation of virtual fixtures for robot-assisted cochlear implant insertion. In Proceedings of the 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Sao Paulo, Brazil, 12–15 August 2014; pp. 332–338. [Google Scholar]
  95. Warmann, S.W.; Schenk, A.; Schaefer, J.F.; Ebinger, M.; Blumenstock, G.; Tsiflikas, I.; Fuchs, J. Computer-assisted surgery planning in children with complex liver tumors identifies variability of the classical Couinaud classification. J. Pediatr. Surg. 2016, 51, 1801–1806. [Google Scholar] [CrossRef] [PubMed]
  96. Peeters, S.H.P.; Akkermans, J.; Slaghekke, F.; Bustraan, J.; Lopriore, E.; Haak, M.C.; Middeldorp, J.M.; Klumper, F.J.; Lewi, L.; Devlieger, R.; et al. Simulator training in fetoscopic laser surgery for twin–twin transfusion syndrome: A pilot randomized controlled trial. Ultrasound Obstet. Gynecol. 2015, 46, 319–326. [Google Scholar] [CrossRef] [Green Version]
  97. Bridge, P.; Gunn, T.; Kastanis, L.; Pack, D.; Rowntree, P.; Starkey, D.; Mahoney, G.; Berry, C.; Braithwaite, V.; Wilson-Stewart, K. The development and evaluation of a medical imaging training immersive environment. J. Med. Radiat. Sci. 2014, 61, 159–165. [Google Scholar] [CrossRef] [Green Version]
  98. Kazemi, H.; Rappel, J.K.; Poston, T.; Hai Lim, B.; Burdet, E.; Leong Teo, C. Assessing suturing techniques using a virtual reality surgical simulator. Microsurgery 2010, 30, 479–486. [Google Scholar] [CrossRef]
  99. Smith, S.P.; Todd, S. Usability evaluation of a haptic-based clinical skills training system. Int. J. Clin. Skills 2008, 2, 1–10. [Google Scholar]
  100. Lewis, T.L.; Vohra, R.S. Smartphones make smarter surgeons. Br. J. Surg. 2014, 101, 296–297. [Google Scholar] [CrossRef] [PubMed]
  101. Hossien, A. Comprehensive Middle-Fidelity Simulator for Training in Aortic Root Surgery. J. Surg. Educ. 2015, 72, 849–854. [Google Scholar] [CrossRef] [PubMed]
  102. Guo, Z.; Tai, Y.; Qin, Z.; Huang, X.; Li, Q.; Peng, J.; Shi, J. Development and assessment of a haptic-enabled holographic surgical simulator for renal biopsy training. Soft Comput. 2020, 24, 5783–5794. [Google Scholar] [CrossRef]
  103. Licona, R.A.R.; Liu, F.; Lelevé, A.; Pham, M.T. Collaborative Hands-on Training on Haptic Simulators. In Proceedings of the 2019 3rd International Conference on Virtual and Augmented Reality Simulations (ICVARS’19), Perth, Australia, 23–25 February 2019; pp. 39–45. [Google Scholar] [CrossRef]
  104. Delorme, S.; Laroche, D.; DiRaddo, R.; Del Maestro, R.F. NeuroTouch: A physics-based virtual simulator for cranial microneurosurgery training. Neurosurgery 2012, 71, 32–42. [Google Scholar] [PubMed]
  105. Si, W.X.; Liao, X.Y.; Qian, Y.L.; Sun, H.T.; Chen, X.D.; Wang, Q.; Heng, P.A. Assessing performance of augmented reality-based neurosurgical training. Vis. Comput. Ind. Biomed. Art 2019, 2, 1–10. [Google Scholar] [CrossRef] [Green Version]
  106. Vite, S.T.; Velasco, C.D.; Valencia, A.F.H.; Lomelí, J.S.P.; Castañeda, M.Á.P. Virtual Simulation of Brain Sylvian Fissure Exploration and Aneurysm Clipping with Haptic Feedback for Neurosurgical Training. In Augmented Reality, Virtual Reality, and Computer Graphics; De Paolis, L.T., Bourdot, P., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 230–238. [Google Scholar]
  107. Halabi, O.; Halwani, Y. Design and implementation of haptic virtual fixtures for preoperative surgical planning. Displays 2018, 54, 9–19. [Google Scholar] [CrossRef]
  108. Guo, S.; Yu, M.; Song, Y.; Zhang, L. The virtual reality simulator-based catheter training system with haptic feedback. In Proceedings of the 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Gippsland, Australia, 14–17 February 2017; pp. 922–926. [Google Scholar]
  109. Zhu, H.; Zhang, Y.; Liu, J.S.; Wang, G.; Yu, C.F.; Na, Y.Q. Virtual reality simulator for training urologists on transurethral prostatectomy. Chin. Med. J. 2013, 126, 1220–1223. [Google Scholar]
  110. Muangpoon, T.; Haghighi, O.R.; Escobar-Castillejos, D.; Kontovounisios, C.; Bello, F. Augmented Reality System for Digital Rectal Examination Training and Assessment. J. Med. Internet Res. 2020, in press. [Google Scholar]
  111. Leblanc, F.; Delaney, C.P.; Ellis, C.N.; Neary, P.C.; Champagne, B.J.; Senagore, A.J. Hand-Assisted Versus Straight Laparoscopic Sigmoid Colectomy on a Training Simulator: What is the Difference? World J. Surg. 2010, 34, 2909–2914. [Google Scholar] [CrossRef]
  112. DeFrances, C.J.; Cullen, K.A.; Kozak, L.J. National Hospital Discharge Survey: 2005 annual summary with detailed diagnosis and procedure data. Vital Health Stat. 2007, 13, 1–218. [Google Scholar]
113. Martin, J.A.; Regehr, G.; Reznick, R.; MacRae, H.; Murnaghan, J.; Hutchison, C.; Brown, M. Objective structured assessment of technical skill (OSATS) for surgical residents. Br. J. Surg. 1997, 84, 273–278.
114. Okada, Y.; Tanaka, Y. IntelligentBox: A constructive visual software development system for interactive 3D graphic applications. In Proceedings of the Computer Animation '95, Maastricht, The Netherlands, 2–3 September 1995; pp. 114–125.
115. White, S.C.; Pharoah, M.J. Oral Radiology: Principles and Interpretation; Elsevier: Amsterdam, The Netherlands, 2014.
116. Tiu, J.; Cheng, E.; Hung, T.C.; Yu, C.C.; Lin, T.; Schwass, D.; Al-Amleh, B. Effectiveness of Crown Preparation Assessment Software As an Educational Tool in Simulation Clinic: A Pilot Study. J. Dent. Educ. 2016, 80, 1004–1011.
117. Gallagher, A.G.; Ritter, E.M.; Lederman, A.B.; McClusky, D.A., III; Smith, C.D. Video-assisted surgery represents more than a loss of three-dimensional vision. Am. J. Surg. 2005, 189, 76–80.
118. Citardi, M.J.; Batra, P.S. Intraoperative surgical navigation for endoscopic sinus surgery: Rationale and indications. Curr. Opin. Otolaryngol. Head Neck Surg. 2007, 15, 23–27.
119. Azzie, G.; Gerstle, J.T.; Nasr, A.; Lasko, D.; Green, J.; Henao, O.; Farcas, M.; Okrainec, A. Development and validation of a pediatric laparoscopic surgery simulator. J. Pediatr. Surg. 2011, 46, 897–903.
120. Rozenblit, J.; Feng, C.; Riojas, M.; Napalkova, L.; Hamilton, A.; Hong, M.; Berthet-Rayne, P.; Czapiewski, P.; Hwang, G.; Nikodem, J.; et al. The Computer Assisted Surgical Trainer: Design, Models and Implementation. In Proceedings of the Summer Simulation Conference, Monterey, CA, USA, 6–10 July 2014; Volume 46.
121. Hong, M.; Rozenblit, J.W. A haptic guidance system for Computer-Assisted Surgical Training using virtual fixtures. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 002230–002235.
122. Rambani, R.; Viant, W.; Ward, J.; Mohsen, A. Computer-Assisted Orthopedic Training System for Fracture Fixation. J. Surg. Educ. 2013, 70, 304–308.
123. He, X.; Roppenecker, D.; Gierlach, D.; Balicki, M.; Olds, K.; Gehlbach, P.; Handa, J.; Taylor, R.; Iordachita, I. Toward Clinically Applicable Steady-Hand Eye Robot for Vitreoretinal Surgery. In Proceedings of the ASME 2012 International Mechanical Engineering Congress and Exposition, Houston, TX, USA, 9–15 November 2012; Volume 2, pp. 145–153.
124. UPMC. Hepatoblastoma (Liver Cancer). 2017. Available online: http://www.chp.edu/our-services/transplant/liver/education/liver-disease-states/hepatoblastoma-liver-cancer (accessed on 15 June 2020).
125. Farrell, B.B.; Franco, P.B.; Tucker, M.R. Virtual Surgical Planning in Orthognathic Surgery. Oral Maxillofac. Surg. Clin. N. Am. 2014, 26, 459–473.
126. Slaghekke, F.; Lopriore, E.; Lewi, L.; Middeldorp, J.M.; van Zwet, E.W.; Weingertner, A.S.; Klumper, F.J.; DeKoninck, P.; Devlieger, R.; Kilby, M.D.; et al. Fetoscopic laser coagulation of the vascular equator versus selective coagulation for twin-to-twin transfusion syndrome: An open-label randomised controlled trial. Lancet 2014, 383, 2144–2151.
127. Pittini, R.; Oepkes, D.; Macrury, K.; Reznick, R.; Beyene, J.; Windrim, R. Teaching invasive perinatal procedures: Assessment of a high fidelity simulator-based curriculum. Ultrasound Obstet. Gynecol. 2002, 19, 478–483.
128. Stramigioli, S. Modeling and IPC Control of Interactive Mechanical Systems—A Coordinate-Free Approach, 1st ed.; Lecture Notes in Control and Information Sciences; Springer: London, UK, 2001; Volume 1.
129. Rodwin, M.A.; Chang, H.J.; Ozaeta, M.M.; Omar, R.J. Malpractice premiums in Massachusetts, a high-risk state: 1975 to 2005. Health Aff. 2008, 27, 835–844.
130. Chen, H.; Dou, Q.; Yu, L.; Qin, J.; Heng, P.A. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images. NeuroImage 2018, 170, 446–455.
131. Peters, M.A. Deep learning, education and the final stage of automation. Educ. Philos. Theory 2018, 50, 549–553.
132. Vora, D.R.; Iyer, K.R. Deep Learning in Engineering Education: Performance Prediction Using Cuckoo-Based Hybrid Classification. In Machine Learning and Deep Learning in Real-Time Applications; Mahrishi, M., Hiran, K.K., Meena, G., Sharma, P., Eds.; IGI Global: Hershey, PA, USA, 2020; Chapter 9; pp. 187–218.
133. Moro, C.; Štromberga, Z.; Raikos, A.; Stirling, A. The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anat. Sci. Educ. 2017, 10, 549–559.
134. Ma, W.; Adesope, O.O.; Nesbit, J.C.; Liu, Q. Intelligent tutoring systems and learning outcomes: A meta-analysis. J. Educ. Psychol. 2014, 106, 901–918.
Figure 1. Examples of commercial stations for surgery and training: (a) Canadian Aviation Electronics (CAE) NeuroVR, (b) Simbionix Arthro Mentor, (c) Simbionix TURP Mentor, (d) da Vinci Si Surgical System, and (e) CAE ProMis.
Figure 2. Flowchart of the study selection process.
Figure 3. Distribution of the analyzed papers in terms of their type of use (training, surgery, and planning).
Figure 4. Cluster dendrogram. Hierarchical clustering was performed to generate research groups. Cutting the dendrogram at a height of h = 1.7, each cluster was then examined to identify its principal characteristic.
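To make the clustering step concrete, the sketch below shows one way the grouping described in Figure 4 could be reproduced. It is an illustration under stated assumptions rather than the authors' actual pipeline: the binary feature encoding (built from Table 1 parameters such as CS, FF, SM, DS, AI, G, A, and E), the Ward linkage, and the three example rows are all hypothetical; only the cut height h = 1.7 comes from the caption.

```python
# Hierarchical clustering sketch: group reviewed systems by their feature
# flags and cut the dendrogram at h = 1.7 (the value reported in Figure 4).
# The encoding, example rows, and Ward linkage are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage

# Hypothetical binary vectors, one per system, in the column order
# CS, FF, SM, DS, AI, G, A, E of Table 1.
features = np.array([
    [0, 1, 1, 1, 1, 1, 1, 1],  # e.g., a non-commercial trainer with full tooling
    [1, 0, 0, 0, 0, 0, 0, 1],  # e.g., a commercial system that only evaluates
    [1, 1, 1, 1, 0, 1, 1, 1],  # e.g., a commercial haptic trainer
])

Z = linkage(features, method="ward")               # build the hierarchy
groups = fcluster(Z, t=1.7, criterion="distance")  # cut the tree at h = 1.7
print(groups)                                      # cluster label per system

dendrogram(Z)                                      # plot resembling Figure 4
plt.show()
```

Cutting at a fixed height trades granularity against interpretability: a lower h yields more, smaller groups, while a higher h merges systems with fewer shared characteristics.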
Figure 5. Distribution of the analyzed papers in terms of the "commercial solution" (CS) parameter.
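Figures 3 and 5 are frequency summaries over two columns of the comparison table. The snippet below is a minimal, hypothetical reconstruction of that tally using pandas; the miniature DataFrame stands in for the full Table 1 and is not the authors' data.

```python
# Tally sketch for Figures 3 and 5: count papers per purpose and per
# commercial-solution flag. The rows below are placeholders, not the
# complete data set of Table 1.
import pandas as pd

rows = [
    {"article": "[49]", "purpose": "Training", "commercial": True},
    {"article": "[50]", "purpose": "Surgery",  "commercial": True},
    {"article": "[53]", "purpose": "Planning", "commercial": True},
    {"article": "[80]", "purpose": "Training", "commercial": False},
]
df = pd.DataFrame(rows)

print(df["purpose"].value_counts())     # distribution behind Figure 3
print(df["commercial"].value_counts())  # distribution behind Figure 5
```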
Table 1. Comparison between applications: guidance and evaluation aspects. PA: Physical Application, VS: Virtual Simulation, CS: Commercial Solution, FF: Force Feedback, SM: Stored Metadata, DS: Database Storage, AI: Artificial Intelligence, G: Guidance, A: Assistance, and E: Evaluation.

| Area | Article | Purpose | Type | CS FF SM DS AI G A E |
| --- | --- | --- | --- | --- |
| Arthroscopy | [49] | Training | VS | XXXX XXX |
|  | [50] | Surgery | PA | X X |
|  | [51] | Surgery | PA | X X |
|  | [52] | Surgery | PA | X X |
|  | [53] | Planning | VS | X X |
|  | [54] | Training | PA | X X X |
| Dentistry | [55] | Training | VS | X XX |
|  | [56] | Training/Planning | PA | X XXX |
|  | [48] | Surgery/Planning | PA | X X |
|  | [57] | Training | PA | X X |
|  | [58] | Training | VS | XXXX XXX |
| Endoscopy | [59] | Training | VS | XXX XXX |
|  | [60] | Training | VS | X XX XXX |
|  | [61] | Training | VS | XXXXXX |
|  | [62] | Training | VS | XX X |
|  | [63] | Training | PA | X X |
|  | [64] | Surgery | PA | X X |
|  | [65] | Surgery | PA | XX |
|  | [66] | Surgery | VS | X X |
| Laparoscopy | [67] | Training | VS | X X |
|  | [68] | Training | VS | X XXX |
|  | [69] | Training | VS | X XX X |
|  | [70] | Training | VS | X XX |
|  | [71] | Training | VS | XXX XX |
|  | [72] | Training | VS | X XX |
|  | [73] | Training | VS | X X X |
|  | [74] | Training | VS | XX XXX |
|  | [75] | Training | VS | XXXX X |
|  | [76] | Training | VS | XXXX X X |
|  | [77] | Training | VS | XXXX X |
|  | [78] | Training | PA | X X |
|  | [79] | Training | PA | X X |
|  | [80] | Training | VS | XXXXXXX |
| Ophthalmology | [81] | Training | VS | XX X |
|  | [82] | Training | VS | X |
|  | [83] | Training | VS | XX |
|  | [84] | Training | VS | X XX XX |
|  | [85] | Training | PA | X X |
| Orthopedics | [86] | Surgery | PA | X |
|  | [87] | Surgery | PA | XX X |
|  | [88] | Surgery | PA | X X |
|  | [89] | Surgery | PA | X XX |
|  | [47] | Training | VS | X X |
|  | [90] | Surgery | PA | X X |
| ENT procedures | [91] | Training | VS | XX X |
|  | [92] | Training | VS | XXX XX |
|  | [93] | Training | VS | XX XXX |
|  | [94] | Training | PA | X |
| Pediatrics | [95] | Surgery | PA | X XX |
|  | [96] | Training | PA | X |
| Radiology | [97] | Training | VS | XX X |
|  | [45] | Training/Planning | VS | XX |
| Open Surgery | [98] | Training | VS | XXXX XX |
|  | [99] | Training | VS | XX X |
|  | [100] | Training | VS | X XXX |
|  | [101] | Training | PA | X |
|  | [46] | Surgery | PA | XX |
|  | [102] | Training | VS | XXX X |
|  | [103] | Training | VS | X X |
| Neurosurgery | [104] | Training | VS | XX XX |
|  | [105] | Training | VS | X X |
|  | [106] | Training | VS | X |
| Endovascular Procedures | [107] | Training | VS | X XX |
|  | [108] | Training | VS | X X |
| Urology | [109] | Training | VS | XXX XX |
| Colorectal Procedures | [110] | Training | VS | XX XX |
|  | [111] | Training | PA | X XX X X |
Table 2. Clusters generated according to the evaluation aspects. PA: Physical Application, VS: Virtual Simulation, CS: Commercial Solution, FF: Force Feedback, SM: Stored Metadata, DS: Database Storage, AI: Artificial Intelligence, G: Guidance, A: Assistance, and E: Evaluation.

| Cluster Number | Article | Purpose | Type | CS FF SM DS AI G A E |
| --- | --- | --- | --- | --- |
| Cluster 1 | [80] | Training | VS | XXXXXXX |
|  | [61] | Training | VS | XXXXXX |
|  | [71] | Training | VS | XXX XX |
| Cluster 2 | [111] | Training | PA | X XX X X |
|  | [60] | Training | VS | X XX XXX |
|  | [84] | Training | VS | X XX XX |
|  | [75] | Training | VS | XXXX X |
|  | [76] | Training | VS | XXXX X X |
|  | [59] | Training | VS | XXX XXX |
|  | [49] | Training | VS | XXXX XXX |
|  | [58] | Training | VS | XXXX XXX |
|  | [109] | Training | VS | XXX XX |
|  | [77] | Training | VS | XXXX X |
|  | [98] | Training | VS | XXXX XX |
| Cluster 3 | [51] | Surgery | PA | X X |
|  | [87] | Surgery | PA | XX X |
|  | [100] | Training | VS | X XXX |
|  | [74] | Training | VS | XX XXX |
|  | [93] | Training | VS | XX XXX |
|  | [92] | Training | VS | XXX XX |
|  | [104] | Training | VS | XX XX |
|  | [96] | Training | PA | X |
|  | [65] | Surgery | PA | XX |
|  | [45] | Training/Planning | VS | XX |
|  | [99] | Training | VS | XX X |
|  | [78] | Training | PA | X X |
|  | [79] | Training | PA | X X |
|  | [102] | Training | VS | XXX X |
|  | [81] | Training | VS | XX X |
|  | [91] | Training | VS | XX X |
|  | [70] | Training | VS | X XX |
|  | [63] | Training | PA | X X |
|  | [67] | Training | VS | X X |
|  | [82] | Training | VS | X |
|  | [106] | Training | VS | X |
|  | [105] | Training | VS | X X |
|  | [108] | Training | VS | X X |
| Cluster 4 | [56] | Training/Planning | PA | X XXX |
|  | [72] | Training | VS | X XX |
|  | [97] | Training | VS | XX X |
|  | [110] | Training | VS | XX XX |
| Cluster 5 | [68] | Training | VS | X XXX |
|  | [69] | Training | VS | X XX X |
|  | [57] | Training | PA | X X |
|  | [73] | Training | VS | X X X |
|  | [89] | Surgery | PA | X XX |
|  | [95] | Surgery | PA | X XX |
|  | [62] | Training | VS | XX X |
|  | [54] | Training | PA | X X X |
|  | [90] | Surgery | PA | X X |
|  | [88] | Surgery | PA | X X |
|  | [64] | Surgery | PA | X X |
|  | [48] | Surgery/Planning | PA | X X |
|  | [53] | Planning | VS | X X |
|  | [50] | Surgery | PA | X X |
|  | [52] | Surgery | PA | X X |
|  | [55] | Training | VS | X XX |
|  | [107] | Training | VS | X XX |
|  | [103] | Training | VS | X X |
|  | [47] | Training | VS | X X |
|  | [66] | Surgery | VS | X X |
|  | [85] | Training | PA | X X |
|  | [83] | Training | VS | XX |
|  | [46] | Surgery | PA | XX |
|  | [101] | Training | PA | X |
|  | [86] | Surgery | PA | X |
|  | [94] | Training | PA | X |
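The per-cluster characterization behind Table 2 and Figure 4 could likewise be automated. The helper below is a hypothetical sketch, not taken from the paper: it reports, for each cluster, the feature flags present in more than half of its member systems.

```python
# Hypothetical cluster-characterization helper: report the feature flags
# shared by a majority of each cluster's systems. Inputs would come from
# the encoding and fcluster call sketched after Figure 4.
import numpy as np

def dominant_features(features, labels, names):
    """Return {cluster: [feature names set in >50% of its members]}."""
    summary = {}
    for c in np.unique(labels):
        share = features[labels == c].mean(axis=0)  # per-flag frequency
        summary[c] = [n for n, s in zip(names, share) if s > 0.5]
    return summary

names = ["CS", "FF", "SM", "DS", "AI", "G", "A", "E"]
features = np.array([[0, 1, 1, 1, 1, 1, 1, 1],
                     [1, 0, 0, 0, 0, 0, 0, 1],
                     [1, 1, 1, 1, 0, 1, 1, 1]])
labels = np.array([1, 2, 1])
print(dominant_features(features, labels, names))
```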
