Clinical Medical Education Utilizing High Fidelity Mannequins or Standardized Patients
A Research Review
Background: Learning and training using simulation have become a normal part of healthcare education. Researchers are assessing the use of standardized patients in relation to improving learning outcomes and creating a more realistic simulation environment.
Purpose: The purpose of this descriptive research review was to analyze the research available to determine if the use of standardized patients produces better outcomes in more realistic simulations than high fidelity simulation mannequins.
Method: A descriptive literature review was conducted utilizing five research articles investigating the use of standardized patients compared to high fidelity mannequins in clinical medical education and continuing education.
Results: Four of the five research articles reviewed in this paper found that using standardized patients in simulation improved the knowledge gained, the participants' level of confidence, and the realism of the simulation. One article found that the participants did not have a preference between a standardized patient and a mannequin, but that the modality used should be based on the desired learning outcomes of the simulation. That study did note that the standardized patient enhanced the interactions between the participants and the patient.
Conclusion: The majority of the articles showed that standardized patients have a positive influence on participants' experience during simulations. Mannequins remain good learning devices, especially for invasive procedures, but standardized patients add another level of realism to the simulation. More research still needs to be conducted, because many of the current studies were completed on small populations; even so, these small studies can help educators understand how important the use of standardized patients is.
Clinical Medical Education Utilizing High Fidelity Mannequins or Standardized Patients: A Research Review
Healthcare professionals and students traditionally use simulation mannequins to learn and practice their clinical skills. The fidelity of mannequins can range from low to high, but the cost also increases significantly with the level of fidelity: a simulation mannequin can cost anywhere from $27,000 to over $100,000 (“High-Fidelity Nursing Education,” 2015; HealthLeaders, 2009). While mannequins are useful for performing skills on, can produce different bodily sounds, and are well suited for teaching assessments, they lack the ability to create a completely realistic simulation scenario for the healthcare professional or student. Mannequins can be set to specific settings for different teaching scenarios, and if the person performing the simulation makes a mistake, the mannequin may “die”; the instructor can then use that as a learning opportunity to teach the student or healthcare professional what was done wrong in a safe environment, without anyone getting hurt or actually dying (“The Ups and Downs of the Simulation Spread in Nursing Ed,” 2015). Because these mannequins are not real, they lose a sense of reality, and if something happens in a simulation scenario that breaks the sense of fidelity, it is very hard to regain (“The Ups and Downs of the Simulation Spread in Nursing Ed,” 2015). Mannequins can also be high maintenance. They need to be stored in specific ways to prevent them from breaking or being damaged, because just as their initial cost is high, they are also very expensive to repair and to buy replacement parts for if something does happen to them (“The Ups and Downs of the Simulation Spread in Nursing Ed,” 2015).
On the other hand, standardized patients are much cheaper, as they are either volunteers or paid by the hour. They can also carry on full, everyday conversations with the learning participant and react appropriately in different situations, unlike mannequins, which may have preprogrammed responses but do not change their expression. The downside to standardized patients is that, as live people, they cannot have programmed blood pressures or specific breath sounds such as crackles or rhonchi the way mannequins can. There is also a drawback when it comes to performing invasive procedures on them: some programs allow participants to perform certain invasive procedures on standardized patients, while others do not allow any invasive procedure to be performed.
The research question evaluated in this literature review is: In healthcare professionals' education and continuing education, are standardized patient actors more effective than high-fidelity simulation mannequins for learning and for simulating realistic situations? The purpose of this descriptive research review is to evaluate the results of studies to determine if the utilization of standardized patients has more of an effect on the learning of medical professionals and students than high-fidelity simulation mannequins.
This literature review was designed and organized to assess five research articles that studied the effects on realism and knowledge gained of using standardized patients compared to high-fidelity simulation mannequins during simulations. All of the articles were peer-reviewed studies published between 2009 and 2018. The research articles were obtained from the ProQuest, PubMed Central, and Elsevier databases. The keywords standardized patient, mannequin, simulation, and high-fidelity mannequin were used in internet search engines.
The evidence-based research study completed by Alsaad, Davuluri, Bhide, Lannen, and Maniaci (2017) demonstrates that the utilization of standardized patients in healthcare professional education created a more realistic simulation scenario and a better understanding of the knowledge gained during the simulation. The problem addressed by this study is that “Most of these comparison studies focus on either procedural skills or communication and teamwork skills; there is a lack of material comparing simulation mannequins to SPs in teaching medical learners clinical reasoning skills in scenarios involving decompensating patients” (Alsaad et al., 2017, p. 482). The purpose of the study was “to compare residents’ performance in management of four scenarios depicting patient clinical deterioration utilizing either a high-fidelity simulation mannequin or SP” (Alsaad et al., 2017, p. 482).
The researchers chose a “prospective randomized cohort study” (Alsaad et al., 2017, p. 482). This design was chosen because all of the participants were from the same residency placement but were randomly placed into the intervention groups for the study. The study was conducted at the J. Wayne and Delores Barr Weaver Simulation Center at Mayo Clinic in Jacksonville, Florida, and consisted of 19 internal medicine residents in the second and third years of their clinical rotations who volunteered for the study (Alsaad et al., 2017, p. 482). The residents were randomly assigned to their groups: nine in the simulation mannequin group and ten in the standardized patient group (Alsaad et al., 2017, p. 482). Each resident completed the same four decompensating-patient scenarios utilizing their assigned modality (Alsaad et al., 2017, p. 482). The residents completed the same five-question medical knowledge test directly before and after each simulation scenario and rated the realism of the simulation on a scale questionnaire (Alsaad et al., 2017, p. 482). This was a very small sample size, and it may not be representative of the larger population of second- and third-year internal medicine residents. Despite the cohort being small, the decision to place the residents into the two groups randomly benefits the study because it eliminated the possibility of the researchers knowing or having a say in which residents were in which group. Alsaad et al. (2017, p. 483) used the results from the five-question pre- and post-knowledge tests, together with a 5-point Likert scale questionnaire rating the realism of the simulations, as the data collected for the study. They did not report validity or reliability tests completed on the data that was collected.
Alsaad et al. (2017, p. 483) used a Mann-Whitney U-test to compare the unevenly distributed groups. They also used a standard t-test on the data collected from the four different simulation scenarios, with a P-value of <0.05 considered statistically significant. The Mann-Whitney U-test is a “non-parametric test that is used to compare two sample means that come from the same population, and used to test whether two sample means are equal or not” (Statistics Solutions, 2019). This analytical approach is consistent with the research question and design. Alsaad et al. (2017, p. 483) determined the average test score for each group before and after each scenario, noted the average amount of improvement from the pre-test to the post-test for each simulation scenario, and noted the average realism score given to each scenario by each group. The researchers explained their results in the text, in tables, and through vertical bar graphs. The data shows that both groups had equivalent scores on the pre-test knowledge exams (Alsaad et al., 2017, p. 483). For every scenario, the standardized patient group had a higher post-test score, and all except one of those scenario scores reached statistical significance (Alsaad et al., 2017, p. 483). The internal medicine residents also rated the standardized patient scenarios as more realistic than the mannequin simulations (Alsaad et al., 2017, p. 483).
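To illustrate the kind of rank-based comparison the Mann-Whitney U-test performs, the following is a minimal pure-Python sketch of the U statistic. It is an illustration only, not the researchers' actual analysis (the study would have used standard statistical software), and the post-test scores below are invented for the example, not taken from the study.

```python
def mann_whitney_u(group_a, group_b):
    """Return the Mann-Whitney U statistic for two independent samples."""
    combined = sorted(group_a + group_b)
    # Assign average ranks so that tied values share the same rank.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # ranks are 1-based
        i = j
    r_a = sum(ranks[v] for v in group_a)
    n_a, n_b = len(group_a), len(group_b)
    u_a = r_a - n_a * (n_a + 1) / 2
    u_b = n_a * n_b - u_a
    return min(u_a, u_b)

# Invented post-test scores: mannequin group (n = 9), SP group (n = 10),
# mirroring the study's uneven group sizes.
mannequin = [3, 3, 4, 2, 3, 4, 3, 2, 4]
sp = [4, 5, 4, 5, 3, 4, 5, 4, 5, 4]
print(mann_whitney_u(mannequin, sp))  # prints 12.5
```

A small U (relative to the maximum of n_a × n_b) indicates that one group's scores tend to rank consistently above the other's, which is the pattern the study reported in favor of the standardized patient group.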
Alsaad et al. (2017, p. 484) acknowledge that there were a few limitations to their study. They state that they used only a single academic medical center and that these simulations may not be applicable to other simulation settings (Alsaad et al., 2017, p. 484). They also recognize that the population size of 19 residents was very small and that, if this study is replicated, a larger population would help increase its validity (Alsaad et al., 2017, p. 484). Another limitation that Alsaad et al. (2017, p. 485) acknowledged was that the simulations were completed in a series: there was a three-month gap between the mannequin group and the standardized patient group completing their simulations (Alsaad et al., 2017, p. 485). The researchers tried to eliminate the sharing of information by asking all participants not to discuss anything that took place (Alsaad et al., 2017, p. 485). The residents were also aware of what was going to happen during the simulation, based on some of the questions from the pre-tests, rather than having to figure it out on their own during the simulation (Alsaad et al., 2017, p. 485). This evidence-based research study helps show that the use of standardized patients in simulations provides better learning and a more realistic simulation scenario for the healthcare professionals involved.
Claudius, Kaji, Santillanes, Cicero, Donofrio, Gausche-Hill, Srinivasan, and Chang (2015) identified the problem of their evidence-based research study as follows: “Medical student triage accuracy and time to triage for computer-based simulated victims and live moulaged actors using the pediatric version of the Simple Triage and Rapid Treatment (JumpSTART) mass-casualty triage tool were compared, anticipating that student performance and experience would be equivalent” (Claudius et al., 2015, p. 438). The researchers identified the purpose of the study to be that there had not been any research comparing computerized simulation to standardized patients in the area of pediatric mass casualties (Claudius et al., 2015, p. 439). They wanted to compare the “triage accuracy, time, and fidelity using the child version of the Simple Triage and Rapid Treatment (JumpSTART) triage algorithm” (Claudius et al., 2015, p. 439) when using the computer simulator compared to the standardized patient.
Claudius et al. (2015) do not specify what design they used for this study; it fits the definition of a correlation study, which is “a research design to quantify the strength and the direction of the relationship of two variables in a single subject or the relationship between a single variable in two samples” (Houser, 2018, p. 143). This study correlates the accuracy and time taken to triage pediatric mass-casualty victims using computer simulations or live patient actors. Claudius et al. (2015) recruited their convenience sample of 33 participants from a pre-clinical Emergency Medicine interest group at a single university. This group was considered a convenience sample because it is “a sample that includes subjects who are available conveniently to the researcher” (Houser, 2018, p. 480). Since this group was already interested in emergency medicine, the participants may already have had increased knowledge of the information being examined; thus, the results of this study may not be representative of the larger population. The sample size is also very small, and the researchers did not explain their rationale for using such a small group, but it allows future studies to use a larger population and determine whether the same results are achieved.
The live-actor scenarios were monitored by a board-eligible or board-certified pediatric emergency medicine physician to observe for accuracy and to track the time (Claudius et al., 2015, p. 439). The computer simulations were developed by the researchers. Surveys were also used to “assess students’ impressions of the fidelity of the drill scenario” (Claudius et al., 2015, p. 439). Accuracy was measured by counting unnecessary or missed actions according to the JumpSTART algorithm, and a stopwatch measured the time. The computerized scenarios were managed by a software system that tracked the time and actions completed during the simulations (Claudius et al., 2015, p. 440). One of the simulations had to be eliminated from the computerized scenario data because there was an error in the data tracking, and four of the seven computerized cases had inaccurate durations (Claudius et al., 2015, p. 440). All the scenarios for both the live actors and the computerized simulations were derived from actual patient charts from a level 1 trauma center (Claudius et al., 2015, p. 439). The researchers did not report the validity and reliability of either the computerized scenarios that they developed or the surveys that the participants completed.
Claudius et al. (2015, p. 440) used a Wilcoxon rank-sum test to compare the median times taken to correctly triage the patients, noting that a “95% confidence interval of the difference was estimated using the Hodges-Lehman estimate” (Claudius et al., 2015, p. 440). The Wilcoxon rank-sum test compared the computerized scenarios that did not experience technical problems with the three live-actor scenarios that matched them in the level of triage needed (Claudius et al., 2015, p. 440). A Mantel-Haenszel chi-square was used to compare triage accuracy between the modalities and report any relative risk (Claudius et al., 2015, p. 440). The researchers used a Kruskal-Wallis test to assess the number of missed or unnecessary actions taken and the stress levels during each of these simulations (Claudius et al., 2015, p. 440). The researchers also chose a regression analysis utilizing generalized estimating equations to look for the effect of the computerized simulations versus the live-actor simulations; this model allowed for adjustments for each participant, cast, triage acuity, and the number of missed or unnecessary actions.
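The Kruskal-Wallis test mentioned above extends the rank-sum idea to more than two groups. The following is a hedged pure-Python sketch of the H statistic, not the researchers' actual code: it ignores the tie correction that full implementations apply, and the counts of missed actions for the three hypothetical groups are invented.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent groups.

    Simplified sketch: assumes no tied values, so each observation
    receives a unique integer rank; real analyses apply a tie correction.
    """
    combined = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(combined)}
    n = len(combined)
    rank_term = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * rank_term - 3 * (n + 1)

# Invented counts of missed actions for three hypothetical triage acuities.
print(round(kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9]), 3))  # prints 7.2
```

A larger H indicates that the groups' ranks separate more cleanly, which is what the test checks when comparing missed or unnecessary actions across simulation modalities.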
The researchers had the 33 participants complete a total of 363 simulations: 231 were computerized and 132 used the live actor (Claudius et al., 2015, p. 440). They report the times it took to correctly triage each patient and the number of missed or unnecessary actions taken, both in the text and in tables, pictures, and charts that are clear and concise. They explain that the computer simulations had more missed and unnecessary actions than the simulations utilizing the live actors (Claudius et al., 2015, p. 441). They also explain that the amount of time it took for the live actors to be correctly triaged was significantly less in all the cases except the immediate-level cases, in which the computerized simulations were completed faster (Claudius et al., 2015, p. 441). The researchers also learned from the participants that the moulaged actors offered a more realistic encounter and a more stressful one, because of the realism of the situation compared to the computer simulations.
Claudius et al. (2015, p. 442) identified several limitations in their study, including the limited number of participants. They had only 33 students who completed all of the assigned scenarios, which is not a very large population for this study (Claudius et al., 2015, p. 442). A larger population would help provide more support for the conclusions drawn from this study. They also noted that the results might not apply to first responders or others who have prior triage training (Claudius et al., 2015, p. 442). Long-term knowledge gained was not assessed in this study; only the immediate knowledge gained was tested directly after the simulation (Claudius et al., 2015, p. 442). The live actors were not the age of the actual patients they were portraying in the scenarios (Claudius et al., 2015, p. 442); the researchers wanted to maintain consistency from one simulation to another, so they used adult standardized patients to portray the pediatric patients. The last limitation identified in this study is that the computerized scenarios were static and did not use full virtual-reality capabilities; thus, they could be improved in the future (Claudius et al., 2015, p. 442). While this study does not directly compare a standardized patient to a high-fidelity mannequin, it does emphasize the importance of the standardized patient and once again shows that standardized patients provide a more realistic simulation atmosphere and allow for more accurate learning experiences.
The evidence-based research study completed by Wisborg, Bratteo, Brinchmann-Hansen, and Hansen (2009) evaluated how trauma teams from different hospitals performed using two different simulation modalities for training purposes. The researchers state that the problem of the study is that “There is little knowledge about the educational outcome of team training with different patient models” (Wisborg et al., 2009, p. 2). They identified the purpose of the study as follows: “… we wanted to examine the participants’ assessment of their educational outcome after training with either a standardized patient or a simple resuscitation mannequin” (Wisborg et al., 2009, p. 2).
Wisborg et al. (2009) used a mixed-methods approach to construct the design of this study, combining a descriptive correlation design and a descriptive focus group design. The descriptive correlation design allows the researchers to “describe the strengths and nature of the relationship between two variables without explaining the underlying cause of that relationship” (Houser, 2018, p. 275). The descriptive focus group design allows the researchers to “have a qualitative interview with a small group who have been specifically selected to represent a target audience” (Houser, 2018, p. 214). This study correlates the responses of the trauma team participants with the use of the mannequin or the standardized patient (Wisborg et al., 2009, p. 2).
Wisborg et al. (2009, p. 3) used two trauma teams from each of five different hospitals; in total, 104 trauma team members participated in the simulations, comprising 32 doctors, 53 nurses, and 19 radiographers and lab technicians (Wisborg et al., 2009, p. 2). Fifty-one of the participants completed the simulation using the standardized patient first, while 53 completed the simulation using the mannequin first (Wisborg et al., 2009, p. 2). The teams from each of the five hospitals were randomly assigned to the two different modalities, and each team was comprised of the staff required by each hospital's policy. After each simulation, the individual participants answered anonymous questionnaires assessing the degree of realism and whether they felt embarrassed treating the patient in that simulation (Wisborg et al., 2009, p. 2). Then each team had a 30-minute structured debriefing (Wisborg et al., 2009, p. 2). The validity and reliability of the questionnaires and the debriefing were not stated in the article.
Wisborg et al. (2009, p. 2) analyzed their data using SPSS 11.0. They used a t-test and a one-way analysis of variance with Bonferroni’s correction to compare the means of the data gathered, and a chi-square test to compare the frequencies of answers (Wisborg et al., 2009, p. 2). A p-value of less than 0.05 was considered statistically significant, and the means and standard deviations were compiled into tables (Wisborg et al., 2009, p. 2). The focus group discussions were analyzed using a grounded theory approach (Wisborg et al., 2009, p. 2).
The researchers found that the participants either preferred the standardized patient or liked the mannequin and the standardized patient equally (Wisborg et al., 2009, p. 3). They noted that there was a significant difference between the groups in their ratings of preference between the modalities. Wisborg et al. (2009, p. 3) used a modality preference scale on which 0 meant the participant preferred the mannequin, 50 meant the participant preferred both equally, and 100 meant the participant preferred the standardized patient. The groups exposed to the standardized patient first had an average rating of 60, meaning that they were not strongly swayed one way or the other (Wisborg et al., 2009, p. 3). The groups exposed to the mannequin first rated their preference as 76, meaning that they preferred the standardized patient over the mannequin (Wisborg et al., 2009, p. 3). The results of the realism and embarrassment ratings are stated in table two (Wisborg et al., 2009, p. 3). The researchers explain that both groups felt each simulation was equally realistic, rating it as 53 on their scale. The groups rated their level of embarrassment as 31 in the mannequin-first group and 27 in the standardized-patient-first group. The researchers concluded that “the teams did not experience any embarrassment as to using a standardized patient versus a mannequin” (Wisborg et al., 2009, p. 3).
The researchers, Wisborg, Bratteo, Brinchmann-Hansen, and Hansen, do not have a specific limitations section in their research article. They state that other research groups have done similar case studies but compared high- and low-fidelity mannequins instead of using standardized patients (Wisborg et al., 2009, p. 2). Those groups found that the higher-fidelity mannequins produced better outcomes than the lower-fidelity mannequins, but they also cost much more than the simple low-fidelity mannequins (Wisborg et al., 2009, p. 2). The limitation presented by the researchers is that they were not able to use high-fidelity mannequins to compare to the standardized patients. This study explains that there is “little difference between a simple resuscitation mannequin and a standardized patient when the educational goal is multiprofessional team training with an emphasis on team communication, leadership, and co-operation” (Wisborg et al., 2009, p. 3). This study shows that the purpose of the simulation should determine which simulation modality is used.
Basak, Aciksoz, Unver, and Aslan (2018) performed a study using pre-clinical nursing students to understand how using a standardized patient to teach the skill of performing hygiene care affected the students’ confidence when they went to clinicals and had to perform this skill on their patients. The problem identified by this study was that “In the study setting, conventional hygiene skills are taught to students in a safe and controlled laboratory environment using low-fidelity simulators… Even if students have perfected these skills in the laboratory environment, they still experience anxiety and difficulty in the actual clinical environment” (Basak, Aciksoz, Unver, and Aslan, 2018, p. 49). The researchers identified the purpose and hypotheses as follows:
“The aim of this study was to compare the effect of standardized patients and the use of low-fidelity mannequins in teaching hygiene care. The study aimed to test the following hypotheses: 1 The performance scores of the students in the standardized patient group regarding hygiene care are higher than the scores of students in the low-fidelity mannequin group. 2 The Student Satisfaction and Self-Confidence Scale (SSSC) and Simulation Design Scale (SDS) scores of the standardized patient group are higher than those of the low-fidelity mannequin group. 3 The self-efficacy scores of the students in the standardized patient group in hygiene care in the clinical environment are higher than those of the students in the low-fidelity mannequin group.” (Basak, Aciksoz, Unver, and Aslan, 2018, p. 50).
This study is an example of a “randomized control study” (Basak, Aciksoz, Unver, and Aslan, 2018, p. 50), defined as “an experiment in which subjects are randomly assigned to groups; one receives an experimental treatment while another serves as a control group. The experiment has high internal validity, so the researchers can draw conclusions regarding the effects of treatment” (Houser, 2018, p. 486). The study used 80 first-year nursing students at a nursing school in Turkey (Basak, Aciksoz, Unver, and Aslan, 2018, p. 50). The students had to volunteer to participate and attend theoretical lectures on hygiene education (Basak, Aciksoz, Unver, and Aslan, 2018, p. 50). Forty students were randomly assigned to each of the two groups: one group used the standardized patient, while the other used the low-fidelity mannequin (Basak, Aciksoz, Unver, and Aslan, 2018, p. 50). This was an ample sample size for an initial study. The data was collected by having the students complete a “Students’ satisfaction and self-confidence scale,” which consisted of 12 points of self-assessment and used a 5-point Likert-type scale (Basak, Aciksoz, Unver, and Aslan, 2018, p. 50). The students were also graded on their performance of hygiene care using a checklist containing 16 steps; they were awarded points based on whether they performed each step and how well it was done (Basak, Aciksoz, Unver, and Aslan, 2018, p. 50). The researchers also prepared a clinical practice feedback form so that the students could answer ten questions and express their feelings about the simulation (Basak, Aciksoz, Unver, and Aslan, 2018, p. 50). The completion of all of these assessment tools and scales allowed the researchers to obtain a full understanding of how each simulation modality affected the learning and confidence of the students.
The data was analyzed using the Statistical Package for the Social Sciences for Windows, version 17.0. The normality of the data was tested using the Kolmogorov-Smirnov test. The descriptive data was analyzed by calculating the “arithmetic mean and standard deviation, minimum-maximum, frequency, percentage” (Basak, Aciksoz, Unver, and Aslan, 2018, p. 52). The results were compared using a Mann-Whitney U test, Fisher’s exact test, and Spearman’s rank correlation, with a p-value of <0.05 considered statistically significant (Basak, Aciksoz, Unver, and Aslan, 2018, p. 52). The researchers found that the control group, who used the mannequin, tended to score lower than the intervention group, who used the standardized patient, on the Simulation Design Scale and the Students’ Satisfaction and Self-Confidence Scale (Basak, Aciksoz, Unver, and Aslan, 2018, p. 52). The students who used the mannequins also scored lower on their actual performance of hygiene care than the students who used the standardized patient (Basak, Aciksoz, Unver, and Aslan, 2018, p. 52). The students who used the standardized patient also reported feeling more confident than the mannequin group when performing these skills in the clinical environment (Basak, Aciksoz, Unver, and Aslan, 2018, p. 53). The standardized patient group also experienced fewer physical reactions, such as tremors or palpitations, and felt more comfortable and confident when performing hygiene care, whereas the mannequin group experienced more tremors, facial flushing, and palpitations and did not feel as comfortable (Basak, Aciksoz, Unver, and Aslan, 2018, p. 53).
Basak, Aciksoz, Unver, and Aslan identify the use of only one nursing school as a limitation of this study. They also note that the skill performance of the students was not evaluated during actual clinical practice; they were only able to obtain the students’ feedback concerning their experiences. This study exemplifies why it is important to use a standardized patient when learning basic skills: it demonstrates how standardized patients help relax students and build their confidence in performing their skills.
Tuzer, Dinc, and Elcin (2016) conducted a study comparing the use of high-fidelity simulators to standardized patients when teaching undergraduate nursing students. The researchers identify the problem of the study to be that “There is a growing body of literature that reveals the educational value of various simulation modalities; however, few studies have compared the effectiveness of high-fidelity simulation and standardized patients” (Tuzer, Dinc, and Elcin, 2016, p. 121). The researchers stated that the purpose of the study was to “compare the effectiveness of two simulation techniques on specific physical examination skills of nursing students” (Tuzer, Dinc, and Elcin, 2016, p. 121). The study used a mixed-methods explanatory sequential design, containing a quantitative part followed by a qualitative part. The quantitative aspect assessed the students’ knowledge and abilities in thorax, lung, and cardiac examinations across both high-fidelity simulators and standardized patients (Tuzer, Dinc, and Elcin, 2016, p. 122). In the qualitative component, the students were asked to elaborate on the quantitative results (Tuzer, Dinc, and Elcin, 2016, p. 122).
In this study, the researchers used convenience sampling and chose 52 nursing students who were enrolled in a physical examination course at a university (Tuzer et al., 2016, p. 122). The students were then split into two equal groups and randomly assigned to either a standardized patient or a high-fidelity simulator. The convenience sample allowed the researchers to conduct their research on a preformed group instead of having to find and establish their own. Several forms, assessments, and examinations were used to collect data in this study. The first form used was the Debriefing Form, which asked short-response questions to gauge what knowledge the students gained and what they felt throughout the simulations. The other form used was the Focus Group Form, which asked for the students’ assessment of the learning environment and the processes used. Students were also given an exam entitled Evaluating the Level of Knowledge on Thorax, Lung, and Cardiac Examination, which assessed their knowledge before and after the simulations. Additional data collection instruments were the Skills Assessment and the Cardiac Examination Skills Assessment, which used the scale “Not observed = 0,” “Insufficient/Mistaken = 1,” or “Correct/Complete = 2” (Tuzer et al., 2016, p. 122). The data collection process was efficient and well thought out. In the article, it is stated that “In order to ensure the content validity of the data collection forms, two lecturers in the Fundamentals of Nursing Department of the Nursing Faculty and one physician provided feedback on the forms” (Tuzer et al., 2016, p. 122). Based on the feedback the researchers received regarding the effectiveness of the forms, they made the needed changes.
The data from this study were analyzed using IBM SPSS Statistics for Windows, Version 21.0, a statistical analysis software package. The pre- and post-test scores were converted into percentile values. Tuzer, Dinc, and Elcin (2016) used the Shapiro-Wilk test to evaluate whether the scores conformed to a normal distribution. They used a paired t-test to compare the knowledge and performance of the participants across both education methods, and the independent-samples t-test to evaluate the difference in scores between the groups. The researchers also used audio recordings and grouped the like responses from the participants. The results of the study were broken down into three parts, which looked at the students’ gains in knowledge, skills, and opinions. The researchers found a significant difference in the knowledge gained for both groups between the pre- and post-tests, and the group that utilized the standardized patient produced significantly higher scores than the high-fidelity simulator group. The skills acquisition results showed that both groups had similar scores on the post-simulation assessment test, and both groups scored higher on the Real Patient Assessment than on the Post-Simulation Assessment. The student opinions section was broken down into four parts: training environment, briefing session, debriefing session, and simulation technique. Both groups expressed that the environment and techniques facilitated a positive learning environment. The students also identified that the debriefing session helped them identify their “mistakes, drawbacks, what they forgot to do, remember their deficiencies, construct a permanent body of knowledge, raise awareness, and increased their confidence in performing the application” (Tuzer et al., 2016, p. 123).
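The quantitative tests Tuzer et al. describe can likewise be sketched with SciPy in place of SPSS 21.0: a Shapiro-Wilk normality check, a paired t-test on pre- versus post-test scores, and an independent-samples t-test between groups. All score lists below are hypothetical.

```python
# Sketch of Tuzer et al.'s quantitative analysis using SciPy instead of
# IBM SPSS 21.0. Scores are hypothetical percentile values.
from scipy import stats

# Hypothetical pre/post knowledge-test scores for the standardized-patient group
pre_scores  = [48, 55, 50, 60, 52, 58, 47, 53]
post_scores = [70, 76, 68, 81, 74, 79, 66, 72]

# Shapiro-Wilk test: do the post-test scores conform to a normal distribution?
sw_stat, sw_p = stats.shapiro(post_scores)

# Paired t-test: did the same students improve from pre- to post-test?
t_paired, p_paired = stats.ttest_rel(pre_scores, post_scores)

# Independent-samples t-test: did the two groups' post-test scores differ?
simulator_post = [65, 70, 63, 72, 68, 66, 71, 64]   # hypothetical simulator group
t_ind, p_ind = stats.ttest_ind(simulator_post, post_scores)

print(f"Shapiro-Wilk p = {sw_p:.3f}")
print(f"paired t-test p = {p_paired:.5f}; independent t-test p = {p_ind:.5f}")
```

The paired test is appropriate for pre/post comparisons because each student serves as their own control, while the independent-samples test compares the two unrelated groups.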
The students who used the mannequin stated that they were less anxious and felt like they were learning in a professional environment because they could not actually cause harm to a real patient if they made a mistake (Tuzer et al., 2016, p. 123). They also identified advantages to using the mannequin, such as being able to clearly identify sounds, and because they were not stressed about working with a real patient, they could focus on their assessment (Tuzer et al., 2016, p. 123). The students who used the standardized patient stated that they initially felt stressed but eventually became comfortable with assessing real patients (Tuzer et al., 2016, p. 123). They noted that they could not identify or hear sounds as clearly, but the information that they learned stuck with them (Tuzer et al., 2016, p. 124). Both groups felt that their communication and skills were improved by participating in the simulation experiences (Tuzer et al., 2016, p. 124).
Tuzer, Dinc, and Elcin (2016, p. 125) identified convenience sampling at one institution as a limitation of their study because the sample may not be representative of the larger population. Another limitation was a threat to the internal validity of their findings due to potential test-retest effects: repeated use of the test may have allowed scores and performance to improve. This study supports the use of standardized patients in simulations because they help build students’ confidence before the students must deal with real patients. While standardized patients may not produce sounds as clearly as a simulator can, they are more realistic, and students are better able to remember the information learned from the experience.
The research question posed for this review was: In healthcare professionals’ education and continuing education, are standardized patient actors, compared to high-fidelity simulation mannequins, more effective in learning and simulating realistic situations? Four of the five research articles reviewed explained that standardized patients had a positive impact on the participants’ experience during the simulation (Alsaad et al., 2017; Basak et al., 2018; Claudius et al., 2015; Tuzer et al., 2016). One study, by Wisborg et al. (2009), did not demonstrate a preference by the participants for either the mannequin or the standardized patient. However, this study did note that the simulation modality chosen should be based on the objective of the simulation: if the objective is communication between the provider and patient, then a standardized patient is the better option, but if the objective is to perform an invasive procedure, such as inserting a tracheostomy, then a mannequin is the better and safer option. The five research studies used different designs and methods to produce statistics providing evidence of the relationships between healthcare professionals or students and the amount of learning that occurred and the realism of situations during simulations utilizing different modalities.
In the first article, by Alsaad et al. (2017), a randomized cohort study was performed to determine whether internal medicine residents could identify and treat four decompensating patients and how realistic they felt the simulations were. The article found, “Our study results indicate that utilizing SPs was more effective and better mimicked realistic patient care than the use of a mannequin in acutely decompensating patient scenarios for IM residents” (Alsaad et al., 2017, p. 485). Claudius et al. (2015) assessed the ability of a convenience sample of students to correctly and quickly triage pediatric mass-casualty victims according to the JumpSTART treatment protocols, using adult standardized patients portraying children compared to a computerized simulation. They found, “Among a limited group of pre-clinical medical students, the use of live adult actors portraying children in an MCI allowed for higher JumpSTART triage accuracy and shorter time to triage when compared to computerized scenarios” (Claudius et al., 2015, p. 442). Wisborg et al. (2009) compared trauma teams from five different hospitals in a mixed-method study that combined a descriptive correlation design with a descriptive focus group design. From this study, they found that “the focus group discussions revealed no special preference for either using a mannequin or a standardized patient in general, but a slight preference for using a standardized patient if the simulated patient is supposed to be able to talk and interact with the trauma team. If the patient was supposed to be unconscious, no preference was expressed” (Wisborg et al., 2009, p. 3). They concluded that “there seems to be little difference between a simple resuscitation mannequin and a standardized patient when the educational goal is multiprofessional team training with an emphasis on team communication, leadership, and co-operation” (Wisborg et al., 2009, p. 4). Basak et al. (2018) performed a randomized controlled study in which they assessed first-year nursing students’ performance of providing hygiene care on mannequins versus standardized patients. This study found that “The students in this group (the standardized patient group) also showed less stress-related physical reactions in their first experience in the actual environment and experienced less difficulty with an actual patient during practice, felt more adequate, and transferred the skills they had learned in the laboratory to clinical practice better” (Basak et al., 2018, p. 54). Tuzer et al. (2016) performed a mixed-method explanatory sequential design study in which nursing students performed thorax, lung, and cardiac examinations on high-fidelity simulators compared to standardized patients. They stated, “The results of this study revealed that the use of standardized patients was more effective in increasing the knowledge level of students on thorax, lung, and cardiac examinations than the use of a high-fidelity simulator; however, there was no significant difference in the improvement in performance level between simulation techniques. In contrast, practice on real patients increased performance scores for all students, with no significant differences between groups” (Tuzer et al., 2016, p. 126).
Most of the studies cited small sample sizes or the use of a single cohort, rather than multiple sites and larger populations, as limitations (Alsaad et al., 2017; Basak et al., 2018; Claudius et al., 2015; Tuzer et al., 2016). All of the studies stated that their results might not be transferable because of the specific details and small sizes of their samples (Alsaad et al., 2017; Basak et al., 2018; Claudius et al., 2015; Tuzer et al., 2016; Wisborg et al., 2009). This leaves a lot of room for future, larger studies assessing whether the findings from these smaller studies are transferable to a larger population.
From four of the five articles, it can be determined that standardized patients have a positive influence on healthcare professionals and students in simulation settings. The exception in this research review was the article by Wisborg et al. (2009), as they did not find a significant difference in the participants’ preference between a mannequin and a standardized patient. That study did note, however, that even though the participants did not have a modality preference, they felt that the standardized patient enhanced the simulation by enhancing the interactions between the participants and the patient. The research question posed was: In healthcare professionals’ education and continuing education, are standardized patient actors, compared to high-fidelity simulation mannequins, more effective in learning and simulating realistic situations? Because the majority of the articles exemplified the positive influence that standardized patients have on simulation, it can be concluded that they improve the participant’s experience during a simulation and should continue to be implemented as much as possible. Although much research still needs to be performed, these small studies can help educational institutions and continuing education programs develop simulations using standardized patients to improve the realism of, and educational experiences gained from, simulation. None of the articles demonstrated a negative influence on the participants or a less realistic simulated experience from the use of standardized patients. Thus, the use of standardized patients creates an equal or even better learning experience and a more realistic simulated environment for healthcare professional or student participants and should be utilized as much as possible.
- Alsaad, A. A., Davuluri, S., Bhide, V. Y., Lannen, A. M., & Maniaci, M. J. (2017). Assessing the performance and satisfaction of medical residents utilizing standardized patient versus mannequin-simulated training. Advances in Medical Education and Practice, 8, 481-486. doi:10.2147/AMEP.S134235
- Basak, T., Aciksoz, S., Unver, V., & Aslan, O. (2018). Using standardized patients to improve the hygiene care skills of first-year nursing students: A randomized controlled trial. Collegian, 26(1), 49–54. doi:10.1016/j.colegn.2018.03.005
- Claudius, I., Kaji, A., Santillanes, G., Cicero, M., Donofrio, J. J., Gausche-Hill, M., & Chang, T. P. (2015). Comparison of computerized patients versus live moulaged actors for a mass-casualty drill. Prehospital and Disaster Medicine, 30(5), 438-442. doi:10.1017/S1049023X15004963
- Houser, J. (2018). Nursing research: Reading, using, and creating evidence (4th ed.). Burlington, MA: Jones & Bartlett Learning.
- Statistics Solutions (2019). Mann-Whitney U test. Retrieved from https://www.statisticssolutions.com/?s=mann+whitney+u+test
- Tuzer, H., Dinc, L., & Elcin, M. (2016). The effects of using high-fidelity simulators and standardized patients on the thorax, lung, and cardiac examination skills of undergraduate nursing students. Nurse Education Today, 45, 120–125. doi:10.1016/j.nedt.2016.07.002
- Wisborg, T., Brattebø, G., Brinchmann-Hansen, A., & Hansen, K. S. (2009). Mannequin or standardized patient: Participants’ assessment of two training modalities in trauma team simulation. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine, 17(1), 59. doi:10.1186/1757-7241-17-59