JMIR Publications

JMIR Research Protocols


Published on 17.02.16 in Vol 5, No 1 (2016): Jan-Mar

    Protocol

    Exploring a New Simulation Approach to Improve Clinical Reasoning Teaching and Assessment: Randomized Trial Protocol

    1Sainte-Justine Hospital, Department of Neonatology, University of Montreal, Montreal, QC, Canada

    2Faculty of Education, Department of Administration and Foundations of Education, University of Montreal, Montreal, QC, Canada

    3Centre for Applied Pedagogy in Health Sciences, University of Montreal, Montreal, QC, Canada

    4Faculty of Medicine, Primary Care Unit (UIGP), University of Geneva, Geneva, Switzerland

    Corresponding Author:

    Thomas Pennaforte, MD

    Sainte-Justine Hospital

    Department of Neonatology

    University of Montreal

    3175 Ch de la Côte-Sainte-Catherine

    Montreal, QC, H3T 1C4

    Canada

    Phone: 1 514 965 4652

    Fax: 1 514 865 4652

    Email:


    ABSTRACT

    Background: Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education to assess clinical reasoning skills has rarely been reported. More specifically, it is unclear if clinical reasoning is better acquired if the instructor's input occurs entirely after or is integrated during the scenario. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions without providing feedback.

    Objective: The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical approach of simulation in developing reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents.

    Methods: This will be a prospective, exploratory, randomized study conducted at Sainte-Justine Hospital in Montreal, QC, between January and March 2016. All postgraduate year (PGY) 1 to 6 residents will be invited to complete one 30-minute, audio- and video-recorded, complex high-fidelity simulation covering a similar neonatology topic, following either the SID or the classical approach. Pre- and postsimulation questionnaires will be completed, and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software.

    Results: This study is in its preliminary stages, and results are expected to be available by April 2016.

    Conclusions: This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing reasoning processes more closely throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students.

    JMIR Res Protoc 2016;5(1):e26

    doi:10.2196/resprot.4938




    Introduction

    Background

    The Importance of Clinical Reasoning in Medicine

    Diagnostic errors account for more than 8% of adverse events in medicine and up to 30% of malpractice claims [1]. These errors may be related to the working environment but clinical reasoning issues are involved in about 75% of the cases, either alone or in association with system failures [2].

    In this context, clinical reasoning is a crucial skill for all physicians regardless of their area of expertise and has become a central aim of medical education [3]. The clinical reasoning process is supported by several decades of research in cognitive psychology and has been extensively described [4-6]. The most widely accepted model of clinical reasoning is the dual-process framework, which consists of two independent systems [7]. System-1 is automatic, intuitive, nonanalytical, and error-prone. It leads to the immediate recognition of the clinical constellation and the generation of a working diagnostic hypothesis. System-2 is slower, more analytical, and conscious [8,9]. Novices employ this analytic mode of reasoning more frequently than their experienced counterparts because they lack the experience necessary for System-1 reasoning. Dual-process theory posits that both systems are required simultaneously in most clinical scenarios, and their combined use has been associated with better diagnostic outcomes [10]. However, it remains unclear in which situations reasoning shifts toward one system or the other, and how both systems are activated and used is still under study and debate [11-13]. Preliminary conclusions from recent publications indicate that the analytical system is primarily used in the following situations [14,15]: when time permits, when outcomes are high-stakes, when the situation is complex, when the decision-maker faces ambiguous, nonroutine, or ill-defined problems, and in the context of uncertainty. In contrast, routine problems associated with a higher level of certainty tend to be dealt with by the intuitive system, especially when time is lacking [11-13].

    Another challenge for medical educators is the assessment of residents’ clinical reasoning [3]. Because of the paucity of scientific evidence about optimal evaluation, both quantitative and qualitative clinical reasoning assessment tools have been reported [5,16-19]. However, these tools share the following general issues: (1) diagnostic reasoning must be inferred from behavior because it is not a discrete, measurable quality; (2) most of these instruments are administered in the classroom and emphasize the assessment of System-2 reasoning, neglecting System-1 reasoning and the shift between automatic and analytic reasoning [15,16]; and (3) a rater’s personal knowledge, experience, ability, and cognitive biases influence his or her adjudication of a learner’s performance in a nonstandard fashion [20-24]. Moreover, reasoning assessment can be highly complex and context-dependent. Durning et al [25] recently reported the influence of three environmental factors on clinical reasoning: (1) the patient’s specific problem (patient factors); (2) the setting in which the patient is evaluated (encounter factors) [10,26]; and (3) human factors such as fatigue, well-being, and sleepiness (doctor factors). They emphasize the importance of measuring the environment as part of the signal, rather than as part of the “noise” to be minimized and generally ignored [27-29].

    Simulation-Based Education as a Strategy to Enhance Clinical Reasoning Skill

    Simulation-based education (SBE) has recently emerged as an instrument with the potential to assess diagnostic reasoning [16,30,31]. In Kolb’s learning cycle [32], learning is depicted as a four-part cyclical process. Individuals learn through concrete experience (phase 1), reflection on the experience (phase 2), conceptualization of their reflective observations into more abstract models (phase 3), and experimentation with these new principles and conclusions to guide subsequent decisions and actions that lead to new concrete experiences (phase 4). Phases 2 and 3 are components of debriefing, a learning activity that generally follows a simulation experience. According to many authors, debriefing could provide an opportunity for residents and faculty to re-examine what occurred during the simulation and to detect possible flaws in the reasoning process [33-45].

    An important question is whether exploration of the diagnostic process, which gives learners an opportunity to reflect upon past clinical decisions, is more effective after the scenario or when it is integrated into the simulation session [45,46]. In the classical approach to simulation, debriefing follows the simulation experience. However, this format has several limitations for clinical reasoning assessment. First, because reflection occurs only on action, it is mainly the analytical part of the reasoning process that is explored during the debriefing, not the intuitive process [47]. Second, after a stressful scenario, residents frequently forget or reinterpret what they said or thought according to the evolution of the case, even if video recordings provide insight into what may not be documented in the medical record or fully observed in real time [45].

    We believe that changes in the organization of a simulation session could allow better assessment of both System-1 and System-2 of the reasoning process. First, reflection-in-action should be encouraged by concurrent verbalization, letting tutors know about the student’s intuitive System-1 thinking during management of a patient [11-13]. Second, reflection-on-action, which reflects analytical System-2, should be encouraged by in-simulation interruptions at key moments of the clinical reasoning process [48,49]. These interruptions approximate a dynamic and decision-dense environment in which clinical reasoning constructs must be considered: studies suggest that in the emergency department, the average time on a particular task is less than 2 minutes and interruptions occur every 2 to 10 minutes [50-52]. Finally, active experimentation with newly acquired conceptualizations in a subsequent part of the scenario may prevent the learner from returning to actions based on habits and nonreflective experience [32]. Based on these arguments, we present a new approach to SBE, called simulation with iterative discussions (SID). The simulation session is designed as a single scenario with a computerized mannequin in which the instructor interrupts the flow of the scenario at three key moments to cue residents towards the appropriate medical management of the case. The objectives of the session are to build a scaffold of clinical reasoning competencies throughout the scenario while the resident continues to manage the patient, and to improve concept acquisition and retention through spaced learning. We believe that a closer assessment of both System-1 (by concurrent verbalization during patient management) and System-2 (during iterative discussions) will improve students’ capacity to self-improve their clinical reasoning skills in an authentic setting.

    Why Is It Applied in Neonatology?

    Implementing a new educational strategy that assesses clinical reasoning is particularly relevant in a busy clinical environment such as neonatology. In contrast to the well-established management of neonatal emergencies at birth [53], most daily clinical situations in neonatology are nonroutine or ill-defined problems marked by high levels of complexity and unpredictability, requiring clinical reasoning based largely on the analytical System-2 [3,6,14,15,54]. Moreover, the frequency and impact of diagnostic errors and cognitive biases increase in emergency settings because of factors such as stress, fatigue, circadian disruption, time constraints, and a noisy environment; in these high-risk situations, physicians rely on the System-1 process [55-58]. Because it engages both System-1 and System-2 of clinical reasoning, practicing neonatology requires robust clinical reasoning abilities in addition to cognitive, technical, and behavioral skills.

    Aim of the Study and Working Hypotheses

    The aim of this study is to explore how clinical reasoning abilities of residents in General Pediatrics and Neonatal-Perinatal Medicine evolve and are learned with the SID in comparison to the classical approach of simulation.

    We hypothesize that (1) SID allows better assessment of both System-1 and 2 of clinical reasoning; (2) SID promotes higher self-progression of the clinical reasoning process when compared to the classical approach of simulation; (3) concurrent verbalization benefits mainly novice residents with underdeveloped System-1 reasoning process; and (4) iterative discussions benefit both novice and expert residents by enhancing System-2 processes.


    Methods

    Setting and Population

    The study will take place at the Mother-Child Simulation Center at CHU Sainte-Justine between January and March 2016. CHU Sainte-Justine is a standalone pediatric center that houses a 65-bed level 3 Neonatal Intensive Care Unit (NICU). The simulated setting will be a NICU. The simulation center is equipped with appropriate audiovisual equipment.

    Residents enrolled in the General Pediatrics and Neonatal-Perinatal Medicine programs will be the target population for this study. This population was selected because (1) residents are regularly exposed to simulation training during their curriculum, and (2) their clinical reasoning ability improves between the 1st and 6th years of residency, mainly through increasing clinical exposure during rotations [59]. All postgraduate year (PGY) 1 to 6 residents enrolled in the General Pediatrics and Neonatal-Perinatal Medicine programs at Université de Montréal between May 1 and June 30, 2015 will be eligible for inclusion. There will be no exclusion criteria.

    Ethical Considerations

    Each resident will be approached for consent by one of the authors and informed that participation or nonparticipation in the study will not affect his or her residency training assessment. There will be no financial incentive to participate, and participants will be able to opt out at any point in the study. The findings of this study will be anonymized. The project was approved by CHU Sainte-Justine’s institutional review board on July 15, 2014.

    Study Design

    This is a prospective, exploratory, nonblinded, randomized mixed-methods study. Quantitative and qualitative research methods will be used concurrently to comprehensively explore how clinical reasoning develops through each simulation modality.

    Randomization will be stratified into two groups according to level of NICU exposure: (1) novice residents (PGY1-2, with less than 8 weeks of NICU exposure), and (2) expert residents (PGY3-6, with at least 8 weeks of NICU exposure). These residents will have completed at least 5 and 15 simulation sessions, respectively, during their residency training. Residents will be randomly allocated (by draw of names) by the primary investigator to group A or group B (Figure 1). Group A will be exposed to the SID approach, whereas group B will be exposed to the classical approach of simulation. Participants will be scheduled to complete the study protocol in 60 minutes. In the first 5 minutes, participants will complete the presimulation questionnaire and receive a brief introduction, including a period to become physically familiarized with the mannequin (SimNewB; Laerdal Medical, Stavanger, Norway). In the next 30 minutes, residents will complete the clinical simulation scenario after reading a clinical vignette. At the end of the session, participants will have 5 minutes to complete the postsimulation questionnaire (see Multimedia Appendix 1). The course will end with a 20-minute semistructured interview.
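    The stratified allocation described above can be sketched as follows. This is a minimal illustration only, since the protocol specifies a manual draw of names by the primary investigator; the function name, group labels, and resident identifiers are hypothetical:

```python
import random

def allocate(residents_by_stratum, seed=None):
    """Stratified 1:1 allocation: within each experience stratum
    (novice PGY1-2, expert PGY3-6), shuffle the names (the "draw")
    and alternate assignment to group A (SID) and group B (classical)."""
    rng = random.Random(seed)
    allocation = {}
    for stratum, names in residents_by_stratum.items():
        shuffled = names[:]
        rng.shuffle(shuffled)  # simulates the draw of names
        for i, name in enumerate(shuffled):
            allocation[name] = "A" if i % 2 == 0 else "B"
    return allocation

# Hypothetical resident identifiers, split by NICU exposure stratum
groups = allocate({
    "novice (PGY1-2)": ["R01", "R02", "R03", "R04"],
    "expert (PGY3-6)": ["R05", "R06", "R07", "R08"],
}, seed=42)
```

    Alternating within each shuffled stratum guarantees balanced group sizes per stratum, which a simple coin flip per resident would not.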

    Figure 1. Study protocol.

    Intervention

    Personnel

    An instructor experienced in programming and controlling the computerized mannequin, and in the art of debriefing, will be in charge of running the scenario according to the residents’ actions and will conduct the iterative discussions and debriefings. A facilitator will also be present in the simulation room to ensure the flow of the scenario and will provide necessary information about the case upon request from the resident. Standardized health professionals (a respiratory therapist and/or a nurse working in the Sainte-Justine Hospital NICU) will be present and will portray a specialist from their own field.

    Description of the Intervention

    The SID approach (Figure 2a) consists of a scenario interrupted at three moments and followed by a short debriefing. According to Kuhn’s steps of the medical reasoning process [60], the session is divided into “data gathering,” “data integration,” and “data confirmation.” Each part consists of two phases. First, the participant is asked to manage a simulated patient based on a real-life, complex case. Complexity, uncertainty, and environmental factors such as doctor, patient, and encounter factors [25] are deliberately embedded. The participant is also asked to verbalize his or her first intuitive diagnosis as it comes to mind through System-1 activation. Second, the scenario stops and the instructor questions the participant on his or her clinical reasoning process at that point in time through System-2 activation. Each interruption must be as short as possible in order to keep the trainee in action and, for the first two stops, ends with a one-sentence reconceptualization of the scenario by the instructor before the scenario resumes. Discussions include questions regarding data gathering and the rationale for ordered investigations (first stop); data integration and how investigation results helped reach a diagnosis (second stop); and finally data confirmation and how the management decisions were reached (third stop). There is neither feedback nor guidance from the instructor during the stops, so as not to interfere with the participant’s ongoing clinical reasoning process; these stops are “discussions,” not “debriefings.” A short general debriefing ends the session and provides feedback on reasoning, procedural skills, and knowledge by highlighting the key learning messages.

    The classical approach (Figure 2b) consists of a scenario with no intervention from the instructor until the debriefing. As in the SID approach, the participant must verbalize his or her reasoning process and diagnostic hypotheses as they come to mind during the scenario. Immediately after the scenario, a facilitated debriefing is performed. This debriefing lasts two to three times the length of the scenario and focuses on the participant’s clinical reasoning skills.

    Figure 2. Simulation formats. This figure represents the structures of (a) SID and (b) the classical approach of simulation, with approximate timing in minutes. (a) A scenario interrupted three times, with the stops representing iterative discussions concerning data gathering, data integration, and data confirmation. There is no guidance from the instructor until the true, short debriefing that ends the session. (b) A single uninterrupted scenario followed by a true debriefing conducted by the instructor according to the items checked in the clinical reasoning assessment tool. As in a standard debriefing, feedback from the instructor will be possible.
    Clinical Reasoning Assessment Tool

    The Clinical Reasoning Assessment Tool (Figure 3) helps the instructor identify appropriate questions during the iterative discussions (SID approach) and conduct the debriefing (classical approach). The authors of the present study developed this tool because no such tool exists in the literature. It is based on Graber’s classification of diagnostic errors [2,25] and Audétat’s practical guide [61] for helping clinical teachers detect clinical reasoning difficulties, and it follows Kuhn’s steps of the medical reasoning process [60]. The objective of this tool is to help the instructor detect the student’s type of diagnostic error by focusing on his or her reasoning process, and then to determine appropriate questions that allow the student to verbalize and possibly self-correct that reasoning. Three types of environmental factors leading to diagnostic errors may be involved in the reasoning process: no-fault factors (also called patient errors, which are beyond the physician’s control), human and cognitive factors (also called doctor errors), and system factors (also called encounter errors, due to organizational or institutional flaws). Specific failures in the doctor’s cognitive process may be due to faulty knowledge, faulty recognition of cognitive biases, or faulty data gathering (orange squares), data integration (green squares), or data confirmation (blue squares). One or two questions per section are suggested to the instructor to allow exploration of each type of diagnostic error. Finally, a list of the main cognitive biases according to Croskerry et al [62] is provided, along with a suggested approach. Once the type of error is identified, the instructor finds one or two specific questions to let the student verbalize (or self-correct) his or her reasoning mistake.

    Scenario

    To stimulate the reasoning process, the chosen topic has to be realistic and allow a range of differential diagnoses. Investigations should be necessary to confirm or refute diagnoses in the absence of specific clinical signs. Uncertainty needs to be deliberately embedded. Management must be complex, with controversies concerning treatment. Moreover, a range of environmental factors and events can be added to challenge team members by generating dissonance and failures, in order to optimize the efficiency of simulated team training and adult learning. The amount of information provided has to be minimal and nonspecific, aiming to prompt additional questions from the participant. Answers to participants’ questions must be standardized and should cover a range of possible differential diagnoses. Finally, procedural and relational skills must be embedded to portray the real-life environment as closely as possible.

    The general structure of the scenario must follow three parts so that it can be performed in a single run-through (classical approach) or interrupted (SID approach). First, a nonmonitored patient presents with minor symptoms but remains clinically stable. The participant has to check the vital signs, ask the nurse for the history and the results of the physical examination, and order investigations. Second, the patient presents with acute collapse. The participant has to interpret investigation results while managing the acutely ill patient. Third, the participant must present a summary of the situation and a treatment plan to his or her supervisor while continuing to manage the patient. For the SID approach, the scenario is stopped (1) after investigations are ordered and (2) after their results are received, prior to the call from the supervisor. The third stop occurs at the end of the scenario.

    Based on these principles, the chosen scenario consists of an infant with disseminated herpes simplex virus infection presenting with secondary septic shock. A newborn will present with tachycardia and will evolve towards hypoxemia and hypotension, requiring intubation and volume expansion. Laboratory findings will reveal viral sepsis with leucopenia, thrombocytopenia, elevated C-reactive protein, and elevated liver enzymes. Finally, skin blisters will appear during the transfer of information from the participant to the supervisor and will confirm the diagnosis.

    Figure 3. The clinical reasoning assessment tool supports detection of diagnostic errors (no-fault, human-factor, cognitive, and system-related) and clinical reasoning difficulties according to Kuhn’s classification (data gathering, data integration, and data confirmation). The instructor reads the questions for each category of error or difficulty and compares them with the student’s performance during the simulation session. By checking all types of clinical reasoning errors, the tool permits building specific questions for the student (without feedback, in the SID approach) or constructing the debriefing (with feedback, in the classical approach of simulation).

    Data Collection and Measurement Tools

    Presimulation Questionnaire

    This questionnaire will include demographic data (gender, age, year of graduation, number of previous experiences with simulation, learning style, and curriculum followed), degree of self-assessed subjective stress, and self-evaluation of clinical reasoning performance (both rated on 10-point Likert-type scales).

    In-Simulation Clinical Reasoning Assessment

    In constructing the simulation scenario and vignette, defined cues will be embedded to stimulate hypothesis generation using System-1 of clinical reasoning. After reading the initial patient presentation in the vignette, residents will be asked to submit their diagnostic hypotheses in writing. Then, during the scenario, the participant’s verbalization of diagnoses coming to mind (also stimulated by cues given by the nurse) will be audio recorded. Both the written and audio-recorded hypotheses will be compared with the diagnoses generated by an expert panel exposed to the same scenario. For example, the presence of thrombocytopenia during the data confirmation part should lead to verbalization of (1) bacterial infection, (2) viral infection, (3) intrauterine growth retardation, and (4) platelet immunization. The presence of skin lesions during the data confirmation part should lead to verbalization of (1) herpetic infection, (2) bacterial infection, and (3) varicella infection.
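    As an illustration of how written or verbalized hypotheses might be compared against the expert panel's list, here is a minimal sketch; the matching rule (case-insensitive exact match) and function name are hypothetical and not part of the protocol:

```python
def hypothesis_overlap(verbalized, expert_panel):
    """Fraction of the expert panel's diagnoses that the participant
    verbalized. Case-insensitive exact matching; this scoring rule is
    illustrative only and is not specified in the protocol."""
    v = {d.strip().lower() for d in verbalized}
    e = {d.strip().lower() for d in expert_panel}
    return len(v & e) / len(e) if e else 0.0

# Expert-panel list for the thrombocytopenia cue (from the scenario)
expert = ["bacterial infection", "viral infection",
          "intrauterine growth retardation", "platelet immunization"]
score = hypothesis_overlap(["Viral infection", "bacterial infection"], expert)
# score == 0.5: two of the four expected diagnoses were verbalized
```

    In practice, verbalized diagnoses would need fuzzier matching (synonyms, abbreviations), which is why the protocol relies on expert comparison rather than automated scoring.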

    Iterative discussions supported by the Clinical Reasoning Assessment Tool during SID have been designed to allow development and exploration of System-2 of clinical reasoning regarding data collection, diagnostic hypothesis generation, new data interpretation, and the management plan. In the classical approach, it is hypothesized that System-2 is discussed during the debriefing period after the simulation. How each simulation approach affects System-2 performance will be explored during the semistructured interviews (see below).

    Postsimulation Questionnaire

    The postsimulation questionnaire (Multimedia Appendix 1) will assess residents’ self-reported improvement in clinical reasoning and their level of satisfaction with the simulation approach (both rated on 10-point Likert-type scales). The questionnaire will be designed based on previous literature [63-65] and pilot tested with three residents. Depending on the simulation approach, residents will also rate 5 to 12 statements on a four-point Likert-type scale ranging from 1 (strongly disagree) to 4 (strongly agree).

    Semistructured Interviews

    An individual semistructured interview will explore “how” and “why” students’ clinical reasoning ability develops through each simulation approach (SID or classical). Interviews will be conducted using techniques inspired by the explicitation interview, during which the interviewer supports the participant, without induction, toward the evocation of a specified experience [66]. After icebreaker questions, interviews will consist of two distinct parts. First, the interviewer will explore the resident’s reasoning process during the simulation experience. Second, the interviewer will focus on the resident’s perception of the simulation experience, including possible advantages and challenges of each simulation type, possible improvements to each methodology, and the perceived impact of SID on learning. Interviews will be audio recorded and then transcribed for analysis.

    Data Analysis

    This exploratory study will aid in planning a future, larger randomized controlled trial. The target population of residents enrolled in the General Pediatrics and Neonatal-Perinatal Medicine programs comprises approximately 50 residents. Based on the qualitative research literature, an adequate sample size is the number of participants at which data saturation is achieved. This well-described process permits cessation of recruitment when additional data do not bring new properties to unsaturated categories [67]. Different authors agree that saturation is reached after 20 to 25 participants [67,68].

    Quantitative data will be analyzed using SPSS 20.0 (IBM SPSS, Chicago, IL). Descriptive statistics will be computed for all variables. For each Likert-type scale question, a significance cut-off will be preestablished. Comparison of positive versus negative responses will use the chi-square test and Fisher’s exact test for nonparametric variables. The number, nature, and order of diagnostic hypotheses will be compared with the responses of a panel of 10 neonatologists from our hospital. A multivariate analysis will be used to investigate potential effects of graduation year, gender, and prior simulation experience on the residents’ evaluation of their clinical reasoning. Statistical significance will be defined as a probability value of <.05.
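    The protocol specifies SPSS for these tests. Purely as an illustration of the comparison of positive versus negative responses between the two groups, a self-contained two-sided Fisher's exact test for a 2x2 table can be sketched in plain Python (the function name and counts are hypothetical):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    e.g. counts of positive/negative responses in groups A and B.
    Sums the hypergeometric probabilities of every table with the same
    margins that is at most as likely as the observed table."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def prob(x):  # probability of the table whose top-left cell is x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical counts: 8/10 positive responses in group A vs 3/10 in group B
p_value = fisher_exact_2x2(8, 2, 3, 7)  # ~0.07, two-sided
```

    Fisher's exact test is preferred over chi-square here because the expected cell counts in a sample of 20 to 25 residents will often be below 5.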

    Qualitative data will help describe whether SID allows better assessment of System-1 and System-2 of clinical reasoning than the classical approach of simulation. Data from the audio-recorded semistructured interviews will be analyzed using NVivo 9.0 software. Analysis will occur as data collection proceeds. Recurring themes and distinctive aspects of each student’s responses will be noted. These notes will be reviewed and expanded as the research continues, until data saturation. Data will be de-identified so that all participants remain anonymous. Through content analysis, the information will then be categorized according to the principle of convergence [69]. The research team will use deductive analysis and review all written transcripts.


    Results

    This study is in its preliminary stages, and results are expected to be available by April 2016.


    Discussion

    Clinical reasoning is an essential skill for everyday medical practice. However, many questions remain regarding how the efficiency of reasoning processes can be most accurately measured [16]. We believe that medical simulation could represent an effective environment for clinical reasoning assessment, as participants are immersed in an authentic and controlled setting [70]. Moreover, SID, an innovative approach to simulation, could provide a closer assessment of both System-1 and System-2 of clinical reasoning through the combination of concurrent verbalization and iterative discussions at key steps of the reasoning process.

    In this exploratory randomized study comparing SID to the classical approach of simulation, we expect to find greater progression of residents’ self-assessed clinical reasoning with SID. More precisely, we expect that iterative discussions, by allowing reflection, will improve the System-2 clinical reasoning process, while concurrent verbalization during management of the mannequin will enhance System-1 performance. Verbalization during the classical approach should have a similar impact on System-1 clinical reasoning. Finally, we expect residents to report a higher level of satisfaction with the SID approach to SBE.

    This is the first randomized study comparing a new simulation approach with the classical mode for developing clinical reasoning skills. In addition, incorporating a qualitative component into the study, with the goal of exploring how each simulation approach affects residents’ clinical reasoning process, is of great interest. Residents of different training levels are included in the study, allowing the phenomenon to be explored from the perspectives of individuals with various levels of clinical reasoning performance. There are also a few limitations to the study. The small sample size does not allow for statistical generalizability of the quantitative results. The absence of a robust pre- and post-intervention assessment of residents’ clinical reasoning abilities might not portray the exact impact of each intervention. However, in the absence of a gold-standard tool for clinical reasoning evaluation, this exploratory study could provide detailed and useful preliminary data on the effectiveness of such a new simulation approach. Finally, the case specificity of a single scenario could limit the generalizability of the results.

    The findings of the study will be of benefit to medical educators, training programs, and residents who participate in these programs. This study will further our understanding of the complexity of clinical reasoning and of how curriculum delivery should be modified to assist residents in better developing their clinical reasoning abilities, in neonatology but also in other specialties. This study demonstrates the feasibility of an SBE session in which scenarios are built according to clinical reasoning and reflective practice theories and help place emphasis on clinical reasoning teaching and assessment. For medical residents, development of a simulation approach that assesses their clinical reasoning abilities will provide them with an opportunity to receive feedback about components of clinical reasoning that need improvement. It will also provide medical teachers with an opportunity to better understand students’ diagnostic processes. Overall, insight into the clinical reasoning process through SBE may contribute to changes in medical education curriculum development and implementation. This may provide residents with better opportunities to develop clinical reasoning through SBE and become clinically competent doctors.

    Future work should concentrate on optimizing clinical reasoning assessment. A SID session could integrate a combination of well-described clinical reasoning evaluation tools, such as script concordance tests [71] or clinical reasoning problems [72]. The validation of such an instrument could provide an essential educational tool for the formative or summative assessment of medical students, regardless of their level of training or specialty. In the longer term, once these foundations are established, the long-term retention of clinical reasoning skills acquired through SID should also be explored.

    Acknowledgments

    This study is funded by the 2014 Royal College Medical Education Research Grant and the 2014 CHU Sainte-Justine Medical Education Research Grant. We would also like to thank the “Journées Françaises de Réanimation Néonatale” and the “Réseau Mère-Enfant de la Francophonie” for their support.

    Conflicts of Interest

    None declared.

    Multimedia Appendix 1

    Post-simulation questionnaire.

    JPG File, 107KB

    References

    1. Nendaz M, Perrier A. Diagnostic errors and flaws in clinical reasoning: mechanisms and prevention in practice. Swiss Med Wkly 2012;142:w13706 [FREE Full text] [CrossRef] [Medline]
    2. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005 Jul 11;165(13):1493-1499. [CrossRef] [Medline]
    3. Cutrer WB, Sullivan WM, Fleming AE. Educational strategies for improving clinical reasoning. Curr Probl Pediatr Adolesc Health Care 2013 Oct;43(9):248-257. [CrossRef] [Medline]
    4. Nendaz MR, Gut AM, Louis-Simonet M, Perrier A, Vu NV. Bringing explicit insight into cognitive psychology features during clinical reasoning seminars: a prospective, controlled study. Educ Health (Abingdon) 2011 Apr;24(1):496. [Medline]
    5. Elstein AS. Thinking about diagnostic thinking: a 30-year perspective. Adv Health Sci Educ Theory Pract 2009 Sep;14 Suppl 1:7-18. [CrossRef] [Medline]
    6. Norman G. Research in clinical reasoning: past history and current trends. Med Educ 2005 Apr;39(4):418-427. [CrossRef] [Medline]
    7. Monteiro SM, Norman G. Diagnostic reasoning: where we've been, where we're going. Teach Learn Med 2013;25 Suppl 1:S26-S32. [CrossRef] [Medline]
    8. Evans Jonathan St B T. Dual-processing accounts of reasoning, judgment, and social cognition. Annu Rev Psychol 2008;59:255-278. [CrossRef] [Medline]
    9. Stanovich KE, West RF. On the relative independence of thinking biases and cognitive ability. J Pers Soc Psychol 2008 Apr;94(4):672-695. [CrossRef] [Medline]
    10. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ 2010 Jan;44(1):94-100. [CrossRef] [Medline]
    11. Custers Eugène J F M. Medical education and cognitive continuum theory: an alternative perspective on medical problem solving and clinical reasoning. Acad Med 2013 Aug;88(8):1074-1080. [CrossRef] [Medline]
    12. Norman G, Monteiro S, Sherbino J. Is clinical cognition binary or continuous? Acad Med 2013 Aug;88(8):1058-1060. [CrossRef] [Medline]
    13. Pelaccia T, Tardif J, Triby E, Charlin B. An analysis of clinical reasoning through a recent and comprehensive approach: the dual-process theory. Med Educ Online 2011;16 [FREE Full text] [CrossRef] [Medline]
    14. Croskerry P. Critical thinking and reasoning in emergency medicine. In: Croskerry P, Cosby KS, Schenkel SM, Wears RL, editors. Patient safety in emergency medicine. Philadelphia, PA: Lippincott Williams & Wilkins; 2009:213-218.
    15. Moulton CE, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: a new model of expert judgment. Acad Med 2007 Oct;82(10 Suppl):S109-S116. [CrossRef] [Medline]
    16. Ilgen JS, Humbert AJ, Kuhn G, Hansen ML, Norman GR, Eva KW, et al. Assessing diagnostic reasoning: a consensus statement summarizing theory, practice, and future needs. Acad Emerg Med 2012 Dec;19(12):1454-1461 [FREE Full text] [CrossRef] [Medline]
    17. Higgs J. Assessing clinical reasoning. In: Higgs J, Loftus S, Christensen N, editors. Clinical reasoning in the health professions. Edinburgh: Elsevier Churchill Livingstone; 2008.
    18. Schuwirth L. Is assessment of clinical reasoning still the Holy Grail? Med Educ 2009 Apr;43(4):298-300. [CrossRef] [Medline]
    19. Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach 2011;33(3):206-214. [CrossRef] [Medline]
    20. Crossley J, Johnson G, Booth J, Wade W. Good questions, good answers: construct alignment improves the performance of workplace-based assessment scales. Med Educ 2011 Jun;45(6):560-569. [CrossRef] [Medline]
    21. Gingerich A, Regehr G, Eva KW. Rater-based assessments as social judgments: rethinking the etiology of rater errors. Acad Med 2011 Oct;86(10 Suppl):S1-S7. [CrossRef] [Medline]
    22. Holmboe ES, Hawkins RE, Huot SJ. Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med 2004 Jun 1;140(11):874-881. [Medline]
    23. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ 2011 Oct;45(10):1048-1060. [CrossRef] [Medline]
    24. Kogan JR, Hess BJ, Conforti LN, Holmboe ES. What drives faculty ratings of residents' clinical skills? The impact of faculty's own clinical skills. Acad Med 2010 Oct;85(10 Suppl):S25-S28. [CrossRef] [Medline]
    25. Durning S, Artino AR, Pangaro L, van der Vleuten Cees P M, Schuwirth L. Context and clinical reasoning: understanding the perspective of the expert's voice. Med Educ 2011 Sep;45(9):927-938. [CrossRef] [Medline]
    26. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract 2009 Sep;14 Suppl 1:37-49. [CrossRef] [Medline]
    27. Lave J, Wenger E. Situated learning: legitimate peripheral participation. Cambridge, England: Cambridge University Press; 1991.
    28. Jonassen D, Land S. Situated cognition in theoretical and practical context. In: Theoretical foundations of learning environments. Mahwah, NJ: Lawrence Erlbaum Associates; 2000.
    29. Bredo E. Reconstructing educational psychology: Situated cognition and Deweyian pragmatism. Educational Psychologist 1994;29(1):23-35. [CrossRef]
    30. Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA 2011 Sep 7;306(9):978-988. [CrossRef] [Medline]
    31. Norman G, Dore K, Grierson L. The minimal relationship between simulation fidelity and transfer of learning. Med Educ 2012 Jul;46(7):636-647. [CrossRef] [Medline]
    32. Kolb D. Experiential learning: experience as the source of learning and development. Englewood Cliffs, N.J: Prentice-Hall; 1984.
    33. Baldwin KB. Friday night in the pediatric emergency department: a simulated exercise to promote clinical reasoning in the classroom. Nurse Educ 2007;32(1):24-29. [Medline]
    34. Henrichs B, Rule A, Grady M, Ellis W. Nurse anesthesia students' perceptions of the anesthesia patient simulator: a qualitative study. AANA J 2002 Jun;70(3):219-225. [Medline]
    35. Lasater K. High-fidelity simulation and the development of clinical judgment: students' experiences. J Nurs Educ 2007 Jun;46(6):269-276. [Medline]
    36. Kyrkjebø JM, Brattebø G, Smith-Strøm H. Improving patient safety by using interprofessional simulation training in health professional education. J Interprof Care 2006 Oct;20(5):507-516. [CrossRef] [Medline]
    37. Henneman EA, Cunningham H. Using clinical simulation to teach patient safety in an acute/critical care nursing course. Nurse Educ 2005;30(4):172-177. [Medline]
    38. Comer SK. Patient care simulations: role playing to enhance clinical understanding. Nurs Educ Perspect 2005;26(6):357-361. [Medline]
    39. Peteani LA. Enhancing clinical practice and education with high-fidelity human patient simulators. Nurse Educ 2004;29(1):25-30. [Medline]
    40. Feingold CE, Calaluce M, Kallen MA. Computerized patient model and simulated clinical experiences: evaluation with baccalaureate nursing students. J Nurs Educ 2004 Apr;43(4):156-163. [Medline]
    41. Spunt D, Foster D, Adams K. Mock code: a clinical simulation module. Nurse Educ 2004;29(5):192-194. [Medline]
    42. Dreifuerst KT. The essentials of debriefing in simulation learning: a concept analysis. Nurs Educ Perspect 2009;30(2):109-114. [Medline]
    43. Issenberg SB, McGaghie WC, Petrusa ER, Lee GD, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005 Jan;27(1):10-28. [CrossRef] [Medline]
    44. McGaghie W, Issenberg SB, Petrusa ER, Scalese RJ. A critical review of simulation-based medical education research: 2003-2009. Med Educ 2010 Jan;44(1):50-63. [CrossRef] [Medline]
    45. Raemer D, Anderson M, Cheng A, Fanning R, Nadkarni V, Savoldelli G. Research regarding debriefing as part of the learning process. Simul Healthc 2011 Aug;6 Suppl:S52-S57. [CrossRef] [Medline]
    46. Kelley P. Making minds: what's wrong with education, and what should we do about it? New York, NY: Routledge; 2007.
    47. Norman G, Brooks L, Colle C, Hatala R. The Benefit of Diagnostic Hypotheses in Clinical Reasoning: Experimental Study of an Instructional Intervention for Forward and Backward Reasoning. Cognition and Instruction 1999 Dec;17(4):433-448. [CrossRef]
    48. Schön D. Educating the reflective practitioner: toward a new design for teaching and learning in the professions. San Francisco, CA: Jossey-Bass; 1987.
    49. Norman G, Sherbino J, Dore K, Wood T, Young M, Gaissmaier W, et al. The etiology of diagnostic errors: a controlled trial of system 1 versus system 2 reasoning. Acad Med 2014 Feb;89(2):277-284 [FREE Full text] [CrossRef] [Medline]
    50. Gonzalez C. Learning to make decisions in dynamic environments: effects of time constraints and cognitive abilities. Hum Factors 2004;46(3):449-460. [Medline]
    51. Westbrook JI, Coiera E, Dunsmuir William T M, Brown BM, Kelk N, Paoloni R, et al. The impact of interruptions on clinical task completion. Qual Saf Health Care 2010 Aug;19(4):284-289. [CrossRef] [Medline]
    52. Chisholm CD, Dornfeld AM, Nelson DR, Cordell WH. Work interrupted: a comparison of workplace interruptions in emergency departments and primary care offices. Ann Emerg Med 2001 Aug;38(2):146-151. [CrossRef] [Medline]
    53. Zaichkin J, Weiner GM. Neonatal Resuscitation Program (NRP) 2011: new science, new strategies. Neonatal Netw 2011;30(1):5-13. [CrossRef] [Medline]
    54. Anderson JM, Warren JB. Using simulation to enhance the acquisition and retention of clinical skills in neonatology. Semin Perinatol 2011 Apr;35(2):59-67. [CrossRef] [Medline]
    55. Redelmeier D, Tu JV, Schull MJ, Ferris LE, Hux JE. Problems for clinical judgement: obtaining a reliable past medical history. CMAJ 2001;164(6):2-13.
    56. Kuhn G. Circadian rhythm, shift work, and emergency medicine. Ann Emerg Med 2001 Jan;37(1):88-98. [CrossRef] [Medline]
    57. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003 Aug;78(8):775-780. [Medline]
    58. Croskerry P. ED Cognition: any decision by anyone at any time. CJEM 2014;16(1):E [FREE Full text]
    59. Williams RG, Klamen DL, Hoffman RM. Medical student acquisition of clinical working knowledge. Teach Learn Med 2008;20(1):5-10. [CrossRef] [Medline]
    60. Kuhn GJ. Diagnostic errors. Acad Emerg Med 2002 Jul;9(7):740-750 [FREE Full text] [Medline]
    61. Audétat M, Laurin S, Sanche G, Béïque C, Fon NC, Blais J, et al. Clinical reasoning difficulties: a taxonomy for clinical teachers. Med Teach 2013;35(3):e984-e989. [CrossRef] [Medline]
    62. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual Saf 2013 Oct;22 Suppl 2:ii58-ii64 [FREE Full text] [CrossRef] [Medline]
    63. Li J, Li QL, Li J, Chen ML, Xie HF, Li YP, et al. Comparison of three problem-based learning conditions (real patients, digital and paper) with lecture-based learning in a dermatology course: a prospective randomized study from China. Med Teach 2013;35(2):e963-e970. [CrossRef] [Medline]
    64. Van Heukelom Jon N, Begaz T, Treat R. Comparison of postsimulation debriefing versus in-simulation debriefing in medical simulation. Simul Healthc 2010 Apr;5(2):91-97. [CrossRef] [Medline]
    65. Kania RE, Verillaud B, Tran H, Gagnon R, Kazitani D, Huy Patrice Tran Ba, et al. Online script concordance test for clinical reasoning assessment in otorhinolaryngology: the association between performance and clinical experience. Arch Otolaryngol Head Neck Surg 2011 Aug;137(8):751-755. [CrossRef] [Medline]
    66. Maurel M. The explicitation interview: examples and applications. J Conscious Stud 2009;16:58-59.
    67. Charmaz K. Constructing grounded theory: a practical guide through qualitative analysis. London: SAGE; 2006.
    68. Green J, Thorogood N. Qualitative methods for health research. Thousand Oaks, Calif: Sage; 2009.
    69. Patton MQ. Qualitative research & evaluation methods. 3rd ed. Thousand Oaks, CA: Sage; 2002.
    70. Gaba D. The future vision of simulation in health care. Quality and Safety in Health Care 2004 Oct 01;13(suppl_1):i2-i10. [CrossRef]
    71. Charlin B, Tardif J, Boshuizen HP. Scripts and medical diagnostic knowledge: theory and applications for clinical reasoning instruction and research. Acad Med 2000 Feb;75(2):182-190. [Medline]
    72. Groves M, Scott I, Alexander H. Assessing clinical reasoning: a method to monitor its development in a PBL curriculum. Med Teach 2002 Sep;24(5):507-515. [CrossRef] [Medline]


    Abbreviations

    NICU: neonatal intensive care unit
    PGY: postgraduate year
    SBE: simulation-based education
    SID: simulation with iterative discussions


    Edited by G Eysenbach; submitted 14.07.15; peer-reviewed by K Blondon, C Vaitsis, N Stathakarou; comments to author 09.10.15; revised version received 04.11.15; accepted 05.11.15; published 17.02.16

    ©Thomas Pennaforte, Ahmed Moussa, Nathalie Loye, Bernard Charlin, Marie-Claude Audétat. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 17.02.2016.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on http://www.researchprotocols.org, as well as this copyright and license information must be included.