Published in Vol 10, No 10 (2021): October

Technology-Supported Guidance Model to Support the Development of Critical Thinking Among Undergraduate Nursing Students in Clinical Practice: Protocol of an Exploratory, Flexible Mixed Methods Feasibility Study



1Lovisenberg Diaconal University College, Oslo, Norway

2Department of Health and Nursing Sciences, University of Agder, Kristiansand, Norway

3VID Specialized University, Oslo, Norway

Corresponding Author:

Jaroslav Zlamal, MHP

Lovisenberg Diaconal University College

Lovisenberggata 15b

Oslo, 0456


Phone: 47 95963522


Background: Critical thinking is an essential skill set in nursing, and nursing education therefore needs a sharper focus on effective ways to support its development, particularly through the implementation of technological tools.

Objective: The aim of this study protocol is to assess the feasibility of a technology-supported guidance model grounded in metacognition theory for nursing students in clinical practice.

Methods: Both quantitative (research questionnaires) and qualitative (focus group interviews) approaches will be used to collect data for a feasibility study with an exploratory, flexible mixed methods design to test a newly developed intervention in clinical practice.

Results: The intervention development was completed in December 2020. The intervention will be tested in 3 independent nursing homes in Norway.

Conclusions: By determining the feasibility of a technology-supported guidance model for nursing students in clinical practice, the results will provide information on the acceptability of the intervention and the suitability of the outcome measures and data collection strategy. They will also identify the causes of dropout and obstacles to retention and adherence.

International Registered Report Identifier (IRRID): DERR1-10.2196/31646

JMIR Res Protoc 2021;10(10):e31646




Critical thinking is an important outcome of nursing education [1], and clinical practice is essential for its development [2]. In clinical practice, a nurse preceptor serves as a tutor or mentor to guide nursing students toward the acquisition of necessary skills [3].

Nursing students may experience challenges and difficulties in their clinical practicum, such as not knowing who the main nurse preceptor responsible for guidance is, receiving limited guidance, experiencing a change of nurse preceptor, or having a poor relationship with the nurse preceptor [4]. Likewise, nurse preceptors may lack the resources, experience, and training in guiding nursing students [5-7]. These challenges in guiding nursing students in clinical practice may negatively influence their development of critical thinking [8].

The introduction of technological tools in nursing education has opened new possibilities for addressing these challenges and improving outcomes related to critical thinking [9], but only a few studies have examined the effectiveness of technological tools in supporting the development of critical thinking skills in nursing students. Strandell-Laine et al [10] developed a technological intervention to improve cooperation between nursing students and nurse educators and thereby improve self-efficacy and nursing competence; the intervention did not significantly improve individual outcomes, but it strengthened communication between students and nurse educators. Mettiäinen [11] developed a technology-based app for feedback and assessment in the clinical guidance of nursing students. In a pilot study, Mettiäinen [11] found that nursing students had positive attitudes toward the use of technological tools (eg, apps) during their guidance in clinical practice and concluded that such apps are a viable option for the guidance of nursing students in clinical practice.

Owing to the importance of critical thinking in nursing education, interventions that support critical thinking and its development are needed. This study provides a protocol for a feasibility study, which is one stage of a complex intervention [12]. The feasibility study is a part of the main study, Technology-Supported Guidance to Increase Flexibility, Quality, and Efficiency in the Clinical Practicum of Nursing Education, conducted at Lovisenberg Diaconal University College (LDUC), Oslo, Norway. The main study included a mixed methods systematic review, feasibility study, randomized controlled trial (RCT), and follow-up study. Protocols for the systematic review of mixed methods [13] and RCTs [14] have already been published.

Study Aim

The overall aim of this study is to explore the feasibility of a technology-supported guidance model for nursing students in clinical practice.


The purpose of this study is to assess the feasibility and acceptability of a newly developed technology-supported guidance model in clinical practice among nursing students, nurse preceptors, and nurse educators; assess the feasibility and suitability of the primary and secondary outcome measures; assess the recruitment strategy; assess the data collection strategy; and identify potential causes of dropout and hindrances to participant recruitment, retention, intervention fidelity, and adherence to the intervention.

Research Questions

How feasible and acceptable is the newly developed technology-supported guidance model and the overall intervention among nursing students, nurse preceptors, and nurse educators? Are the outcome measures feasible and suitable for an RCT? How feasible is the chosen data collection strategy? How suitable is the participant recruitment strategy? What causes dropout and what hindrances can occur in relation to recruitment, retention, intervention fidelity, and adherence? How can these hindrances be minimized?


According to Giangregorio and Thabane [15], there is no universal agreement on the definitions of feasibility and pilot studies. Some definitions may overlap, whereas others distinctively differ in their understanding of feasibility and pilot studies. The Medical Research Council Framework for Complex Interventions does not make a clear distinction [16], whereas the National Institute for Health Research in the United Kingdom defines feasibility studies as those that are conducted in the early stages of the research process, before a pilot study, and aim to answer specific questions related to potentially conducting a given intervention research. Pilot studies are then defined as small versions of a main study that aim to determine whether all the components of the main study work together [12].

This study adopts the understanding of feasibility studies outlined by the National Institute for Health Research and focuses on the feasibility stage of intervention research, aiming to inform an RCT. The protocol has been written according to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist [17], Medical Research Council Framework for Complex Interventions [16], and Template for Intervention Description and Replication (TIDieR) [18].

Feasibility studies have, by their nature, an exploratory design that aims to justify a full-scale effectiveness study [19]. In this study, we plan a flexible, convergent, and mixed methods exploratory design. A flexible exploratory design allows for changes during the course of the study, which can inform adjustments to the intervention and final intervention design [19], whereas a convergent mixed methods design allows the comparison of quantitative and qualitative data to confirm or disprove the findings of each approach [20]. Quantitative data will be collected from questionnaires and from usage data of the Technology-Optimized Practice Process in Nursing (TOPP-N) app [21]. Qualitative data will be collected from focus group interviews with participating nursing students, nurse preceptors, and nurse educators. Quantitative data will be analyzed using descriptive statistical methods [22]. We will calculate means, medians, SDs, skewness, and kurtosis [23,24] and report sample sizes and sample demographics [24], such as ages of participants, last completed education, and previous working experience in health care. A thematic analysis approach will be applied to qualitative data. The data will be coded, and the codes will be grouped into themes [25]. The quantitative and qualitative data will be integrated in a side-by-side comparison and interpreted in the Discussion section. Qualitative data will be reported and interpreted first and then compared with the quantitative findings to answer the research questions of the feasibility study [20].
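Purely as an illustration of the descriptive statistics named above (the study itself will use dedicated statistical software), a minimal sketch with hypothetical questionnaire scores:

```python
import statistics

def describe(scores):
    """Descriptive statistics planned for the questionnaire data:
    sample size, mean, median, SD, skewness, and excess kurtosis."""
    n = len(scores)
    mean = statistics.mean(scores)
    # Central moments (population formulas) for skewness and kurtosis
    m2 = sum((x - mean) ** 2 for x in scores) / n
    m3 = sum((x - mean) ** 3 for x in scores) / n
    m4 = sum((x - mean) ** 4 for x in scores) / n
    return {
        "n": n,
        "mean": mean,
        "median": statistics.median(scores),
        "sd": statistics.stdev(scores),  # sample SD (n - 1 denominator)
        "skewness": m3 / m2 ** 1.5,
        "kurtosis": m4 / m2 ** 2 - 3,  # excess kurtosis; 0 for a normal distribution
    }

# Hypothetical Likert-style scores from one questionnaire subscale
print(describe([3, 4, 4, 5, 2, 4, 3, 5, 4, 4]))
```

The moment-based formulas here are one common convention; statistical packages may apply small-sample corrections and report slightly different skewness and kurtosis values.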

Study Setting

The feasibility study will be conducted at 3 nursing homes: 1 in Oslo, Norway, and 2 in Kristiansand, Norway. The institutions were chosen based on previous cooperation and agreement in developing or testing the intervention.

Eligibility Criteria

The study will use a consecutive sampling strategy. Eligible participants include the following:

  • First-year undergraduate nursing students at LDUC and the University of Agder (UiA)
  • Nursing students in clinical practice
  • Nurse preceptors (registered nurses) and nurse educators at the participating institutions
  • Nurse preceptors and nurse educators guiding nursing students in clinical practice
  • Participants who are willing to provide signed informed consent

Intervention Description

Intervention Name

The name of the intervention is Technology-Supported Guidance Model (TSGM).

Goal of the Elements Essential for the Intervention

The main element of the TSGM is the TOPP-N app [21], which helps students identify their need for guidance and stimulates reflection on their learning goal and what has been learned through their completion of electronic reports (e-reports).

Nurse preceptors and nurse educators can follow up on the progress of students and tailor their guidance based on their needs.

Nurse educators follow the students’ progress and intervene as necessary when automatically prompted by the guidance app.

A digital version of the Assessment of Clinical Education (AssCE) [26] mediates the summative evaluation of student performance during clinical practice in either in-person or virtual meetings.


Materials include the TOPP-N app [21] with a digital AssCE [27] module, accessible from mobile phones, tablets (Apple [iOS] or Android operating system), and web browsers (all standard browsers are supported).

The app can be accessed from a web browser [21] or downloaded to Apple or Android devices from the App Store and Google Play, respectively. The informational materials in the training include flyers, posters, instructional videos, a Facebook group, and formal and informal meetings (Multimedia Appendix 1).

Videos can be found on the web [28].


Nursing students use the TOPP-N app (Figure 1) [21] daily and must complete e-reports before and after their shift in clinical practice. The e-reports comprise checklists built on AssCE [27], each of which is accompanied by a scale on which the students indicate their need for guidance in specific learning activities. The checklist offers the possibility of further written elaboration.

Figure 1. Screenshot of the Technology Optimized Practice Process in Nursing app.
View this figure

Nurse preceptors are required to give feedback on the daily performance of students and on completed e-reports through the TOPP-N app [21]. Feedback is given every day after the students have completed their reports.

Nurse educators follow the students’ progress through the TOPP-N app [21] and intervene as necessary, when automatically prompted by the guidance app.

Summative assessment is done in the app with the help of the digital AssCE [27] in weeks 3 to 4 and 6 to 8 of the students’ clinical practice. The summative assessment is conducted as an individual meeting (physical or virtual) in which students, nurse preceptors, and nurse educators participate.

Delivery of Intervention

The intervention is delivered digitally by the TOPP-N app [21]. Daily guidance is delivered by nurse preceptors and, when necessary, by nurse educators. Summative assessment is delivered by nurse preceptors and nurse educators in collaboration with nursing students.

Modes, Place, and Frequency of Intervention Delivery

The intervention is delivered digitally through the TOPP-N app [21] and in virtual and face-to-face meetings between nursing students, nurse preceptors, and nurse educators, in 1 nursing home in Oslo, Norway, and 2 nursing homes in Kristiansand, Norway. It is delivered daily during 6 to 8 weeks of clinical practice.

Intervention Monitoring

The intervention is monitored digitally by oversight of the participants’ activities and their interactions in the TOPP-N app [21].

Criteria for Modifying or Discontinuing an Intervention

Chan et al [17] highlighted the necessity of carefully considering when an intervention should be modified or stopped, and progression criteria are necessary elements of feasibility and pilot studies to evaluate whether a full-scale trial is viable [26]. Avery et al [29] proposed a traffic light system for progression criteria: green (go, indicates the criteria are met); amber (amend, indicates a need for change and adjustment); and red (stop, indicates that one should not move to a larger trial). Following Avery et al [29], the progression criteria are as follows: green (intervention proceeds as planned, and no problems are discovered), amber (problems are discovered and appropriate remedies are devised, and the intervention proceeds with close monitoring), and red (problems cannot be amended, and the intervention does not continue).
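The traffic light progression criteria described above amount to a simple decision rule; a minimal sketch (illustrative only, with labels paraphrasing Avery et al [29] as applied in this study):

```python
from enum import Enum

class Progression(Enum):
    GREEN = "go: intervention proceeds as planned"
    AMBER = "amend: proceed with remedies and close monitoring"
    RED = "stop: do not move to a larger trial"

def progression_decision(problems_found: bool, remediable: bool) -> Progression:
    """Map feasibility findings onto the traffic light progression criteria."""
    if not problems_found:
        return Progression.GREEN  # no problems discovered
    if remediable:
        return Progression.AMBER  # problems discovered, remedies devised
    return Progression.RED  # problems cannot be amended
```

In practice, of course, "problems found" and "remediable" are judgments made by the research team rather than booleans, but the rule captures the three-way branching of the criteria.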

Adherence to the Intervention Protocol

Adherence describes participant behavior that aligns with the intervention protocol assigned to the participants [17]. Poor adherence may complicate statistical analysis, reduce the statistical power of the study, and result in underestimation of the efficacy of the intervention [30]. In this study, the guidance app has a built-in system that reminds participants to fill out e-reports and complete other required tasks.

Concomitant Activities and Other Activities Outside of Intervention

No limitations are imposed on the participants in relation to concomitant activities or other activities outside of the intervention.


The primary outcome is critical thinking. The secondary outcomes are self-efficacy, clinical learning environment, metacognition and self-regulation, technology acceptance, and competence of mentors. Table 1 provides a detailed overview of these outcomes.

Table 1. Outcomes.

Primary outcome

  • Critical thinking: Purposeful and self-regulatory judgment resulting in interpretation, analysis, evaluation, and inference [31].

Secondary outcomes

  • Self-efficacy: Self-perceived ability to perform a task in a competent and effective manner [32,33].
  • Satisfaction with the clinical learning environment: A clinical learning environment that provides students with professional development and is a foundation for a supervisory relationship [34].
  • Technology acceptance: Acceptance or rejection of the use of new technology by users, with a focus on users’ perceptions, attitudes, and intentions in the use of new technology [35].
  • Use of metacognitive processes: Use of metacognitive processes in clinical practice [36].
  • Mentors’ competence: Level of competencies of mentors in clinical practice [37].

Participant Timeline

The participant timeline is shown in Figure 2.

Figure 2. Participant timeline.
View this figure
Sample Size

Traditional sample size calculations are not suitable for feasibility studies, as their aim is not hypothesis testing [38,39], yet a feasibility study requires a proper sample size justification [39], especially in relation to its objectives [40]. Lancaster et al [40] proposed 30 participants as a rule of thumb, whereas Billingham et al [41] estimated that between 12 and 50 participants are sufficient. For this study, we have decided to recruit a total of 32 nursing students (16 from LDUC and 16 from UiA) and 27 nurse preceptors (13 from LDUC and 14 from UiA).


The participants will be recruited from first-year undergraduate nursing students at LDUC, Oslo, Norway, and UiA, Kristiansand, Norway. Drawing on the recommendations for recruitment in health research, the recruitment process will provide sufficient information about the overall study in meetings with the target group and will highlight its aim and benefits for participants [42]. To boost recruitment, we intend to maintain a prominent presence on social media.

Data Collection Methods

Data for the primary outcome will be collected using the Norwegian version of the Health Science Reasoning Test [43]. Data for the secondary outcomes will be collected using the Norwegian version of the Self-Efficacy in Clinical Performance [44], Clinical Learning Environment, Supervision and Nurse Teacher [45,46], Technology Acceptance Model 3 [47], Mentors Competence Instrument [37], and Self-Regulation and Metacognition in Clinical Practice instruments (self-created questionnaire for the purposes of this study).

In addition, data will be gathered from the TOPP-N app [21], and questionnaires will solicit self-reported sociodemographic data and evaluations of participation in the feasibility study. All data collection instruments will be administered digitally.

Data will also be collected through focus group interviews with nursing students, nurse preceptors, and nurse educators. The interviews will be conducted separately for each group using an interview guide and will last 60 minutes. One researcher will be the interviewer and the other a moderator. All focus groups will be conducted digitally using Zoom (Zoom Video Communications, Inc) videoconferencing version 5.6.5 [48]. Video from the interviews will be recorded, but only the sound file will be stored, and the video will be deleted at the end of the focus group interviews. Table 2 provides an overview of the data collection instruments. Textbox 1 presents the planned focus group interview topics.

Table 2. Overview of data collection instruments.

HSRTa
  • Multiple-choice test, 38 questions
  • Measurement of overall level of critical thinking
  • Measurement of detailed scores of analysis, interpretation, inference, evaluation, explanation, induction, deduction, and numeracy
  Internal validity: Cronbach α of .76 for the overall instrument [49]

SECPb
  • Measurement of self-efficacy on 37 items in 4 subscales: assessment, diagnosis and planning, implementation, and evaluation
  Internal validity: Cronbach α for each item ranging from .90 to .92 [44]

CLES+T2c
  • Measurement of satisfaction with the clinical learning environment on 45 items in three major themes: learning environment, supervisory relationship, and role of the nurse teacher
  Internal validity: Cronbach α for each item ranging from .81 to .98 [45,46]

TAM 3d
  • Measurement of acceptance of new technology on 37 items
  Internal validity: Cronbach α for each item ranging from .77 to .87 [50]

SMCPe
  • Measurement of level of use of self-regulation and metacognitive processes; measured on 11 items
  Internal validity: data not available

Sociodemographic data
  • Year of birth, sex, last completed education, length of employment in health care with direct patient contact
  Internal validity: data not available

Evaluation of the feasibility study
  • Evaluation of participation in the feasibility study
  Internal validity: data not available

aHSRT: Health Sciences Reasoning Test.

bSECP: Self-Efficacy in Clinical Performance.

cCLES+T2: Clinical Learning Environment, Supervision and Nurse Teacher.

dTAM 3: Technology Acceptance Model 3.

eSMCP: Self-Regulation and Metacognition in Clinical Practice.
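The Cronbach α values in Table 2 come from the cited validation studies; for readers unfamiliar with the statistic, a minimal sketch of the standard formula, α = k / (k − 1) × (1 − Σ item variances / variance of total scores), applied to hypothetical item scores:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: one list per item, each holding the scores of all
    respondents on that item. Uses population variances throughout.
    """
    k = len(item_scores)                 # number of items
    n = len(item_scores[0])              # number of respondents

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in item_scores) / var(totals))

# Hypothetical responses: 3 items answered by 4 respondents
items = [
    [4, 3, 5, 4],
    [4, 2, 5, 3],
    [5, 3, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # prints 0.86
```

When items are perfectly correlated, the formula returns 1.0; lower values indicate that items covary less strongly with the total score.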

Nursing students

  • Platform from which Technology Optimized Practice Process in Nursing (TOPP-N) has been used
  • Use of TOPP-N
  • Contribution of TOPP-N to performance of students in clinical practice
  • Contribution of TOPP-N in receiving guidance from nurse preceptors
  • Future needs for support when using TOPP-N
  • Experience with filling out questionnaires and taking the critical thinking test
  • Recruitment to intervention

Nurse preceptors

  • Platform from which TOPP-N has been used
  • Use of TOPP-N
  • Contribution of TOPP-N in student guidance
  • Comparison of using TOPP-N in guidance of students with earlier guidance without TOPP-N
  • Future features and needs in TOPP-N
  • Contribution to the research, including filling out questionnaires and time use
  • Recruitment to intervention

Nurse educators

  • Platform from which TOPP-N has been used
  • Use of TOPP-N
  • Contribution of TOPP-N in student guidance
  • Comparison of using TOPP-N in guidance of students with earlier guidance without TOPP-N
  • Future features and needs in TOPP-N
  • Contribution to the research, including filling out questionnaires and time use
  • Recruitment to intervention
Textbox 1. Planned topics in focus group interviews.

Participant Retention

To maintain interest in the study, announcements will be placed on the learning management platform Canvas (Instructure, Inc) [51], and nurse educators will closely communicate with students to support them as necessary. A dedicated support person will also be available to the participants.

Data Management

Participants’ personal information and sociodemographic data and data from Self-Efficacy in Clinical Performance, Clinical Learning Environment, Supervision and Nurse Teacher, Technology Acceptance Model 3, and Self-Regulation and Metacognition in Clinical Practice will be collected by the Questback Management System (Questback Group AS) [52] and the results stored in the Questback system.

The Health Sciences Reasoning Test is conducted through the Insight Assessment testing system [43], a division of California Academic Press. The anonymous results are stored in the Insight Assessment system. A backup of personal data and the results of the critical thinking test and other questionnaires will be stored on a Kingston DataTraveler 2000 USB stick with AES 256-bit encryption.

Methods of Analysis

For quantitative analysis, we will use SPSS, version 26 (IBM Corporation) [53]. For qualitative analysis, we will use MAXQDA Analytics Pro 2020 version 9 (VERBI GmbH) [54].

Program Theory

The intervention is theoretically based on the concept of metacognition, which is regarded as a higher-order thinking skill and describes the cognitive process of thinking about one’s own thinking [55]. It is the ability to be aware of, reflect on, and use strategies during cognitive tasks. People who demonstrate high metacognitive abilities tend to be more focused, thoughtful, and strategic in making decisions and solving problems [56]. Thus, they view their own competence as a dynamic and formable entity, which motivates them to learn from previous knowledge and experiences and seek new solutions. Metacognition is often framed as a highly cognitive skill; however, there is a high correlation between metacognition and self-regulation, which means that metacognition also depends on motivational elements, such as goal setting, determination, and attention control [55,57].

Metacognition is used as a theoretical framework for TSGM because research shows a close interrelationship between metacognition and critical thinking [58]. Thus, we assume that, if the guidance app supports metacognitive skills of students in clinical practice, it will also have a positive effect on their critical thinking skills. The interrelationship between the two concepts can be traced to the importance of self-monitoring and self-reflection in understanding information and thinking through discussions regarding learning and problem-solving [59].

More specifically, the intervention will build on the principles of the metacognitive cycle, which comprises three main phases that together make up a metacognitive process. The first phase is planning and setting goals. Goal setting is an important part of metacognition, as it prepares students to be attentive, aware, and focused on the learning objectives and strategies they will use in pursuing them.

Experts often take more time than novices do in preparing to solve a problem [60]. As metacognitive masters in their domain, they show the wisdom of making considerable preparation before entering the second phase of the cycle.

In the second phase, the planned strategies are applied in the situation. Here, it is important not to be constrained by the planned actions and to maintain self-awareness and higher-order thinking during the activity so that ongoing decisions can be adapted to situational demands.

The third phase occurs after the situation has played out. Now, it is important to engage in critical self-evaluation and reflect on how the applied strategies dealt with situational demands and contributed to achieving the goals established in the first phase. Furthermore, self-evaluation will provide invaluable information when once again entering the first phase and planning new goals and strategies. An important part of this process is the feedback from nurse preceptors, which further stimulates critical self-evaluation and reflection.

These phases may be further influenced by factors such as task constraints, beliefs about learning, awareness of one’s own strengths and weaknesses, and individual motivation. Research also shows that metacognition, similar to most cognitive abilities, is not a wholly general ability [55], meaning that advanced metacognitive abilities are not necessarily transferred from one domain to another and that they should be practiced in the relevant context. Figure 3 is a diagrammatic representation of the program theory.

Figure 3. The program theory. AssCE: Assessment of Clinical Education; TOPP-N: Technology-Optimized Practice Process in Nursing.
View this figure

Research Ethics Approval

The study was approved on December 21, 2020, by the Norwegian Centre for Research Data (reference number: 338576).

Changes in Protocol

Protocol modifications will be communicated in subsequent publications in research journals.


Each participant signs a written informed consent form. Informed consent is obtained digitally through Questback [52]. The students are thoroughly informed (both verbally and in writing) that participation or nonparticipation in the research project will not affect their study progression or the evaluation of their performance. None of the researchers participating in this research study was involved in any form of formal teaching, evaluation, or student follow-up. This is important in preventing potential conflicts of interest [17].

Confidentiality of Information

On agreeing to participate, each participant receives a numerical code, which is their identifiable information. The numerical codes will be kept separately from the actual list of the participants.

Dissemination Policy

According to Craig et al [16], results should be actively disseminated and targeted so that they are easily understandable and accessible. The research findings will be disseminated by publishing research articles in open-access research journals. In addition, the research team of the study will ensure a strong presence on social media and promote the publication of relevant articles in the daily press, where the findings and news about the research results will be disseminated in a manner easily understandable to a wider audience. Conference participation is also part of the dissemination strategy of the study.

The feasibility study was completed in March of 2021. Quantitative data (from questionnaires) were collected at baseline, before the feasibility study began, and after its completion. We collected qualitative data (focus group interviews) in April of 2021. Table 3 provides a detailed timeline of the further stages of the analysis. This study is expected to conclude in January 2022.

Table 3. Detailed timeline of further stages of analysis.

Quantitative data
  • Calculation of means, medians, SDs, skewness, and kurtosis: August 2021
  • Reporting of sample sizes and sample demographics: August 2021

Qualitative data
  • Transcription of focus group interviews: June to July 2021
  • Analysis of focus group interviews: August to October 2021
  • Integration of qualitative and quantitative data: November 2021 to January 2022


Critical thinking is an essential skill set in nursing [61], and previous research underscores the need for more quantitative approaches to critically evaluate how critical thinking skills are developed, especially among nursing students in a clinical setting [62].

Significance of Results

The feasibility study offers the advantage of testing and fine-tuning certain parts of the main study [63].


A limitation of this study is that the intervention has many complex parts that require close monitoring and follow-up, and the feasibility study runs alongside the control group arm of the trial. Consequently, it may not be possible to use all the results to fine-tune the intervention and the trial (eg, the choice of outcome or data collection instruments). The decision to run the feasibility study alongside the control group arm of the trial was made for practical reasons related to how the curriculum and clinical practice are organized, particularly in the context of the influence of the COVID-19 pandemic on the operation of clinical practice.


The results will determine the acceptability and suitability of the intervention, the outcome measures, and the data collection strategy for a technology-supported guidance model for nursing students in clinical practice, as well as causes of dropout and obstacles to retention and adherence.


The authors thank Ørjan Flygt Landfald, Lovisenberg Diaconal University College, Oslo, Norway, for guidance in pedagogical and metacognitive processes and for cocreating the Self-Regulation and Metacognition in Clinical Practice questionnaire; the Norwegian Agency for International Cooperation and Quality Enhancement and Lovisenberg Diaconal University College for the funding; and Lovisenberg Diaconal University College for sponsoring this study. The funding source played no part in the design of the study, nor will it take part in its execution or analysis or interpretation of data.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Poster used in participant recruitment.

PDF File (Adobe PDF File), 446 KB


AssCE: Assessment of Clinical Education
LDUC: Lovisenberg Diaconal University College
TIDieR: Template for Intervention Description and Replication
TOPP-N: Technology-Optimized Practice Process in Nursing
TSGM: Technology-Supported Guidance Model
UiA: University of Agder

Edited by G Eysenbach; submitted 30.06.21; peer-reviewed by C Mather, I Shubina; comments to author 30.07.21; revised version received 09.08.21; accepted 09.08.21; published 13.10.21


©Jaroslav Zlamal, Edith Roth Gjevjon, Mariann Fossum, Simen Alexander Steindal, Andréa Aparecida Gonçalves Nes. Originally published in JMIR Research Protocols, 13.10.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.