Development and Validation of a Web-Based Survey on the Use of Personal Communication Devices by Hospital Registered Nurses: Pilot Study
Background: The use of personal communication devices (such as basic cell phones, enhanced cell phones or smartphones, and tablet computers) in hospital units has risen dramatically in recent years. The use of these devices for personal and professional activities can be beneficial, but also has the potential to negatively affect patient care, as clinicians may become distracted by these devices.
Objective: No validated questionnaire examining the impact of the use of these devices on patient care exists; thus, we aim to develop and validate an online questionnaire for surveying the views of registered nurses with experience of working in hospitals regarding the impact of the use of personal communication devices on hospital units.
Methods: A 50-item, four-domain questionnaire on the views of registered nursing staff regarding the impact of personal communication devices on hospital units was developed based on a literature review and interviews with such nurses. A repeated measures pilot study was conducted to examine the psychometrics of a survey questionnaire and the feasibility of conducting a larger study. Psychometric testing of the questionnaire included examining internal consistency reliability and test-retest reliability in a sample of 50 registered nurses.
Results: The response rate for the repeated measures was 30%. The Cronbach coefficient alpha was used to examine internal consistency reliability, and in three of the four question groups (utilization, impact, and opinions) the alpha values were very high, suggesting that the questions in each group were measuring a single underlying theme. The Cronbach alpha value for the questions in the performance group, which describe the use of personal communication devices while working, was lower than those for the other question groups; this may indicate that the assumptions underlying the Cronbach alpha calculation were violated for this group of questions. The Spearman rho correlation was used to determine test-retest reliability, which was strong for the majority of the questions. The average test-retest percent agreement for the Likert-scale responses was 74% (range 43-100%). Accounting for responses within 1 SD on the Likert scale increased the agreement to 96% (range 87-100%). Missing data ranged from 0% to 7%.
Conclusions: The psychometrics of the questionnaire showed good to fair levels of internal consistency and test-retest reliability. The pilot study demonstrated that our questionnaire may be useful in exploring registered nurses’ perceptions of the impact of personal electronic devices on hospital units in a larger study.
JMIR Res Protoc 2013;2(2):e50
- cellular phones;
- electronic mail;
- text messaging;
- medical staff/psychology;
- attitude of health personnel;
- survey methodology;
- questionnaire design
The use of personal communication devices for activities such as Internet surfing, text messaging, and emailing has increased in recent years. Electronic tools such as basic cell phones, enhanced cell phones (smartphones), and tablet computers are becoming mainstream, and numerous benefits of personal communication devices are cited in the literature. They provide clinicians with rapid access to medical references and patient information. They are used for medical consultation, documentation, and patient education, and applications have been created for many clinical specialties.
There is much debate about whether online distractions, regardless of whether they are personal or professional in nature, can prove hazardous in medical settings, potentially distracting health care workers from patient care (termed “distracted nursing”). For example, the Joint Commission on Accreditation of Healthcare Organizations and the United States Pharmacopeia, through MEDMARX, analyzed medication error reports and found that 43% of medication errors in hospitals were attributable to workplace distractions. The ECRI Institute, a nonprofit health care research organization, publishes an annual top 10 technology hazards list. The ninth hazard on this list for 2013 was “Caregiver distractions from smartphones and other mobile devices”. Concern among surgeons about distraction caused by the use of cellular telephone technology in the operating room led the American College of Surgeons to issue a statement (ST-59), which says, “the undisciplined use of cellular devices in the [operating room]—whether for telephone, e-mail, or data communication, and whether by the surgeon or by other members of the surgical team—may pose a distraction and may compromise patient care”.
Rapidly advancing technology fosters multitasking because it offers multiple simultaneous sources of input. A key concern about such multitasking is that this increase in simultaneous media consumption decreases the amount of attention paid to each device.
Several studies have reported the effects of using personal communication devices while driving. It has been reported that drivers react more slowly to brake lights and stop signs during phone conversations, and additional driving studies have shown that peripheral vision is reduced when using a cell phone. These studies have led to numerous new state laws prohibiting the use of personal communication devices for calling, texting, or both while driving.
Based on this research and on findings that interruptions to patient care are common in hospitals and may contribute to errors in that care, we chose to initiate a study into how “distracted nursing” might affect patient care in a hospital setting. For the purpose of this research, we propose the development and validation of an online survey to identify the concerns and opinions of registered nurses with experience of working in hospitals regarding the effects of using personal communication devices while on duty in hospital units. This paper sets out the process for the development and initial testing of an online survey. We hypothesize that, in the future, the information obtained from this survey could be used to shape policies for the use of personal communication devices on hospital units to minimize risks to patients.
Questionnaire Research and Development Process
In our literature review, we searched the Cumulative Index to Nursing and Allied Health, PubMed, and Dissertation Abstracts International databases for relevant publications on distraction and communication devices related to health care workers in the 2003-2012 period. As the term “cellular phone” was first introduced as a MeSH keyword in 2003, we selected this year as the starting point. The search terms for the present research were as follows: “cellular phones”, “Internet”, “computers, portable”, “electronic mail”, “text messaging”, “nurses”, “healthcare workers”, “distraction”, “medical staff/psychology”, and “attitude of health personnel”.
We reviewed previous questionnaires used in surveys related to distraction and personal communication devices and included the following as items in our survey: opinions about how cell phones impact team effectiveness, registered nurses’ opinions about cell phone use and patient safety, and frequency of Internet use for personal activities at work.
A total of 64 potential survey items were identified based on criteria outlined by DeVellis. These survey items were reformulated, assessed, refined, or rejected in collaboration with a nursing team with knowledge of questionnaire methodology, consisting of 3 faculty members at nursing schools, all of whom had worked as nurses for more than 20 years. The items were also discussed with a unit manager and 2 hospital nurses, all of whom were certified clinical nurse specialists and had worked for more than 10 years on their nursing units.
Format of the Questionnaire
The 50-item pilot questionnaire adhered to the recommendations of Bowling. The first item in the questionnaire asked respondents to choose a 4-digit identification number that they would be asked to provide again in the second survey, allowing us to match their test-retest surveys while keeping them anonymous.
A total of 10 survey items used a drop-down menu asking respondents to select one answer, including the demographics: gender, age, race/ethnicity, type of workplace setting, and primary nursing practice position. The demographic questions were developed in accordance with the Forum of State Nursing Workforce Centers for Standardizing Nursing Workforce Data.
There were 2 types of questions following the initial demographic questions. The first type aimed to gain straightforward numerical data through the use of three different 4-point Likert scales: (1) “strongly disagree”, “disagree”, “agree”, and “strongly agree”; (2) “>5 times”, “2-5 times”, “once”, and “never”; and (3) “strongly negative”, “slightly negative”, “slightly positive”, and “strongly positive”. No scale had a “don’t know” option. The second type of question probed the reasoning behind respondents’ answers to specific Likert-scale questions (“If yes, please describe”) or asked them to state recommendations.
The other 5 items using drop-down menus, where respondents were asked to select one answer, sought their opinion of personal communication device use while at work (2 items), hospital policy concerning the use of these devices in hospital units, how concerned their employer is about nonwork-related use of personal communication devices, and their awareness of their employer disciplining or terminating the employment of a nurse for excessive personal communication device use while working.
The survey then included a total of 32 Likert-type questions that consisted of: 14 questions on personal communication device use on hospital units; 9 questions on the effects of these devices on job performance, and specifically, how they affect patient care; 6 questions on how the devices influence coordination and teamwork; and 3 questions on how nurses think patients or other health care providers perceive a nurse using their personal communication device during work time.
Following the Likert-type questions, there were 7 open-ended questions. One question asked respondents to describe their hospital’s personal communication device policy. Another queried respondents’ opinions about how these devices should be used in hospital units. A further question asked respondents to address the overall impact of these devices on nursing units. Two of the questions asked for examples of how personal communication devices had positively or negatively affected a nurse’s performance. Another asked respondents whether they were aware of a situation where their employer had disciplined or terminated the employment of anyone for excessive use of a personal communication device at work and, if so, to describe the situation. The last question asked respondents for any additional comments about the use of personal communication devices in hospital units.
The validation process is set out in, and was based on criteria proposed by Terwee et al. First, the items for inclusion were developed from a content analysis of the literature, as described above. Informal interviews with a small convenience sample of registered nurses using personal communication devices on general medical-surgical units took place to identify themes relevant to nurses working in clinical settings. A team of 3 nursing experts assessed the initial 64 items for face validity and content validity. Finally, the number of items was reduced to 50 for the pilot online questionnaire.
Online Pilot Survey
Following the validity testing, an online pilot survey was conducted in November 2012, with 50 registered nurses associated with acute care settings in Honolulu, Hawaii and the San Francisco Bay area in California. Only registered nurses who had worked in a hospital unit for at least 5 months in the previous 2 years were eligible to participate in the survey.
For the pilot online survey, two versions of the same questionnaire were created, with questions randomly arranged within blocks of grouped questions. The registered nurses were invited to participate in the survey and asked to complete two separate online questionnaires, accessed via hyperlink emails sent 1 week apart. The interval between the two questionnaires was set at 1 week because this limits the recall of responses but does not generally allow enough time for respondents to have altered their attitudes or behaviors. The random arrangement of questions within each block was intended to check that there was no order-effect bias.
No literature was found with study settings similar to our pilot project. A sample size could not be calculated from power analysis, but a sample size of 12 per group is a useful rule of thumb based on experience, according to Julious. A sample of 50 registered nurses was chosen for the pilot study, allowing for a conservative 50% nonresponse rate. We planned to use data collected from the pilot study to determine power and sample size for a larger online survey.
The pilot online survey data were downloaded to Excel (version 14.2.5, Microsoft Corporation), and statistical analyses were performed using SPSS version 21 (IBM Corp). Descriptive statistics were used to examine missing data and floor and ceiling effects. Spearman correlation coefficients were used to examine test-retest reliability for agreement, with the level of significance set at P<.05. Cronbach alpha was used to examine the grouped question items for internal consistency reliability, and kappa scores were used to measure inter-rater reliability.
The online questionnaire comments were analyzed using content analysis and the meaning condensation method introduced by Kvale and Brinkmann, in which the main sense of what is said is rephrased into more succinct formulations.
Based on the Code of Federal Regulations 45 CFR 46 (2), this online pilot study was exempt from institutional review board approval. Permission to conduct the exempt study was granted by the University of Hawaii Human Subjects Committee on September 13, 2012. All participants were informed in writing of the purpose of the study, that participation was voluntary, and that responses were anonymous.
The overall response rate for the pilot survey is shown in. Of the 50 invited respondents, 15 (30%) completed both the test and retest online surveys. also presents background information on the participants in the pilot surveys. The mean age of the 15 respondents answering both online surveys was 49.23 (SD 10.2) years. Respondents who completed both surveys were primarily female (13/15, 87%); the remaining 35/50 (70%) invited participants did not complete both surveys.
Face and Content Validity
Based on the responses to the open-ended questions, the respondents in the pilot online survey understood the questions. Respondents considered the questions relevant, and none found areas lacking. Following discussions with experienced colleagues in nursing research methodology, we made minor adjustments to phrasing and response options in multiple-response questions.
Reliability and Agreement
In the pilot survey, 19/32 (59%) of the 4-point Likert-scale questions drew responses covering all response categories. However, the variability in answers was not spread evenly across all questions. Test-retest reliability for respondents who completed both questionnaires was determined by intrarespondent percent agreement for each of the Likert-type questions. The results indicate that 74% (range 43-100%) of the answers to the questions were the same for both questionnaires; accounting for responses within 1 SD on the Likert scale increased the agreement to 96% (range 87-100%).
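The percent agreement measure above is straightforward to compute. A minimal sketch follows; note that treating "within 1 SD" as "within one scale point" is our simplifying assumption, and the function name and data are illustrative:

```python
# Hypothetical sketch: intrarespondent agreement between test and retest
# answers coded 1-4, exact or within a tolerance of one Likert point.
def percent_agreement(test, retest, tolerance=0):
    """Share (in %) of paired answers differing by at most `tolerance` points."""
    pairs = list(zip(test, retest))
    hits = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return 100.0 * hits / len(pairs)

t1 = [4, 3, 2, 1, 4, 3]
t2 = [4, 2, 2, 1, 3, 3]
exact = percent_agreement(t1, t2)        # identical answers only
loose = percent_agreement(t1, t2, 1)     # answers within one scale point
```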
Cohen’s kappa was calculated for each Likert-scale question on the online test/retest pilot survey to assess the items’ inter-rater agreement. Comparing the answers from the two versions of the questionnaire, the mean Cohen’s kappa for the Likert-scale questions was .41 (range −.057 to .67). A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance.
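Cohen's kappa corrects observed agreement for the agreement expected by chance. A self-contained sketch of the standard formula (our own implementation, not the study's SPSS procedure; data would be the paired Likert codes):

```python
# Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e is chance agreement from each rating's marginal frequencies.
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Kappa for two sets of categorical ratings of the same items."""
    n = len(ratings1)
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_e = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)  # undefined if p_e == 1 (constant ratings)
```

Kappa shrinks toward 0 when the marginal distributions make chance agreement likely, which is why, as noted above, even high percent agreement can yield a modest kappa in a small sample.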
The discrepancy between the relatively low kappa results and the high percent agreement is most likely due to the sensitivity of kappa to small sample sizes (n=15). It is also possible that the registered nurses taking the survey changed their attitudes during the 1-week interval between the two questionnaires. Such a shift could have been subtle, but any difference results in a lower kappa value.
For the purposes of measuring survey reliability using Cronbach alpha, the survey was organized into 4 groups: (1) utilization of personal communication devices in nursing units, (2) effects of personal communication devices on registered nurses’ performance, (3) registered nurses’ opinions about personal communication device use and patient safety, and (4) registered nurses’ knowledge about hospital policy concerning personal communication device use in hospital units.
Several modifications were required to Group 2 questions. The final 3 questions (see, Group 2 questions) were excluded from the reliability analysis because nearly all respondents gave a response of “Never” to these questions. This lack of variation in the responses made it impossible to calculate reliable statistics, so the analysis used only the 6 remaining questions from Group 2. Of the 6 remaining questions, 3 dealt with negative events, whereas the other 3 dealt with positive events. To ensure consistency in the interpretation of the responses, the response scale was reversed for the 3 negative questions so that when calculating reliability statistics for this group, a positive response to a negative question was counted the same way as a negative response to a positive question. The 7 open-ended questions were excluded from the reliability analysis, as they did not share a common scale with the other questions.
Reliability of Grouped Questions
Cronbach alpha (ranging between 0 and 1) was selected as the most appropriate statistical analysis for testing the reliability of grouped questions because it can be used on ordinal data, such as the Likert scale. Higher alpha values imply a greater degree of correlation or inter-relatedness, with .7 generally considered a minimally acceptable value for reliability.
shows the Cronbach alpha statistic for each of the four sets of grouped questions. For each survey, two analyses of the datasets were run: (1) inclusion of all survey respondents (ie, including respondents who completed only one questionnaire), and (2) inclusion of only those respondents who completed both questionnaires. The unpaired alpha values make use of all of the available data to produce a reliability estimate based on as many responses as possible, but may be susceptible to selection bias if there is a systematic pattern in the types of respondents who completed both surveys. The paired values eliminate this concern by ensuring that the same set of respondents is included in both Week 1 and Week 2 calculations, but sacrifice sample size to achieve this. Before computing these reliability statistics, all the responses to these questions were converted to a simple number scale, assigning integer values to each response (“never”, “strongly disagree”, or “strongly negative” was converted to 1; “once a day”, “disagree”, or “slightly negative” was converted to 2; “2-5 times per day”, “agree”, or “slightly positive” was converted to 3; “>5 times per day”, “strongly agree”, or “strongly positive” was converted to 4). No attempt was made to quantify the answers or the gaps between them precisely; the scale was kept simple for convenience.
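The coding-then-alpha procedure described above can be sketched as follows. This is a minimal illustration using the standard Cronbach alpha formula, alpha = k/(k-1) * (1 - sum(item variances)/variance of totals); the mapping shown covers only one of the three response scales, and the data are invented:

```python
# Hedged sketch: map 4-point Likert labels to integers 1-4, then compute
# Cronbach alpha for a group of questions (one response list per question,
# respondents in the same order in every list).
import statistics

AGREEMENT = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}

def cronbach_alpha(items):
    """Cronbach alpha over k coded items answered by the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    item_variance = sum(statistics.pvariance(q) for q in items)
    return k / (k - 1) * (1 - item_variance / statistics.pvariance(totals))

# Two perfectly consistent items answered by four hypothetical respondents.
q1 = [AGREEMENT[a] for a in ["strongly disagree", "disagree", "agree", "strongly agree"]]
q2 = [AGREEMENT[a] for a in ["strongly disagree", "disagree", "agree", "strongly agree"]]
alpha = cronbach_alpha([q1, q2])  # identical items -> alpha = 1.0
```

As the text notes for the Group 2 questions, items keyed in opposite directions must be reverse-coded (5 minus the value on this scale) before alpha is computed, or the item covariances cancel and alpha collapses.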
shows that the measured reliability of 3 of the 4 question groups (utilization, impact, and opinions) was high, indicating that responses in these groups tended to be highly correlated. This suggests that the questions were measuring a single underlying theme. For these 3 groups, the reliability remained high in both weeks, and there was little variation based on whether all responses or only paired responses were included.
The Cronbach alpha value for Group 2 performance questions, describing use of personal communication devices by registered nurses while working, was notably lower than that for other question groups. The Week 2 values indicated that responses were fairly consistent, but the Week 1 responses displayed a very low Cronbach alpha value of .08, or −.35 if only paired responses were examined.
Because of the lack of variation in responses, 3 questions from Group 2, related to medical errors, and 1 question from Group 1, which asked respondents if they posted on social networking sites while working, were excluded from the survey.
Floor and Ceiling Effect
The floor and ceiling effect is detected using measures of central tendency of the data, including the mean and median, as well as the range, SD, and skewness; a score would be considered acceptable if the values are distributed in a normal or bell-shaped curve, with the mean near the midpoint of the scale. Some criteria for floor and ceiling effects recommend a skewness statistic between −1 and +1 as acceptable for eliminating the possibility of these effects.
Responses to the Likert scales revealed that 16/32 (50%) questions on the pilot online survey had a ceiling effect (ie, higher than the recommended maximum of 15%). However, given the small sample size of the pilot online survey, floor/ceiling effect criteria could not be applied to the survey results.
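The two screening rules above (skewness within −1 to +1, and no more than 15% of responses at a scale extreme) can be sketched directly; the function names, thresholds as defaults, and data here are illustrative:

```python
# Hedged sketch: Fisher-Pearson skewness and a simple ceiling-effect flag
# for 4-point Likert responses, using the thresholds named in the text.
def skewness(xs):
    """Population (Fisher-Pearson) skewness: m3 / m2**1.5."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def has_ceiling_effect(xs, scale_max=4, threshold=0.15):
    """True if more than `threshold` of answers sit at the top of the scale."""
    return sum(x == scale_max for x in xs) / len(xs) > threshold

answers = [4, 4, 4, 3, 2, 4, 4, 1]
flagged = has_ceiling_effect(answers)             # 5/8 at the maximum -> True
acceptable_skew = -1 <= skewness(answers) <= 1
```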
A total of 33/50 respondents (66%) from the pilot study added comments (some example comments are included in).
Responses to the online pilot survey were segmented by demographic variables (eg, gender, ethnicity, age, and job title). We performed regression analyses to determine whether any divergent attitudes and/or behaviors based on demographics were statistically significant. None reached statistical significance, likely because of the small sample size.
In the online pilot survey, the first dataset had an average percentage of missing data of 1.2% and the second dataset of 0.04%.
In the paired composite data, there were 35 missing answers, resulting in 0.03% missing data. However, one respondent was responsible for 17/35 (48%) of the missing answers in the paired composite data. Eliminating this respondent reduced the sample size to 14 pairs and the percentage of missing data to 0.01%. In analyzing the data, listwise deletion was used because the percentage of missing data was small (<5%) and listwise deletion has been shown to provide unbiased estimators.
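Listwise deletion, as applied above, simply drops any respondent with at least one missing answer before analysis. A minimal sketch (the function name and data are ours; `None` marks a missing value):

```python
# Hedged sketch of listwise deletion: keep only complete response rows.
def listwise_delete(rows):
    """Return only the rows with no missing (None) answers."""
    return [row for row in rows if all(v is not None for v in row)]

responses = [
    [4, 3, 2],     # complete
    [4, None, 2],  # one missing answer -> entire respondent dropped
    [1, 2, 3],     # complete
]
complete = listwise_delete(responses)
```

The trade-off is sample size: with heavier missingness, deleting whole rows discards usable answers, which is why the text notes that the approach is justified here only because missing data were under 5%.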
The present study describes a validation process to examine the psychometrics of a newly developed questionnaire concerning the views of registered nurses on the impact of the use of personal communication devices on nursing units. The initial findings suggest that the questionnaire is a reasonable way to assess registered nurses’ perceptions of the use of personal communication devices; however, more work is needed to test it on a larger sample.
The test-retest results were found to be only fair to moderate. This can be attributed to two factors: (1) a small test-retest sample increases the statistical error and results in a low Cohen’s kappa value, and (2) many questions relate to experiences and attitudes regarding personal communication device issues, which may change over a 1-week period. The changes within 1 SD from test to retest on the Likert scale indicate that, even with the small sample, the stability is acceptable.
No specific questions had more than 7% missing data, and the percentage of missing data did not increase toward the end of the surveys, indicating that respondents did not consider the questionnaire to be too lengthy.
Unknown factors could confound the differences identified in this study. Despite the small sample size, significant differences were found in specific areas where a focus on utilization issues and guidelines for use on nursing units would be expected to have an impact. Although none of the units where the respondents were working had formal policies on personal communication device use that were enforced, it is considered likely that the guidelines and general focus on utilization issues could be major contributors to the differences among registered nurses based on the surveys. As such, it is plausible that our questionnaire will detect differences among a larger sample of surveyed nurses.
With respect to construct validity, most of the questions were direct (ie, they asked directly about the construct they were intended to measure). This increases the chance of the instrument actually measuring the desired construct. The comments added by the respondents indicated that the questions had been understood and responded to as expected. According to Terwee et al, conducting surveys among health care professionals involves a fairly homogeneous group to whom personal communication devices are well known. This also increases the chances of the instrument measuring the desired concepts.
Tests for criterion validity may also be conducted during the validation of questionnaire instruments. However, this was not applicable for the present instrument because there is no objective standard against which to test the correlation.
Another weakness of the validation study was its small sample size. Because measuring the relation between nonresponse and the accuracy of a survey statistic is complex and expensive, until recently few rigorously designed studies provided empirical evidence documenting the consequences of lower response rates. Keeter et al compared the results of a 5-day survey employing the Pew Research Center’s usual methodology (with a 25% response rate) with those from a more rigorous survey conducted over a much longer field period, achieving a higher response rate of 50%. In 77 of 84 (91%) comparisons, the two surveys yielded results that were statistically indistinguishable. Among the items that manifested significant differences across the two surveys, the differences in the proportions of people giving a particular answer ranged from 4 to 8 percentage points. Holbrook et al assessed whether lower response rates are associated with less unweighted demographic representativeness of a sample. By examining the results of 81 national surveys with response rates varying from 5% to 54%, they found that surveys with much lower response rates were only minimally less accurate. Nevertheless, the small initial sample size both increased the statistical error in the test-retest analysis and prevented extensive subanalyses in the comparison between the two tests. Further testing with a much larger sample would be necessary to overcome this limitation.
The low Cronbach alpha values for Group 2 questions may indicate that the assumptions underlying the Cronbach alpha calculation have been violated for this group of questions. In other words, the questions may not all be measuring the same underlying dimension, or the coding for some questions may need to be reversed. However, examining this set of questions in a variety of alternative ways (eg, not inverting the scale for negative questions) did not consistently increase the alpha values. Some of these modifications increased the estimate of reliability for Week 1 for at least some subsets of questions, but generally at the expense of the reliability for Week 2 or other question subsets. These results indicate that it may not be logical to consider the questions in this group as representative of a single underlying dimension.
Alternatively, the low Cronbach alpha score associated with these questions may reflect a tendency among respondents to underrecognize their own distraction, or a discomfort in reporting their own use. Nurses’ reported observations of other registered nurses’ use of personal communication devices were higher than the self-reported results. This discrepancy may be because nurses did not believe that they were distracted. In a study of drivers, Lesch and Hancock found that drivers did not recognize that their driving ability worsened when they were using a cell phone. Strayer et al also noted that drivers using cell phones described other drivers with cell phones as driving inconsistently, but believed that their own performance was unaffected, even when results showed otherwise. This led the researchers to believe that cell phone use may make drivers unaware of their own attention deficits. In the pilot survey, the participants’ ambivalence about answering these difficult questions may therefore have been responsible for the low Cronbach alpha scores, but the importance of the answers argues for keeping the questions on the survey. The self-reported rate versus reporting of other nurses’ behavior will provide a useful comparison in future studies.
The Cronbach alpha for the utilization scale (Group 1) was higher than that for the performance scale (Group 2). This may be because it is easier to identify specific behaviors associated with the use of personal communication devices than attitudes and values associated with job performance. Although the items in the performance scale appear related to one another and to the concept of behavior associated with job performance, they had low inter-item correlations, and the alpha for the factor was lower than desirable, implying that the items may not in fact belong together. These results may reflect the complex role of performance in nursing and the fact that although job performance is an important concept, it may not be possible to encapsulate it easily in a question.
Several limitations of the present study should be acknowledged. Self-selection bias affects any survey that allows respondents to decide whether to participate. To mitigate this potential problem, we compared the characteristics of the respondents in our study with those of California-based registered nurses and found that our respondents were not systematically different from the state’s average registered nurse in terms of gender, age, race/ethnicity, job title, and experience with personal communication devices. However, this is a potential problem should the survey be used elsewhere and more widely, and similar tests will need to be carried out to ensure there is no self-selection bias. Discussions during the validation process suggested that some responses might be skewed by the fact that respondents may prefer not to admit that they made a medical error or that their performance had suffered as a result of their use of personal communication devices. For this reason, questions were rewritten for the survey questionnaire to ask whether respondents had seen other registered nurses making a medical mistake because of distraction caused by a personal communication device or whether they knew of a major medical error that had been caused by a registered nurse who was similarly distracted. By reducing self-reporting in this manner, it was hoped that underreporting of “bad” behavior resulting from such distraction would be lessened.
In addition, the validation process was conducted in the United States. As the use of personal communication devices varies across cultures and professions, the questionnaire may not be easily translated to other countries or professions. However, we believe that it will be useful as a model for developing different national questionnaires.
The work set out in this paper is a preliminary stage in a much larger program of work. We have reported the initial development and pilot testing of an online survey to assess registered nurses’ views on the use of personal communication devices. The next step is to make three specific changes and then use the survey in a proposed study of California registered nurse licensees. First, the definition of “personal communication device” needs to be clarified. There may have been some confusion about whether a hospital-owned electronic device should be considered a personal communication device. When using the survey with a larger group, we plan to include a definition of “personal communication device” at the beginning, making clear that it refers to a device owned by the nurse and used for personal business or family matters, as opposed to professional or work-related activities.
Second, the type of unit on which the registered nurse respondent works may have a bearing on the use of personal communication devices, for example, through the levels of concentration required of nurses, and should therefore be captured in the survey.
Finally, the duration of device use as well as its frequency should also be taken into consideration.
The results of the proposed study will provide data concerning the use of personal communication devices on nursing units and nurses’ opinions on the potential impact, both positive and negative, of these devices on patient safety. These data may inform hospital policies and practices concerning the use of personal communication devices on hospital units. An acceptable hospital policy needs to be based on evidence and should aim to strike a reasonable balance between work-related and personal use.
Suggestions for Future Research
In the future, it may be helpful to expand the survey with additional questions that identify the times and places where important information is transferred or interaction occurs, such as during attending rounds, the morning huddle, or at the bedside. Mitigation techniques could also be surveyed to better understand how respondents handle distraction caused by their own or others’ personal communication devices.
Knowledge about personal communication practices in hospital units is important for improving clinical practice and patient care, and a valid questionnaire helps to detect the issues that need improvement. Our results describe the development of one possible online questionnaire to establish registered nurses’ views concerning the impact of personal communication devices on hospital units. Statistical testing showed areas of both good and moderate validity. The psychometric analysis indicated that the survey has the potential to be a useful tool for assessing hospital nurses’ perceptions of personal communication device use in the workplace, and we plan to use it in a wider survey of California nurses in the future.
The authors wish to thank Dr Rachel J Katz-Sidlow, Department of Pediatrics, Jacobi Medical Center, Albert Einstein College of Medicine of Yeshiva University; Dr Jack L Nasar, Professor of City & Regional Planning and Editor of the Journal of Planning Literature at Ohio State University; Heather Shoenberger, JD, PhD candidate at the Missouri School of Journalism; Trevor Smith, CCP, LP, Brigham and Women’s Hospital, Boston, MA; and Dr Guangxiang (George) Zhang, Research Biostatistician at the John A Burns School of Medicine’s Biostatistics and Data Management Core, University of Hawaii, for providing valuable advice on the content and effective implementation of the questionnaire.
Conflicts of Interest
Edited by G Eysenbach; submitted 14.06.13; peer-reviewed by H Soliman, DF Aboul-Enein, L Kivuti-Bitok, TT Lee, B Parkhurst; comments to author 08.07.13; revised version received 21.08.13; accepted 15.09.13; published 26.11.13
©Deborah L McBride, Sandra A LeVasseur, Dongmei Li. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 28.11.2013.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on http://www.researchprotocols.org, as well as this copyright and license information must be included.