
Biofeedback in Partial Weight Bearing: Usability of Two Different Devices from a Patient’s and Physical Therapist’s Perspective

  • Remko van Lieshout,

    Affiliations Center for Physical Therapy Research and Innovation in Primary Care, Julius Health Care Centers, Utrecht, the Netherlands, Physical Therapy Research, Clinical Health Sciences & Department of Rehabilitation, Nursing Science and Sport, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands

  • Martijn F. Pisters,

    Affiliations Center for Physical Therapy Research and Innovation in Primary Care, Julius Health Care Centers, Utrecht, the Netherlands, Physical Therapy Research, Clinical Health Sciences & Department of Rehabilitation, Nursing Science and Sport, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands, Department of Health Innovations and Technology, Fontys University of Applied Sciences, Eindhoven, the Netherlands

  • Benedicte Vanwanseele,

    Affiliations Department of Health Innovations and Technology, Fontys University of Applied Sciences, Eindhoven, the Netherlands, Human Movement Biomechanics Research Group, Department of Kinesiology, KU Leuven, Leuven, Belgium

  • Rob A. de Bie,

    Affiliation Department of Epidemiology, CAPHRI School for Public Health and Primary Care, Maastricht University, Maastricht, the Netherlands

  • Eveline J. Wouters,

    Affiliation Department of Health Innovations and Technology, Fontys University of Applied Sciences, Eindhoven, the Netherlands

  • Mirelle J. Stukstette

    m.j.p.m.stukstette@umcutrecht.nl

    Affiliations Center for Physical Therapy Research and Innovation in Primary Care, Julius Health Care Centers, Utrecht, the Netherlands, Physical Therapy Research, Clinical Health Sciences & Department of Rehabilitation, Nursing Science and Sport, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands, Department of Epidemiology, CAPHRI School for Public Health and Primary Care, Maastricht University, Maastricht, the Netherlands

Abstract

Background

Partial weight bearing is frequently instructed by physical therapists in patients after lower-limb trauma or surgery. The use of biofeedback devices seems promising for improving patients’ compliance with weight-bearing instructions. SmartStep and OpenGo Science are biofeedback devices that provide real-time feedback. For successful implementation, usability of the devices is a critical aspect and should be tested from the user’s perspective.

Aim

To describe the usability, from the physical therapist’s and the patient’s perspective, of SmartStep and OpenGo Science in providing feedback on partial weight bearing during supervised rehabilitation of patients after lower-limb trauma or surgery.

Methods

In a convergent mixed-methods design, qualitative and quantitative data were collected. Usability was subdivided into user performance, satisfaction and acceptability. Patients prescribed partial weight bearing and their physical therapists were asked to use SmartStep and OpenGo Science during supervised rehabilitation. Usability was tested qualitatively with a think-aloud method and a semi-structured interview, and quantitatively with the System Usability Scale (SUS) and closed questions. The qualitative data were analyzed using thematic content analysis.

Results

Nine pairs of physical therapists and their patients participated. The mean SUS scores for patients and physical therapists were 70 and 53 for SmartStep, and 79 and 81 for OpenGo Science, respectively. Scores were interpreted with the Curved Grading Scale. The qualitative data showed mixed views and perceptions from patients and physical therapists on satisfaction and acceptability.

Conclusion

This study gives insight into the usability of two biofeedback devices from the patient’s and physical therapist’s perspective. From both perspectives, the overall usability of OpenGo Science seemed acceptable. For SmartStep, overall usability seemed acceptable only from the patient’s perspective.

Implication

The study findings could help clinicians decide which biofeedback device is appropriate for a given situation and provide information for the future development of biofeedback devices.

Introduction

Restrictions of lower-limb weight bearing are frequently instructed in patients after orthopedic trauma or surgery such as lower-limb fractures or osteotomies [1]. Weight bearing (WB) is often restricted to protect the injury site or surgical construct from excessive stress that may lead to failure [2–4]. Conversely, the rationale for gradually advancing WB is that repetitive loads can stimulate bone growth and healing [1–3]. Therefore, it is commonly recommended that a rehabilitation program include WB restrictions, which are gradually reduced as healing occurs [1,5].

Usually, physical therapists (PTs) train patients to comply with WB instructions, using verbal instructions, tactile feedback or bathroom scales [1,6]. However, these methods do not represent dynamic activities (e.g. walking) and are not accurate in training patients to comply with partial weight-bearing (PWB) instructions [5–9]. Previous research shows that it is difficult for patients to comply with WB instructions [1,5,7,9–13]. Reasons for non-compliance include (1) the difficulty of judging the load placed on the lower limb, and (2) the use of inadequate training methods to achieve controlled PWB [1,7,11,13]. Technological advances have resulted in the development of several commercially available biofeedback devices that are capable of offering real-time feedback on PWB in dynamic situations [1,5,6,11]. These devices are intended to enable PTs to assess, train and monitor WB, and aim to provide patients with feedback during daily activities. Examples of such biofeedback devices are SmartStep and OpenGo Science. Both devices are used by PTs and patients in supervised clinical settings and seem promising for improving training and compliance with WB instructions because they provide real-time feedback. The devices use different technologies to measure WB and to provide real-time feedback. Moreover, SmartStep has already been used in several studies on PWB [10,14–16]. The recently developed OpenGo Science does not interfere with natural gait, as it uses insoles without external modules attached to the body. Both devices are completely wireless. These two devices were selected based on the findings of earlier research, which showed that criterion validity for peak force measurements under the foot was acceptable in the lower weight-bearing categories for OpenGo Science and SmartStep [17].

Although these devices seem promising, the use of technology in daily practice is often not as successful as expected, due to a lack of technology acceptance among patients and healthcare professionals [18]. Several studies have outlined that involvement of the user in the development and evaluation of technology is needed for successful implementation [18–20]. Usability of a product is considered a key aspect of the interaction between user and product [21]. A mismatch between the users’ needs or expectations and the abilities of the biofeedback device can considerably undermine the user-product interaction. It is difficult to precisely define and measure usability [22]. According to the International Organization for Standardization (ISO 9241–11), usability refers to the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use [23]. This standard has been widely adopted in the field of usability testing and is commonly referred to as summative usability [22]. Another widely adopted concept is formative usability, which focuses on detecting usability problems and designing interventions to reduce or eliminate their impact [22]. In recent decades there have been several extensions to these usability descriptions, such as user-centered design (UCD) [22,24] and user experience (UX) [22,25]. Based on previous conceptual considerations, De Bleser and colleagues developed a framework specifically for testing the usability of electronic adherence monitoring devices [26]. This framework shows that usability testing of electronic monitoring devices requires testing of three aspects: user performance, satisfaction and acceptability. According to the framework, these usability aspects should be tested from the user’s subjective point of view.
In the current study the intended users are PTs and their patients after orthopedic trauma or surgery. Besides involvement of the intended users, usability of the systems should be tested in the specific context in which the biofeedback devices are used, including the tasks users intend to perform.

Therefore, the aim of this study was to describe the usability, from the physical therapist’s and the patient’s perspective, of SmartStep and OpenGo Science for feedback on partial weight bearing during supervised rehabilitation of patients after orthopedic lower-limb trauma or surgery.

Methods

Design

A convergent mixed-methods design was used to evaluate the usability of the biofeedback devices from the patient’s and PT’s perspectives. Qualitative and quantitative data were collected and analyzed within the same timeframe [27].

Participants

Participants consisted of pairs of a physical therapist (PT) and their patient. Participants were recruited by purposive sampling at two different types of physical therapy settings in the Netherlands: (1) private primary care practices and (2) secondary or tertiary care facilities, including rehabilitation centers, rehabilitation departments of nursing homes and orthopedic departments of hospitals. In total, nine pairs participated in the study: five from a primary care setting and four from a secondary/tertiary setting. The aim was to recruit a group of participants that was heterogeneous with respect to therapy setting, orthopedic condition, age, sex and education level, to facilitate a broad spectrum of usability input [26]. In the current study a sample size of at least four participants per type of end-user was considered sufficient. Defining a sample size for a usability analysis is a complex issue [28]. However, many researchers and practitioners suggest that at least four participants per type of end-user is a good starting point for discovering at least 80% of a product’s usability problems [22,29–33]. PTs were eligible if they trained patients in PWB, worked in one of the settings described above, and had not been influenced concerning the devices by a direct colleague who had already participated in the study. Patients were eligible if they fulfilled all of the following requirements: prescribed PWB (with a maximum load of 50% body weight) by a doctor because of lower-limb (orthopedic) trauma or surgery; referred to physical therapy; able to walk with crutches; aged 18 years or older; and wearing shoes sized between 38 and 45 (Continental European sizing). Patients were excluded when one or more of the following applied: diagnosed with cognitive impairments that prevented the understanding of instructions or the performance of the assigned tasks; not able to wear shoes (e.g. because of edema); having other disorders that interfered with normal gait; insufficient understanding of the Dutch language; diagnosed with central or peripheral neurologic disorders; or having a severe hearing impairment (not able to hear the feedback sounds). All participants gave written informed consent.

Ethics

The study was approved by the Medical Ethics Research Committee of the University Medical Center Utrecht (date of approval 14 April 2015, METC-protocol number 15-080/C). The study was conducted according to the principles of the Declaration of Helsinki (version: 64th WMA General Assembly, Brazil, October 2013) and in accordance with Dutch acts: Agreement on Medical Treatment Act and the Personal Data Protection Act.

Biofeedback devices

OpenGo Science (Moticon, Munich, Germany) consists of wireless sensor insoles for data measurement, analysis software for a PC, an ANT radio stick for wireless transmission, and a WB application for a smartphone. Each insole has thirteen pressure sensors covering 60% of the insole area, a triaxial acceleration sensor and a temperature sensor. The WB application for the smartphone was a prototype and not commercially available at the time of the study. This application is used to preset an upper threshold for WB and to provide real-time audio or haptic feedback. The analysis software for the PC enables recording and analysis of WB data. In live mode, the sensor insoles transmit data directly to the PC.

SmartStep (Andante Medical Devices Ltd., Beer Sheva, Israel) consists of flexible insoles containing two separate air pockets (one for the forefoot and one for the hindfoot). Tubes are used to inflate and deflate each pocket and to connect the pockets to a microprocessor control unit worn around the ankle. The control unit contains two pressure sensors and also functions as a feedback unit, producing an audio signal when a preset WB value is reached. A software application on a PC is used to preset upper and lower WB thresholds and to record and analyze WB data. In online mode, SmartStep communicates with a computer via a wireless Bluetooth USB adapter.

Table 1 shows an overview of the characteristics of SmartStep and OpenGo Science.

Data collection

Patient characteristics such as age, gender, educational level, type and location of orthopedic surgery were collected. For PTs the following characteristics were collected: age, gender, physical therapy setting, and experience in instructing PWB.

To evaluate usability, the conceptual framework of De Bleser et al. for testing electronic adherence monitoring devices was used [26]. This framework divides usability from the user’s perspective into three categories: user performance, satisfaction and acceptability. User performance refers to safe and effective use of a device [26]. Satisfaction refers to user-reported advantages, disadvantages, problems experienced when using the device, and how much users liked or disliked the device [26]. Acceptability relates to whether the device would be used in the real world [26]. User performance was studied using three methods: (1) counting errors of use during the performed tasks; (2) timing user tasks in seconds; and (3) asking participants to think aloud while performing tasks. The think-aloud method is commonly used in usability research, especially when users are confronted with a device for the first time [26,34–36]. This method makes explicit what users think and experience while performing a specific task. The think-aloud process was videotaped. To investigate aspects of satisfaction and acceptability, patients and PTs were also assessed using the think-aloud method and a semi-structured interview. An interview guide with closed questions about satisfaction and acceptability was employed to structure the interviews. During the interview, participants were asked to explain their answers. Satisfaction questions were based on the D-QUEST [37]. Acceptability questions were based on the acceptability concept described by De Bleser et al. [26,38]. The semi-structured interviews were audio-recorded.

To evaluate overall usability, patients and PTs completed the System Usability Scale (SUS) [39,40]. This questionnaire consists of 10 statements scored on a 5-point Likert scale, ranging from “strongly agree” to “strongly disagree” (score contribution per item: 0 to 4). To calculate the SUS score, the item scores are summed and multiplied by 2.5, yielding an overall score from 0 to 100, where higher scores represent better usability. Based on SUS data from a large number of studies, Sauro and Lewis produced norms for interpreting mean SUS scores: the Curved Grading Scale (CGS) [39,41]. The CGS assigns grades as a function of the SUS score and ranges from F (absolutely unsatisfactory) to A+ (absolutely satisfactory). The grades are as follows: Grade F (0–51.7); Grade D (51.8–62.6); Grade C- (62.7–64.9); Grade C (65.0–71.0); Grade C+ (71.1–72.5); Grade B- (72.6–74.0); Grade B (74.1–77.1); Grade B+ (77.2–78.8); Grade A- (78.9–80.7); Grade A (80.8–84.0); Grade A+ (84.1–100). Sauro and Lewis reported an overall mean score of 68 [39,41]. In the current study, we considered SUS scores of at least 62.7 (corresponding to at least a grade C-) to indicate acceptable usability. The SUS is a simple and reliable method, widely used in usability evaluation [22,39,41,42]. The English SUS shows good internal consistency: across studies, Cronbach’s alpha ranged from 0.85 to 0.91 [42]. Construct validity was tested in several studies by factor analysis, which showed different factor structures of the SUS [22,39–44]. Originally, the SUS was intended as a unidimensional (one-factor) measure of perceived “overall usability” [40,42]. Data from later studies indicated a two-factor structure, with items 4 and 10 aligning on a “learnability” factor separate from the remaining items, which aligned on a “usability” factor [41,43,44].
Recently, Borsci and colleagues found that the SUS acted as a unidimensional scale when administered to people with less product experience, but as a bidimensional scale when administered to users with more product experience [41]. In the current study the SUS was considered unidimensional because participants lacked experience with the biofeedback devices.
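The SUS scoring and CGS banding described above can be sketched directly; the following is a minimal illustration (the function names are ours, not from any SUS tooling, and each item response is assumed to have already been converted to its 0–4 score contribution):

```python
# Minimal sketch of SUS scoring and Curved Grading Scale (CGS) banding as
# described in the text. Assumes each of the 10 item responses has already
# been converted to a 0-4 score contribution; names are illustrative.

def sus_score(item_scores):
    """Sum the ten 0-4 item contributions and scale to the 0-100 SUS range."""
    if len(item_scores) != 10 or not all(0 <= s <= 4 for s in item_scores):
        raise ValueError("expected ten item scores in the range 0-4")
    return sum(item_scores) * 2.5

# CGS bands (lower bound, grade) from Sauro & Lewis, highest first.
CGS_BANDS = [
    (84.1, "A+"), (80.8, "A"), (78.9, "A-"), (77.2, "B+"), (74.1, "B"),
    (72.6, "B-"), (71.1, "C+"), (65.0, "C"), (62.7, "C-"), (51.8, "D"),
    (0.0, "F"),
]

def cgs_grade(mean_sus):
    """Map a mean SUS score onto its CGS grade."""
    for lower_bound, grade in CGS_BANDS:
        if mean_sus >= lower_bound:
            return grade

def acceptable(mean_sus):
    """The study's acceptability criterion: at least grade C-, i.e. SUS >= 62.7."""
    return mean_sus >= 62.7
```

For example, a mean score of 70 falls in the C band and 53 in the D band, while 62.7 is the lowest score the study counted as acceptable.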

Procedures

Patients prescribed partial weight bearing and their physical therapists were asked to use SmartStep and OpenGo Science during a physical therapy session, so that each patient-PT pair could experience both devices. Usability from the patient’s and the PT’s perspective was assessed separately to prevent bias. In the first session the researcher instructed the patient and the patient underwent the measurements; in the second session the PT instructed the patient and the PT underwent the measurements. The two sessions lasted 60 and 90 minutes, respectively. The devices were used in two orders, each occurring equally often (SmartStep then OpenGo Science, and the reverse). Pairs were allocated randomly to one of the two testing orders by having the patient blindly draw one slip of paper, without replacement.

Procedure patient’s perspective

First, the measurement procedures were explained to the patient and patient characteristics were collected. Before the usability testing began, patients watched an instructional video explaining the think-aloud method with an example. Patients were asked to use the biofeedback devices one after the other during a training session in which PWB with crutches was practiced, and to complete a cluster of specific tasks with each device. Shortly before executing a particular cluster of tasks, patients watched an instructional video explaining the device functions relevant to that task. The order of device presentation was randomised to avoid a learning effect and followed the allocated testing order. The order of the clusters of tasks was not randomised and occurred in the same order as when patients used the device for the first time.

The clusters of tasks were: putting on the device, and using the biofeedback device during PWB. Patients were asked to think aloud while carrying out the tasks and were encouraged to do so with standardized phrases. The session was videotaped. After completion, the session with the other device started, following the same procedures. After testing both devices, patients were asked to fill in the SUS questionnaire for each biofeedback device, followed by a semi-structured interview that took approximately 20 minutes.

Procedure PT’s perspective

In general, the procedures for the PT’s perspective were the same as for the patient’s perspective: they also started with an explanation of the measurement procedures and the collection of participant characteristics, and PTs also watched instructional videos. Only the PT’s tasks differed from the patient’s tasks, in that the PT had to program the devices. The clusters of tasks for PTs were: putting on the device, preparing and using the device for instructing PWB, and preparing and using the biofeedback device for monitoring PWB. Subsequently, PTs were also asked to fill in the SUS questionnaire for both biofeedback devices and were interviewed afterwards.

Data analysis

Descriptive statistics were used to describe participant characteristics and to describe usability measured by (1) the SUS-questionnaire, (2) the interview’s closed questions, and (3) the user performance tests.

Although the primary research aim was to describe rather than compare usability, additional tests (paired-samples t-tests or the nonparametric Wilcoxon signed-rank test, both two-tailed with α = .05) were used to compare the SUS scores and user performance for the two devices within each perspective. The assumption of normality was tested with the Shapiro-Wilk test; when it was violated, the Wilcoxon signed-rank test was used instead of the t-test.
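When normality holds, the comparison reduces to the standard paired-samples t statistic on the within-pair differences. A stdlib-only sketch (the data are illustrative, not the study’s raw scores; when normality is rejected, a Wilcoxon signed-rank test would replace this):

```python
import math
from statistics import mean, stdev

# Paired-samples t statistic computed from within-pair differences, as used
# for the device comparisons. Illustrative sketch; scores_a and scores_b are
# the paired measurements for the two devices, one entry per participant.
def paired_t(scores_a, scores_b):
    """Return (mean difference, SD of differences, t, degrees of freedom)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    md = mean(diffs)
    sd = stdev(diffs)             # sample SD (n - 1 denominator)
    t = md / (sd / math.sqrt(n))  # t statistic with n - 1 degrees of freedom
    return md, sd, t, n - 1
```

The resulting t is then compared against the t distribution with n − 1 degrees of freedom for a two-tailed p value, which in practice is done with standard statistical software.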

The video- and audio-recorded data from the think-aloud sessions and the interviews were transcribed verbatim. Qualitative data were analyzed using thematic content analysis: meaningful comments regarding usability were identified and grouped into thematic categories. This was done by one researcher (RL) and checked by a second researcher (MS). For quotations on which no agreement was reached, a third researcher (MP) was consulted until the researchers reached consensus. Qualitative data analysis was conducted with NVivo for Mac, version 10.1.1 (QSR International, Doncaster, Australia).

Quantitative and qualitative data were merged for analysis and comparison.

Results

Participant characteristics

Nine patients (five females) with a median age of 48 years, mostly after hip surgery, participated in this study, together with their PTs (five females). The median age of the PTs was 36 years, with a median of 11 years of experience in instructing PWB. Characteristics of the study population, arranged in pairs of a physical therapist and a patient, are presented in Table 2.

Table 2. Characteristics of the study population arranged in pairs of a physical therapist and a patient.

https://doi.org/10.1371/journal.pone.0165199.t002

Overall perceived usability

The results of the SUS for both biofeedback devices are shown in Table 3. Mean SUS scores of at least 62.7 were considered acceptable usability, and SUS scores were graded according to the CGS as described in the Methods section. For both SmartStep and OpenGo Science the mean SUS score of patients was above 62.7; six and eight patients, respectively, considered the usability acceptable. The mean SUS score of PTs was below 62.7 for SmartStep and above 62.7 for OpenGo Science; three and eight PTs, respectively, considered SmartStep and OpenGo Science acceptable.

Table 3. Perceived overall usability for both biofeedback devices.

https://doi.org/10.1371/journal.pone.0165199.t003

The SUS scores of SmartStep and OpenGo Science were compared with each other. All distributions of the differences between the devices were normally distributed. For patients, the paired-samples t-test showed no statistically significant difference in SUS scores between SmartStep and OpenGo Science; the mean difference was 8.6 (SD = 19.6) points on the SUS (t(8) = 1.3, p = .223). For PTs, the difference in SUS scores between SmartStep and OpenGo Science was also tested with the paired-samples t-test; the mean difference of 27.5 (SD = 14.8) points was statistically significant (t(8) = 5.6, p = .001) in favour of OpenGo Science.

Categories of usability

User performance.

Table 4 shows the time registration of the clusters of tasks performed with the biofeedback devices, a description of errors of use, and the number of errors that occurred during the tasks. Patients made few errors. However, four out of nine patients could not attach SmartStep’s control unit around the ankle independently because of hip surgery (hip flexion past 90 degrees was not allowed). With OpenGo Science, four out of nine patients initially placed the battery in the insole incorrectly. Among the PTs, five out of nine experienced problems inflating the air pockets of the SmartStep insole; after a cue, all managed to complete this task. For OpenGo Science, there were many connectivity errors between the insoles and the PC or smartphone, but each participant eventually succeeded in connecting the insoles with the PC and/or smartphone (some with cues or help from the researcher). Additionally, user performance (the timed clusters of tasks) was compared between the devices. All distributions of the differences between the devices were normally distributed. The mean time patients needed to put on SmartStep and OpenGo Science was compared with the paired-samples t-test; the mean difference of 18 (SD = 29.1) seconds in favour of SmartStep was not statistically significant (t(5) = 1.4, p = .247). The mean difference between the devices in the time PTs needed to put on the device was 102 (SD = 59.3) seconds in favour of SmartStep, which was statistically significant (paired-samples t-test: t(8) = 5.2, p = .001). The paired-samples t-test also showed that the difference in mean time used by the PTs to prepare SmartStep and OpenGo Science for use was statistically significant: 363 (SD = 51.5) seconds in favour of OpenGo Science (t(8) = 21.2, p < .001).
All PTs managed to prepare SmartStep and OpenGo Science for use during PWB instructions within 12 and 5 minutes, respectively.

Table 4. User performances for both the biofeedback devices.

https://doi.org/10.1371/journal.pone.0165199.t004

Satisfaction.

The results of the satisfaction questions (closed questions) are shown in Table 5. For SmartStep, the quantitative data showed that patients were least satisfied with ‘ease of use’, ‘overall satisfaction’ and ‘wearable comfort’: seven out of nine patients were at least ‘quite satisfied’ with ‘overall satisfaction’ and ‘wearable comfort’, and five out of nine were at least ‘quite satisfied’ with ‘ease of use’ (see Fig 1). For OpenGo Science, the data showed that patients were quite satisfied with all examined satisfaction aspects and scored especially high on satisfaction with wearable comfort; all nine patients were at least ‘quite satisfied’ with ‘overall satisfaction’ (see Fig 1). For SmartStep, the quantitative data in Table 5 showed that PTs were least satisfied with ‘ease of use’, ‘monitoring patient’s WB’ and ‘overall satisfaction’: four out of nine PTs were at least ‘quite satisfied’ with ‘monitoring patient’s WB’ and ‘overall satisfaction’, and two out of nine were at least ‘quite satisfied’ with ‘ease of use’ (see Fig 1). For OpenGo Science, the data showed that PTs were least satisfied with ‘feedback’ and ‘monitoring patient’s WB’ (for both, five out of nine PTs were at least ‘quite satisfied’). Seven out of nine PTs were at least ‘quite satisfied’ with ‘overall satisfaction’ (see Fig 1).

Fig 1. Overall satisfaction with SmartStep and OpenGo Science from the patient’s and physical therapist’s perspective.

Patient SM = patient perspective on SmartStep, Patient OG = patient perspective on OpenGo Science, PT SM = physical therapist perspective on SmartStep, PT OG = physical therapist perspective on OpenGo Science.

https://doi.org/10.1371/journal.pone.0165199.g001

Table 5. Patient and physical therapist satisfaction with the biofeedback devices measured on a five-point Likert scale.

https://doi.org/10.1371/journal.pone.0165199.t005

Satisfaction with the devices, extracted from the think-aloud data and the open questions, is illustrated by thematically categorized quotes from patients and PTs in S1 Table. The results showed mixed views and perceptions from patients and PTs on satisfaction. A selection of meaningful quotes is presented for each perspective below. After each quote, the participant code and the biofeedback device involved are given (SM = SmartStep and OG = OpenGo Science).

Patients’ perspective

For SmartStep in general, patients were satisfied with the wearable comfort and the effectiveness. This is illustrated by the following quotes:

“It feels comfortable and the materials weigh little, even the control unit with the ankle bracelet”. (patient 4, SM)

“I’m satisfied with SmartStep, it does what it should do.” (patient 2, SM)

Concerning OpenGo Science, patients were in general satisfied with the wearable comfort, ease of use and intrusiveness. More specifically, patients liked the normal-looking and normal-feeling insoles and the use of a smartphone.

“The insole feels good, you really do not notice that you have it in your shoe”. (patient 1, OG)

“Easy to use system, it is wireless, has no tubes or control unit attached around body parts just insoles and gives feedback via your smartphone, perfect“. (patient 7, OG)

“You hardly notice you are wearing something and that is important. I think it is important that the materials do not bother you too much. You already have enough discomfort due to your operation or rehabilitation.” (Patient 8, OG)

For SmartStep, views on satisfaction with the feedback, ease of use and intrusiveness were mixed. Some patients experienced SmartStep as an intrusive device because of the control unit around the ankle and the number of beeps. Furthermore, they disliked being unable to attach the control unit by themselves.

“I like that the system warns you when you are putting too much or too little weight on the leg. And I’m satisfied with the feedback beeps, the beeps are clear”. (patient 2, SM)

“Attaching SmartStep’s control unit around the ankle is an unhandy activity when you have had hip surgery. I’m not able to attach the control unit by myself”. (patient 3, SM)

“On the one hand it is very nice that you get beeps when you're doing well or aren’t doing well, but on the other hand it is super annoying that the device is beeping constantly, not only for yourself but also for people around you”. (patient 8, SM)

“I think the ankle bracelet around my ankle looks really terrible, it seems a bit like a prison bracelet”. (patient 8, SM)

For OpenGo Science, views on satisfaction with feedback and effectiveness were mixed. Some patients disliked that it was impossible to watch the exact load on the smartphone while walking with crutches, and that no WB data were stored on the smartphone.

“Sometimes it is unclear whether you put enough weight on your leg because you are walking with crutches and the smartphone is in your pocket”. (patient 1, OG)

“It is a pity the smartphone did not store data. I can’t look back in the smartphone how much load I placed on my leg. When I walked with crutches and tried to comply with the weight-bearing instructions I could not manage to also look on the smartphone for the kilograms, I only heard the beeps” (patient 7, OG)

Physical therapists’ perspective

Satisfaction from a PT perspective differed between SmartStep and OpenGo Science. For SmartStep, most PTs were in general at least ‘more or less satisfied’ with the effectiveness.

“Overall, I’m more or less satisfied. Although it is a cumbersome system it does what it has to do, provide feedback on weight bearing”. (physical therapist 8, SM)

Regarding OpenGo Science, most PTs were satisfied with the wearable comfort, ease of use, and intrusiveness. More specifically, they liked that the insoles required no external modules attached to the body and could be connected to a smartphone.

“OpenGo Science is quickly applied, easy to use and no long explanations are needed for patients. There are no wires attached, just put the insoles in the patient’s shoe and ready to go”. (physical therapist 2, OG)

“You cannot see that your patient is wearing a device, great!”. (physical therapist 3, OG)

“The pressure insole looks like a normal insole; it seems comfortable for patients”. (physical therapist 7, OG)

“I am very satisfied with the ease of use. This system is very easy to use. It speaks for itself. It has a good user interface. Furthermore, it is easy to assemble and the insoles can be put easily in the shoes. Thereby it is not uncomfortable for patients, it looks normal, and it uses a smartphone”. (physical therapist 9, OG)

For SmartStep, PTs expressed mixed feelings concerning the feedback, wearable comfort, ease of use, and intrusiveness. In particular, comments on SmartStep’s feedback varied a lot. Some PTs liked that a lower and an upper threshold could be preset, whereas others had concerns about the auditory feedback. With respect to ease of use, more than half of the PTs said that many steps were needed before they could actually use SmartStep.

“It is quite a device and it is really visible. When used in practice is not necessarily an issue but used outside it is”. (physical therapist 3, SM)

“I'm satisfied with the feedback. I like that not only feedback is provided when the patient is loading too much but also when the patient is loading correctly. It encompasses the patient and the physical therapist”. (physical therapist 5, SM)

“Too much auditory information, beeps, not pleasant for me as a physical therapist and for the patient” (physical therapist 6, SM)

“I’ am not satisfied with the ease of use. I think it is a cumbersome system because of the following things. The control unit around the ankle, the tubes and inflation procedure and you have to add an insole to one of shoes and people have often already swollen feet due to surgery”. (physical therapist 8, SM)

PTs’ perceptions were also mixed with respect to OpenGo Science’s feedback and effectiveness. Most PTs liked that both audio and haptic feedback were possible. Some PTs disliked that a feedback signal was provided only when the upper threshold was exceeded, and most PTs regretted that feedback via the smartphone was not available while WB was being recorded via the PC.

“The ability to provide audio and haptic feedback as well is nice. It is a calm sound and you change the feedback to vibrations so only the patient is provided with the feedback”. (physical therapist 3, OG)

“When recording patient’s weight bearing the biofeedback from the smartphone is not available, I would have preferred to also use the smartphone’s feedback during the measurements”. (physical therapist 6, OG)

“I had to look a lot on the display of the mobile. I would like to have more auditory information for the patient. The system did give a beep when the patient loaded the leg too much, but didn’t beep when the patient loaded properly or loaded the leg to little. Therefore, I am more or less satisfied with the feedback”. (physical therapist 8, OG)

Acceptability.

The results of the closed acceptability questions are shown in Table 6. In this section only the main acceptability results are presented. Regarding acceptability from the patient’s perspective, eight out of nine patients answered the question whether they would recommend SmartStep to other people in a similar situation (prescribed with PWB) with at least ‘probably’. For OpenGo Science, all patients answered this question with at least ‘probably’. Regarding willingness to pay a contribution for the biofeedback device, patients responded less favorably for SmartStep than for OpenGo Science: six out of nine patients answered with at least ‘probably’ for the former, whereas eight out of nine did so for the latter. Regarding acceptability from the PTs’ perspective, Table 6 shows the following: two out of nine PTs would recommend SmartStep at least ‘very probably’ to colleagues, and seven out of nine PTs would recommend OpenGo Science ‘very probably’ to colleagues. Furthermore, five out of nine PTs wanted to purchase SmartStep at least ‘probably’ and eight out of nine PTs wanted to purchase OpenGo Science at least ‘probably’. Reasons for non-acceptance of SmartStep that emerged from the interviews were the complexity of the device, the intrusiveness of the control unit around the ankle, the intrusiveness of the audio feedback, and the availability of more usable devices. The reason for non-acceptance of OpenGo Science, experienced by one PT, was the connectivity problems encountered between the insole and the smartphone.

Table 6. Patient and physical therapist acceptability of the biofeedback devices measured on a five-point Likert scale.

https://doi.org/10.1371/journal.pone.0165199.t006

Discussion

This study described the usability, from the physical therapist’s and the patient’s perspective, of two biofeedback devices, SmartStep and OpenGo Science, when used for feedback on PWB during supervised rehabilitation after lower-limb trauma or surgery. In general, the overall usability measured on the SUS suggested that both SmartStep and OpenGo Science were at least acceptable from the patients’ perspective. From the PTs’ perspective, OpenGo Science seemed acceptable and SmartStep did not.
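For readers unfamiliar with the SUS, the acceptability judgments above rest on Brooke’s standard scoring procedure [40]: each of the ten five-point items is rescaled to a 0–4 contribution (for the positively worded odd items, response minus 1; for the negatively worded even items, 5 minus response), and the sum is multiplied by 2.5 to yield a 0–100 score. The following minimal sketch (illustrative only, not the study’s analysis code) shows the computation:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten Likert
    responses (1 = strongly disagree ... 5 = strongly agree).

    Odd-numbered items (1, 3, 5, 7, 9) are positively worded and
    contribute (response - 1); even-numbered items (2, 4, 6, 8, 10)
    are negatively worded and contribute (5 - response).
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires exactly ten responses in the range 1-5")
    # enumerate() is 0-based, so index 0, 2, 4, ... are the odd-numbered items
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: agreeing (4) with positive items and disagreeing (2) with
# negative ones yields a score of 75.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Bangor et al. interpret SUS scores of roughly 70 and above as acceptable [42], which is the kind of benchmark behind the “at least acceptable” judgments above.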

Looking at user performance, the results showed that patients made few errors. However, more than one-third of the patients could not attach SmartStep independently because of their hip surgery. Concerning OpenGo Science, more than one-third of the patients placed the battery in the insole incorrectly. All PTs managed to prepare SmartStep within twelve minutes and OpenGo Science within five minutes for use during PWB instructions. Five minutes seems more acceptable than twelve, since most PTs have treatment sessions of 30 minutes per patient. Furthermore, the majority of the PTs had problems inflating the air pockets of SmartStep, while in some cases OpenGo Science presented connectivity problems. All of the PTs’ problems were eventually solved with cues or help from the researcher. It should be noted that the time spent preparing the devices and the number of usability problems found are expected to decrease as patients and PTs gain experience with the devices. Problems that seem to be of a more structural nature were patients’ inability to attach SmartStep after hip surgery and some of the connectivity issues with OpenGo Science experienced by the PTs.

Looking at satisfaction, patients and PTs held mixed views and perceptions. In general, the majority of the patients were overall at least quite satisfied with the biofeedback devices. For SmartStep, the quantitative data showed that patients were least satisfied with ‘ease of use’, ‘wearable comfort’ and ‘overall satisfaction’. The qualitative data may explain this: some patients experienced SmartStep as an intrusive device because of the control unit around the ankle and the number of beeps. Some patients disliked the texture of the insole and the multiple operating steps, and were not able to attach the control unit by themselves because of their hip surgery. This is in line with Fu et al., who noted that SmartStep, despite its effectiveness, is a complex and auditorily intrusive device [5]. Concerning OpenGo Science, the quantitative data showed that patients were quite satisfied with all examined satisfaction aspects. Patients scored especially high on satisfaction with ‘wearable comfort’. This is in line with the qualitative data, in which all patients expressed positive feelings about the ‘wearable comfort’ of the insoles. PT satisfaction varied more than patient satisfaction. In general, concerning overall satisfaction (measured quantitatively), PTs were more positive about OpenGo Science: less than half of the PTs were at least quite satisfied with SmartStep, whereas more than three-quarters were at least quite satisfied with OpenGo Science. The qualitative data suggest that satisfaction with ‘wearable comfort’, ‘ease of use’ and ‘intrusiveness’ in particular contributed to this good ‘overall satisfaction’ with OpenGo Science. For SmartStep, satisfaction with ‘effectiveness’ seemed to contribute positively to the ‘overall satisfaction’. Moreover, for SmartStep, the quantitative data showed that PTs were least satisfied with ‘ease of use’, ‘monitoring patient’s WB’ and ‘overall satisfaction’. This might be explained by the majority of the PTs indicating that many steps were needed before they could actually use SmartStep, and by their dislike of the device’s intrusiveness (e.g., the number of beeps and the control unit around the ankle). Furthermore, the short maximum recording time of approximately ten minutes was mentioned as a disadvantage. Regarding OpenGo Science, the quantitative data showed that most PTs were least satisfied with the ‘feedback’ and ‘monitoring patient’s WB’. This might be explained by the fact that more than half of the PTs disliked that feedback via the smartphone was not available while WB was being recorded via the PC, and more than half disliked that patients were provided with feedback only when they loaded the affected leg too much, not when they loaded it properly or too little.

Looking at acceptability, in general, patient acceptability of SmartStep and OpenGo Science for use during supervised rehabilitation was good. For PT acceptability, the data on intention to purchase in the future suggested poor acceptability of SmartStep and good acceptability of OpenGo Science. It should be noted that PTs were not informed about the actual prices of SmartStep and OpenGo Science. This could have influenced acceptability, because cost is a critical determinant of technology acceptance and high costs might undermine it [45]. Reasons for non-acceptance of SmartStep by the PTs that emerged from the interviews were the complexity of the device, the intrusiveness of the control unit around the ankle, the intrusiveness of the audio feedback, and the availability of more usable devices.

The quantitative satisfaction data suggested that patients and PTs were mostly satisfied with the feedback provided by the biofeedback devices. However, during the think-aloud sessions and the interviews, patients and PTs expressed different needs and expectations regarding feedback. For example, some participants preferred to receive feedback only when they exceeded an upper WB threshold, whereas others also wanted feedback when WB was within the target zone. Perhaps the devices’ limited feedback options do not match these differing user needs and expectations. Previous studies indicate that biofeedback devices have to be flexible, in the sense that they can adapt to users’ feedback needs and learning phase [46,47]. Sigrist et al. suggested in their theoretical review that motor learning should start with real-time feedback during execution of the motor task [46]. Subsequently, real-time feedback should be switched to a lower frequency or changed to postponed feedback to facilitate automation of the movement [46]. Furthermore, they suggested that self-controlled feedback offers a possibility for adapting feedback to the learner’s current phase; it highly involves and motivates the learner and may also promote self-efficacy [46]. Winstein et al. studied the effectiveness of real-time and post-response feedback in learning PWB skills [47]. They suggested that practice with real-time feedback is beneficial for immediate performance, but not for learning PWB skills; for long-lasting learning of the skill, post-response feedback is more effective [47]. Moreover, a recent review on using feedback through digital technology to increase performance identified multiple factors moderating feedback efficacy: receiver traits and states, and feedback properties such as technology, content, timing, modality, duration, frequency, presentation, and user experience [48]. Although evidence is emerging, research on the effect of feedback through digital technology has only just started, and the optimal choice of feedback properties for effective feedback interventions remains unclear [48].

Strengths of the study were that the intended users (PTs and patients prescribed with PWB) were involved and that usability was tested in the specific context in which the biofeedback devices should be used. This provided a clear view of users’ experiences when using the different devices in the intended context of use. Another strength was that usability was evaluated with the conceptual framework of De Bleser et al. for testing electronic adherence monitoring devices [26]. This was the most specific and applicable framework found for testing the usability of electronic monitoring devices, and it builds on existing literature and frameworks [26,49,50]. The framework proposes several qualitative and quantitative evaluation methods; mixing these methods ensures comprehensive data collection and avoids needless a priori assumptions [26,34,36]. However, using the conceptual framework of De Bleser et al. also has limitations: the framework is not widely adopted in usability testing and, although it builds on existing literature and frameworks, it remains unclear how valid the framework itself is. Another limitation could be the short time participants worked with the devices. Although the authors think that evaluating first experiences is information-rich, identifies most usability issues, and tests the learnability of the devices (devices should be simple enough for novice users to learn their functions easily), the authors also realize that this short time may have affected the results. In this study, all users were inexperienced and had no prior experience with the tested biofeedback devices. More time with the devices may result in better user performance or effectiveness and more positive user satisfaction. For instance, Al-Maskari’s study showed that better user performance resulted in greater user satisfaction [51].
Additionally, it should be noted that testing the two devices in one session may have influenced the results. Although participants were asked not to compare the devices, many of them verbally used the other device as a reference during the usability testing. This could have affected the usability results negatively or positively, depending on the superiority or inferiority of the reference device. To prevent this bias as much as possible, explicit instructions were given to participants (before and during the usability testing) not to compare the devices directly. Furthermore, in the current study inferential statistics were used to compare the SUS scores and user performance for the two devices. However, comparing the devices was not the primary research aim and the study was not specifically designed for a comparison (see, e.g., the small sample size and the sample size determination). Therefore, one should be cautious in drawing conclusions based on the inferential statistics.

The findings of the current study could help clinicians decide which biofeedback device is appropriate for their situation and provide information for the future development of biofeedback devices. This study only provides information on usability during supervised rehabilitation. However, there is a growing need in clinical practice and research for biofeedback devices that provide real-time biofeedback and can collect data in daily life for monitoring purposes. Further research should investigate the usability of biofeedback devices for monitoring purposes in patients’ own home and community settings.

Conclusion

The results of the current study give insight into the usability of two biofeedback devices from the patient’s and the physical therapist’s perspective when used in the supervised rehabilitation of patients after lower-limb trauma or surgery. The overall usability of SmartStep and OpenGo Science seemed acceptable from the patient’s perspective. From the PT’s perspective, OpenGo Science seemed acceptable and SmartStep did not. The study findings could help clinicians decide which biofeedback device is appropriate for their situation and provide information for the future development of biofeedback devices.

Supporting Information

S1 Table. Thematically categorized examples of patients’ and physical therapists’ comments on the usability of biofeedback devices extracted from the think-aloud data and the open questions.

Note. PT = physical therapist.

https://doi.org/10.1371/journal.pone.0165199.s001

(DOCX)

Acknowledgments

The authors would like to thank all patients and physical therapists who participated for their time and valuable input. We would also like to thank Jim van Stratum, Anthranilla Leeuwesteijn, Niek Koenders, Tim Janssen, and Peter van der Wurff, who helped us with the recruitment.

Author Contributions

  1. Conceptualization: RL MP BV RB EW MS.
  2. Formal analysis: RL MP MS.
  3. Funding acquisition: MP MS.
  4. Investigation: RL.
  5. Methodology: RL MP BV RB EW MS.
  6. Writing – original draft: RL MP MS.
  7. Writing – review & editing: RL MP BV RB EW MS.

References

  1. Hustedt JW, Blizzard DJ, Baumgaertner MR, Leslie MP, Grauer JN. Current advances in training orthopaedic patients to comply with partial weight-bearing instructions. Yale J Biol Med 2012 Mar;85(1):119–125. pmid:22461750
  2. Mavcic B, Antolic V. Optimal mechanical environment of the healing bone fracture/osteotomy. Int Orthop 2012 Apr;36(4):689–695. pmid:22302177
  3. Meadows TH, Bronk JT, Chao YS, Kelly PJ. Effect of weight-bearing on healing of cortical defects in the canine tibia. J Bone Joint Surg Am 1990 Aug;72(7):1074–1080. pmid:2384507
  4. DiStasio AJ, Jaggears FR, DePasquale LV, Frassica FJ, Turen CH. Protected early motion versus cast immobilization in postoperative management of ankle fractures. Contemp Orthop 1994 Oct;29(4):273–277. pmid:10150249
  5. Fu MC, DeLuke L, Buerba RA, Fan RE, Zheng YJ, Leslie MP, et al. Haptic biofeedback for improving compliance with lower-extremity partial weight bearing. Orthopedics 2014 Nov 1;37(11):e993–8. pmid:25361376
  6. Hurkmans HL, Bussmann JB, Benda E, Verhaar JA, Stam HJ. Techniques for measuring weight bearing during standing and walking. Clin Biomech (Bristol, Avon) 2003 Aug;18(7):576–589.
  7. Dabke HV, Gupta SK, Holt CA, O'Callaghan P, Dent CM. How accurate is partial weightbearing? Clin Orthop Relat Res 2004 Apr;(421):282–286.
  8. Gray FB, Gray C, McClanahan JW. Assessing the accuracy of partial weight-bearing instruction. Am J Orthop (Belle Mead NJ) 1998 Aug;27(8):558–560.
  9. Yu S, McDonald T, Jesudason C, Stiller K, Sullivan T. Orthopedic inpatients' ability to accurately reproduce partial weight bearing orders. Orthopedics 2014 Jan;37(1):e10–8. pmid:24683650
  10. Hustedt JW, Blizzard DJ, Baumgaertner MR, Leslie MP, Grauer JN. Is it possible to train patients to limit weight bearing on a lower extremity? Orthopedics 2012 Jan 16;35(1):e31–7. pmid:22229610
  11. Hurkmans HL, Bussmann JB, Benda E, Verhaar JA, Stam HJ. Effectiveness of audio feedback for partial weight-bearing in and outside the hospital: a randomized controlled trial. Arch Phys Med Rehabil 2012 Apr;93(4):565–570. pmid:22325684
  12. Hurkmans HL, Bussmann JB, Selles RW, Benda E, Stam HJ, Verhaar JA. The difference between actual and prescribed weight bearing of total hip patients with a trochanteric osteotomy: long-term vertical force measurements inside and outside the hospital. Arch Phys Med Rehabil 2007 Feb;88(2):200–206. pmid:17270518
  13. Vasarhelyi A, Baumert T, Fritsch C, Hopfenmuller W, Gradl G, Mittlmeier T. Partial weight bearing after surgery for fractures of the lower extremity—is it achievable? Gait Posture 2006 Jan;23(1):99–105. pmid:16311201
  14. Rubin G, Monder O, Zohar R, Oster A, Konra O, Rozen N. Toe-touch weight bearing: myth or reality? Orthopedics 2010 Oct 11;33(10):729. pmid:20954673
  15. Isakov E. Gait rehabilitation: a new biofeedback device for monitoring and enhancing weight-bearing over the affected lower limb. Eura Medicophys 2007 Mar;43(1):21–26. pmid:17021589
  16. Ruiz FK, Fu MC, Bohl DD, Hustedt JW, Baumgaertner MR, Leslie MP, et al. Patient compliance with postoperative lower extremity touch-down weight-bearing orders at a level I academic trauma center. Orthopedics 2014 Jun;37(6):e552–6. pmid:24972436
  17. Van Lieshout R, Stukstette MJ, Vanwanseele B, Pisters MF, De Bie RA. Biofeedback in partial weight bearing: validity of three different devices. J Orthop Sports Phys Ther 2016;46(11):993–1001.
  18. Van Hoof J, Wouters EJM. User needs and ambient-assisted living in the Netherlands. Gerontechnology 2012;11(2):134.
  19. Van Hoof J, Wouters EJM, Marston HR, Vanrumste B, Overdiep RA. Ambient assisted living and care in the Netherlands: The voice of the user. International Journal of Ambient Computing and Intelligence 2011;3(4):25–40.
  20. Shah SG, Robinson I, AlShawi S. Developing medical device technologies from users' perspectives: a theoretical framework for involving users in the development process. Int J Technol Assess Health Care 2009 Oct;25(4):514–521. pmid:19845981
  21. Arthanat S, Bauer SM, Lenker JA, Nochajski SM, Wu YW. Conceptualization and measurement of assistive technology usability. Disabil Rehabil Assist Technol 2007 Jul;2(4):235–248. pmid:19263540
  22. Lewis JR. Usability: Lessons Learned … and Yet to Be Learned. International Journal of Human–Computer Interaction 2014;30(9):663–684.
  23. ISO 9241–11. Ergonomic requirements for office work with visual display terminals (VDTs)—Part 11: Guidance on usability. Geneva, Switzerland: International Organization for Standardization; 1998.
  24. Vredenburg K, Mao J, Smith PW, Carey T. A survey of user-centered design practice. In Proceedings of CHI 2002. Minneapolis, MN: Association for Computing Machinery; 2002: 471–478.
  25. Tullis TS, Albert W. Measuring the user experience: collecting, analyzing, and presenting usability data. 2nd ed. Waltham, MA: Morgan Kaufmann; 2013.
  26. De Bleser L, De Geest S, Vincke B, Ruppar T, Vanhaecke J, Dobbels F. How to test electronic adherence monitoring devices for use in daily life: a conceptual framework. Comput Inform Nurs 2011 Sep;29(9):489–495. pmid:21532468
  27. Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs-principles and practices. Health Serv Res 2013 Dec;48(6 Pt 2):2134–2156. pmid:24279835
  28. Borsci S, Macredie RD, Barnett J, Martin J, Kuljis J, Young T. Reviewing and Extending the Five-User Assumption: a Grounded Procedure for Interaction Evaluation. ACM Transactions on Computer-Human Interaction (TOCHI) 2013;20(5):29.
  29. Nielsen J, Landauer TK. A mathematical model of the finding of usability problems. In Proceedings of INTERCHI'93. Amsterdam, the Netherlands: Association for Computing Machinery; 1993: 206–213.
  30. Virzi RA. Refining the test phase of usability evaluation: how many subjects is enough? Human Factors 1992;34:457–468.
  31. Turner CW, Lewis JR, Nielsen J. Determining usability test sample size. International Encyclopedia of Ergonomics and Human Factors 2006;3:3084.
  32. Borsci S, Macredie RD, Martin JL, Young T. How many testers are needed to assure the usability of medical devices? Expert Rev Med Devices 2014 Sep;11(5):513–525. pmid:25033757
  33. Lewis JR. Sample sizes for usability studies: additional considerations. Hum Factors 1994 Jun;36(2):368–378. pmid:8070799
  34. Boland MR, Rusanov A, So Y, Lopez-Jimenez C, Busacca L, Steinman RC, et al. From expert-derived user needs to user-perceived ease of use and usefulness: A two-phase mixed-methods evaluation framework. J Biomed Inform 2013 Dec 12;52:141–50. pmid:24333875
  35. Daniels J, Fels S, Kushniruk A, Lim J, Ansermino JM. A framework for evaluating usability of clinical monitoring technology. J Clin Monit Comput 2007 Oct;21(5):323–330. pmid:17701381
  36. Lyles CR, Sarkar U, Osborn CY. Getting a technology-based diabetes intervention ready for prime time: a review of usability testing studies. Curr Diab Rep 2014 Oct;14(10):534. pmid:25173689
  37. Wessels RD, De Witte LP. Reliability and validity of the Dutch version of QUEST 2.0 with users of various types of assistive devices. Disabil Rehabil 2003 Mar 18;25(6):267–272. pmid:12623616
  38. De Bleser L, Vincke B, Dobbels F, Happ MB, Maes B, Vanhaecke J, et al. A new electronic monitoring device to measure medication adherence: usability of the Helping Hand. Sensors (Basel) 2010;10(3):1535–1552.
  39. Sauro J, Lewis JR. Quantifying the User Experience: Practical Statistics for User Research. 1st ed. San Francisco, USA: Morgan Kaufmann; 2012.
  40. Brooke J. SUS—a quick and dirty usability scale. In: Usability Evaluation in Industry. 1996: 189–194.
  41. Borsci S, Federici S, Bacci S, Gnaldi M, Bartolucci F. Assessing User Satisfaction in the Era of User Experience: Comparison of the SUS, UMUX, and UMUX-LITE as a Function of Product Experience. International Journal of Human-Computer Interaction 2015;31(8):484–495.
  42. Bangor A, Kortum P, Miller J. An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction 2008;24:574–594.
  43. Borsci S, Federici S, Lauriola M. On the dimensionality of the System Usability Scale: a test of alternative measurement models. Cognitive Processing 2009;10:193–197.
  44. Lewis JR, Sauro J. The factor structure of the System Usability Scale. In: Human Centered Design. Heidelberg, Germany: Springer-Verlag; 2009: 94–103.
  45. Peek ST, Wouters EJ, van Hoof J, Luijkx KG, Boeije HR, Vrijhoef HJ. Factors influencing acceptance of technology for aging in place: a systematic review. Int J Med Inform 2014 Apr;83(4):235–248. pmid:24529817
  46. Sigrist R, Rauter G, Riener R, Wolf P. Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review. Psychon Bull Rev 2013 Feb;20(1):21–53. pmid:23132605
  47. Winstein CJ, Pohl PS, Cardinale C, Green A, Scholtz L, Waters CS. Learning a partial-weight-bearing skill: effectiveness of two forms of feedback. Phys Ther 1996 Sep;76(9):985–993. pmid:8790276
  48. Hermsen S, Frost J, Renes RJ, Kerkhof P. Using feedback through digital technology to disrupt and change habitual behavior: a critical review of current literature. Computers in Human Behavior 2016;57:61–74.
  49. Hensel BK, Demiris G, Courtney KL. Defining obtrusiveness in home telehealth technologies: a conceptual framework. J Am Med Inform Assoc 2006 Jul-Aug;13(4):428–431. pmid:16622166
  50. Denhaerynck K, Schafer-Keller P, Young J, Steiger J, Bock A, De Geest S. Examining assumptions regarding valid electronic monitoring of medication therapy: development of a validation framework and its application on a European sample of kidney transplant patients. BMC Med Res Methodol 2008 Feb 19;8:5. pmid:18284675
  51. Al-Maskari A, Sanderson M. A review of factors influencing user satisfaction in information retrieval. J Am Soc Inf Sci Technol 2010;61(5):859–868.