
People with intellectual and sensory disabilities can independently start and perform functional daily activities with the support of simple technology

  • Giulio E. Lancioni ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    giulio.lancioni@uniba.it

    Affiliation Department of Neuroscience and Sense Organs, University of Bari, Bari, Italy

  • Nirbhay N. Singh,

    Roles Conceptualization, Methodology, Writing – review & editing

    Affiliation Department of Psychiatry, Augusta University, Augusta, GA, United States of America

  • Mark F. O’Reilly,

    Roles Conceptualization, Methodology, Writing – review & editing

    Affiliation College of Education, University of Texas at Austin, Austin, TX, United States of America

  • Jeff Sigafoos,

    Roles Conceptualization, Methodology, Writing – review & editing

    Affiliation School of Education, Victoria University of Wellington, Wellington, New Zealand

  • Gloria Alberti,

    Roles Data curation, Investigation, Writing – review & editing

    Affiliation Lega F. D’Oro Research Center, Osimo, Italy

  • Valentina Del Gaudio,

    Roles Data curation, Investigation, Writing – review & editing

    Affiliation Lega F. D’Oro Research Center, Osimo, Italy

  • Chiara Abbatantuono,

    Roles Data curation, Investigation, Writing – review & editing

    Affiliation Department of Neuroscience and Sense Organs, University of Bari, Bari, Italy

  • Paolo Taurisano,

    Roles Conceptualization, Methodology, Writing – review & editing

    Affiliation Department of Neuroscience and Sense Organs, University of Bari, Bari, Italy

  • Lorenzo Desideri

    Roles Conceptualization, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Lega F. D’Oro Research Center, Osimo, Italy

Abstract

Objectives

The study assessed a smartphone-based technology system, which was designed to enable six participants with intellectual disability and sensory impairment to start and carry out functional activities through the use of reminders and verbal or pictorial instructions.

Methods

The technology system involved a Samsung Galaxy A22 with Android 11 operating system and four Philips Hue indoor motion sensors. Three to five activities were scheduled per day. At the time at which an activity was due, the system provided the participant with a reminder followed by the verbal or pictorial instruction for the initial part of the first response (e.g., “Go to the bathroom and take the dirty towels”). The instruction would be available (repeated) until the participant responded to it and, in so doing, activated a sensor. Sensor activation caused the presentation of the instruction for the second part of the same (first) response (e.g., “Put the towels in the laundry machine”). The same process occurred for each of the responses involved in the activity. The system was introduced according to nonconcurrent multiple baseline designs across participants.

Results

During baseline, the mean percentage of activities the participants started independently was below 7; the mean frequency of correct responses per activity was below 0.5 (out of a maximum possible of 8). During the intervention (i.e., with the support of the technology system), the mean percentage and mean frequency values increased to nearly 100 and 8, respectively.

Conclusions

The data suggest that the aforementioned technology system may enable people with intellectual disability and sensory impairment to start and carry out functional activities independent of staff.

Introduction

People with intellectual disabilities or combinations of intellectual and sensory or motor disabilities may need specific support from staff or caregivers to initiate and complete activities with multiple steps, such as washing dishes, putting away clothes, or assembling objects [1–3]. In fact, they (a) may not know when those activities are due and/or may lack the initiative to start them and (b) may not remember the steps included in the activities and the order in which those steps are to be carried out [4–8]. Their dependence on external support to be constructively engaged in functional activity may require a level of supervision that many daily contexts might not be able to afford. This may often result in reduced levels of engagement, with negative implications for the development or strengthening of functional occupation and self-determination skills, as well as for their social image [9–11].

One way to counter the aforementioned situation involves the use of technology-aided programs designed to provide verbal or pictorial (i.e., picture- or video-based) step instructions enabling the participants to carry out the activities included in the program [6,12–16]. Four different types of programs have been developed. The first type of program involves the use of devices such as computers or tablets presenting the step instructions (one at a time) in relation to the participants’ activation of those devices (e.g., through a response such as touching the tablet or computer screen) [6,12,17–19]. The second type of program differs from the first in that it uses a computer, tablet or smartphone to automatically present step instructions at preset time intervals [14,15,20]. The intervals are arranged in advance by staff with a view to facilitating the participants’ performance, that is, to freeing the participants from the need to activate the devices (and from possible errors in doing so) for each activity step. The third type of program, which can rely on the use of a smartphone or tablet, is designed not only to provide instructions for the activity steps at preset intervals but also to cue/alert the participants when any activity is to be started, so that the participants can initiate the activities on their own at the right time [14,15,21]. The fourth type of program differs from the second in that the instructions are presented automatically as the participants reach the locations where the objects for the activity steps are to be used [16,22]. To detect the participants’ arrival at the target locations, specific environmental sensors (e.g., dance pads) are employed.

All the different types of programs have shown their usefulness in helping participants with intellectual and other disabilities carry out complex (multistep) activities with satisfactory levels of completeness/accuracy [6,13,15,22]. Obviously, the third and fourth types of programs might be considered more advanced than the first two in terms of the level and quality of support they offer to the participants. Combining the advantageous characteristics of the last two types of programs could lead to a new (upgraded) program that would automatically (a) present the activity instructions in functional connection with the participants’ performance and (b) provide timely cues for the start of the activities.

The purpose of this study was to set up and assess such a new program. Within this program, presenting the instructions in functional connection with the participant’s performance entailed (a) delivering the instructions at the time the participant was ready to perform the related/intended actions and (b) making the instructions available (i.e., repeating or extending them) until the participant was ready to perform the related/intended actions. In contrast to what was done in previous studies [16,22], the instructions scheduled for the participants’ performance were not limited to a single activity, but covered various activities, which involved several destinations/target areas. The technology system was meant to be suitable for planning flexible activities (i.e., activities that could be easily modified across different days in terms of the responses/steps included, so that they could better suit the context and the participants’ conditions). The technology system was based on a smartphone and motion sensors; for some participants, the smartphone was also connected to a Bluetooth mini speaker. Six participants with intellectual disability and visual or auditory impairments were included in the study.

Method

Participants

Table 1 identifies the six participants by their pseudonyms and reports their chronological age, their visual and auditory conditions, and the age equivalents for their daily living skills on the second edition of the Vineland Adaptive Behavior Scales [23,24]. The participants’ chronological age ranged from 35 to 61 years. Two participants had typical vision (Chelsea and Nancy); three participants (Liz, Mary, and George) had functional residual vision that allowed them to discriminate objects as well as pictures of objects and locations; and one participant (Andy) had only perception of light and shadows. Regarding their auditory condition, four participants (Liz, Andy, Mary, and George) had typical hearing while the other two participants (Chelsea and Nancy) had severe hearing loss. One participant (George) also had motor impairment and used an electric wheelchair for all his indoor travel. The Vineland age equivalents for their daily living skills (personal sub-domain) varied between 4 years and 1 month (Liz and George) and 5 years and 6 months (Nancy). All participants attended rehabilitation and care centers. The psychological services of those centers reported the participants’ intellectual disability to be in the moderate range. Available estimates suggest that individuals with moderate intellectual disability represent about 10% of cases in this clinical population and need some assistance in several areas of their daily life, including engagement in functional activities [25].

Table 1. Participants’ pseudonyms, chronological age, visual and auditory conditions, and Vineland age equivalents for daily living skills (personal sub-domain).

https://doi.org/10.1371/journal.pone.0269793.t001

The participants could independently orient and travel within their living and occupational contexts. They also were able to respond to simple verbal or pictorial instructions concerning daily activity steps both when presented by staff and when presented by a smartphone or other technology device (i.e., could profitably use the support of the second or fourth types of programs mentioned above). However, the participants were not able to carry out multistep activities independently (i.e., without instructions) and did not typically start activities on their own initiative.

Ethical approval and informed consent

All participants had been informed verbally and through demonstrations as to how the smartphone-based system used in this study worked, and had expressed their willingness to be involved in the study and carry out daily activities with the support of the system. In light of the participants’ intellectual disability level, this willingness was considered to be a reliable indicator of their consent for the study. However, given their inability to read and sign a consent form, their legal representatives were asked to fulfill those requirements on their behalf. The study complied with the 1964 Helsinki declaration and its later amendments and was approved (including the aforementioned consent procedures) by an institutional Ethics Committee.

Procedures

Setting, activities, instructions, and research assistants.

The study was carried out in the centers that the participants attended. The activities were to be performed within the participants’ daily contexts (living and occupational facilities). The participants could easily move inside those contexts and find the rooms/areas and objects involved in the activities. Activities consisted of combinations of functional responses (steps), that is, responses that were known and meaningful to the participants and useful within the context. Between 50 and 71 functional responses were identified/available for each participant. Various combinations/groups of eight such responses (which could change across days) were considered to represent viable activities, that is, multifaceted forms of occupation relevant for the participant and convenient and beneficial for the context. Between three and five activities were scheduled per day over the morning, the afternoon or the entire day, typically 3 to 5 days a week. Table 2 provides two combinations of eight responses considered to represent two viable activities.

Table 2. Two combinations of eight responses considered to represent two viable activities.

https://doi.org/10.1371/journal.pone.0269793.t002

The instructions for the activities were verbal (with the addition of miniature objects during baseline phases) for Liz, Andy, and Mary and pictorial for the other three participants (i.e., George, Chelsea, and Nancy). During baseline phases, the research assistants described each activity to be carried out in a global manner (i.e., with a few phrases and objects or a drawing). During the intervention phase, the smartphone provided two verbal or pictorial instructions for each response of the activity the participants were to perform (e.g., “Go to the cabinet store and take paper towels” and “Put the paper towels on the kitchen table”). The second instruction was presented only after the participants had responded to the first one. The research assistants, who carried out the sessions and recorded the participants’ data, had experience with technology-aided interventions for people with intellectual and other disabilities as well as with data collection.

Technology system.

The technology system used during the intervention phase of the study involved (a) a Samsung Galaxy A22 with Android 11 operating system that was equipped with Amazon Alexa, MacroDroid, and Philips Hue applications, (b) four Philips Hue indoor motion sensors, (c) a Philips Hue Bridge and Philips Hue smart bulb working via Bluetooth, (d) a 4G Long-Term Evolution Wi-Fi router, and (e) a Bluetooth mini speaker. The mini speaker was only used for the participants who received verbal instructions. The Philips Hue Bridge, smart bulb and application, and the router served to support the functioning of the Philips Hue sensors.

The sensors were box-like devices, about 5.5 cm per side and 3.5 cm in height. One sensor was placed in each of the four areas (e.g., kitchen area, bathroom, laundry area, and store cabinet) where the participants were to collect objects to perform the responses for the single activities of the day. Sensor activation (caused by the participant’s arrival at a specific area) was detected through the Amazon Alexa application and transmitted to the smartphone via the MacroDroid application. In connection with this, the smartphone would (a) shelve the first instruction being delivered for the response that the participant was performing and (b) present the second instruction for the completion of that response (see below in this section). The verbal instructions were written in MacroDroid, uttered by the smartphone through its speech synthesis function, and delivered to the participants via the mini speaker that they carried with them. The pictorial instructions were stored in a folder of the smartphone directly accessed by MacroDroid, which regulated their presentation on the smartphone screen.
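
The relay just described (sensor activation shelves a response’s first instruction and triggers its second) reduces to a simple mapping from the activated area to the completion instruction. The sketch below is illustrative only, with hypothetical names; the study realized this step through MacroDroid macros and the Amazon Alexa and Philips Hue applications rather than custom code.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the sensor-to-instruction relay described above.
# The study implemented this with MacroDroid, Alexa, and Philips Hue apps,
# not with custom code; all names here are hypothetical.

@dataclass
class Response:
    area: str                 # area whose motion sensor confirms the first part was done
    first_instruction: str    # e.g., "Go to the bathroom and take the dirty towels"
    second_instruction: str   # e.g., "Put the towels in the laundry machine"

def on_sensor_activation(current: Response, activated_area: str) -> Optional[str]:
    """Return the instruction to deliver when a sensor fires, or None if the
    activation does not belong to the response currently in progress."""
    if activated_area == current.area:
        # The first instruction is shelved; the completion instruction follows.
        return current.second_instruction
    return None
```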

The research assistant would configure the smartphone with the activities the participant had to carry out during the day. This involved (a) setting the time at which each activity was to be started and (b) selecting the verbal or pictorial instructions for the responses the activity had to involve (i.e., from the list of responses available for the participant). Participants relying on verbal instructions only needed to have the mini speaker with them. Participants relying on pictorial instructions needed to have the smartphone with them.
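
As a rough picture of what such a configuration contains, a day’s schedule can be represented as a list of activities, each with a start time and the instruction pairs selected for its responses. The sketch below is purely illustrative (times, areas, and wording are assumptions); in the study, the configuration was entered through the MacroDroid application rather than written as code.

```python
from datetime import time

# Hypothetical sketch of one day's configuration; in the study this was set up
# in MacroDroid rather than in code. Times, areas, and wording are assumptions.
# For participants using pictorial instructions, "first" and "second" would
# point to picture files instead of verbal phrases.
daily_schedule = [
    {
        "start": time(9, 30),          # time at which the reminder is delivered
        "responses": [
            {
                "area": "bathroom",    # sensor area confirming the first part
                "first": "Go to the bathroom and take the dirty towels",
                "second": "Put the towels in the laundry machine",
            },
            {
                "area": "store cabinet",
                "first": "Go to the cabinet store and take the toilet paper",
                "second": "Bring the paper to the bathroom",
            },
            # ...up to eight responses per activity
        ],
    },
    # ...three to five activities per day
]
```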

At the time at which an activity was due, the system provided the participant with a verbal reminder about it, or with vibration and light flashes from the smartphone, followed by the first instruction for the initial response of the activity. For example, the instruction could be the verbal message “Go to the bathroom and take the dirty towels” or a picture showing dirty towels being removed from the bathroom. The verbal instruction would be repeated at programmable intervals, while the pictorial instruction remained on the smartphone screen, until the participant reached the bathroom and, in so doing, activated the sensor available in that area. Sensor activation was followed (after about 2 s) by the presentation of the instruction concerning the second part of the same (first) response. This instruction could be the verbal phrase “Put the towels in the laundry machine” or a picture showing the towels in the laundry machine. After a time estimated to be sufficient for bringing the towels to the laundry machine (e.g., 20–30 s), the system presented the initial instruction for the second response of the activity (e.g., the verbal phrase “Go to the cabinet store and take the toilet paper” or a picture of the cabinet store and toilet paper). The phrase would be repeated at programmable intervals, while the picture remained on the smartphone screen, until the participant reached the cabinet store and activated the sensor available there. Sensor activation led to the delivery of the instruction for the second part of the response (e.g., “Bring the paper to the bathroom”). The instruction process for each of the subsequent responses of the activity was the same as that described above.
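
Taken together, the flow just described amounts to a per-activity loop: repeat a response’s first instruction until the matching sensor fires, pause briefly, deliver the second instruction, allow time for completion, and move to the next response. The sketch below simulates that loop; it is illustrative only, with hypothetical helper names and timing defaults, and is not the MacroDroid-based implementation used in the study.

```python
import time

# Hypothetical simulation of the instruction flow described above. The actual
# system ran as MacroDroid macros on a smartphone; all helper names, and the
# repetition interval value, are assumptions used only for illustration.

def deliver(instruction: str) -> None:
    """Stand-in for speaking a phrase via the mini speaker or showing a picture."""
    print(instruction)

def sensor_activated(area: str) -> bool:
    """Stand-in for the Philips Hue motion sensor firing in the target area.
    Always True here so the sketch runs; the real signal arrived via Alexa/MacroDroid."""
    return True

def run_response(area: str, first: str, second: str,
                 repeat_s: int = 15, pause_s: int = 2, completion_s: int = 25) -> None:
    deliver(first)                       # e.g., "Go to the bathroom and take the dirty towels"
    while not sensor_activated(area):    # verbal instruction repeated (picture stays on screen)
        time.sleep(repeat_s)
        deliver(first)
    time.sleep(pause_s)                  # about 2 s after sensor activation
    deliver(second)                      # e.g., "Put the towels in the laundry machine"
    time.sleep(completion_s)             # time estimated to complete the response (20-30 s)

def run_activity(responses) -> None:
    deliver("Reminder: it is time to start your activity.")   # or vibration/light flashes
    for area, first, second in responses:
        run_response(area, first, second)
    deliver("Well done, now you can return to your desk.")    # closing message

run_activity([
    ("bathroom", "Go to the bathroom and take the dirty towels",
     "Put the towels in the laundry machine"),
    ("store cabinet", "Go to the cabinet store and take the toilet paper",
     "Bring the paper to the bathroom"),
])
```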

Following the last instruction for the last response of the activity, the system presented a phrase such as “Well done, now you can return to your desk” or a picture conveying the same message. Following the last activity of the day, the system presented the aforementioned phrase/message and thereafter a preferred song or video (e.g., comedy, family event or sport).

Experimental conditions.

Each of the two groups of participants (i.e., the three participants relying on verbal instructions and the three participants relying on pictorial instructions) was exposed to the use of the technology system according to a single-case research design, that is, a nonconcurrent multiple baseline design across participants [26,27]. Specifically, the participants started with two baseline phases (i.e., without the technology system). Within each of these phases, different (increasing) numbers of activities were available for the members of every group, in line with the design requirements. Thereafter, the participants entered the intervention phase, which involved the use of the technology system.

Video recordings of the participants during their performance of the activities scheduled were regularly accessible to a study coordinator who would view them and provide feedback to the research assistants about the way they implemented the experimental (baseline or intervention) conditions. This feedback was specifically aimed at ensuring that those conditions were implemented correctly, and thus at guaranteeing procedural fidelity [28].

Baseline I.

This phase served to determine whether the participants would independently start the activities scheduled for them. Three or four activities were scheduled per day. At the beginning of the day (when the participant was sitting at a desk with conventional/occupational material such as kitchen utensils or objects to be assembled), the research assistant presented all the activities to be carried out during the day (a) verbally and through miniature objects for Liz, Andy, and Mary and (b) using a sheet of paper with drawings concerning those activities for George, Chelsea, and Nancy. For each of the activities presented verbally and through miniature objects, the research assistant provided the participant with global verbal instructions (e.g., “You supply the bathroom, arrange the kitchen, and take care of the plants”) and also gave the participant known miniature objects representing the bathroom, the kitchen, and the plants (as extra, tangible activity cues). For each of the activities presented pictorially, the research assistant provided the participant with a drawing that showed three of the areas/objects that were to be used (e.g., bathroom, laundry machine, and kitchen). An activity was rated as “started independently” if the participant carried out at least one of the responses included in that activity. If the participant did not independently start any activity within one hour of receiving the instructions, a negative score was given for all the activities scheduled.

Baseline II.

During this phase, the activities were presented one at a time. After presenting an activity (i.e., in the same way as in Baseline I), the research assistant asked the participant to carry it out. The research assistant would not intervene if the participant correctly performed responses that were included in the activity. Yet, the research assistant intervened by providing verbal or pictorial instructions (comparable to those used by the system; see Technology system) to support a response included in the activity (a) after the participant had performed a response that was not included in the activity and (b) after a period of passivity of about 40 s. This process continued until all the responses included in the activity were carried out.

Intervention.

During the intervention, the technology system was used. At the beginning of the day, the participant was provided with the mini speaker (Liz, Andy, and Mary) or the smartphone (George, Chelsea, and Nancy). The system worked as described in the Technology system section. That is, for each of the three to five activities scheduled for the day, the system provided a reminder when the activity was due and followed the reminder with the delivery of instructions for every single response included in the activity. During the initial 15 activities, the research assistant would intervene with prompts/guidance if the participant showed any problem or hesitation/delay in using the system’s reminders and response instructions appropriately. Following those initial activities, the research assistant no longer intervened unless the participant asked for help.

Measures of participants’ performance

The measures recorded were: (a) activities started independently, (b) activity responses performed correctly, and (c) time required for the completion of each activity. Activities started independently were those initiated by the participants on their own (a) following the research assistants’ presentation of (global instructions about) the activities they were supposed to carry out for the day (Baseline I) or (b) after the smartphone’s reminder (Intervention). Correct responses for an activity were those that belonged to that activity, matched the description available for them, and were performed by the participants independently (i.e., without any instructions or prompts/guidance from the research assistant). Recording of the first two measures was carried out by the research assistants, who were also responsible for implementing the procedural conditions in the different phases of the study. Recording of the last measure was carried out via the smartphone during the intervention phase. Interrater agreement on the first two measures was checked on at least 20% of the activities of each participant, with a reliability observer joining the research assistant in recording the data. Agreement for each measure was computed by dividing the number of activities on which both the research assistant and the reliability observer reported the same scoring (i.e., activity-start scoring or response scoring) by the total number of activities checked and multiplying by 100%. The percentages of agreement were 95% or higher on both measures for each participant.
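
The agreement index corresponds to a simple proportion; the sketch below illustrates the calculation with made-up scorings rather than the study’s records.

```python
def percent_agreement(ra_scores, obs_scores):
    """Percentage of checked activities on which the research assistant and the
    reliability observer reported the same scoring."""
    assert len(ra_scores) == len(obs_scores)
    agreements = sum(a == b for a, b in zip(ra_scores, obs_scores))
    return 100 * agreements / len(ra_scores)

# Illustrative data, not the study's actual records: 20 checked activities,
# identical scoring on 19 of them -> 95% agreement.
ra = [1] * 20
obs = [1] * 19 + [0]
print(percent_agreement(ra, obs))  # 95.0
```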

Data analysis

The participants’ data for activities started independently and correct responses were reported in graphic form. In order to simplify the graphic display, the data were summarized over blocks of activities. Consequently, the data points appearing in the graphs represent mean percentages of activities started independently and mean frequencies of correct responses (see Results). The Kolmogorov-Smirnov test [29] was to be used for analyzing the differences between the data of Baseline I and Baseline II and the data of the Intervention phase for the participants who showed data overlaps between those phases.
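
As a rough illustration of these two steps, the block summarization and the planned between-phase comparison can be sketched as follows; the per-activity values are hypothetical, and scipy’s two-sample Kolmogorov-Smirnov test is used here as one standard implementation of the planned analysis.

```python
from statistics import mean
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

def block_means(values, block_size):
    """Summarize per-activity values into mean values over consecutive blocks
    (blocks of 5 activities in baseline, 15 in the intervention; a shorter
    final block is kept as-is)."""
    return [mean(values[i:i + block_size]) for i in range(0, len(values), block_size)]

# Hypothetical correct-response counts per activity (0-8), not the study's data.
baseline_ii = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
intervention = [8, 7, 8, 8, 8, 8, 7, 8, 8, 8, 8, 8, 8, 8, 8]

print(block_means(baseline_ii, 5))    # e.g., [0.4, 0.2]
print(block_means(intervention, 15))  # e.g., [7.87]

# The planned nonparametric comparison between phases:
statistic, p_value = ks_2samp(baseline_ii, intervention)
print(statistic, p_value)
```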

Results

Figs 1 and 2 report the baseline and intervention data for the participants using verbal instructions (i.e., Liz, Andy, and Mary) and the participants using pictorial instructions (i.e., George, Chelsea, and Nancy), respectively. The black triangles represent the mean percentage of activities started independently over blocks of five activities during Baseline I and over blocks of 15 activities during the Intervention phase. The open circles represent the mean frequency of correct responses per activity over blocks of five activities during Baseline II and over blocks of 15 activities during the Intervention phase. Any nonstandard block at the end of a phase is marked with a number indicating how many activities that block includes. The numbers inside the boxes in the figures’ panels indicate how many activities in total were presented to each participant during the different phases of the study.

Fig 1. The three panels report the data for Liz, Andy, and Mary, respectively.

The black triangles represent the mean percentage of activities started independently over blocks of five activities during Baseline I and 15 activities during the Intervention phase. The open circles represent the mean frequency of correct responses per activity over blocks of five activities during Baseline II and 15 activities during the Intervention phase. A nonstandard block at the end of a baseline phase or the Intervention phase is marked with a number indicating how many activities that block includes. The numbers inside the boxes indicate how many activities were presented to each participant during Baseline I, Baseline II, and the Intervention.

https://doi.org/10.1371/journal.pone.0269793.g001

Fig 2. The three panels report the data for George, Chelsea, and Nancy, respectively.

Data are plotted as in Fig 1.

https://doi.org/10.1371/journal.pone.0269793.g002

Baseline I included 13 (Liz) to 22 (Mary) activities for the participants relying on verbal instructions (see Fig 1) and 15 (George) to 27 (Nancy) activities for the participants relying on pictorial instructions (see Fig 2). The mean percentage of activities started independently during Baseline I varied between zero and nearly 7. Andy, George, and Nancy were the only participants who started one activity independently, and thus had one data point with a 20% performance level (see Figs 1 and 2). Baseline II included 10 (Liz) to 15 (Mary) activities for the first group of participants (see Fig 1) and 12 (George) to 17 (Nancy) activities for the second group of participants (see Fig 2). The mean frequency of correct responses per activity was below 0.5 (out of a maximum possible of 8) for all participants (see Figs 1 and 2).

The Intervention phase included 120 (Andy) to 271 (Mary) activities for the first group of participants (see Fig 1) and 197 (Chelsea) to 292 (George) activities for the second group of participants (see Fig 2). Differences in the number of activities used during the Intervention were mainly due to participants’ availability. The mean percentage of activities started independently was virtually 100 for all participants, with exceptions concentrated in the first block of 15 activities (i.e., when the research assistant used prompts/guidance to overcome any participant’s problem or delay in relation to the smartphone’s reminders and instructions) (see Figs 1 and 2). The mean frequency of correct responses varied between 7.8 (Nancy) and virtually 8 (Mary and George) per activity. That is, incorrect responses were rare or virtually absent.

Given the obvious difference (i.e., absence of any overlap) between the Baseline data and the Intervention data on these two measures, the use of the Kolmogorov-Smirnov test to evaluate the statistical significance of such difference was deemed unnecessary and thus omitted. The mean amount of time required for carrying out an activity during the Intervention phase ranged between about 6 min (Liz) and 10 min (George).

Discussion

The data from the Intervention phase indicate that all participants were successful in using the technology system to independently start and carry out various activities at different times of the day. These data support previous findings in the area suggesting that technology-aided programs can be effectively used for supporting participants with intellectual and sensory or sensory and motor disabilities (a) in their performance of functional activities [6,13,20,30] and (b) in their determination to start the activities on their own at the appropriate times [15,21]. The study also confirms that a relatively simple technology system may be set up to provide instructions functionally connected to the participants’ responding [16,22]. The same technology system may be designed to allow and facilitate the scheduling of flexible activities (i.e., activities that can change over different days, in line with environmental requirements and participant’s conditions). In view of the results reported, a number of considerations may be in order.

First, the very high frequency of correct activity responses produced by the participants may be attributed (a) in part to the fact that the responses were familiar to the participants, and (b) in part to the fact that the instructions were functionally connected to the participants’ responding. Although no specific assessment was carried out in this study as to the impact of using instructions functionally connected to responding as opposed to instructions presented at pre-arranged time intervals (i.e., independent of responding), two points suggest that such an impact was relevant. The first point is the data reported by Lancioni et al. [22] showing that participants with intellectual and other disabilities tended to have a significantly more accurate performance with the former type of instructions (i.e., functionally connected to the participants’ responding) than with the latter type of instructions (i.e., presented at pre-arranged intervals). The second point is the data of previous studies using instructions presented at pre-arranged intervals [e.g., 14,15,20,21]. Those data only rarely showed levels of virtually 100% correct responding.

Second, performing activities with high accuracy is likely to make the participants feel at ease during the engagement and perceive the technology system as friendly. Both these aspects would seem relevant to increase the participants’ quality of engagement and eventually their satisfaction with the engagement, with positive implications for their mood and general quality of life [31,32]. Starting the activities independent of staff input (and at the appropriate times) could increase the participants’ sense of control over their occupational situation and improve their social image as well as the level of approval they may receive within the daily context [16,22,33].

Third, the possibility of arranging the daily activities in a flexible manner (by selecting the responses to be included so that the activities can better match environmental requirements and participants’ conditions) may be practically relevant [34–36]. In fact, it may give staff the freedom and opportunity to set up the best occupational schedule for individual participants on a daily basis, with minimal time investment. Obviously, this possibility does not exclude that some of the activities (or all of the activities for some participants) may remain unchanged over any period of time.

Fourth, the technology system used in this study may be viewed as a relatively simple tool that is suitable for participants relying on verbal instructions as well as participants relying on pictorial instructions. The system, moreover, could be improved over time, based on additional participant data and staff feedback [37–40]. The fact that the system relies on fairly affordable everyday commercial components can make it accessible to rehabilitation and care contexts. Its present cost may be estimated at about US $550 (i.e., approximately US $175 for the Samsung smartphone, US $200 for the four Philips Hue sensors, US $25 for the mini speaker, and US $150 for the Philips Hue Bridge, the Philips Hue smart bulb, and the 4G Long-Term Evolution Wi-Fi router). A possible obstacle to accessing and using such a system on an immediate basis is that it is not a ready-made (off-the-shelf) tool but rather one that needs to be assembled from the aforementioned commercial components and programmed for the intervention purpose.

Limitations and future research

The study presents three basic limitations, that is, the small number of participants and lack of follow-up data, the absence of an evaluation of participants’ satisfaction with the use of the system, and the absence of an evaluation of staff opinion about the system. The first limitation, which does not at this time allow one to make general statements about the overall potential and usability of the technology system reported, may be addressed through replication studies involving follow-up checks. These studies could help determine the reliability of the system and, possibly, of upgraded versions of it in promoting and maintaining independent activity performance of people with intellectual and multiple disabilities [41–43].

The second limitation (i.e., lack of data on participants’ satisfaction with the system) might be addressed by (a) asking the participants to choose between activities to carry out with the support of the system and other daily activities to be performed with some staff support [44], and (b) observing the participants’ behavior during the two activity situations and determining possible differences in expressions of positive mood (e.g., smiles) during their engagement [45–47]. The third limitation (i.e., lack of data on staff views about the system) might be addressed by carrying out a staff survey. The survey could be arranged through a brief questionnaire (concerning the efficacy, friendliness, and applicability of the system) that staff would be asked to complete after watching a brief video illustrating the functioning of the system [48–50].

In conclusion, the results suggest that the technology system was highly effective in helping people with intellectual and sensory or sensory and motor disabilities to start and carry out functional activities independently. The system also made it easy to change the activities scheduled from day to day, based on environmental requirements or participants’ conditions. While the results are encouraging, no definite statements can be made about the system’s overall potential and usability until new research has addressed the limitations of this study. New research may also try to further develop the present system to improve its effectiveness and extend its usability across participants with different characteristics.

References

  1. Ayres K. M., Mechling L., & Sansosti F. J. (2013). The use of mobile technologies to assist with life skills/independence of students with moderate/severe intellectual disability and/or autism spectrum disorders: Considerations for the future of school psychology. Psychology in the Schools, 50(3), 259–271.
  2. Dusseljee J. C. E., Rijken P. M., Cardol M., Curfs L. M. G., & Groenewegen P. P. (2011). Participation in daytime activities among people with mild or moderate intellectual disability. Journal of Intellectual Disability Research, 55(1), 4–18. pmid:21029235
  3. Smith K. A., Shepley S. B., Alexander J. L., & Ayres K. M. (2015). The independent use of self-instructions for the acquisition of untrained multi-step tasks: A review of the literature. Research in Developmental Disabilities, 40, 19–30. pmid:25710296
  4. Cullen J. M., Alber-Morgan S. R., Simmons-Reed E. A., & Izzo M. V. (2017). Effects of self-directed video prompting using iPads on the vocational task completion of young adults with intellectual and developmental disabilities. Journal of Vocational Rehabilitation, 46(3), 361–375.
  5. Cullen J. M., Simmons-Reed E. A., & Weaver L. (2017). Using 21st century video prompting technology to facilitate the independence of individuals with intellectual and developmental disabilities. Psychology in the Schools, 54(9), 965–978.
  6. Desideri L., Lancioni G., Malavasi M., Gherardini A., & Cesario L. (2021). Step instruction technology to help people with intellectual and other disabilities perform multistep tasks: A literature review. Journal of Developmental and Physical Disabilities, 33(6), 857–886.
  7. Randall K. N., Johnson F., Adams S. E., Kiss C. W., & Ryan J. B. (2020). Use of an iPhone task analysis application to increase employment-related chores for individuals with intellectual disabilities. Journal of Special Education Technology, 35(1), 26–36.
  8. Spriggs A. D., Mims P. J., van Dijk W., & Knight V. F. (2017). Examination of the evidence base for using visual activity schedules with students with intellectual disability. The Journal of Special Education, 51(1), 14–26.
  9. Payne D., Cannella-Malone H. I., Tullis C. A., & Sabielny L. M. (2012). The effects of self-directed video prompting with two students with intellectual and developmental disabilities. Journal of Developmental and Physical Disabilities, 24(6), 617–634.
  10. Wehmeyer M. L. (2020). The importance of self-determination to the quality of life of people with intellectual disability: A perspective. International Journal of Environmental Research and Public Health, 17(19), 7121. pmid:33003321
  11. Wehmeyer M. L., Davies D. K., Stock S. E., & Tanis S. (2020). Applied cognitive technologies to support the autonomy of people with intellectual and developmental disabilities. Advances in Neurodevelopmental Disorders, 4(4), 389–399.
  12. Collins J. C., Ryan J. B., Katsiyannis A., Yell M., & Barrett D. E. (2014). Use of portable electronic assistive technology to improve independent job performance of young adults with intellectual disability. Journal of Special Education Technology, 29(3), 15–29.
  13. Goo M., Maurer A. L., & Wehmeyer M. L. (2019). Systematic review of using portable smart devices to teach functional skills to students with intellectual disability. Education and Training in Autism and Developmental Disabilities, 54(1), 57–68.
  14. Lancioni G. E., Singh N. N., O’Reilly M. F., Sigafoos J., Boccasini A., La Martire M. L., et al. (2016). People with multiple disabilities use assistive technology to perform complex activities at the appropriate time. International Journal on Disability and Human Development, 15(3), 261–266.
  15. Lancioni G. E., Singh N. N., O’Reilly M. F., Sigafoos J., Alberti G., Zimbaro C., et al. (2017). Using smartphones to help people with intellectual and sensory disabilities perform daily activities. Frontiers in Public Health, 5, 282. pmid:29114539
  16. Lin M. L., Chiang M. S., Shih C. H., & Li M. F. (2018). Improving the occupational skills of students with intellectual disability by applying video prompting combined with dance pads. Journal of Applied Research in Intellectual Disabilities, 31(1), 114–119. pmid:28544583
  17. Cannella-Malone H. I., Chan J. M., & Jimenez E. D. (2017). Comparing self-directed video prompting to least-to-most prompting in post-secondary students with moderate intellectual disabilities. International Journal of Developmental Disabilities, 63(4), 211–220.
  18. Mechling L. C., Gast D. L., & Seid N. H. (2010). Evaluation of a personal digital assistant as a self-prompting device for increasing multi-step task completion by students with moderate intellectual disabilities. Education and Training in Autism and Developmental Disabilities, 45(3), 422–439.
  19. Savage M. N., & Taber-Doughty T. (2017). Self-operated auditory prompting systems for individuals with intellectual disability: A meta-analysis of single-subject research. Journal of Intellectual & Developmental Disability, 42(3), 249–258.
  20. Lancioni G. E., Singh N. N., O’Reilly M. F., Sigafoos J., Campodonico F., Zimbaro C., et al. (2018). Helping people with multiple disabilities manage an assembly task and mobility via technology-regulated sequence cues and contingent stimulation. Life Span and Disability, 21(2), 143–163.
  21. Resta E., Brunone L., D’Amico F., & Desideri L. (2021). Evaluating a low-cost technology to enable people with intellectual disability or psychiatric disorders to initiate and perform functional daily activities. International Journal of Environmental Research and Public Health, 18, 9659. pmid:34574584
  22. Lancioni G. E., O’Reilly M. F., Sigafoos J., Alberti G., Tenerelli G., Ricci C., et al. (2021). Tying the delivery of activity step instructions to step performance: Evaluating a basic technology system with people with special needs. Advances in Neurodevelopmental Disorders, 5(4), 488–497.
  23. Balboni G., Belacchi C., Bonichini S., & Coscarelli A. (2016). Vineland II. Vineland Adaptive Behavior Scales (2nd ed.): Standardizzazione italiana. Firenze: OS.
  24. Sparrow S. S., Cicchetti D. V., & Balla D. A. (2005). Vineland Adaptive Behavior Scales (2nd ed.) (Vineland II). Minneapolis: Pearson.
  25. Boat T. F., Wu J. T., & National Academies of Sciences, Engineering, and Medicine (2015). Clinical characteristics of intellectual disabilities. In Mental disorders and disabilities among low-income children (pp. 169–178). Washington, DC: National Academies Press. https://doi.org/10.17226/21780
  26. Barlow D. H., Nock M., & Hersen M. (2009). Single-case experimental designs (3rd ed.). New York: Allyn & Bacon.
  27. Lancioni G. E., Desideri L., Singh N. N., Sigafoos J., & O’Reilly M. F. (2021). A commentary on standards for single-case experimental studies. International Journal of Developmental Disabilities.
  28. Sanetti L. M. H., & Collier-Meek M. A. (2014). Increasing the rigor of procedural fidelity assessment: An empirical comparison of direct observation and permanent product review methods. Journal of Behavioral Education, 23(1), 60–88.
  29. Siegel S., & Castellan N. J. (1988). Nonparametric statistics for the behavioral sciences (2nd ed.). New York: McGraw-Hill.
  30. Golisz K., Waldman-Levi A., Swierat R. P., & Toglia J. (2018). Adults with intellectual disabilities: Case studies using everyday technology to support daily living skills. British Journal of Occupational Therapy, 81(9), 514–524.
  31. Brown I., Hatton C., & Emerson E. (2013). Quality of life indicators for individuals with intellectual disabilities: Extending current practice. Intellectual and Developmental Disabilities, 51(5), 316–332. pmid:24303820
  32. Kocman A., & Weber G. (2018). Job satisfaction, quality of work life and work motivation in employees with intellectual disability: A systematic review. Journal of Applied Research in Intellectual Disabilities, 31(5), 804–819.
  33. Mihailidis A., Melonis M., Keyfitz R., Lanning M., Van Vuuren S., & Bodine C. (2016). A nonlinear contextually aware prompting system (N-CAPS) to assist workers with intellectual and developmental disabilities to perform factory assembly tasks: System overview and pilot testing. Disability and Rehabilitation: Assistive Technology, 11(7), 604–612. pmid:26135042
  34. Desmond D., Layton N., Bentley J., Boot F. H., Borg J., Dhungana B. M., et al. (2018). Assistive technology and people: A position paper from the first global research, innovation and education on assistive technology (GREAT) summit. Disability and Rehabilitation: Assistive Technology, 13(5), 437–444. pmid:29772940
  35. Federici S., & Scherer M. (2017). Assistive technology assessment handbook (2nd ed.). Boca Raton, FL: CRC Press.
  36. Scherer M. J., & Federici S. (2015). Why people use and don’t use technologies: Introduction to the special issue on assistive technologies for cognition/cognitive support technologies. NeuroRehabilitation, 37(3), 315–319. pmid:26518529
  37. Boot F. H., Owuor J., Dinsmore J., & MacLachlan M. (2018). Access to assistive technology for people with intellectual disabilities: A systematic review to identify barriers and facilitators. Journal of Intellectual Disability Research, 62(10), 900–921. pmid:29992653
  38. Borg J. (2019). Commentary on selection of assistive technology in a context with limited resources. Disability and Rehabilitation: Assistive Technology, 14(8), 753–754. pmid:31469019
  39. de Witte L., Steel E., Gupta S., Delgado Ramos V., & Roentgen U. (2018). Assistive technology provision: Towards an international framework for assuring availability and accessibility of affordable high-quality assistive technology. Disability and Rehabilitation: Assistive Technology, 13(5), 467–472. pmid:29741965
  40. Scherer M. J. (2019). Assistive technology selection to outcome assessment: The benefit of having a service delivery protocol. Disability and Rehabilitation: Assistive Technology, 14(8), 762–763. pmid:31524540
  41. Kazdin A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). New York: Oxford University Press.
  42. Locey M. L. (2020). The evolution of behavior analysis: Toward a replication crisis? Perspectives on Behavior Science, 43(4), 655–675. pmid:33381684
  43. Travers J. C., Cook B. G., Therrien W. J., & Coyne M. D. (2016). Replication research and special education. Remedial and Special Education, 37(4), 195–204.
  44. Tullis C. A., Cannella-Malone H. I., Basbigill A. R., Yeager A., Fleming C. V., Payne D., et al. (2011). Review of the choice and preference assessment literature for individuals with severe to profound disabilities. Education and Training in Autism and Developmental Disabilities, 46(4), 576–595.
  45. Dillon C. M., & Carr J. E. (2007). Assessing indices of happiness and unhappiness in individuals with developmental disabilities. Behavioral Interventions, 22(3), 229–244.
  46. Lancioni G. E., Singh N. N., O’Reilly M. F., Sigafoos J., Grillo G., Campodonico F., et al. (2020). A smartphone-based intervention to enhance functional occupation and mood in people with neurodevelopmental disorders: A research extension. Life Span and Disability, 23(1), 21–42.
  47. Parsons M. B., Reid D. H., Bently E., Inman A., & Lattimore L. P. (2012). Identifying indices of happiness and unhappiness among adults with autism: Potential targets for behavioral assessment and intervention. Behavior Analysis in Practice, 5(1), 15–25. pmid:23326627
  48. Lancioni G. E., Singh N. N., O’Reilly M. F., Sigafoos J., D’Amico F., Buonocunto F., et al. (2015). Assistive technology to help persons in a minimally conscious state develop responding and stimulation control: Performance assessment and social rating. NeuroRehabilitation, 37(3), 393–403. pmid:26518532
  49. Plackett R., Thomas S., & Thomas S. (2017). Professionals’ views on the use of smartphone technology to support children and adolescents with memory impairment due to acquired brain injury. Disability and Rehabilitation: Assistive Technology, 12(3), 236–243. pmid:26730647
  50. Worthen D., & Luiselli J. K. (2019). Comparative effects and social validation of support strategies to promote mindfulness practices among high school students. Child & Family Behavior Therapy, 41(4), 221–236.