Javier L. Castellanos-Cruz, María F. Gómez-Medina, Mahdi Tavakoli, Patrick M. Pilarski, and Kim Adams
University of Alberta, Canada
INTRODUCTION
Play is critical for children to develop the skills needed to assume student, family, and social roles throughout their lives [1]. Play is a way for children to expand their knowledge about self and the world and allows them to discover and enhance their capabilities by trying out objects, making decisions, understanding cause-and-effect relationships, and seeing the consequences of their actions [2]. However, children with physical impairments may face challenges in playing and exploring their environment, which may negatively affect their social, emotional, and/or psychological development [3].
Robots can allow children with physical impairments to explore and interact with their environment, which can contribute to their learning and social development [4]. Robots can be teleoperated, allowing children to control them from a distance, e.g., from their wheelchairs. However, the robots that have been developed for play allow children to see and hear, but not feel, what the robot is doing to their toys [5]. Haptic interfaces can provide children with the sense of touch, so that they can learn about the mechanical properties (e.g., hardness or softness) of their toys, contributing to the understanding and exploration of the environment [6]. Haptics-enabled robots can also implement kinaesthetic (i.e., hands-on) guidance to help people with physical disabilities do manual tasks, such as sorting, that they could not otherwise complete [7]. To this end, haptics-enabled robots exert forces that guide the user to reach locations or objects in the environment.
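As a minimal illustration of this principle (not the specific controller used in the systems cited above), kinaesthetic guidance can be rendered as a spring-like force that pulls the user's hand toward a target; the gain and force cap below are hypothetical values chosen for the sketch.

```python
import numpy as np

def guidance_force(end_effector_pos, target_pos, k=50.0, f_max=3.0):
    """Spring-like kinaesthetic guidance: pull the user's hand toward a target.

    end_effector_pos, target_pos : 3D positions in metres (numpy arrays)
    k     : virtual spring stiffness in N/m (illustrative value)
    f_max : cap on the force magnitude in N, to stay within a safe range
    """
    error = target_pos - end_effector_pos        # vector from hand to target
    force = k * error                            # proportional attraction
    magnitude = np.linalg.norm(force)
    if magnitude > f_max:                        # saturate to avoid large forces
        force = force * (f_max / magnitude)
    return force

# Example: hand at the origin, target 10 cm away along x
print(guidance_force(np.zeros(3), np.array([0.10, 0.0, 0.0])))
```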
Children with physical disabilities could point at the object or toy they want to play with using their eye gaze, so that the haptics-enabled robot can guide them to reach it. A common approach is to use an explicit eye input interface, which requires users to control their eye movements, or gaze direction, voluntarily and consciously, e.g., using the eyes as a pointer [8]; however, this can be difficult for children with disabilities. Performance in selecting target images with eye gaze was compared between five typically developing children aged 4 to 13 years and five children with spastic quadriplegic cerebral palsy (CP) aged 7 to 11 years [11]. The task consisted of selecting the correct target image out of 2, 4, 8, or 16 images displayed on a computer screen. The typically developing children took an average of 14.2 seconds to complete the task, while the children with CP took 57.8 seconds on average. The children with CP had difficulty maintaining their gaze on the target due to their body movements. Similarly, the ability to select targets was assessed in seven children with Rett syndrome (a developmental disorder with cognitive and neuromotor impairments) between 4 and 9 years of age [12]. Children were asked to fixate on one of two physical pictures according to verbal instructions, or on the picture that was similar to or exactly like a third picture presented. Analysis of the eye gaze revealed that children fixated on the correct picture, on average, 62.4% of the time.
In addition, children may find it difficult to use explicit interfaces while completing a task that requires multiple steps. Encarnação et al. [9] tested an explicit interface to control a Lego robot with children with physical impairments. Three children with cerebral palsy participated in that study: two were three years old and one was six years old. Children controlled the movement of the robot by looking at a computer screen that displayed buttons to move forward or backward and turn left or right. Children had to fixate on the screen to make a selection and then look at the robot to observe its action. The three-year-olds were not able to complete the activities, and the authors suggested this was due to the complexity of shifting the focus of their attention from the screen to the robot. In another study, a seven-year-old child without impairments was not able to use an explicit eye input interface for drawing on a computer screen [10]. The interface required users to fixate on a location on the canvas for at least 500 ms to set the starting point for drawing. Then, the user had to select a shape (line or circle) by fixating on one of the buttons displayed on the computer screen. Finally, the user had to fixate on a location on the canvas to set the end point of the figure. Other participants, who were between 10 and 36 years of age, were able to use the interface successfully.
Attentive eye input interfaces can be easier and faster to use than explicit eye input interfaces. Attentive user interfaces track and process the user's point of gaze (POG) in the background to provide information about the user's attentional behavior while performing a task [8]. Li et al. [13] implemented a visual attention recognition method to control a laparoscope (i.e., a camera inserted into the body) during minimally invasive surgery. The system had an attentive user interface that recognized the surgeon's visual attention by interpreting eye gaze movements and eye gaze patterns, and then autonomously moved the motorized laparoscope to the site where the surgeon's visual attention was directed. Barbuceanu et al. [14] designed an attentive interface to identify the user's intentions for object selection in a virtual environment. The user's eye gaze was first analyzed during the selection of virtual objects representing items from a kitchen. The interface incorporated a probability model of the gaze transitions between the objects, reflecting the possible operations to perform with them, e.g., pouring water from a bottle into a glass. The interface then selected objects based on the gaze transition probabilities of the model. Establishing such connections between the objects allowed the system to anticipate the user's object selections.
In our previous work, we showed that it was possible to predict the object or toy that adults without disabilities wanted to reach while they were using a haptic telerobotic system [15], [16]. The objective of this study was to examine the eye gaze of typically developing children and participants with physical disabilities in order to develop an attentive user interface that predicts the toy they want to reach when using the haptic telerobotic system.
METHODS
Participants
Nine typically developing children participated in this study. Their ages ranged from three years and one month to four years and 10 months (48.3 ± 7.3 months). None of the children had any known physical or visual impairments. Additionally, a child with hemiplegic cerebral palsy participated in this study. He was seven years and four months old. His right limbs are affected, and he has difficulties grasping and reaching objects with his right hand. He was classified as Level I in the Gross Motor Function Classification System (GMFCS) Expanded and Revised [17], which means that he can walk but with limited balance and coordination. He was classified as Level III in the Manual Ability Classification System (MACS) [18], which means he has difficulties handling objects with his right hand. He had corrected-to-normal vision (i.e., he wore glasses) and had attention deficit hyperactivity disorder, as reported by his parent. Also, a 52-year-old female adult with quadriplegic cerebral palsy participated in this study. She has difficulties handling objects due to poor motor control and spastic movements. She was classified as Level IV on the GMFCS scale, meaning that she achieves self-mobility by using a powered wheelchair. In addition, she was classified as Level III on the MACS scale, meaning that she has difficulties handling objects. She has alternating amblyopia; thus, her eyes turn in different directions, but her dominant eye is the left one. Consent was obtained from the children's parents and verbal assent was obtained from the children prior to starting the trials. The adult provided consent for her participation. Ethical approval was obtained from the Health Research Ethics Board – Health Panel at the University of Alberta.
Materials
Figure 1 illustrates the setup of the robotic system and the activity. The robotic system had two PHANToM Premium 1.5A haptic robots (3D Systems, Inc., Rock Hill, SC, USA) in teleoperation mode. One of them was placed in the environment (environment-side robot) and followed the movements performed by the user on the other robot (user-side robot). The system also included a Tobii EyeX eye tracking system (Tobii Technology, Stockholm, Sweden) to measure the x and y coordinates of the point of gaze (POG) for the left and right eyes of the user. The sampling frequency of the eye tracker was 40 Hz. The robots and the eye tracker were programmed in Matlab/Simulink R2016a.
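As an illustration of how a leader-follower teleoperation scheme of this kind can be structured (a simplified sketch, not the controller implemented on the PHANToM robots), the environment-side robot can be servoed toward the pose of the user-side robot with a proportional-derivative law; the gains and variable names below are hypothetical.

```python
import numpy as np

def follower_force(leader_pos, follower_pos, follower_vel, kp=200.0, kd=5.0):
    """One control step of a simplified position-following teleoperation scheme.

    leader_pos, follower_pos : 3D positions of the user-side (leader) and
                               environment-side (follower) robots, in metres
    follower_vel             : follower end-effector velocity, in m/s
    Returns a Cartesian force command (N) that drives the follower toward the
    leader while damping its motion. Gains are illustrative only.
    """
    position_error = leader_pos - follower_pos
    return kp * position_error - kd * follower_vel
```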
The activity chosen was the whack-a-mole game. The game turned on and off the three moles that were closest to the environment-side robot. Only the closest moles were used to avoid moving the robot's end-effector to the two moles at the limits of the robots' workspace, where teleoperation instability was more likely to occur. Children looked through the hole of a stand so that the eye tracker would not lose view of their eyes due to large head movements. The adult did the activity without the stand. The distance between the participant's eyes and the eye tracker was approximately 65 cm, and the distance to the rear moles of the game was about 90 cm. For more details see [15].
Procedure
Each participant completed one session lasting about 20 minutes. The typically developing children and the adult with CP held the robot interface with their dominant hand, while the child with CP held it with his affected right hand. Because of the adult's eye condition, she did the activity wearing an eye patch over her right eye so that the eye tracker could measure the POG of her left eye.
Before starting the activity, participants had the chance to get familiar with the system by whacking each mole twice. Participants then played the whack-a-mole game without the activation of haptic guidance. One mole was lit up at a time in random order, and one second after the participant whacked it, a different mole was lit up. The typically developing children whacked 60 moles. The child and the adult with CP whacked 180 and 120 moles, respectively, taking breaks after every 60 moles. The participants with CP played the game longer than the typically developing children because they represented a small sample; thus, more data was needed from each of them. To keep children engaged, the first author congratulated them after whacking each mole, e.g., "wow, that was amazing!", and every few moles said an engaging sentence such as "He/she is a good player, but can he/she really win at this game? Of course, he/she can!"
Data collection and analysis
The participants' POG and the position of the environment-side robot's end-effector were collected, and the session was video recorded. The POG's x and y coordinates for the left and right eyes were averaged. After the session, the POG and the robot's position data were synchronized and divided into episodes. An episode was the time interval from the moment a mole was lit up until the participant whacked it. Episodes were excluded from analysis if the participant anticipated and moved from the mole he/she had just whacked towards the other two moles without waiting for the one-second delay before another mole was lit up. These episodes were identified as those in which the position of the environment-side robot's end-effector was less than 8 cm away from the target mole when it was lit up, i.e., approximately half of the distance between the rear moles. Episodes were also excluded if the eye gaze data was lost due to head movements or because participants were looking outside the eye tracker's workspace (e.g., when children looked at their parents or lifted the robot's end-effector too high), or if users exceeded the force limit of the robots (i.e., 8.5 N), since that deactivated the robots' motors for a few seconds. The number of episodes included in the analysis was 258 for the typically developing children, 52 for the child with CP, and 39 for the adult with CP. The POG and robot position data were examined to understand the participants' eye-robot coordination (i.e., coordination of eye movements with the movements of the robot) and to implement an algorithm to predict the mole that participants wanted to whack with the robot. The accuracy of the predictions in each episode was measured.
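A minimal sketch of the episode screening logic described above is given below, assuming a hypothetical Episode data structure; the 8 cm and 8.5 N thresholds come from the text, while all field names are illustrative.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Episode:
    robot_positions: np.ndarray   # (T, 3) end-effector positions, metres
    gaze_valid: np.ndarray        # (T,) True where the eye tracker reported a POG
    forces: np.ndarray            # (T,) commanded force magnitudes, newtons
    target_mole: np.ndarray       # (3,) position of the lit mole, metres

ANTICIPATION_DIST = 0.08   # 8 cm: end-effector already near the target at light-up
FORCE_LIMIT = 8.5          # N: exceeding this deactivated the robots' motors

def keep_episode(ep: Episode) -> bool:
    # Exclude if the participant had already moved toward the target at light-up
    start_dist = np.linalg.norm(ep.robot_positions[0] - ep.target_mole)
    if start_dist < ANTICIPATION_DIST:
        return False
    # Exclude if gaze data was lost at any point in the episode
    if not ep.gaze_valid.all():
        return False
    # Exclude if the force limit was exceeded
    if (ep.forces > FORCE_LIMIT).any():
        return False
    return True
```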
RESULTS AND DISCUSSION
All the participants were able to control the robots to complete the activity without help from the researcher, and children seemed to enjoy playing the game using the robots. Even the three-year-old children were capable of controlling the telerobotic system to complete the activity. Control may have been intuitive because the environment-side robot closely followed the movements of the user's hand on the user-side robot, so it seemed as if participants were moving their own hand. In fact, participants needed very little time to become familiar with the system before they could whack the moles.
From the videos, it was observed that the typically developing children and the child with CP did not have trouble getting the robot end-effector to the moles, but they sometimes had difficulty pushing straight down on them. Therefore, haptic guidance will be necessary to help the children push down the moles, especially the child with CP, who could not whack the moles as hard as the typically developing children. The adult with CP also had trouble whacking the moles and, in addition, had difficulties reaching them due to spastic movements. Thus, haptic guidance will be necessary to help her both reach the moles and whack them.
Examination of the participants' eye gaze and the robot's position revealed that, most of the time, the participants' POG was close to the mole they wanted to whack while they moved the robot toward it. Thus, it was determined that an attentive user interface could predict the mole that participants wanted to whack with the robot by measuring the distance between the user's POG and each mole, then assigning the mole with the smallest distance as the predicted mole. Applying that algorithm to the episodes collected while the participants played the whack-a-mole game without haptic guidance, the prediction accuracies were 95.84% (SD = 9.24%) for the typically developing children, 93.97% (SD = 9.64%) for the child with CP, and 90.54% (SD = 9.76%) for the adult with CP. It is important to note that these accuracies were obtained without the activation of the haptic guidance towards the predicted moles. In a future study, we aim to test the attentive interface when it activates the haptic guidance so as to steer the participants' activity towards the predicted mole. Since the prediction accuracies here were not 100%, participants may feel that the guidance opposes their movements when predictions are inaccurate.
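A minimal sketch of the nearest-mole prediction rule is shown below, assuming the POG and mole locations are expressed in a common 2D frame; the per-sample scoring used to compute episode accuracy is one plausible interpretation of the scoring described above, and the coordinates in the example are hypothetical.

```python
import numpy as np

def predict_mole(pog, mole_positions):
    """Nearest-mole rule: return the index of the mole whose (x, y) location
    is closest to the current point of gaze."""
    distances = np.linalg.norm(mole_positions - pog, axis=1)
    return int(np.argmin(distances))

def episode_accuracy(pog_samples, mole_positions, target_idx):
    """Fraction of POG samples in an episode for which the predicted mole
    matches the mole the participant was asked to whack."""
    predictions = [predict_mole(p, mole_positions) for p in pog_samples]
    return float(np.mean([p == target_idx for p in predictions]))

# Hypothetical example: three moles in a row, gaze hovering near the middle one
moles = np.array([[-0.15, 0.0], [0.0, 0.0], [0.15, 0.0]])
gaze = np.array([[0.01, 0.02], [-0.02, 0.01], [0.03, -0.01]])
print(episode_accuracy(gaze, moles, target_idx=1))  # -> 1.0
```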
REFERENCES
- Henry A. Assessment of play and leisure in children and adolescents. In: Parham LD, Fazio LS, editors. Play in Occupational Therapy for Children. 2nd ed. St. Louis, USA: Mosby Elsevier; 2008. p. 95–193.
- Missiuna C, Pollock N. Play deprivation in children with physical disabilities: The role of the occupational therapist in preventing secondary disability. Am J Occup Ther. 1991;45(10):882–8.
- Deitz JC, Swinth Y. Accessing Play Through Assistive Technology. In: Parham LD, Fazio LS, editors. Play in Occupational Therapy. 2nd ed. St. Louis, USA: Mosby Elsevier; 2008. p. 395–412.
- Cook A, Encarnação P, Adams K. Robots: Assistive technologies for play, learning and cognitive development. Technol Disabil. 2010;22(3):127–45.
- van den Heuvel RJF, Lexis MAS, Gelderblom GJ, Jansens RML, de Witte LP. Robots and ICT to support play in children with severe physical disabilities: a systematic review. Disabil Rehabil Assist Technol. 2015;1–14.
- Demain S, Metcalf CD, Merrett GV, Zheng D, Cunningham S. A narrative review on haptic devices: relating the physiology and psychophysical properties of the hand to devices for rehabilitation in central nervous system disorders. Disabil Rehabil Assist Technol. 2013;8(3):181–9.
- Sakamaki I, Adams K, Gomez Medina MF, Castellanos Cruz JL, Jafari N, Tavakoli M, et al. Preliminary testing by adults of a haptics-assisted robot platform designed for children with physical impairments to access play. Assist Technol. 2017.
- Majaranta P, Bulling A. Eye tracking and eye-based human-computer interaction. In: Fairclough S, Gilleade K, editors. Advances in Physiological Computing. London: Springer-Verlag; 2014. p. 39–65.
- Encarnação P, Leite T, Nunes C, Nunes da Ponte M, Adams K, Cook A, et al. Using assistive robots to promote inclusive education. Disabil Rehabil Assist Technol. 2017;12(4):352–72.
- Hornof A, Cavender A, Hoselton R. Eyedraw: A System for Drawing Pictures with Eye Movements. In: Proceedings of the 6th International ACM SIGACCESS Conference on Computers and Accessibility. Atlanta, GA; 2004. p. 86–93.
- Amantis R, Corradi F, Molteni AM, Massara B, Orlandi M, Federici S, et al. Eye-tracking assistive technology: Is this effective for the developmental age? Evaluation of eye-tracking systems for children and adolescents with cerebral palsy. Assist Technol Res Ser. 2011;29:489–96.
- Baptista PM, Mercadante MT, Macedo EC, Schwartzman JS. Cognitive performance in Rett syndrome girls: A pilot study using eyetracking technology. J Intellect Disabil Res. 2006;50(9):662–6.
- Li S, Zhang X, Kim FJ, Donalisio da Silva R, Gustafson D, Molina WR. Attention-aware robotic laparoscope based on fuzzy interpretation of eye-gaze patterns. J Med Device. 2015;9(4).
- Barbuceanu F, Antonya C, Duguleana M, Rusak Z. Attentive user interface for interaction within virtual reality environments based on gaze analysis. In: 14th International Conference on Human-Computer Interaction: Interaction Techniques and Environments. 2011. p. 204–13.
- Castellanos-Cruz JL, Gómez-Medina MF, Tavakoli M, Pilarski PM, Adams KD. Preliminary Testing of a Telerobotic Haptic System and a Neural Network to Predict Targets during a Playful Activity. In: 7th International Conference on Biomedical Robotics and Biomechatronics. Enschede, Netherlands: IEEE; 2018. p. 1280–5.
- Castellanos J, Gomez MF, Adams K. Using machine learning based on eye gaze to predict targets: An exploratory study. In: IEEE Symposium Series on Computational Intelligence. Honolulu, HI, USA: IEEE; 2017.
- Palisano R, Rosenbaum P, Bartlett D, Livingston M. Gross motor function classification system expanded and revised. Retrieved from www.canchild.ca. 2007.
- Eliasson AC, Krumlinde-Sundholm L, Rösblad B, Beckung E, Arner M, Öhrvall AM, et al. The Manual Ability Classification System (MACS) for children with cerebral palsy: scale development and evidence of validity and reliability. Dev Med Child Neurol. 2006;48(7):549–54.