RESNA Annual Conference - 2019

Augmented Reality System For Feeding

Nathalia Peixoto1, Devaraj Dhakshinamurphy1

1George Mason University (Fairfax, VA, USA)


Activities of daily life (ADL) are critically impaired by several kinds of disabilities. Conventional assistive devices for feeding are robotic arms programmed to perform a repetitive task at the push of a button. At the high end of feeding-assistance technology, electroencephalography-controlled robotic feeders provide smoother interaction capabilities but require extensive training and expensive equipment, while being somewhat uncomfortable to wear. Our long-term objective is to propose an adaptable design model that considers intuitive use and possible expansion from feeding to other examples of ADL. Here we show one such example: we designed and implemented an augmented reality-based control mechanism for robotic assistive devices. Our device lets users inspect food served on a plate and then, with a movement of their head, select the quadrant from which a robotic arm, controlled through augmented reality, scoops the food and delivers it to their mouth with a spoon.

Some forms of disability permanently impair quality of life [1]. Technology can be adapted to enhance assistive devices with the objective that patients use them daily. Continuous use of such technology is more likely if it contributes non-obtrusively to the patient's quality of life. One impaired activity that technology can address is independent feeding. We see a need for devices that patients can control intuitively, without relying on muscle movements, voice, or the help of other people. Automated feeding systems are usually composed of robotic arms with control mechanisms that leverage exactly those triggers (voice, push-buttons, or caregivers who control the arm). At the higher end of the technology there are research-based brain-signal detectors, also known as electroencephalography (EEG)-based detectors, that allow users to control the feeding arm by themselves. Such assistive technology has not seen widespread use due to the cumbersome setup (EEG electrodes go on caps that wrap around the head), the price, and the extensive training required before the user can successfully send a simple command such as "up" or "down" for a spoon to be lifted or put down [2,3]. We present here an alternative design: an intuitive controller for a robotic arm feeder. The novelty in our design is the mixed-reality-based controller. More specifically, we designed an augmented reality system that (1) shows the environment to the user, including the plate with food currently in front of the robotic arm; (2) lets the user select which quadrant of the plate will provide the next spoonful of food; (3) adapts over time to the user's eating speed; and (4) can be reprogrammed to control other devices.
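Feature (3), adapting to the user's eating speed, can be sketched as a running estimate of the time between spoonful requests. The sketch below is illustrative only, not the implemented controller; the class name, smoothing factor, and interval bounds are all assumptions.

```python
class PaceAdapter:
    """Illustrative sketch: track how quickly the user requests spoonfuls
    and adapt the pause between servings with an exponential moving
    average (EMA) of the measured inter-selection intervals."""

    def __init__(self, initial_interval_s=10.0, alpha=0.3,
                 min_interval_s=3.0, max_interval_s=30.0):
        self.interval = initial_interval_s  # current estimate (seconds)
        self.alpha = alpha                  # EMA smoothing factor
        self.min = min_interval_s           # safety floor between servings
        self.max = max_interval_s           # ceiling so the arm never stalls

    def observe(self, measured_interval_s):
        """Blend a newly measured interval into the running estimate,
        clamped to the configured bounds, and return the new estimate."""
        blended = (self.alpha * measured_interval_s
                   + (1 - self.alpha) * self.interval)
        self.interval = max(self.min, min(self.max, blended))
        return self.interval
```

A fast eater's short intervals gradually pull the estimate down; the clamp keeps the arm's pace within safe limits regardless of noisy measurements.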

Virtual reality, augmented reality, and mixed reality are experiencing the initial "hype phase" of Gartner's hype cycle [4]. Many applications, from games to training, rely on virtual environments that promise to completely immerse users. Augmented reality (AR) superimposes virtual objects onto the real world. We hypothesized that users would perceive the intuitive AR system as highly usable. Our study found that the AR system performed above industry standards for usability and learnability. Here we show the design and results from testing with healthy subjects.


Figure 1. Indirect view with a real-world object (orange rectangle) integrated with a virtual plate (blue circle) that is not positioned in front of the user’s face, but rather next to the robotic arm. The user can point to the real and virtual images by way of moving their head. The virtual pointer is the dashed line.

A commercially available off-the-shelf 4 DOF robotic arm (Lynxmotion AL5B) was assembled and its end-effector adapted to hold a disposable spoon in the horizontal position. The arm is controlled via a microcontroller development board (Arduino Mega 2560). A desktop computer connects to the board via a serial interface (I2C) to send movement commands and receive sensor information (arm position, power consumption, battery level). Characterization of this actuator system shows a delay of 100 to 200 ms (average 150 ms) from typing a command on the computer to the movement of the spoon. Spatial accuracy was at the millimeter level.
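The desktop-to-microcontroller command path can be sketched as below. The command string and acknowledgement protocol are assumptions for illustration (the paper does not publish them); `port` is any object with `write`/`readline`, such as a pyserial `Serial` instance, and the same pattern reproduces the round-trip latency measurement described above.

```python
import time


def send_move_command(port, command):
    """Send one newline-terminated command to the board and time the
    acknowledgement.

    `port` needs write() and readline() methods (e.g. a pyserial
    serial.Serial object opened on the Arduino's port). Returns a tuple
    (ack_string, round_trip_ms). The command vocabulary used here is a
    hypothetical example, not the actual firmware protocol.
    """
    t0 = time.monotonic()
    port.write((command + "\n").encode("ascii"))
    ack = port.readline().decode("ascii").strip()  # blocks until reply
    return ack, (time.monotonic() - t0) * 1000.0
```

Logging the returned milliseconds over many commands is one way to obtain the 100-200 ms delay characterization reported for the actuator system.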

We then compared, using a decision matrix, several commercially available virtual reality systems (HTC Vive™, Oculus Rift™, NeuTab™) for functionality, available libraries, usability, and cost. The selected system was a smartphone-based system (BLU Life One X2 phone in a NeuTab Virtual Reality Headset™) with a heads-up display (HUD). When the cellphone moves, its gyroscope/accelerometer data are recorded by an application our team developed. These data are processed and used to adjust the image displayed on the smartphone, merging it with real images from a camera positioned above a plate of food. The reality presented to the user is thus merged from the image the smartphone camera is acquiring and the view of the plate with food. Figure 1 shows this framework, which is called indirect-view augmented reality. A fixed virtual pointer is present at the center of the screen. When the virtual pointer hits a virtual object, options about the object are displayed to the user. If the user focuses on one of the options for five seconds, that option is selected. If the option is one of the plate quadrants, the food is then scooped by the spoon and moved to the level of the user's mouth.
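The dwell-based selection described above (a fixed center pointer plus a five-second focus timeout) can be sketched as follows. The quadrant mapping from head angles and all names here are illustrative assumptions, not the released application code.

```python
DWELL_THRESHOLD_S = 5.0  # focus time required to confirm a selection


def quadrant_from_angles(yaw_deg, pitch_deg):
    """Map head yaw/pitch, measured relative to the plate center, to one
    of four plate quadrants. Sign conventions are assumptions."""
    horiz = "right" if yaw_deg >= 0 else "left"
    vert = "far" if pitch_deg >= 0 else "near"
    return f"{vert}-{horiz}"


class DwellSelector:
    """Fires a selection once the same target has been held under the
    pointer for the dwell threshold, then resets."""

    def __init__(self, threshold_s=DWELL_THRESHOLD_S):
        self.threshold = threshold_s
        self.target = None   # target currently under the pointer
        self.start = None    # time at which it came under the pointer

    def update(self, target, t):
        """Feed the currently pointed-at target at time t (seconds).
        Returns the target once the dwell time is reached, else None."""
        if target != self.target:
            self.target, self.start = target, t  # pointer moved: restart
            return None
        if target is not None and t - self.start >= self.threshold:
            self.target, self.start = None, None  # reset after firing
            return target
        return None
```

Calling `update` on every rendered frame with the quadrant under the pointer yields at most one selection per five-second hold, which is what triggers the scooping motion.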


We tested 12 healthy college students to gain feedback on the design and the chosen system (questionnaire approved in 2017 by the George Mason University IRB; all users signed the consent form). We measured the users' perception of the usability, learnability, interface, and hardware capabilities of the AR system through a questionnaire given after users experienced the system for 15 minutes. The testing included a plate with dry food (cereal) and no actual eating of the food, only the control and selection of one of the quadrants. Users could see the spoon being moved toward their mouth and could select "exit" in the middle of the experiment if desired. All users completed the 15-minute test and were given a questionnaire with Likert-scale answers (from "completely agree" to "disagree" and "do not know / do not want to answer"). A summary of the results is given below, in Table 1.

Table 1.  Average answers for subscales

Subscale (questions)              Average score   Standard deviation
Usability (q1, q3, q5-q9, q11)    80.73           9.7
Learnability (q4, q10)            84.38           13.6
Interface (q12, q15)              66.25           9.3
Hardware (q16, q18)               77.22           13.8
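The subscale figures in Table 1 can be reproduced from raw answers along the following lines. The 5-point Likert coding and the 0-100 mapping are assumptions, since the paper does not specify how answers were scored.

```python
from statistics import mean, pstdev

LIKERT_MAX = 5  # assumed 5-point scale: 1 = disagree ... 5 = completely agree


def likert_to_percent(answer):
    """Map a 1..5 Likert answer onto a 0-100 score."""
    return (answer - 1) / (LIKERT_MAX - 1) * 100.0


def subscale_stats(per_user_answers):
    """Aggregate one subscale across users.

    per_user_answers: list of lists, one inner list per user holding that
    user's answers to the questions in the subscale (e.g. q4 and q10 for
    learnability). Returns (mean, population standard deviation) of the
    per-user percentage scores.
    """
    user_scores = [mean(likert_to_percent(a) for a in answers)
                   for answers in per_user_answers]
    return round(mean(user_scores), 2), round(pstdev(user_scores), 2)
```

Averaging within each user first, then across users, keeps every participant weighted equally even if a question was skipped.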


We combined the questions that refer to device usability (a total of 8 questions) from all users and averaged their answers. The average of 80.73% indicates that users were mostly satisfied with the usability of the system presented. The questions addressed several sub-items of this scale, but on average the usability is consistent with our objective. Learnability (84%) is clearly the highest-scoring item on our questionnaire; this result matches previous literature on virtual reality systems, mainly for the younger population, which is already used to such systems. On the other hand, the interface (head-mounted display) was the lowest-scoring feature we tested. We have addressed this issue by considering alternative interface designs such as a dark room, projectors, and glasses (projecting the VR and AR images directly onto lenses mounted on glasses). Such technologies are still emerging (and some are not yet available), and therefore future directions of this research may include developing new interfaces.


The augmented-reality system presented here is an enhancement of previous feeding arm designs our team had implemented [5,6], as the prior feedback on those semi-automated arms focused mainly on the difficulty of using them. We have now improved usability with an intuitive controller by leveraging augmented reality as the control mechanism. On the other hand, the interface is still cumbersome (heads-up displays were originally designed to be worn by soldiers, not by people with disabilities). We have identified a need to design slim and unobtrusive virtual reality interfaces. We believe that integrating augmented reality with robotic helpers can significantly increase the quality of life of people with disabilities and allow them to further the technology by expanding its application areas. Virtual reality can be leveraged to control any available embedded system, and thus its possible applications in homes with smart devices are immediate.


[1] “Quadriplegia and Paraplegia Information and Infographic,” Disabled World. [Online]. Available: [Accessed: 04-Dec-2018].

[2] Gao, S. K. Ong, M. L. Yuan, and A. Y. C. Nee, "Assist Disabled to Control Electronic Devices and Access Computer Functions by Voice Commands," in Proceedings of the 1st International Convention on Rehabilitation Engineering & Assistive Technology: In Conjunction with 1st Tan Tock Seng Hospital Neurorehabilitation Meeting, New York, NY, USA, 2007, pp. 37-42.

[3] K. Takano, N. Hata, and K. Kansaku, "Towards intelligent environments: an augmented reality-brain-machine interface operated with a see-through head-mount display," Frontiers in Neuroscience, vol. 5, p. 60, 2011.

[4] Steinert, M., & Leifer, L. (2010, July). Scrutinizing Gartner's hype cycle approach. In Picmet 2010 Technology Management for Global Economic Growth (pp. 1-13). IEEE.

[5] S. Mahmoud, S. Khan, J. Kambugu, K. Mohammadi, and F. Madani, "Semi-automatic feeding device for wheelchair users," in Proceedings of the Annual RESNA Conference, 2012.

[6] S. H. Mahmoud, H. Song, and N. Peixoto, "Semi-Automatic Self Feeding Device," in Annual Meeting of the Biomedical Engineering Society, 2012.