RESNA Annual Conference - 2019

Design of User Experience for Auto-Positioning of Communication Devices

Peter D. Loeffler1, Dianne Goodwin1, Marty Stone1

1BlueSky Designs (Minneapolis)

INTRODUCTION

People with limited or no movement of their body rely on a combination of assistance from others and technology for access to activities of daily life such as self-care and communication. The outcome and consistency of that access vary with factors such as the person's condition and disposition, who provides the help, and the features and function of the technology. Anyone's energy, abilities, and posture vary with factors such as nutrition, hydration, and fatigue; a person with limited or no mobility may experience exacerbated changes due to the effect of gravity and an inability to self-reposition. The person assisting, who might be a family member or care staff, will have varying levels of training, attention to detail, and distraction, producing varying outcomes for the person being assisted. Technology, by contrast, performs to its intended design function according to the features it possesses.

The case of eye-controlled communication demonstrates how outcomes vary with the state of the user, the person assisting, and the technology positioning the communication device. In a simple case, the communication device must be located at a given position relative to the user for its cameras to correctly detect the user's eyes. Within a certain envelope of relative positions, the device can detect the user's eye movement and provide accurate control; outside that envelope the device does not function. When the variance of a user's posture and position stays within the device's envelope, a static fixed mounting system will suffice. If the variance exceeds the envelope, whether from daily activities or the effects of time and gravity over the course of a day, either the person must move or the device must.

A fixed static mount may require partial disassembly and the use of tools to reposition. The desire to easily reposition devices with a moveable mounting system contributed to the commercialization of such products [1]. For some users, both moveable and fixed mounting systems require assistance from a caregiver to reposition, and the outcome of that repositioning depends on who the caregiver is.

If a communication device could be positioned and repositioned with no outside assistance, using only controls or information provided by the user, the user would gain significant independence along with continuous access. The Pow!r Mount [2, 3] provides a platform for positioning and, with the auto-positioning feature described in this paper, adjusts according to information from a camera used to locate the user together with a repeatable voluntary action the user can take. That action may be the activation of a simple switch, a movement of the head or face recognized by computer vision software as a gesture, or an audible sound. The purpose of auto-positioning is continuous, at-will access to a communication device through intuitive, straightforward controls.

DESIGN

Figure 1. Pow!r Mount App Screens for Auto-positioning Feedback. (A) The initial screen, with a Find Me button to start and an Exit button to stop; (B) the screen displayed while the system searches for the user; (C) the screen displayed while the system centers the user; (D) the screen displayed once the user has been centered.
Considering the user experience provides the clearest view of the product. One primary user scenario is a person in a wheelchair who uses eye-controlled communication. The Pow!r Mount attaches to the wheelchair and holds the eye gaze device. The camera of the communication device and a switch of the user's preference provide the inputs that control auto-positioning. The scenario assumes the user can operate the switch voluntarily in two ways: pressing and immediately releasing, or pressing and holding for a short time. A press-and-hold turns auto-positioning on and off, while a quick press-and-release pauses and unpauses it. A phone running an app gives the user feedback on the system state, along with additional functions for Pow!r Mount control.

At the beginning of the scenario the Pow!r Mount is in a stowed position, folded up to the side of the user. Upon a press-and-hold of the switch, the mount unfolds and moves in front of the person to a known or preferred use position. It then begins receiving information from the camera on the communication device about the position and orientation of the person in the image. The auto-positioning algorithms determine the path the mount must follow to bring that position and orientation into the optimal envelope for the communication device to function. The mount adjusts as necessary; once on target, with the user centered in the image, the auto-positioning routine pauses and will not adjust further until requested by a click of the switch.

The auto-positioning feature of the Pow!r Mount has four states, determined by the input the person provides and what the camera sees. The user toggles out of the off state through a voluntary action, such as the switch hold in the scenario above. Once auto-positioning leaves the off state, the camera data determines whether the system searches or centers: if the camera does not see a person, the system searches for one; if it does, the system works to center the person in the camera's view. The pause state can be entered in two ways, either voluntarily through a user input such as the switch click above, or automatically when the system centers the person in the camera view. A further switch click exits the pause state and resumes auto-positioning, searching or centering depending on the camera data. Finally, the same action that turns the system on turns it off at any point.
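As a concrete illustration, this state logic can be summarized as a single transition function. The sketch below is illustrative rather than the shipped firmware; the event names ("hold", "click") and the function name are assumptions of this example.

```python
from enum import Enum, auto

class State(Enum):
    OFF = auto()        # system idle, no movement
    SEARCHING = auto()  # camera does not see a person
    CENTERING = auto()  # camera sees a person; minimizing error
    PAUSED = auto()     # on target, or paused by the user

def next_state(state, event, person_visible, person_centered):
    """One step of the auto-positioning state machine.

    event is "hold" (switch press-and-hold), "click" (quick
    press-and-release), or None when no input arrived this cycle.
    """
    # A hold toggles the system on or off from any state.
    if event == "hold":
        if state != State.OFF:
            return State.OFF
        return State.CENTERING if person_visible else State.SEARCHING
    if state == State.OFF:
        return State.OFF
    # A click pauses, or unpauses back into searching/centering.
    if event == "click":
        if state == State.PAUSED:
            return State.CENTERING if person_visible else State.SEARCHING
        return State.PAUSED
    if state == State.PAUSED:
        return State.PAUSED
    # No input: camera data selects searching versus centering,
    # and a successful centering pauses automatically.
    if not person_visible:
        return State.SEARCHING
    return State.PAUSED if person_centered else State.CENTERING
```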

User evaluation demonstrated that feedback is vital for the person using the system. The Pow!r Mount app displays the state of auto-positioning while it is in use. The initial app screen, shown during the off state, presents a Find Me button (Figure 1A). Once clicked, the screen displays either a searching or a centering icon (Figure 1B and 1C respectively), indicating whether the camera can see the user. When the system has centered the person it pauses and displays that it has found the person (Figure 1D), then shows the Find Me button again along with a message to press and hold to turn off.

Camera Data

The camera data needed by the system includes the user's position in x, y, and z coordinates and the rotation of the user's orientation about the x, y, and z axes. Two commercially available options have been successfully employed to supply this data: a computer vision microcontroller sold by Omron, and the Tobii Dynavox PCEye Mini. Both have a software development kit (SDK) that allows integration with the auto-positioning software. The Omron module reports camera data in pixels, while the PCEye Mini reports millimeters.
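Because the two sensors report in different units, the auto-positioning software benefits from normalizing their output into a single pose structure before use. The sketch below shows one way to do this; the UserPose type, the function names, and the mm-per-pixel scale factor are assumptions of this example, not part of either SDK.

```python
from dataclasses import dataclass

@dataclass
class UserPose:
    """User position (mm) and orientation (degrees) in the camera frame."""
    x: float
    y: float
    z: float
    rx: float  # rotation about the x axis
    ry: float  # rotation about the y axis
    rz: float  # rotation about the z axis

def pose_from_pixels(px, py, pz, rx, ry, rz, mm_per_px):
    """Adapt pixel-based data (e.g. the Omron module) to a UserPose.

    mm_per_px is a hypothetical scale factor; a real system would
    derive it from the camera's calibration at the detected depth.
    """
    return UserPose(px * mm_per_px, py * mm_per_px, pz * mm_per_px,
                    rx, ry, rz)

def pose_from_mm(x, y, z, rx, ry, rz):
    """Adapt millimeter-based data (e.g. the PCEye Mini) to a UserPose."""
    return UserPose(x, y, z, rx, ry, rz)
```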

Position Data

The Pow!r Mount functions as a five degree-of-freedom robotic arm with five revolute joints. The first two joints, analogous anatomically to the shoulder and elbow, provide arbitrary positioning within a 380.5 mm radius of the attachment location. The third joint continues the analogy as the wrist, providing 360 degrees of rotation for arbitrary orientation within the Pow!r Mount's workspace. The fourth joint is the tilt, which rotates through 120 degrees to vertically align the communication device and the camera's field of view. The final joint is the rotator, which rotates the communication device through 180 degrees about the axis perpendicular to its screen, matching a lateral flexion of the user's neck in which an ear approaches the shoulder.
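A minimal forward-kinematics sketch of this joint layout is shown below. Only the total 380.5 mm reach is given above, so the split between the two links is assumed for illustration, as are the function and variable names.

```python
import math

L1 = 200.0   # shoulder-to-elbow link length, mm (assumed split)
L2 = 180.5   # elbow-to-wrist link length, mm (assumed split)

def forward_kinematics(shoulder, elbow, wrist, tilt, rotator):
    """Pose of the device plate from the five joint angles (radians).

    The shoulder and elbow place the wrist within the 380.5 mm radius;
    the wrist, tilt, and rotator set the orientation of the mounted
    communication device.
    """
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    yaw = shoulder + elbow + wrist  # heading of the device plate
    return (x, y), (yaw, tilt, rotator)
```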

Movement Logic

The auto-positioning feature adds two main modes of movement to the Pow!r Mount, beyond the standard modes of adjusting each joint individually and moving between predefined setpoints [4]. Centering, active when the camera detects a user, acts to minimize the error between the user's current and centered location and orientation in the field of view. Searching, active when the camera does not detect a user, acts to locate one by scanning through a best guess of where the user might be. Centering relies on the correlation between the computer vision data and the position data. Searching uses a naïve algorithm that scans in the horizontal and vertical directions before backing up for a wider view. Both modes use the kinematic solution of the Pow!r Mount for path planning.
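A simplified sketch of the two modes appears below. The proportional gain, deadband, setpoint, and search offsets are illustrative values rather than the tuned parameters of the Pow!r Mount, and the mapping of the Cartesian correction onto the joints through the kinematic solution is omitted.

```python
# Illustrative setpoint: user centered in the camera frame at a
# mid-envelope working distance (values in mm, assumed).
TARGET_MM = (0.0, 0.0, 500.0)
GAIN = 0.3          # proportional gain per control cycle
DEADBAND_MM = 5.0   # stop adjusting inside this error band

def centering_step(user_xyz_mm):
    """One proportional step toward centering the detected user.

    Returns a Cartesian correction for the mount, or None once the
    error is inside the deadband (the system then pauses).
    """
    err = [t - p for t, p in zip(TARGET_MM, user_xyz_mm)]
    if max(abs(e) for e in err) < DEADBAND_MM:
        return None  # on target: enter the pause state
    # The correction is mapped onto joint motion by the arm's
    # kinematic solution (not shown here).
    return tuple(GAIN * e for e in err)

def search_waypoints():
    """Naive search: scan horizontally, then vertically, then back
    up for a wider field of view (offsets in mm, illustrative)."""
    for dx in (-100.0, 0.0, 100.0):
        yield (dx, 0.0, 0.0)
    for dy in (-80.0, 0.0, 80.0):
        yield (0.0, dy, 0.0)
    yield (0.0, 0.0, -150.0)
```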

Feasibility

The feasibility of auto-positioning relies on the system consistently behaving as the user expects while remaining responsive. At the core of feasibility lies the requirement that the kinematic solution accurately reflect the physical location of the Pow!r Mount. Comparing the kinematic solution of the end effector, the location where the communication device attaches, with its measured location provides a basis for evaluating accuracy. The test procedure cycled between four locations, recording at each the values calculated by the kinematic solution and the coordinates measured on the reference plane of the table where the system was mounted. The X axis describes the horizontal distance left and right of the user; the Z axis describes the distance toward and away from the user.

The quantity ΔX is the difference between the targeted kinematic solution for X and the measured X location, and ΔZ is the corresponding difference for Z. The quantity d represents the total distance between the kinematic solution and the measured location:

d = √(ΔX² + ΔZ²)
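Computing the per-trial distances and their mean from this definition is straightforward; a minimal sketch, with hypothetical names:

```python
import math

def accuracy_summary(targets, measured):
    """Per-trial distance d between each kinematic target and the
    measured location on the table's reference plane, plus the mean.

    targets and measured are sequences of (x, z) pairs in mm.
    """
    d = [math.hypot(tx - mx, tz - mz)
         for (tx, tz), (mx, mz) in zip(targets, measured)]
    return d, sum(d) / len(d)
```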

RESULTS

Testing

Figure 2. Accuracy Test Results. The plot shows the difference between the targeted location defined by the kinematic solution of the Pow!r Mount and the actual measured location, as three quantities: ΔX in the horizontal direction, ΔZ in the distance direction, and the combined distance d = √(ΔX² + ΔZ²). The mean of the combined distance is 12.7 mm.

The results of 19 trials of the accuracy test show that the system positions the end effector, and thus the communication device, on average just under 13 mm from the target. The mean of the sample for d, shown in Figure 2, is 12.7 mm.

User Evaluation

Two people with ALS evaluated the system during a series of informal trials and provided feedback. Both emphasized the importance of clear feedback from the system so that they could understand what it was doing. This request led to the app screens shown in Figure 1.

Figure 3. User Evaluation. A user evaluating the system with her communication device, which uses a head mouse.

Both individuals had full mobility of the neck and head. During one trial the person looked to the side during a conversation, and the system moved in response. During another trial, with a user of a head mouse (Figure 3), the system responded to the head movements used to control the cursor. Both behaviors were overly responsive and undesirable, and the experience led to the addition of control options to pause the system at will.

DISCUSSION

The envelope of the eye gaze device accommodates a generous range of positions. In the case of the PCEye Mini, that space is a 35 cm × 30 cm ellipse at a working distance of 45–85 cm [5]. As long as the centered location targeted by auto-positioning is near the center of that space, eye tracking will work. In practice, during user evaluations the eye tracking provided control of the communication device as soon as the user entered the tracking envelope, before the Pow!r Mount reached the centered location.

Accuracy can be improved by raising the resolution at which the joint encoders are read and by tuning the motor controller. The encoder position sensor is capable of 0.088 degrees/bit of resolution but is currently read at 1.41 degrees/bit. In addition, the motor controller ramps current up and down to smooth the start and stop of motion and avoid jerkiness; due to the inertia of the system, the motors drift by different amounts during starts and stops. In this test the joints stopped on average 1.39 encoder bits away from the target position. The motor controller can be tuned to maintain the desirable motion profile while improving precision.
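As a back-of-the-envelope check, assuming the drift occurs at a joint sweeping the full 380.5 mm reach, 1.39 bits at 1.41 degrees/bit is about 1.96 degrees, which corresponds to roughly 13 mm of arc, consistent with the 12.7 mm mean error measured:

```python
import math

DEG_PER_BIT = 1.41   # current effective encoder resolution
DRIFT_BITS = 1.39    # mean stop drift observed in testing
REACH_MM = 380.5     # radius of the positioning workspace

drift_deg = DRIFT_BITS * DEG_PER_BIT                # ~1.96 degrees
arc_error_mm = REACH_MM * math.radians(drift_deg)   # ~13 mm at full reach
print(f"{drift_deg:.2f} deg of drift -> {arc_error_mm:.1f} mm at full reach")
```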

Future Work

Future work on auto-positioning includes improvements to existing capabilities, expansion of features, and rigorous user testing. Improvements in development include an optimized search that locates the user more quickly, informed by data from testing. Additional configuration options will allow customization based on user preference, such as continuous tracking and the behavior when a user falls asleep. The computer vision capabilities will be extended to allow repeatable gestures as an input method. The authors are seeking integration with leading eye-controlled communication devices that have open SDKs or application programming interfaces (APIs), and also plan to develop a camera accessory providing computer vision input for users who do not have a supported communication device. The system could also position other equipment and target locations other than the user's face, in which case a communication device would not be present.

Future work also includes a single subject design study performed in partnership with clinicians. First, a baseline will be established by measuring attributes related to the positioning of the user's communication device, along with Functional Independence Measures. The intervention will then be introduced: subjects will use the prototype for 6 to 8 weeks, keeping a log of problems, benefits, changes in how things are done or who does them, and safety issues.

CONCLUSION

Auto-positioning of a communication device with a powered mount provides continuous access and increased independence to a person who depends on that technology. Development of the technology shows promise, and now needs rigorous verification through single subject design user studies.

REFERENCES

[1] Sundberg, E., Goodwin, D. M. 2007. Usability Testing of Repositionable and Customizable Locking Mounts with Rehabilitation Professionals. Proceedings of the RESNA 2007 Annual Conference, Washington, DC: RESNA Press.

[2] Goodwin, D. M., Lee, N. K. 2014. Design and Development Process: Accessible, Affordable and Modular Robotics. Proceedings of the RESNA 2014 Annual Conference, Indianapolis, IN: RESNA Press.

[3] Portoghese, C., Goodwin, D. M. 2014. Development of Accessible Powered Mounting Technology. Proceedings of the 2014 International Seating Symposium, Vancouver, BC.

[4] Goodwin, D. M., Lee, N. K., Stone, M., Kanitz, D. 2015. Accessible User Interface Development: Process and Considerations. Proceedings of the RESNA 2015 Annual Conference.

[5] PCEye Mini Specifications. 2019. Retrieved from https://www.tobiidynavox.com/en-us/devices/eye-gaze-devices/pceye-mini-access-windows-control/#Specifications.

ACKNOWLEDGEMENTS

Funding provided by NIH/NICHD SBIR grants (1R44HD093467-01, 5R44HD072469-03, 2R44HD072469-02, 1R43HD072469-01); NIDILRR SBIR Grant H133S060096; and the RERC on Wireless Technology LiveWell program (NIDILRR Grant 90RE5023).