Design And Testing Of A Haptic-Feedback Active Mouse For Accessing Virtual Tactile Diagrams

Alex Lazea, Dr. Dianne Pawluk

Department of Biomedical Engineering, Virginia Commonwealth University, Richmond VA

ABSTRACT

Traditionally, raised-line drawings produced on specialized paper are used to provide access to tactile diagrams, such as maps, graphs, and charts, for individuals who are blind or visually impaired. However, producing such tactile diagrams can be time consuming and resource intensive. This paper describes an affordable “active mouse” that we have developed to provide faster, virtual access to tactile diagrams. The device allows both active and passive haptic exploration of tactile diagrams through force feedback. The system consists of a small omni-drive that allows smooth motion in the plane, with admittance control regulating guidance and passive movement. A prototype system has been constructed, and preliminary results suggest that it can support both passive and active exploration.

BACKGROUND

Visuals such as maps, graphs, and diagrams are common ways to communicate information, and we are surrounded by them every day. However, individuals who are blind or visually impaired cannot take advantage of these means of communication. A traditional solution is to use raised-line drawings printed on specialized paper to convey the necessary information through the sense of touch. Unfortunately, producing physical tactile diagrams is slow and expensive. Physical diagrams also offer no flexibility during exploration, such as zooming and decluttering/simplification (Rastogi and Pawluk, 2013a, 2013b).

Several groups have considered haptic/tactile computer interfaces to provide interactive access to virtual tactile diagrams. One alternative is to use tactile displays in various forms (Metec AG, 2010; Petit et al., 2008; Owen et al., 2009) to present information about the edges or textures underneath the device. Unfortunately, large multi-pin devices are prohibitively expensive. Small, moving multi-pin devices have been created, but they make line tracking difficult, which greatly slows access even when they are effective.

Alternatively, haptic fixtures have been used (Abu Doush et al., 2009) to constrain exploration along data lines in graphs, which allows quicker tracking of lines. Unfortunately, it is not clear whether this method will be effective for more complex diagrams, particularly ones using richer representations (i.e., textures) rather than lines (although such representations are also more effective). In addition, existing systems tend to be expensive and needlessly 3D, and/or have a small workspace and provide low force levels.

In exploring physical tactile maps, two main paradigms have been observed (Magee and Kennedy, 1980; Symmons et al., 2005). The first, known as active exploration, lets the user freely explore the diagram and piece together the shape presented; it is thought to be hypothesis driven. The second, known as passive exploration, has the user guided along the raised lines of the diagram by a person who can see the information presented. This guide moves the user’s hand with their own while the user focuses all cognitive effort on identifying the subject of the diagram (Symmons et al., 2005; Vermeij et al., 1980).

The two methods have been examined under various conditions, and it is not clear which is more effective or whether a mix of the two would be best. This paper describes a low-cost “active mouse” that allows both active movement by the user (with haptic feedback about the diagram) and guidance of the passive user’s hand under the command of the “mouse”. It is intended both as an efficient way to access tactile diagrams and as a platform for studying the differences between active exploration and passive guidance.

DEVICE DESIGN

System Layout

Diagram of the full system layout. Two main components are illustrated. The first is the software featuring virtual fixture paths. The second is the device, which includes its various sensors and actuators. The user applies a force to the device, which is read by the onboard force sensor. The position of the mouse is also recorded by the graphics tablet. The input force and position are fed into the software, where the virtual fixtures create a modified end-manipulator output velocity. The output velocity is then relayed to the servos of the device to create force feedback for the user to interpret.
Figure 1: Overall system layout.
The device created is a force-feedback active computer “mouse” designed for access to 2-D virtual tactile diagrams. Its design requirement of allowing natural exploratory movements necessitates a responsive and dynamic system of sensors, actuators, and control modules.

The system is divided into two main components. The first is the software, which handles the virtual tactile diagram and the control of the device’s many features. The second is the actual device hardware.

The software is further broken down into two modules. The first module handles the virtual tactile diagrams. It is programmed in C# using Visual Studio and runs on any desktop PC. Shapes (i.e., objects and parts) can be drawn or interpreted in the virtual workspace. Currently, the software interprets them as virtual fixtures, which produce virtual forces in response to user input force and position. To control the end result, an admittance control model of the type V = k * F is used, where F is the user input force and V is the system output velocity. Alternatively, path planning can be used to forcibly guide the user through an explicit path, such as around the edge of a shape. The system output is fed into the second software module, which controls and manages the device hardware. This second module is programmed in Arduino’s C++ and runs on an Arduino Due. It is responsible for collecting and relaying input from the device’s sensors to the PC software and for calculating the response of each actuator.
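
As a minimal sketch of the admittance mapping alone (written here in C++ for illustration; the function name and gain value are assumptions, and the actual module is written in C#):

    #include <cstdio>

    // Scalar admittance: commanded velocity is proportional to sensed force.
    double admittanceVelocity(double F, double k) { return k * F; }

    int main() {
        double F = 2.0;   // example user force, in newtons
        double k = 0.05;  // assumed admittance gain, in (m/s) per newton
        std::printf("V = %.3f m/s\n", admittanceVelocity(F, k));
        return 0;
    }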

The final component of the system is the device hardware, which consists of a 2D force sensor, an RF position sensor, and three servo motors as actuators. As the user interacts with the device, the force and position sensors capture the user’s intention to move and relay the information to the software via the Arduino Due. The software then determines a response and outputs a net velocity Vh,net through the three servo motors.

Omni Drive Design

Diagram showing a top view of the device drive platform and omni wheel designs. The omni drive platform consists of three servo motors placed concentrically around the platform’s z axis, with an omni wheel attached at the end of each motor. Each omni wheel has small spinners placed concentrically around the radius of the wheel frame, oriented at 90 degrees relative to the main wheel frame to allow complete omni-directional movement.
Figure 2: Diagram of the omni drive and omni wheel.
The device hardware comprises two main sensors and a drive platform. The force-feedback component is a motorized drive platform consisting of three servo motors (HS-7955TG, Hitec), modified for continuous rotation, connected to omni-directional wheels. The custom-made wheels carry smaller spinners placed around the radius of the main wheel frame and oriented orthogonally to the wheel’s axis of rotation. The wheels can achieve smooth 2D omni-directional movement on a given plane, which is important for a natural exploratory feel.

The three servo motors are positioned concentrically around the main vertical axis of the drive, 120° apart. This layout is known as a holonomic drive system. With the help of the omni wheels, it allows control of all three degrees of freedom of the device: the device can be translated freely along the x and y axes and rotated about the z axis. Figure 2 illustrates the design of the holonomic drive system and the omni wheels.

The device sensors consist of a 2D force sensor (MSI Model 462, Ultra-MSI) and a 2D RF position sensor that interfaces with a graphics tablet (Wacom Intuos Extra Large). Both sensors are housed in the device and fastened on top of the drive platform. The force sensor acts as the connection between the drive and the outer shell of the device: as the user applies a force to the outer shell, the force sensor measures it and relays it to the Arduino Due as the input force F for the admittance control model.

Guidance Virtual Fixtures

Virtual fixtures are virtually produced forces and positions that can be applied to an end manipulator to correct or restrict its movement. Guidance virtual fixtures (GVFs) are a type of virtual fixture that helps guide a human operator along a desired path. GVFs are implemented under two control models: impedance and admittance. An impedance model uses position and velocity as the input and outputs a force; such systems are usually low in inertia and backdrivable. An admittance model, in contrast, uses force and position as the input and outputs a velocity to the end effector. This model type helps constrain the user’s movements to a given path or region, and high-inertia, non-backdrivable actuators are typically used to enforce the desired stiffness (Abbott et al., 2007).
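
Schematically, in our own notation (a simplified contrast, not drawn verbatim from the cited sources), the two models invert each other’s causality:

$F = K(x_d - x) - B\dot{x}$          (impedance: motion in, force out)

$\dot{x} = \alpha F$                 (admittance: force in, velocity out)

Here $K$ is a stiffness, $B$ a damping, and $x_d$ a desired position; the admittance gain $\alpha$ reappears in equation (4) below.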

For the current application, a pseudo-admittance model is used that primarily constrains the user’s movements to a path but allows the user to “break free” and freely explore other areas of the diagram. Such “soft” haptic fixtures have previously been described by Bowyer et al. (2014). This approach allows our device to seamlessly support both the passive and active tactile exploration methods used with traditional tactile maps.

The main goal of the admittance GVF model is to respond to user input by encouraging desired motions, such as following a curve, and limiting undesired motions, such as moving away from it. The position of the end manipulator, Pe, is used to calculate the closest point to it on the virtual path (curve).

From this closest point, we calculate the desired (d) and undesired (τ) direction unit vectors, which depend on the position of the end manipulator Pe (see Figure 3). These are the tangent and normal vectors of the path at the closest point, respectively. The input force F from the hand is projected onto the desired and undesired directions to produce the desired force Fd and undesired force Fτ:

$F_d = (F \cdot d)\,d$                                                    (1)

$F_\tau = (F \cdot \tau)\,\tau$                                           (2)

Diagram showing the modification of an input force by the virtual fixture path. The path is shown as r(s) and the trajectory of the end manipulator Pe is shown as a dotted curve. The trajectory is modified by a final force Fvf, found by summing the desired and undesired directional forces of the user. These force directions are derived from the closest point on the virtual fixture path, labeled r(si).
Figure 3: Illustration of the desired Fd and undesired Fτ forces at point Pe relative to a virtual path r(s).
From here, the undesired force Fτ is damped by an admittance value kτ and, through the direction τ, redirected toward the path (like an attraction well). The resulting term is summed with the desired force Fd to produce the final output force Fvf used to adjust the user’s movements.

$F_{vf} = F_d + k_\tau F_\tau$                                            (3)

The admittance model then uses an admittance variable α to map this force to a final velocity (Bettini et al., 2004; Mihelj et al., 2012).

$V_{h,net} = \alpha F_{vf}$                                               (4)
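
To make the computation concrete, the following is a minimal, self-contained C++ sketch of equations (1)-(4), assuming the path is stored as a polyline; the names (gvfVelocity, kTau, alpha) are illustrative, not taken from the project’s actual C# implementation. With kTau greater than zero the fixture is “soft”: the off-path force component is attenuated rather than cancelled, which is what lets the user break free of the path.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec2 { double x, y; };

    static Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
    static Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
    static Vec2 operator*(double s, Vec2 v) { return {s * v.x, s * v.y}; }
    static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
    static double norm(Vec2 v) { return std::sqrt(dot(v, v)); }

    // Closest point to p on the segment from a to b.
    static Vec2 closestOnSegment(Vec2 p, Vec2 a, Vec2 b) {
        Vec2 ab = b - a;
        double t = dot(p - a, ab) / dot(ab, ab);
        if (t < 0.0) t = 0.0;
        if (t > 1.0) t = 1.0;
        return a + t * ab;
    }

    // Equations (1)-(4): map the user force F at end-manipulator
    // position Pe to the commanded velocity Vh,net for a polyline path.
    Vec2 gvfVelocity(Vec2 Pe, Vec2 F, const std::vector<Vec2>& path,
                     double kTau, double alpha) {
        double bestDist = 1e300;
        Vec2 closest{0.0, 0.0};
        Vec2 d{1.0, 0.0};  // desired direction: tangent at closest point
        for (std::size_t i = 0; i + 1 < path.size(); ++i) {
            Vec2 c = closestOnSegment(Pe, path[i], path[i + 1]);
            double dist = norm(Pe - c);
            if (dist < bestDist) {
                bestDist = dist;
                closest = c;
                Vec2 seg = path[i + 1] - path[i];
                d = (1.0 / norm(seg)) * seg;
            }
        }
        // Undesired direction: path normal, oriented here from Pe toward
        // the closest point (a perpendicular of d when Pe lies on the path).
        Vec2 tau = (bestDist > 1e-9) ? (1.0 / bestDist) * (closest - Pe)
                                     : Vec2{-d.y, d.x};
        Vec2 Fd   = dot(F, d) * d;       // equation (1)
        Vec2 Ftau = dot(F, tau) * tau;   // equation (2)
        Vec2 Fvf  = Fd + kTau * Ftau;    // equation (3)
        return alpha * Fvf;              // equation (4): Vh,net
    }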

Omni Drive Control Model

The omni drive of the device consists of three servo motors positioned concentrically 120° apart. To guide the user along a path on the virtual tactile diagram, the device’s drive module must take the commanded output velocity Vh,net (calculated through admittance control with virtual fixtures, as above) and decompose it into three distinct wheel velocity components V1, V2, V3. More specifically, the output for each servo is an angular velocity $\dot{\Theta}_i$, $i = 1, 2, 3$ (Tzafestas, 2014). Relating the wheel velocity components to the rotation of the motors:

$V_i = r\,\dot{\Theta}_i$                                                 (5)

Diagram of the omni wheel layout. The wheel is shown at an arbitrary angle and is accompanied by illustrations of the wheel’s spinners. The diagram shows the wheel’s velocity Vh, produced by summing the spinner’s linear velocity with the main wheel’s linear velocity. In addition, the reference axes x and y of the main drive platform are shown superimposed on the wheel, and the angles of the aforementioned velocities are indicated relative to these axes.
Figure 4: Omni wheel velocity Vi, spinner velocity Vi,spinner, and resulting wheel velocity Vh.
The radius of the drive’s wheels is r and Vi denotes the wheel’s linear velocity. The relationship of the wheel velocity to the omni wheel design is further revealed in Figure 4 below. The small free rolling spinners oriented at, run along the radius of the main wheel frame. The spinners induce their own velocity Vi,spinner. Together the wheel velocity Vi and spinner velocity Vi,spinner produce a net velocity Vh which contributes to the final Vh,net. Vh is given:

$V_h = \sqrt{V_i^2 + V_{i,spinner}^2}$                                    (6)

As Figure 4 shows, the resulting velocity of the omni wheel, denoted Vh, lies at an angle δ. Thus the wheel velocity Vi can also be written as:

$V_i = V_h \cos\delta_i$                                                  (7)

If we let γ be the angle of Vh with respect to the drive platform’s x axis and β be the angle of the wheel’s linear velocity Vi with respect to the x axis, then δ = γ − β, giving:

$V_i = V_h \cos(\gamma_i - \beta_i)$                                      (8)

$V_i = V_h\left(\cos\gamma_i \cos\beta_i + \sin\gamma_i \sin\beta_i\right)$       (9)

Distributing Vh, we can express its x and y components as:

$V_{hx} = V_h \cos\gamma_i; \quad V_{hy} = V_h \sin\gamma_i$              (10)

Diagram of the omni drive layout. The diagram shows the drive’s net velocity Vh,net and the three wheel velocities, numbered 1 through 3. Superimposed onto the drive platform at the origin are the main coordinate axes x and y.
Figure 5: Diagram of the omni drive. Vh,net is the resulting drive velocity and V1-3 are the wheel linear velocities.
Thus, substituting equation (10) into equation (9) and using the angular velocity defined by equation (5), we get the following simplified system of equations for wheels 1, 2, and 3, respectively:

$\dot{\Theta}_1 = \frac{V_1}{r} = \frac{1}{r} V_{hx}$                     (11)

$\dot{\Theta}_2 = \frac{V_2}{r} = \frac{1}{r}\left(-\frac{1}{2} V_{hx} + \frac{\sqrt{3}}{2} V_{hy}\right)$       (12)

$\dot{\Theta}_3 = \frac{V_3}{r} = \frac{1}{r}\left(-\frac{1}{2} V_{hx} - \frac{\sqrt{3}}{2} V_{hy}\right)$       (13)
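
A minimal C++ sketch of this decomposition, of the kind that could run in the Arduino Due module, is given below (the function and type names are illustrative assumptions, not the project’s actual code):

    #include <cmath>

    // Angular velocities (rad/s) for the three drive servos.
    struct WheelSpeeds { double w1, w2, w3; };

    // Equations (11)-(13): decompose the commanded planar velocity
    // (Vhx, Vhy) into angular velocities for three omni wheels spaced
    // 120 degrees apart, with wheel 1 aligned with the x axis and
    // wheel radius r.
    WheelSpeeds decomposeVelocity(double Vhx, double Vhy, double r) {
        const double s = std::sqrt(3.0) / 2.0;
        WheelSpeeds w;
        w.w1 = (1.0 / r) * Vhx;                      // equation (11)
        w.w2 = (1.0 / r) * (-0.5 * Vhx + s * Vhy);   // equation (12)
        w.w3 = (1.0 / r) * (-0.5 * Vhx - s * Vhy);   // equation (13)
        return w;
    }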

SOFTWARE TESTING

Screenshot of the guidance virtual fixture software running on a personal computer. The screenshot shows a traditional Windows application layout. The majority of the layout consists of an editing panel in which a Bezier curve is drawn. A series of points and lines illustrates key positions of various elements and the direction components needed to calculate the final output velocity. The left side of the layout contains editing and parameter tools, while the right side features system information for debugging purposes.
Figure 6: Screenshot of virtual fixture editor software.
The main testing software, written in C# (Figure 6), contains a diagram-editing panel capable of producing various simple paths, such as line segments. The software successfully reinterprets these paths as virtual fixtures that map an input force F into a modified output velocity Vh,net.

OUTCOME AND DISCUSSION

We have successfully developed an omni-directional drive prototype. The Arduino Due C++ code decomposes an input net velocity into three distinct wheel velocities in real time, so the drive is capable of producing smooth 2D translations of various magnitudes. Currently, the most significant limitation is loss of wheel traction on smooth surfaces: preliminary tests show the device can produce 0.36 pounds (approximately 1.6 N) of force in any direction before the plastic omni wheels lose traction with the surface underneath. Traction is improved by operating on a rubber toolbox-liner mat. Figure 7 shows the drive prototype and wheel design.

 

Picture of the drive system prototype and current omni wheels. The drive platform consists of three servo motors coupled to plastic omni wheels; each omni wheel is shown with a zoomed-in view for detail. The wheels consist of three spinners each, and to produce smoother movement, each servo is coupled to a pair of wheels oriented at 60 degrees relative to each other. The platform itself is a custom 3D-printed design onto which the three servos are fastened.
Figure 7: Picture of current device prototype hardware.

FUTURE WORK

The current prototype exists as two separate components: the virtual fixtures software and the device drive system. Future work includes merging the two into a cohesive real-time system responsive to user inputs. We are also working toward a custom wheel design to improve traction and toward miniaturizing the entire device. End-user testing and further assessment of device performance are planned.

REFERENCES

Abbott, J. J., Marayong, P., & Okamura, A. M. (2007). Haptic virtual fixtures for robot-assisted manipulation. Robotics Research: Springer Tracts in Advanced Robotics, 49-64.

Abu Doush, I., Pontelli, E., Simon, D., Cao Son, T., & Ma, O. (2009). Making Microsoft Excel accessible: Multimodal presentation of charts. ASSETS ’09, Pittsburgh, PA, 147-154.

Bettini, A., Marayong, P., Hager, G. D., & Okamura, A. M. (2004). Vision-assisted control for manipulation using virtual fixtures. IEEE Transactions on Robotics, 20(6), 953-966.

Bowyer, S. A., Davies, B. L., & Baena, F. R. (2014). Active constraints/virtual fixtures: A survey. IEEE Transactions on Robotics, 30(1), 138-157.

Gibson, J. J. (1962). Observations on active touch. Psychological Review, 69(6), 477-491.

Magee, L. E., & Kennedy, J. M. (1980). Exploring pictures tactually. Nature, 283(5744), 287-288.

Metec AG. (2010). Braille cell P16. Retrieved September 27, 2011, from http://www.metecag.de/braille%20cell%20p16.html

Mihelj, M., & Podobnik, J. (2012). Haptics for Virtual Reality and Teleoperation (1st ed., Vol. 64, Intelligent Systems, Control and Automation: Science and Engineering). Springer Netherlands.

Owen, J., Petro, J., D’Souza, S., Rastogi, R., & Pawluk, D. T. V. (2009). An improved, low-cost tactile “mouse” for use by individuals who are blind and visually impaired. ASSETS ’09, October 25-28, Pittsburgh, PA, USA.

Petit, G., Dufresne, A., Levesque, V., Hayward, V., & Trudeau, N. (2008).  Refreshable Tactile Graphics Applied to Schoolbook Illustrations for Students with Visual Impairment.  ASSETS 2008, 89-96.

Rastogi, R., & Pawluk, D. (2013a). Tactile diagram simplification on refreshable displays. Assistive Technology, 25(1), 31-38.

Rastogi, R., & Pawluk, D. (2013b). Intuitive tactile zooming for graphics accessed by individuals who are blind and visually impaired. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 21(4), 655-663.

Symmons, M., Richardson, B., Wuillemin, D., & Vandoorn, G. (2005). Active versus passive touch in three dimensions. First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems.

Tzafestas, S. G. (2014). Introduction to Mobile Robot Control (1st ed.). Waltham, MA: Elsevier.

Vermeij, G. J., Magee, L. E., & Kennedy, J. M. (1980). Exploring pictures by hand. Nature, 285, 59.