INCORPORATION OF DATA COLLECTION IN COMPUTER ACCESS ASSESSMENTS TO ASSIST WITH DEVICE SELECTION

Meghan Lee Donahue, M.S., ATP

Stout Vocational Rehabilitation Institute, University of Wisconsin Stout

ABSTRACT

Data collection during computer access assessments can help strengthen and support the recommendations of the assistive technologist.  Many people who conduct these assessments do not collect data because it is time-consuming and does not account for how skills with a tool develop with practice.  Typing and cursor control data, however, can easily be collected and incorporated into the assessment process.  The collected data can be used to support recommendations, facilitate discussions about which equipment should be recommended, and compensate for technical challenges encountered during the assessment.

INTRODUCTION

Although many professionals agree that data collection is useful during assistive technology assessments, professional conversations often turn to the barriers that keep it from becoming standard practice.  In the context of computer access and ergonomic assessments, the barriers commonly mentioned are that data collection is time-consuming, that it does not account for the learning curve and skill building with the assistive technology, that fatigue induced by the assessment skews the results, and that the information does not add significant value or change the recommendation.

The advantages of data collection are that it can provide baseline information for comparing interventions, assist with device selection, positioning, and settings, help forecast reasonable expectations of performance, and defend the decision to recommend or not recommend a specific device.  Some funding sources appreciate data to support recommendations, although not all require it.  While it can be valuable to collect data during a computer access assessment, it is important to recognize that doing so supports the clinical assessment process; it does not replace the clinician's judgment.

Through experimentation during several computer access assessments, a method for incorporating data collection into computer access assessments was developed, along with a list of ways the data can assist and support clinical decision making.  The rest of this article explains how data can be collected for keyboard and mouse control and used by the assistive technologist during the assessment process.

BACKGROUND

For keyboard and cursor control, data can easily be collected on typing speed, typing accuracy, mouse control, mouse clicking, switch control and scanning.  The tool used for collecting this data was Compass software by Koester Performance Research.  The software allows the tests to be customized to accommodate the consumer's endurance, needs and motivation.  The tests available within the software cover pointing (Aim, Drag and Menu), text entry (Letter, Word and Sentence) and scanning (Switch and Scan).  Upon completion of the tests, the program gathers the data from similar tests and compiles a report that compares all of them.
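Compass performs this aggregation itself.  Purely as an illustration of the kind of per-trial record such a comparison works from, a minimal Python sketch follows; the class, field names and values are hypothetical and are not Compass's actual data format.

    # Hypothetical sketch of the kind of per-trial record a multi-test
    # comparison is built from; names and values are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class TrialResult:
        label: str        # e.g. "Baseline" or a device name
        test_type: str    # e.g. "aim", "sentence"
        speed: float      # targets/min or words/min, depending on the test
        accuracy: float   # proportion of correct selections or characters

    def group_by_test(results):
        """Group trials by test type so that like tests are compared together."""
        groups = {}
        for r in results:
            groups.setdefault(r.test_type, []).append(r)
        return groups

    trials = [
        TrialResult("Baseline", "aim", 22.0, 0.91),
        TrialResult("Orbit Trackball", "aim", 25.5, 0.95),
        TrialResult("Baseline", "sentence", 14.0, 0.88),
    ]
    for test, rows in group_by_test(trials).items():
        print(test, [(r.label, r.speed, r.accuracy) for r in rows])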

DATA COLLECTION METHOD

In a computer access or ergonomic assessment, when the phase of device trial and comparison begins, the consumer's baseline performance is generally measured first.  When possible, this is measured on the consumer's own computer and setup.  To collect the data, the software was installed on a portable USB drive that is plugged into the consumer's computer; the software runs directly from the drive, which overcomes barriers related to installation privileges.  In the test configuration, the test is renamed to either "Baseline" or a description of the baseline setup.  This is helpful because when the software generates multi-test comparison reports and figures, the legend then matches the device being compared.  The consumer then completes the desired test, generally Aim for a mouse comparison and Sentence for a keyboard comparison.  The consumer is also asked subjective questions about how they felt their performance compared to other days.  Generally (unless it will increase the consumer's anxiety), the results of this test are discussed, followed by a conversation about the goal of the assistive technology intervention (e.g., reduce pain, increase endurance, increase speed, increase accuracy).

Next, each device is tried, either on the consumer's computer or on the clinician's computer.  Each test is renamed to briefly describe the device or settings (e.g., Jouse2, Dragon, Roller II, Orbit Trackball, Filter Keys).

After each device is tested, and before the results are viewed, the consumer is asked subjective questions about their opinion of the device: whether they liked it, which features they liked, how comfortable they found it, and how easy or hard it was to use.  The data can then be viewed immediately or saved for later.  The client and the time available determine whether the data is viewed immediately; many consumers become anxious with performance feedback, so it is often preferable to wait until the end of the equipment trials before reviewing the results.

Once the trials of devices and positioning options are complete, the consumer states a subjective preference, followed by a conversation about why they selected it.  Then a multi-test report is generated by the software.  The graphs in this report objectively compare the speed and accuracy of the different devices.
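Compass generates these comparison graphs itself.  For practitioners who want to chart their own exported numbers, a minimal Python sketch might look like the following; the device names and values below are invented.

    # Sketch of a device-comparison figure similar in spirit to the
    # multi-test report graphs; device names and numbers are invented.
    import matplotlib.pyplot as plt

    devices = ["Baseline", "Roller II", "Orbit Trackball"]
    speed = [20.1, 24.3, 26.0]     # e.g. targets per minute on the Aim test
    accuracy = [88, 93, 95]        # percent of targets selected correctly

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.bar(devices, speed)
    ax1.set_ylabel("Speed (targets/min)")
    ax2.bar(devices, accuracy)
    ax2.set_ylabel("Accuracy (%)")
    for ax in (ax1, ax2):
        ax.tick_params(axis="x", rotation=20)
    fig.tight_layout()
    plt.show()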

USING DATA TO DRIVE DECISIONS

After the report is generated and viewed, the leading device for the stated goals will either match the consumer's preference, be close to it, or differ from it significantly.  If they match, the data collection can be acknowledged in the documentation of recommendations and inserted into the report or attached as an appendix.  Often, specific speeds and accuracies were included in the text description.

If the consumer's preference was not the fastest and most accurate option (or the fastest, or the most accurate, depending on the goals) but was close, the clinician should determine how significant the difference is.  For example, if two mouse options are equally accurate and the consumer's preferred mouse is 0.05 seconds slower per target, the assistive technologist needs to determine whether that 0.05 seconds is worth trading off against comfort.  If the difference is not significant, it is suggested that the documentation of recommendations mention that data was collected, provide the speed and accuracy of the two leading choices, and explain that the device that scored lower was chosen because the difference between the two was determined to be insignificant, particularly when weighed against the other factors that led to the device choice.  For example, the Switch Mouse and Vertical Mouse were equally accurate, but control of the Switch Mouse was 0.05 seconds faster between targets.  The consumer found the Vertical Mouse more comfortable and intuitive to use, so we determined that the slower speed was negligible in comparison to the comfort of the device.
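One way to make that kind of judgment concrete is to express the gap as a fraction of the faster time and compare it to a cutoff.  The sketch below does so with an arbitrary 5% threshold; both the threshold and the helper function are illustrative choices, not a clinical standard.

    # Illustrative check of whether a speed difference between two devices
    # is practically meaningful; the 5% cutoff is an arbitrary example.
    def practically_different(time_a, time_b, cutoff=0.05):
        """True if the slower time exceeds the faster one by more than
        `cutoff`, expressed as a fraction of the faster time."""
        faster, slower = sorted((time_a, time_b))
        return (slower - faster) / faster > cutoff

    # Switch Mouse averages 1.20 s between targets, Vertical Mouse 1.25 s:
    print(practically_different(1.20, 1.25))  # False: ~4% slower, under cutoff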

In most instances when the consumer's preference has not closely matched the data (assuming the data was not skewed by technical difficulties), the assistive technologist does not agree that the consumer's preference is the best option.  In these situations, the data can be used to start a discussion about why the consumer chose their favorite device and why it may not be a strong option for meeting their goals.

There are also situations where, after collecting data from many consumers with similar skills and abilities, the assistive technologist starts to observe trends.  These can be used to forecast skill growth and future performance with a tool, which is particularly helpful when technical errors are encountered during the assessment.  For example, suppose a consumer with mild dysarthria experiences 0% recognition accuracy with speech-to-text software, and significant troubleshooting cannot improve it.  The clinician may conclude, from previous experience with people who have had more severe dysarthria and still succeeded with the tool, that the problem is a technical error with the computer.  The consumer and assistive technologist can then have a conversation about the tool and reasonable expectations based on other people's performance and the assistive technologist's observations and assessment of the consumer's speech.  During documentation, the clinician explains that speech recognition software was considered and discussed, but significant technical errors impeded the trial; however, based on the performance of people with similar voice characteristics, it is reasonable to expect 20 wpm with moderate recognition accuracy.  After a candid discussion of the merits and disadvantages of the tool, a demonstration of how it works and the practice required to use it effectively, it was determined that it should be used for text entry.

Another challenge inherent in many computer access assessments is that there is not enough time for the consumer to fully develop their skill with the new tool before recommendations must be made.  A disadvantage of collecting data is that it is collected before the skill is developed and therefore does not reflect the speed and accuracy that can be attained with practice.  During documentation this is addressed by reporting the data together with a professional opinion and a projection of whether performance will likely improve or plateau.  If a plateau is anticipated, other avenues for improvement or other advantages of the intervention should be discussed.  This is particularly important if the change is not significantly different from the consumer's current system.  For example, a plateau in speed could be anticipated, but the new system is expected to reduce fatigue and increase endurance, and therefore increase the number of hours the computer can be used in a sitting.
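One way to ground such a projection is the widely cited power law of practice, under which time per trial falls roughly as a power of the number of practice trials.  The Python sketch below fits that curve to invented early-trial times and extrapolates; it illustrates the idea and is not part of the assessment method described above.

    # Sketch of projecting performance from early trials with the power law
    # of practice, T(n) = a * n**slope (slope is negative when skill is
    # improving); the trial times here are invented.
    import numpy as np

    trials = np.array([1, 2, 3, 4, 5])
    times = np.array([3.2, 2.7, 2.5, 2.3, 2.2])  # seconds per target

    # Fit log T = slope * log n + intercept with ordinary least squares.
    slope, intercept = np.polyfit(np.log(trials), np.log(times), 1)
    a = np.exp(intercept)

    def predict(n):
        """Projected seconds per target after n practice trials."""
        return a * n ** slope

    print(f"Projected time at trial 50: {predict(50):.2f} s")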

CONCLUSION

In conclusion, although data collection is not necessary for a computer access assessment, it can be used to strengthen the clinician's recommendations.  Collecting data does not replace clinical judgment and expertise; instead, it supports the clinician's decisions.  Using these tools allows the assistive technologist to focus on other aspects of the assessment instead of subjectively comparing speed and accuracy.  If the tools were used to measure outcomes and skill performance over time, the collected data could potentially further assist with forecasting the performance of similar consumers.