Survey Data on PeRT (Performance Report Tool) Summary Measures

RESNA 28th Annual Conference - Atlanta, Georgia

Katya Hill, Ph.D., CCC-SLP (1,2) and Barry Romich, P.E. (1,3)

1 AAC Institute, Edinboro, PA
2 Edinboro University of Pennsylvania, Edinboro, PA
3 Prentke Romich Company, Wooster, OH

ABSTRACT

Survey results from two National Institutes of Health (NIH) grants are reported. These data, collected from 26 clinical sites across the country, were used in the development of the AAC Performance Report Tool (PeRT). This paper focuses on the analysis of practitioner perspectives on performance measures based on language activity monitoring (LAM) data, or logfiles, which are reported using PeRT. In addition, the summary measures are matched to domains of communicative competence to support monitoring as a component of AAC evidence-based practice.

BACKGROUND

The measurement of parameters of performance is an essential component of evidence-based practice (EBP). Performance measurement across assistive technology platforms is facilitated through automated data logging and language activity monitoring (LAM) [1]. The measurement of performance to support EBP applies to augmentative and alternative communication (AAC) service delivery at least as much as to other areas of assistive technology clinical practice. Clinical AAC practitioners benefit from automated methods, tools, and evidence that support language sampling, measure performance, and monitor the change essential for EBP.

LAM supports the collection of language samples. LAM, or an alternative logging method, is a built-in feature of modern high-performance AAC systems from several manufacturers. LAM feasibility was evaluated through two National Institutes of Health (NIH) grants described in several publications [2]. These and subsequent research projects have resulted in several products to support performance measurement, including the Performance Report Tool (PeRT) [3]. As part of the development and evaluation process for LAM tools, AAC practitioners provided input and feedback on design features, perspectives on application, and future applications. This paper focuses on the survey data regarding practitioner input on the performance measures made possible by LAM data logging.

PeRT supports the monitoring of AAC interventions by reporting performance measures that are unavailable with traditional methods of observation and recording. Traditional methods of language sampling allow for the generation of a language transcript. Sampling, such as the conversational exchange among communication partners, provides information about the communicative competence of an individual in specific contexts. For an individual who relies on an AAC system, the voice output from the AAC system, along with verbalizations and gestures, can be observed, analyzed, and reported. However, the human-machine interface is obscured, since traditional methods cannot record the methods and rates of utterance generation. The LAM provision of a time stamp, language representation method mnemonic, and event creates an opportunity to operationalize performance measures in order to examine and monitor an individual's interaction with and use of technology. Monitoring performance provides the data needed to measure progress and build communicative competence along specific domains.
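
To make the connection between logged events and summary measures concrete, the following sketch (Python) shows how a simplified LAM-style log could be reduced to two of the measures discussed in this paper: frequency of language representation method use and a rough average output rate. The log contents, the assumed line format of HH:MM:SS MNEMONIC "event", and the mnemonics (SMP, SPE, WPR) are illustrative assumptions only, not the actual LAM specification or the PeRT implementation.

# Minimal sketch: deriving summary measures from a simplified, hypothetical LAM-style log.
import re
from collections import Counter
from datetime import datetime

SAMPLE_LOG = """\
10:15:02 SMP "more"
10:15:07 SPE "j"
10:15:09 SPE "u"
10:15:11 WPR "juice"
10:15:20 SMP "please"
"""

# Assumed line format: time stamp, method mnemonic, quoted event text.
LINE_RE = re.compile(r'^(\d{2}:\d{2}:\d{2})\s+(\w+)\s+"(.*)"$')

def parse(log_text):
    """Return a list of (time, mnemonic, event) tuples from the log text."""
    events = []
    for line in log_text.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            t = datetime.strptime(m.group(1), "%H:%M:%S")
            events.append((t, m.group(2), m.group(3)))
    return events

def summary_measures(events):
    """Frequency of each language representation method mnemonic and a rough
    average output rate expressed as logged events per minute."""
    method_counts = Counter(mnemonic for _, mnemonic, _ in events)
    elapsed_min = (events[-1][0] - events[0][0]).total_seconds() / 60.0
    rate = len(events) / elapsed_min if elapsed_min > 0 else float("nan")
    return method_counts, rate

events = parse(SAMPLE_LOG)
counts, rate = summary_measures(events)
print("Method frequencies:", dict(counts))
print("Average output rate (events/min):", round(rate, 1))

The same time-stamped events could be windowed to estimate a peak rate or filtered by mnemonic to compare access methods; those extensions follow the same pattern and are omitted here.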

STATEMENT OF PROBLEM

A set of communication performance summary measures was identified during the development of LAM tools. Since LAM, or automated data logging, was a relatively new application for AAC practitioners, perceptions of the value and usefulness of the performance measures were unknown. As the development and evaluation of PeRT proceeded, input was sought from AAC practitioners on the summary measures essential to include in an AAC performance report. Analysis of the survey data could then be matched to domains of communicative competence to identify skill areas for AAC intervention.

METHOD

The research sought 1) to identify the types of quantitative information, based on LAM data logging, that AAC practitioners believed important for effective clinical intervention, and 2) to determine practitioner perceptions of LAM after trial use. These goals were accomplished through the design, distribution, and completion of a survey.

The clinical application portion of the survey included questions on performance measures and LAM tools. Respondents were asked to select their three most valued performance measures from a randomized list. Table 1 shows the two lists, identified as monitoring utterance generation and monitoring access.

Table 1. Randomized list of performance measures from which respondents selected their top three choices.

Monitoring Utterance Generation:
    Language representation methods used to construct a response
    Number of pre-stored messages
    Spontaneous utterances versus pre-stored messages
    Spelled words versus stored single words
    Spelled words versus use of word prediction
    Frequency of core and extended vocabulary

Monitoring Access to AAC System:
    Rate of output: best output rate
    Rate of output: average output rate
    Number of corrections in message construction
    Use of function keys
    Identification and comparison of different access method rates
    Keystroke savings

In addition, respondents answered questions regarding the use and value of LAM tools using a Likert scale. The scale allowed respondents to select from a four-point range: agree, agree somewhat, disagree somewhat, and disagree. Only questions related to perceptions of automated performance measures are included in this paper. The results of the survey were analyzed using descriptive statistics.
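
As an illustration of the descriptive analysis, the short sketch below (Python) tallies responses to a single four-point Likert item and reports the percentage of respondents selecting each choice. The response distribution shown is invented for the example and is not the survey data.

# Minimal sketch of descriptive statistics for one four-point Likert item.
from collections import Counter

SCALE = ["agree", "agree somewhat", "disagree somewhat", "disagree"]

def percent_by_response(responses):
    """Percentage of respondents selecting each point on the four-point scale."""
    counts = Counter(responses)
    n = len(responses)
    return {choice: round(100.0 * counts.get(choice, 0) / n, 1) for choice in SCALE}

# Hypothetical responses from 26 respondents; the distribution is illustrative only.
responses = ["agree"] * 22 + ["agree somewhat"] * 4
print(percent_by_response(responses))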

RESULTS

Respondents (N=26) identified specific performance measures, valued from language sample analysis, that are available only with LAM tools because of the time stamp. The preferred performance measures associated with monitoring utterance generation were: 1) use of language representation methods; 2) type of utterance generation (spontaneous generation versus pre-stored messages); and 3) frequency of core vocabulary versus extended vocabulary. When questioned about monitoring access and key selections, respondents were interested in 1) identifying average communication rate; 2) identifying peak communication rate; and 3) comparing access method rates.

Another component of the survey dealt with the perceived value of the analyzed performance data. After experience with LAM tools, respondents had favorable impressions of the usefulness of the method and tools. Almost all respondents (96%) agreed that performance data would provide more effective evidence to justify the selection of an AAC system and to support the funding process. All respondents (100%) agreed that performance data would provide for more effective therapy and better clarity of the needs of people who rely on AAC, and would lead to more effective communication.

DISCUSSION

A performance-based understanding of communicative competence has long been a basic aim of AAC clinicians [4]. The prioritized performance measures provide data to support the skill areas AAC practitioners consider important for monitoring and building communicative competence. Table 2 associates the top six selected performance measures for monitoring utterance generation (N=3) and access (N=3) with specific domains of communicative competence.

Table 2. Domains of communicative competence matched with performance measures selected by survey respondents.

Domain of Communicative Competence      Performance Measure
Language representation                 Frequency of language representation method use
Strategic: construction                 Frequency of spontaneous novel utterance generation
Linguistic: content                     Frequency of core vocabulary
Strategic: rate                         Average communication rate
Strategic: rate                         Peak communication rate
Access                                  Selection rate

Analysis of the performance measures indicates the importance clinical practitioners place not only on how language is represented and generated using an AAC system, but also on the rate at which an individual can interface with the technology. To achieve the most effective communication possible, AAC practitioners need to monitor skills that directly impact the human-machine interaction. For individuals who rely on AAC, these skills relate to accessing vocabulary to generate semantically and syntactically correct and appropriate messages as quickly as possible. Respondents appear to value evidence that reports the use of the three language representation methods available on AAC systems: single-meaning pictures, alphabet-based methods, and semantic compaction. In addition, the selected summary measures indicate that respondents are interested in measuring how technology supports language use rather than merely identifying technology features.

Measuring performance allows AAC practitioners to target specific skills for intervention that should result in more competent AAC system use. EBP requires applying evidence of direct practical importance to clients [5]. Quantitative data used to improve the efficiency and effectiveness of communication has direct practical importance to individuals who rely on AAC.

REFERENCES

  1. Cooper, R., Hill, K., Koester, H., & Spaeth, D. (2004). Advances in data logging across assistive technology platforms. Paper presented at RESNA 2004 Annual Conference. Atlanta, GA.
  2. Hill, K. J. & Romich, B. A. (2001). A language activity monitor for supporting AAC evidence-based clinical practice. Assistive Technology, 13, 12-22.
  3. Romich, B., Hill, K., Seagull, A., Ahmad, N., Strecker, J., & Gotla, K. (2003). AAC performance report tool. In Proceedings of the RESNA 2003 Annual Conference [CD-ROM]. Atlanta, GA: RESNA Press.
  4. Hill, K. (2004). Evidence-based practice and language activity monitoring. Topics in Language Disorders, 24:1, 18-30.
  5. Gibbs, L. B. (2003). Evidence-based practice for the helping professions: A practical guide with integrated multimedia. Pacific Grove, CA: Thompson Brooks/Cole.

ACKNOWLEDGMENTS

Initial work on AAC performance measurement was supported by grants awarded to Prentke Romich Company by the National Institute on Deafness and Other Communication Disorders of the NIH. Research was coordinated at Edinboro University of Pennsylvania.

CONTACT

Katya Hill, Ph.D., CCC-SLP
AAC Institute
Email: khill@aacinstitute.org