RESNA 27th International Annual Conference

Technology & Disability: Research, Design, Practice & Policy

June 18 to June 22, 2004
Orlando, Florida


Measuring AT Outcomes Using the Student Performance Profile: Analysis and Recommendations

Sally Fennema-Jansen, Roger O. Smith,
and Dave L. Edyburn,
University of Wisconsin-Milwaukee, Milwaukee, WI;
Susan Wilson, Ohio Department of Education;
Mary Binion, ORCLISH

ABSTRACT

The Student Performance Profile (SPP) is an instrument developed by the Assistive Technology Infusion Project (ATIP) of the Ohio Department of Education (ODE) in collaboration with the UW-Milwaukee ATOMS Project (Assistive Technology Outcomes Measurement System) team. The SPP was developed as a tool to be used in the administration of grant funds for students requiring assistive technology across the state of Ohio. Data related to the use of assistive technology have been collected on over 2,500 students using the tool. Content analysis of the types of data collected using the SPP relative to the measurement of AT outcomes revealed that the tool gathered information on a variety of outcome domains, including goal achievement, access and progress in general education, and device use. Satisfaction and quality of life were not addressed. Recommendations for updates to the current instrument are presented.

KEYWORDS

Assistive technology, outcome, school, measurement

BACKGROUND

Although no one taxonomy of assistive technology outcomes has become accepted as the standard, theorists have proposed different taxonomies of outcomes over the years. DeRuyter (1, 2) identified five dimensions that, when taken together, represent the outcomes of importance to various stakeholders: clinical results, functional status, quality of life, satisfaction, and cost. Edyburn (3) reported on the results of a focus group that identified ten variables that must be considered to understand AT outcomes: a) change in performance/function; b) change in participation; c) usage and why or why not; d) consumer satisfaction (process and devices); e) goal achievement; f) quality of life; g) cost; h) demographics; i) AT interventions (devices and services); and j) environmental context. Galvin's categories include measures of outcomes (benefits), satisfaction, progress, and efficiency (4). Trachtman (5) suggests the categories of technology abandonment, user satisfaction, functional outcomes, and cost/benefit, and believes that outcomes should be based on goals.

As in other service delivery settings, the need to measure the outcomes of assistive technology is becoming increasingly important in the schools (6). To demonstrate that assistive technology is effective in improving educational performance, schools are faced with the challenge of isolating the impact of assistive technology from the variety of other interventions provided.

In June of 2001, the Ohio Department of Education received a federal grant from the United States Department of Education School Renovation, IDEA, and Technology Grants program, of which $9.3 million was used to assist districts in providing assistive technology (AT) devices for students with disabilities. The Assistive Technology Infusion Project (ATIP) was developed to administer the distribution of the assistive technology. Teams were required to submit an application for the assistive technology devices required by individual students.

One component of the grant administration process was the development of the Student Performance Profile (SPP). The SPP focuses on the area of need addressed through the AT, the rate of progress toward goals, and the contribution that each of a variety of interventions makes toward student progress. Teams were required to complete the SPP for each student who received funding for assistive technology through the ATIP, both prior to implementation and at follow-up eight months to one year later.

RESEARCH QUESTIONS

What assistive technology outcome domains are included in the Student Performance Profile and how are they addressed?

What assistive technology outcome domains are not addressed in the Student Performance Profile?

What recommendations can be made for the next revision of the Student Performance Profile or for others interested in assessing assistive technology outcomes in the schools?

METHOD

The SPP was developed to serve a need within the administration of the ATIP, and its development therefore did not follow typical test development protocol. Analysis of the instrument is important to improve the tool for future use. A review of the assistive technology outcomes literature was completed to identify outcome domains as well as other factors important to measuring outcomes. The SPP was then examined to determine whether and how the primary domains or factors identified in the literature were addressed.

RESULTS

In the absence of a commonly accepted taxonomy of assistive technology outcomes, the domains identified by Edyburn will be used as a basis for discussing the SPP.

The first domain, change in performance/function, is addressed directly by the questions that address the student's current ability on his or her Individualized Education Program (IEP) goals. Teams are required to rate the student's current ability on each identified goal on a scale of zero to 100% (not able to fully able), both prior to and after utilizing the AT. Additionally, teams are required to identify which AT items were used as direct supports for each goal.
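The SPP records the pre- and post-implementation ratings themselves and does not prescribe a change score. As an illustration only, the following minimal Python sketch shows one way such ratings could be stored and a simple post-minus-pre change derived; the field names, the example goal, and the subtraction itself are hypothetical, not part of the SPP.

    from dataclasses import dataclass

    @dataclass
    class GoalRating:
        # One SPP-style pre/post rating of ability on an IEP goal.
        goal: str                   # IEP goal being rated
        pre_ability: int            # rated ability before AT use, 0-100
        post_ability: int           # rated ability after AT use, 0-100
        direct_supports: list[str]  # AT items identified as direct supports

        def change(self) -> int:
            # Illustrative change score: post minus pre, in percentage points.
            return self.post_ability - self.pre_ability

    rating = GoalRating(
        goal="Produce a legible written paragraph",
        pre_ability=20,
        post_ability=65,
        direct_supports=["portable word processor"],
    )
    print(f"Change in performance: {rating.change()} percentage points")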

The second domain, change in participation, is directly addressed by a series of questions that relate to access and progress in general education. Teams respond to statements about how the use of the assistive technology contributed to participation in the general education classroom, participation in the general education curriculum, participation in statewide (proficiency) testing, graduation from high school, and interactions with general education students. A scale of zero to ten was used, with zero indicating “no contribution,” the midpoint indicating “some contribution,” and ten representing “substantial contribution.”

The third domain, usage and why or why not, is partially addressed in the SPP. Teams were required to identify the items that the student used as direct supports for each goal they had identified. They then identified how often the items were used in support of the identified goal (frequency) and how long the student used each item each time it was used (duration). No specific questions addressed the issue of “why or why not.”
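Responses of this kind lend themselves to simple summaries across students. The sketch below tallies hypothetical categorical frequency responses; the item names and response categories are assumptions for illustration, not the SPP's actual options.

    from collections import Counter

    # Hypothetical SPP-style usage responses, one per student-goal pairing.
    usage_responses = [
        {"item": "portable word processor", "frequency": "daily", "duration": "15-30 min"},
        {"item": "portable word processor", "frequency": "2-3 times per week", "duration": "15-30 min"},
        {"item": "screen reader", "frequency": "daily", "duration": "30-60 min"},
    ]

    # Tally how often each frequency category was reported.
    frequency_counts = Counter(r["frequency"] for r in usage_responses)
    for category, count in frequency_counts.most_common():
        print(f"{category}: {count}")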

Consumer satisfaction, the fourth domain, was addressed in only one question, and that question related to the team's participation in the ATIP process. Teams were asked to indicate their agreement or disagreement with the statement, “From the team's perspective, completing the grant process to obtain this student's assistive technology was well worth the time and effort.” No information was gathered on the satisfaction of the student or the parents with the assistive technology or the ATIP process.

Goal achievement, the fifth domain, was addressed relative to the student's achievement of IEP goals and objectives, as described for the first domain, change in performance. The sixth domain, quality of life, is not addressed by the SPP. The next domains, cost, demographics, and AT devices, are more directly addressed in the ATIP Application than in the SPP.

The next domain includes AT devices and services. Information on the AT devices obtained through the ATIP grant process is very specific, because the exact items that were ordered are identified. This information is not contained within the SPP itself, but is part of the application completed by the teams. The Student Performance Profile (Post) asks what assistive technology services were provided to support the use of the specific item(s) for the identified goal. A list of services is provided, including further evaluation, training for the student, training for educational personnel, training for parents, device programming or set-up, repair, classroom implementation support, collaborative planning time, and other. Considering the importance of assistive technology services to successful AT outcomes, the information available is minimal. For example, we are unable to ascertain the amount and frequency of training, or the time provided for customization of the device.

The final domain is the environmental context. Although information on the environment is contained in other forms used by the ATIP, the SPP does not directly address this domain.

DISCUSSION

The ability to tailor question sets based on an individual's response to a given question is highly feasible using a web-based format. Although this technique was used to a limited degree in the current version of the SPP, improvements to the instrument could be realized by taking further advantage of question tailoring. For example, if a respondent indicated that a student had not made progress on his or her IEP goals, the questions that followed could differ from those completed if the student had made progress.
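The sketch below illustrates this branching idea in Python; the question text and the single yes/no branching rule are hypothetical and do not reproduce the SPP's actual items.

    # Minimal sketch of response-contingent question tailoring: the
    # follow-up question set branches on whether the respondent reports
    # progress on the student's IEP goals.
    def follow_up_questions(made_progress: bool) -> list[str]:
        if made_progress:
            return [
                "Which AT items served as direct supports for this goal?",
                "How often was each item used?",
            ]
        return [
            "Was the AT item available and in working order?",
            "What barriers limited the student's use of the AT item?",
        ]

    # A respondent reporting no progress sees a different question set.
    for question in follow_up_questions(made_progress=False):
        print(question)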

Because practitioners are generally busy and stretched for time, the ease and efficiency of data collection, as well as the ease of interpretation, are important. Jutai, Ladak, Schuller, Naumann, and Wright (7) identify “ease of implementation” as a key factor facilitating assistive technology outcomes measurement: the data collection tool must be suitable for routine use by service providers, and data collection must be merged with the service delivery process as much as possible (p. 115). Implementation of the SPP was merged with the service delivery process; however, the tool is lengthy and requires a fair time commitment to complete. Analyzing the data to determine which questions are essential and which are not is an important next step.

REFERENCES

  1. DeRuyter, F. (1995). Evaluating outcomes in assistive technology: Do we understand the commitment? Assistive Technology, 7 (1), 3-16.
  2. DeRuyter, F. (2002). Outcomes and performance monitoring. In D. A. Olson & F. DeRuyter (Eds.), Clinician's Guide to Assistive Technology (pp. 67-74). St. Louis: Mosby.
  3. Edyburn, D. L. (2003). Measuring assistive technology outcomes: Key concepts. Journal of Special Education Technology, 18 (3), 53-55.
  4. Smith, R. O. (1996). Measuring the outcomes of assistive technology: Challenge and innovation. Assistive Technology, 8 (2), 71-81.
  5. Trachtman, L. (1996). Outcome measures: Are we ready to answer the tough questions? Assistive Technology, 8 (2), 91-92.
  6. Smith, R. O. (2000). Measuring assistive technology outcomes in education. Diagnostique, 25 (4), 273-290.
  7. Jutai, J., Ladak, N., Schuller, R., Naumann, S., & Wright, V. (1996). Outcomes measurement of assistive technologies: An institutional case study. Assistive Technology, 8 (2), 110-120.

ACKNOWLEDGMENTS

This project is funded in part by the U.S. Department of Education National Institute on Disability and Rehabilitation Research (NIDRR) under Grant # H133A010403 and the U.S. Department of Education School Renovation, IDEA, and Technology Grant # 84.352A. The opinions herein are those of the grantee and do not necessarily reflect those of the U.S. Department of Education.

Author Contact Information:

Sally Fennema-Jansen, Research Consultant;
University of Wisconsin – Milwaukee;
P.O. Box 413;
Milwaukee, Wisconsin, 53201;
phone: 414-229-5100;
email: sfennema@uwm.edu
