
Exam Development

RESNA's Exam Development process is led by psychometric consultants and guided by the input of assistive technology subject matter experts. Decisions regarding exam content distribution and exam passing scores are made by committee and are based on data provided by practitioners in the field. The processes outlined below promote an exam that is fair, reliable, valid, and legally defensible.

Job Analysis Study

RESNA’s Job Task Analysis (JTA) is the largest research project related to the body of knowledge for assistive technology. The results are used to verify and update the content of the Assistive Technology Professional (ATP) examination, initial certification, and renewal requirements.

Per certification best practices, RESNA conducts a Job Task Analysis every five years in partnership with our testing vendor. The JTA uses a multi-method approach incorporating numerous subject-matter experts (SMEs), an analysis of trends in the assistive technology (AT) profession, and a large-scale, statistically significant survey of the AT community.

The purpose of the JTA is to identify and validate the knowledge, skills, and abilities required to practice in assistive technology. The 2022 JTA findings determined the Assistive Technology Job Task Domains that form the blueprint for the ATP Certification Examination – what content is covered and how that content is weighted – as well as the acceptable topics for continuing education for ATP certification renewal.

Item (Test Question) Writing/Item Review

Item writing is the process of creating individual questions—called "items"—for an exam. These items are designed to measure specific knowledge, skills, or abilities outlined in the exam’s blueprint or content outline as determined in the Job Task Analysis. Subject matter experts (SMEs) are trained on best practices, including how to format multiple-choice questions, write plausible distractors (incorrect options), and avoid common pitfalls like tricky wording or unintended clues. After drafting, items go through peer review and editing to ensure quality and consistency before being added to the exam item bank for further review, testing, or pretesting.

Standard Setting

A standard-setting study is held to determine the passing score for the exam. Standard setting defines the minimum level of knowledge and skill that a candidate must demonstrate to be considered competent in the role. Using the Modified Angoff Method, SMEs estimate the probability that a minimally qualified candidate would answer each question correctly. Their estimates are averaged and used to recommend a passing score.
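As a rough illustration of the averaging step described above (the ratings and panel size here are invented, not RESNA's actual data), a Modified Angoff calculation might look like this: each SME rates each item, the ratings are averaged per item across the panel, and the item averages are summed to give the recommended raw passing score.

```python
# Hypothetical Modified Angoff sketch. Each SME estimates, per item, the
# probability (0.0-1.0) that a minimally qualified candidate answers correctly.
ratings = {
    "SME_1": [0.70, 0.55, 0.80, 0.65],
    "SME_2": [0.75, 0.60, 0.85, 0.60],
    "SME_3": [0.65, 0.50, 0.90, 0.70],
}

num_items = 4

# Average each item's ratings across the SME panel.
item_means = [
    sum(sme[i] for sme in ratings.values()) / len(ratings)
    for i in range(num_items)
]

# Summing the item averages gives the expected raw score of a borderline
# (minimally qualified) candidate -- the recommended raw cut score.
recommended_cut = sum(item_means)
print(round(recommended_cut, 2))  # 2.75 out of 4 items for this toy panel
```

In practice the panel and item pool are far larger, and the recommendation is reviewed before a final passing score is adopted.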

After the passing score is set through standard setting, statistical equating is used to maintain fairness across different versions (forms) of the exam. Since new exam forms often introduce different combinations of items, equating ensures that each new form is statistically adjusted to be equivalent in difficulty to the exam used during the original cut score study. This means that candidates are held to the same standard regardless of which form they take, preserving the validity and comparability of exam results over time.
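One common way to perform the adjustment described above is linear equating, sketched below with invented score distributions (this is illustrative only, not RESNA's actual equating procedure): a raw score on the new form is mapped onto the base form's scale by matching means and standard deviations, so the cut score represents the same ability level on both forms.

```python
# Hypothetical linear-equating sketch (not RESNA's actual procedure).
from statistics import mean, pstdev

# Illustrative raw scores from candidates who took each form.
base_form_scores = [120, 135, 150, 160, 175]  # original (base) form
new_form_scores = [110, 128, 142, 155, 165]   # new form ran slightly harder

def equate_linear(x, new_scores, base_scores):
    """Map a raw score x on the new form onto the base form's scale by
    matching the two distributions' means and standard deviations."""
    return mean(base_scores) + (
        pstdev(base_scores) / pstdev(new_scores)
    ) * (x - mean(new_scores))

# An average score on the harder new form (140) maps to the base form's
# average (148), so candidates are not penalized by the form they took.
print(equate_linear(140, new_form_scores, base_form_scores))
```

The same mapping applied to the cut score keeps the passing standard constant across forms.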

Scoring and Reporting

Scores are determined by converting the number of questions answered correctly into a scaled score, using a conversion that RESNA establishes in advance through the standard setting process. A scaled score is a raw score (i.e., the number of correct answers) mathematically transformed onto a standardized scale. For every possible raw score on a test form, there is a corresponding scaled score. When multiple forms of a test are used, or when results are compared over time, scaled scores allow accurate comparison despite possible differences in test form length or difficulty. Scaled scores offer a fair assessment for everyone, regardless of when they took the test or which version they took. All candidates who take the exam must meet the same standard in order to receive a passing score.
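A raw-to-scaled conversion of this kind can be sketched as a piecewise linear mapping that pins the raw cut score to a fixed scaled passing score. The scale endpoints and passing value below (500 on a 0-800 scale) are illustrative assumptions, not RESNA's actual values.

```python
# Hypothetical raw-to-scaled conversion sketch; the scale range and
# passing scaled score are invented for illustration.
def scale_score(raw, raw_cut, raw_max, scaled_cut=500, scaled_max=800):
    """Linearly map raw scores so that raw_cut always lands on scaled_cut."""
    if raw >= raw_cut:
        # Interpolate between the cut score and the maximum score.
        frac = (raw - raw_cut) / (raw_max - raw_cut)
        return round(scaled_cut + frac * (scaled_max - scaled_cut))
    # Below the cut, interpolate between 0 and the cut score.
    return round(raw / raw_cut * scaled_cut)

# Two forms with different raw cut scores (the harder form requires
# fewer correct answers), yet "passing" is 500 on both scales.
print(scale_score(130, raw_cut=130, raw_max=200))  # 500: exactly at the cut
print(scale_score(125, raw_cut=125, raw_max=200))  # 500 on the harder form too
```

Because the cut is anchored to the same scaled value on every form, a scaled score carries the same meaning regardless of which version a candidate took.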

Preliminary Pass/Fail results are provided immediately following completion of the exam. Official notification of score status will be sent from RESNA within four weeks of the exam date.

Following the introduction of a new content outline, scores will be withheld from candidates for approximately 6–8 weeks while RESNA conducts a Standard Setting Study to establish the passing score. This delay occurs only about once every five years, and candidates will be informed of it before applying for the exam. During this time, candidate responses are analyzed to detect potential test integrity and security issues, and testing staff verify that each question on the examination is statistically valid. This process benefits all candidates by ensuring no one is unfairly penalized and helps preserve the integrity of the ATP certification program.

 
