TY - JOUR
T1 - The Influence of Computerized Adaptive Testing on Psychometric Theory and Practice
JF - Journal of Computerized Adaptive Testing
Y1 - 2024
A1 - Reckase, Mark D.
KW - computerized adaptive testing
KW - Item Response Theory
KW - paradigm shift
KW - scaling theory
KW - test design
AB -

The major premise of this article is that part of the stimulus for the evolution of psychometric theory since the 1950s was the introduction of the concept of computerized adaptive testing (CAT) or its earlier non-CAT variations. The conceptual underpinning of CAT that had the most influence on psychometric theory was the shift of emphasis from the test (or test score) to the test item (or item score) as the focus of analysis. This change in focus allowed a change in the way that test results are conceived of as measurements, and it resolved conflicts among a number of ideas present in the early work on psychometric theory. Some of those conflicting ideas are summarized below to show how work on the development of CAT resolved them.

 

VL - 11
UR - https://jcatpub.net/index.php/jcat/issue/view/34/9
IS - 1
ER -
TY - JOUR
T1 - Expanding the Meaning of Adaptive Testing to Enhance Validity
JF - Journal of Computerized Adaptive Testing
Y1 - 2023
A1 - Steven L. Wise
KW - Adaptive Testing
KW - CAT
KW - CBT
KW - test-taking disengagement
KW - validity
VL - 10
IS - 2
ER -
TY - JOUR
T1 - How Do Trait Change Patterns Affect the Performance of Adaptive Measurement of Change?
JF - Journal of Computerized Adaptive Testing
Y1 - 2023
A1 - Ming Him Tai
A1 - Allison W. Cooperman
A1 - Joseph N. DeWeese
A1 - David J. Weiss
KW - adaptive measurement of change
KW - computerized adaptive testing
KW - longitudinal measurement
KW - trait change patterns
VL - 10
IS - 3
ER -
TY - JOUR
T1 - The (non)Impact of Misfitting Items in Computerized Adaptive Testing
JF - Journal of Computerized Adaptive Testing
Y1 - 2022
A1 - Christine E. DeMars
KW - computerized adaptive testing
KW - item fit
KW - three-parameter logistic model
VL - 9
UR - https://jcatpub.net/index.php/jcat/issue/view/26
IS - 2
ER -
TY - CONF
T1 - Grow a Tiger out of Your CAT
T2 - IACAT 2017 Conference
Y1 - 2017
A1 - Angela Verschoor
KW - interoperability
KW - Scalability
KW - transparency
AB -

The main focus of the community of test developers and researchers is on improving adaptive test procedures and methodologies. Yet the transition from research projects to larger-scale operational CATs faces its own challenges. These operational CATs usually originate in government tenders, and "scalability," "interoperability," and "transparency" are three keywords often found in those documents. Scalability is addressed by parallel system architectures built on stateless selection algorithms; design capacities often range from 10,000 to well over 100,000 concurrent students. Interoperability is implemented through standards such as QTI, standards that were not designed with adaptive testing in mind. Transparency is realized through open-source software: the adaptive test should not be a black box. These three requirements often complicate the development of an adaptive test, and sometimes even conflict with one another.
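The stateless selection mentioned above can be sketched as a pure function of the response history: because no server-side session state is needed, any replica can answer any request, which is what makes such design capacities feasible. A minimal Python sketch under the Rasch model; the item bank, function names, and bisection estimator are illustrative assumptions, not part of the talk.

```python
import math

# Hypothetical item bank: item id -> Rasch difficulty b (illustrative values)
BANK = {"it01": -1.2, "it02": -0.4, "it03": 0.0, "it04": 0.6, "it05": 1.3}

def p_correct(theta, b):
    """Rasch probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(history, lo=-4.0, hi=4.0, iters=60):
    """Crude ability estimate: bisection on the Rasch score function."""
    def score(theta):
        return sum(u - p_correct(theta, BANK[i]) for i, u in history)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def next_item(history):
    """Stateless selection: recompute theta from the full response history,
    then pick the unused item with maximal Fisher information p(1-p)."""
    theta = estimate_theta(history) if history else 0.0
    used = {i for i, _ in history}
    def information(item):
        p = p_correct(theta, BANK[item])
        return p * (1.0 - p)
    return max((i for i in BANK if i not in used), key=information)
```

Because `next_item` depends only on the history passed in with each request, identical requests give identical answers on any server replica, with no sticky sessions required.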


JF - IACAT 2017 Conference
PB - Niigata Seiryo University
CY - Niigata, Japan
ER -
TY - CONF
T1 - Item Response Time on Task Effect in CAT
T2 - IACAT 2017 Conference
Y1 - 2017
A1 - Yang Shi
KW - CAT
KW - Response time
KW - Task effect
AB -

Introduction. In addition to reducing test length and increasing measurement efficiency, computerized adaptive testing (CAT) can provide new insights into the cognitive processes of task completion that cannot be mined from conventional tests. Response time is a primary characteristic of the task-completion procedure and has the potential to inform us about underlying processes. In this study, the relationship between response time and response accuracy is investigated.

Hypothesis. The present study argues that the relationship between time on task and response accuracy, which may be positive, negative, or curvilinear, depends on the cognitive nature of the task items, holding the ability of the subjects and the difficulty of the items constant. The interpretations of these associations are not uniform either.

Research question. Is there a homogeneous effect of response time on test outcome across Graduate Record Examination (GRE) quantitative and verbal items?

Proposed explanations. If the accuracy of cognitive test responses decreases with response time, it is an indication that the underlying cognitive process is a degrading process such as knowledge retrieval: more accessible knowledge can be retrieved faster than less accessible knowledge, and it is inherent to knowledge retrieval that the success rate declines as response time elapses. For instance, in reading tasks, the time-on-task effect is negative, and the easier the task, the more negative the effect. If, however, the accuracy of cognitive test responses increases with response time, the process is of an upgrading nature, with the success rate increasing as a function of response time. For example, problem solving takes time, and fast responses are less likely to be well-founded. It is of course also possible that the relationship is curvilinear, as when an increasing success rate is followed by a decreasing one, or vice versa.

Methodology. The data are from computer-based GRE quantitative and verbal tests and will be analyzed within a generalized linear mixed model (GLMM) framework, controlling for ability and item difficulty as possible confounding factors. A linear model here means a linear combination of predictors determining the probability that person p answers item i correctly. The models are equivalent to advanced IRT models that go beyond the regular modeling of test responses in terms of one or more latent variables and item parameters. The lme4 package for R will be used for the statistical computations.
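The linear predictor described here, ability minus difficulty plus a time-on-task slope, can be sketched outside the lme4 framework as well. Below is a minimal pure-Python illustration in which a negative slope rho (the "degrading process" case) is recovered by gradient ascent on the Bernoulli log-likelihood; the simulated data, parameter values, and fitting settings are illustrative assumptions, not the study's actual analysis.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Simulate responses under eta = theta_p - b_i + rho * t (rho < 0:
# accuracy degrades with time on task, the "degrading process" case).
random.seed(1)
TRUE_RHO = -0.8
abilities = [random.gauss(0, 1) for _ in range(200)]    # theta_p
difficulties = [random.gauss(0, 1) for _ in range(40)]  # b_i
data = []
for theta in abilities:
    for b in difficulties:
        t = random.expovariate(1.0)  # time on task, arbitrary units
        u = 1 if random.random() < sigmoid(theta - b + TRUE_RHO * t) else 0
        data.append((theta, b, t, u))

def fit_rho(data, lr=0.8, epochs=150):
    """Estimate the time-on-task slope rho by gradient ascent on the
    Bernoulli log-likelihood; theta and b enter as known offsets, a
    stand-in for 'controlling for ability and item difficulty'."""
    rho = 0.0
    n = len(data)
    for _ in range(epochs):
        grad = sum(t * (u - sigmoid(theta - b + rho * t))
                   for theta, b, t, u in data) / n
        rho += lr * grad
    return rho

rho_hat = fit_rho(data)  # recovers a clearly negative slope
```

In the actual study, person and item effects would be random effects estimated jointly (e.g., with lme4's `glmer`); treating them as known offsets keeps the sketch short while preserving the shape of the model.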

Implications. The right amount of testing time in CAT is important: too much is wasteful and costly, too little threatens score validity. The study is expected to provide new insight into the relationship between response time and response accuracy, which, in turn, will contribute to a better understanding of time effects and the relevant cognitive processes in CAT.


JF - IACAT 2017 Conference
PB - Niigata Seiryo University
CY - Niigata, Japan
ER -
TY - JOUR
T1 - Implementing a CAT: The AMC Experience
JF - Journal of Computerized Adaptive Testing
Y1 - 2015
A1 - Barnard, John J
KW - adaptive
KW - Assessment
KW - computer
KW - medical
KW - online
KW - Testing
VL - 3
UR - http://www.iacat.org/jcat/index.php/jcat/article/view/52/25
IS - 1
ER -
TY - JOUR
T1 - Detecting Item Preknowledge in Computerized Adaptive Testing Using Information Theory and Combinatorial Optimization
JF - Journal of Computerized Adaptive Testing
Y1 - 2014
A1 - Belov, D. I.
KW - combinatorial optimization
KW - hypothesis testing
KW - item preknowledge
KW - Kullback-Leibler divergence
KW - simulated annealing
KW - test security
VL - 2
UR - http://www.iacat.org/jcat/index.php/jcat/article/view/36/18
IS - 3
ER -
TY - CONF
T1 - A Heuristic Of CAT Item Selection Procedure For Testlets
T2 - Annual Conference of the International Association for Computerized Adaptive Testing
Y1 - 2011
A1 - Yuehmei Chien
A1 - David Shin
A1 - Walter Denny Way
KW - CAT
KW - shadow test
KW - testlets
JF - Annual Conference of the International Association for Computerized Adaptive Testing
ER -
TY - CONF
T1 - A Test Assembly Model for MST
T2 - Annual Conference of the International Association for Computerized Adaptive Testing
Y1 - 2011
A1 - Angela Verschoor
A1 - Ingrid Radtke
A1 - Theo Eggen
KW - CAT
KW - mst
KW - multistage testing
KW - Rasch
KW - routing
KW - tif
AB -

This study is a short exploration of MST optimization. It is extremely hard, perhaps impossible, to chart the influence of the item pool and test specifications on the optimization process. Simulations are very helpful in finding an acceptable MST.
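As a toy illustration of the kind of optimization the abstract refers to, the sketch below greedily assembles a fixed-length module whose Rasch test information function (TIF) approaches a target at a few routing points. The pool, target values, and greedy heuristic are illustrative assumptions; the study itself concerns formal test assembly models evaluated by simulation.

```python
import math
import random

def info(theta, b):
    """Rasch item information p(1-p) at ability theta for difficulty b."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def assemble_module(pool, thetas, target, length):
    """Greedy heuristic: repeatedly add the pool item (by index) that most
    reduces the summed shortfall between the module TIF and the target TIF."""
    chosen = []
    def shortfall(idxs):
        return sum(max(0.0, target[k] - sum(info(th, pool[j]) for j in idxs))
                   for k, th in enumerate(thetas))
    for _ in range(length):
        candidates = [j for j in range(len(pool)) if j not in chosen]
        chosen.append(min(candidates, key=lambda j: shortfall(chosen + [j])))
    return chosen

random.seed(7)
pool = [random.uniform(-2.5, 2.5) for _ in range(60)]  # item difficulties
thetas = [-1.0, 0.0, 1.0]                              # routing points
target = [1.2, 1.5, 1.2]                               # desired TIF values
module = assemble_module(pool, thetas, target, 10)
```

A mixed integer programming solver would replace the greedy loop in a real assembly model; the greedy version only shows how a TIF target steers item choice toward the routing points.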

JF - Annual Conference of the International Association for Computerized Adaptive Testing
ER -
TY - CONF
T1 - Walking the Tightrope: Using Better Content Control to Improve CAT
T2 - Annual Conference of the International Association for Computerized Adaptive Testing
Y1 - 2011
A1 - Kathleen A. Gialluca
KW - CAT
KW - CAT evolution
KW - test content
AB -

All testing involves a balance between measurement precision and content considerations, and CAT item-selection algorithms have evolved to accommodate content considerations. The presentation reviews this evolution, including: original ("pure") adaptive exams; constrained CAT; the weighted-deviations method; the shadow-test approach; testlets instead of fully adaptive tests; cases in which administering one item precludes administering other items; and item relationships.


JF - Annual Conference of the International Association for Computerized Adaptive Testing ER - TY - JOUR T1 - An evaluation of patient-reported outcomes found computerized adaptive testing was efficient in assessing stress perception JF - Journal of Clinical Epidemiology Y1 - 2009 A1 - Kocalevent, R. D. A1 - Rose, M. A1 - Becker, J. A1 - Walter, O. B. A1 - Fliege, H. A1 - Bjorner, J. B. A1 - Kleiber, D. A1 - Klapp, B. F. KW - *Diagnosis, Computer-Assisted KW - Adolescent KW - Adult KW - Aged KW - Aged, 80 and over KW - Confidence Intervals KW - Female KW - Humans KW - Male KW - Middle Aged KW - Perception KW - Quality of Health Care/*standards KW - Questionnaires KW - Reproducibility of Results KW - Sickness Impact Profile KW - Stress, Psychological/*diagnosis/psychology KW - Treatment Outcome AB - OBJECTIVES: This study aimed to develop and evaluate a first computerized adaptive test (CAT) for the measurement of stress perception (Stress-CAT), in terms of the two dimensions: exposure to stress and stress reaction. STUDY DESIGN AND SETTING: Item response theory modeling was performed using a two-parameter model (Generalized Partial Credit Model). The evaluation of the Stress-CAT comprised a simulation study and real clinical application. A total of 1,092 psychosomatic patients (N1) were studied. Two hundred simulees (N2) were generated for a simulated response data set. Then the Stress-CAT was given to n=116 inpatients, (N3) together with established stress questionnaires as validity criteria. RESULTS: The final banks included n=38 stress exposure items and n=31 stress reaction items. In the first simulation study, CAT scores could be estimated with a high measurement precision (SE<0.32; rho>0.90) using 7.0+/-2.3 (M+/-SD) stress reaction items and 11.6+/-1.7 stress exposure items. 
The second simulation study reanalyzed real patients data (N1) and showed an average use of items of 5.6+/-2.1 for the dimension stress reaction and 10.0+/-4.9 for the dimension stress exposure. Convergent validity showed significantly high correlations. CONCLUSIONS: The Stress-CAT is short and precise, potentially lowering the response burden of patients in clinical decision making. VL - 62 SN - 1878-5921 (Electronic)0895-4356 (Linking) N1 - Kocalevent, Ruya-DanielaRose, MatthiasBecker, JanineWalter, Otto BFliege, HerbertBjorner, Jakob BKleiber, DieterKlapp, Burghard FEvaluation StudiesUnited StatesJournal of clinical epidemiologyJ Clin Epidemiol. 2009 Mar;62(3):278-87, 287.e1-3. Epub 2008 Jul 18. ER - TY - JOUR T1 - Reduction in patient burdens with graphical computerized adaptive testing on the ADL scale: tool development and simulation JF - Health and Quality of Life Outcomes Y1 - 2009 A1 - Chien, T. W. A1 - Wu, H. M. A1 - Wang, W-C. A1 - Castillo, R. V. A1 - Chou, W. KW - *Activities of Daily Living KW - *Computer Graphics KW - *Computer Simulation KW - *Diagnosis, Computer-Assisted KW - Female KW - Humans KW - Male KW - Point-of-Care Systems KW - Reproducibility of Results KW - Stroke/*rehabilitation KW - Taiwan KW - United States AB - BACKGROUND: The aim of this study was to verify the effectiveness and efficacy of saving time and reducing burden for patients, nurses, and even occupational therapists through computer adaptive testing (CAT). 
METHODS: Based on an item bank of the Barthel Index (BI) and the Frenchay Activities Index (FAI) for assessing comprehensive activities of daily living (ADL) function in stroke patients, we developed a visual basic application (VBA)-Excel CAT module, and (1) investigated whether the averaged test length via CAT is shorter than that of the traditional all-item-answered non-adaptive testing (NAT) approach through simulation, (2) illustrated the CAT multimedia on a tablet PC showing data collection and response errors of ADL clinical functional measures in stroke patients, and (3) demonstrated the quality control of endorsing scale with fit statistics to detect responding errors, which will be further immediately reconfirmed by technicians once patient ends the CAT assessment. RESULTS: The results show that endorsed items could be shorter on CAT (M = 13.42) than on NAT (M = 23) at 41.64% efficiency in test length. However, averaged ability estimations reveal insignificant differences between CAT and NAT. CONCLUSION: This study found that mobile nursing services, placed at the bedsides of patients could, through the programmed VBA-Excel CAT module, reduce the burden to patients and save time, more so than the traditional NAT paper-and-pencil testing appraisals. VL - 7 SN - 1477-7525 (Electronic)1477-7525 (Linking) N1 - Chien, Tsair-WeiWu, Hing-ManWang, Weng-ChungCastillo, Roberto VasquezChou, WillyComparative StudyValidation StudiesEnglandHealth and quality of life outcomesHealth Qual Life Outcomes. 2009 May 5;7:39. U2 - 2688502 ER - TY - JOUR T1 - Using computerized adaptive testing to reduce the burden of mental health assessment JF - Psychiatric Services Y1 - 2008 A1 - Gibbons, R. D. A1 - Weiss, D. J. A1 - Kupfer, D. J. A1 - Frank, E. A1 - Fagiolini, A. A1 - Grochocinski, V. J. A1 - Bhaumik, D. K. A1 - Stover, A. A1 - Bock, R. D. A1 - Immekus, J. C. 
KW - *Diagnosis, Computer-Assisted KW - *Questionnaires KW - Adolescent KW - Adult KW - Aged KW - Agoraphobia/diagnosis KW - Anxiety Disorders/diagnosis KW - Bipolar Disorder/diagnosis KW - Female KW - Humans KW - Male KW - Mental Disorders/*diagnosis KW - Middle Aged KW - Mood Disorders/diagnosis KW - Obsessive-Compulsive Disorder/diagnosis KW - Panic Disorder/diagnosis KW - Phobic Disorders/diagnosis KW - Reproducibility of Results KW - Time Factors AB - OBJECTIVE: This study investigated the combination of item response theory and computerized adaptive testing (CAT) for psychiatric measurement as a means of reducing the burden of research and clinical assessments. METHODS: Data were from 800 participants in outpatient treatment for a mood or anxiety disorder; they completed 616 items of the 626-item Mood and Anxiety Spectrum Scales (MASS) at two times. The first administration was used to design and evaluate a CAT version of the MASS by using post hoc simulation. The second confirmed the functioning of CAT in live testing. RESULTS: Tests of competing models based on item response theory supported the scale's bifactor structure, consisting of a primary dimension and four group factors (mood, panic-agoraphobia, obsessive-compulsive, and social phobia). Both simulated and live CAT showed a 95% average reduction (585 items) in items administered (24 and 30 items, respectively) compared with administration of the full MASS. The correlation between scores on the full MASS and the CAT version was .93. For the mood disorder subscale, differences in scores between two groups of depressed patients--one with bipolar disorder and one without--on the full scale and on the CAT showed effect sizes of .63 (p<.003) and 1.19 (p<.001) standard deviation units, respectively, indicating better discriminant validity for CAT. 
CONCLUSIONS: Instead of using small fixed-length tests, clinicians can create item banks with a large item pool, and a small set of the items most relevant for a given individual can be administered with no loss of information, yielding a dramatic reduction in administration time and patient and clinician burden. VL - 59 SN - 1075-2730 (Print) N1 - Gibbons, Robert DWeiss, David JKupfer, David JFrank, EllenFagiolini, AndreaGrochocinski, Victoria JBhaumik, Dulal KStover, AngelaBock, R DarrellImmekus, Jason CR01-MH-30915/MH/United States NIMHR01-MH-66302/MH/United States NIMHResearch Support, N.I.H., ExtramuralUnited StatesPsychiatric services (Washington, D.C.)Psychiatr Serv. 2008 Apr;59(4):361-8. ER - TY - JOUR T1 - Computerized adaptive testing for polytomous motivation items: Administration mode effects and a comparison with short forms JF - Applied Psychological Measurement Y1 - 2007 A1 - Hol, A. M. A1 - Vorst, H. C. M. A1 - Mellenbergh, G. J. KW - 2220 Tests & Testing KW - Adaptive Testing KW - Attitude Measurement KW - computer adaptive testing KW - Computer Assisted Testing KW - items KW - Motivation KW - polytomous motivation KW - Statistical Validity KW - Test Administration KW - Test Forms KW - Test Items AB - In a randomized experiment (n=515), a computerized and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although items are carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible consequences of model misfit. CAT efficiency was studied by a systematic comparison of the CAT with two types of conventional fixed length short forms, which are created to be good CAT competitors. Results showed no essential administration mode effects. Efficiency analyses show that CAT outperformed the short forms in almost all aspects when results are aggregated along the latent trait scale. 
The real and the simulated data results are very similar, which indicate that the real data results are not affected by model misfit. (PsycINFO Database Record (c) 2007 APA ) (journal abstract) VL - 31 SN - 0146-6216 N1 - 10.1177/0146621606297314Journal; Peer Reviewed Journal; Journal Article ER - TY - JOUR T1 - The effect of including pretest items in an operational computerized adaptive test: Do different ability examinees spend different amounts of time on embedded pretest items? JF - Educational Assessment Y1 - 2007 A1 - Ferdous, A. A. A1 - Plake, B. S. A1 - Chang, S-R. KW - ability KW - operational computerized adaptive test KW - pretest items KW - time AB - The purpose of this study was to examine the effect of pretest items on response time in an operational, fixed-length, time-limited computerized adaptive test (CAT). These pretest items are embedded within the CAT, but unlike the operational items, are not tailored to the examinee's ability level. If examinees with higher ability levels need less time to complete these items than do their counterparts with lower ability levels, they will have more time to devote to the operational test questions. Data were from a graduate admissions test that was administered worldwide. Data from both quantitative and verbal sections of the test were considered. For the verbal section, examinees in the lower ability groups spent systematically more time on their pretest items than did those in the higher ability groups, though for the quantitative section the differences were less clear. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Lawrence Erlbaum: US VL - 12 SN - 1062-7197 (Print); 1532-6977 (Electronic) ER - TY - JOUR T1 - Psychometric properties of an emotional adjustment measure: An application of the graded response model JF - European Journal of Psychological Assessment Y1 - 2007 A1 - Rubio, V. J. A1 - Aguado, D. A1 - Hontangas, P. M. A1 - Hernández, J. M. 
KW - computerized adaptive tests KW - Emotional Adjustment KW - Item Response Theory KW - Personality Measures KW - personnel recruitment KW - Psychometrics KW - Samejima's graded response model KW - test reliability KW - validity AB - Item response theory (IRT) provides valuable methods for the analysis of the psychometric properties of a psychological measure. However, IRT has been mainly used for assessing achievements and ability rather than personality factors. This paper presents an application of the IRT to a personality measure. Thus, the psychometric properties of a new emotional adjustment measure that consists of a 28-six graded response items is shown. Classical test theory (CTT) analyses as well as IRT analyses are carried out. Samejima's (1969) graded-response model has been used for estimating item parameters. Results show that the bank of items fulfills model assumptions and fits the data reasonably well, demonstrating the suitability of the IRT models for the description and use of data originating from personality measures. In this sense, the model fulfills the expectations that IRT has undoubted advantages: (1) The invariance of the estimated parameters, (2) the treatment given to the standard error of measurement, and (3) the possibilities offered for the construction of computerized adaptive tests (CAT). The bank of items shows good reliability. It also shows convergent validity compared to the Eysenck Personality Inventory (EPQ-A; Eysenck & Eysenck, 1975) and the Big Five Questionnaire (BFQ; Caprara, Barbaranelli, & Borgogni, 1993). (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Hogrefe & Huber Publishers GmbH: Germany VL - 23 SN - 1015-5759 (Print) ER - TY - JOUR T1 - Test design optimization in CAT early stage with the nominal response model JF - Applied Psychological Measurement Y1 - 2007 A1 - Passos, V. L. A1 - Berger, M. P. F. A1 - Tan, F. E. 
KW - computerized adaptive testing KW - nominal response model KW - robust performance KW - test design optimization AB - The early stage of computerized adaptive testing (CAT) refers to the phase of the trait estimation during the administration of only a few items. This phase can be characterized by bias and instability of estimation. In this study, an item selection criterion is introduced in an attempt to lessen this instability: the D-optimality criterion. A polytomous unconstrained CAT simulation is carried out to evaluate this criterion's performance under different test premises. The simulation shows that the extent of early stage instability depends primarily on the quality of the item pool information and its size and secondarily on the item selection criteria. The efficiency of the D-optimality criterion is similar to the efficiency of other known item selection criteria. Yet, it often yields estimates that, at the beginning of CAT, display a more robust performance against instability. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Sage Publications: US VL - 31 SN - 0146-6216 (Print) ER - TY - JOUR T1 - Assembling a computerized adaptive testing item pool as a set of linear tests JF - Journal of Educational and Behavioral Statistics Y1 - 2006 A1 - van der Linden, W. J. A1 - Ariel, A. A1 - Veldkamp, B. P. KW - Algorithms KW - computerized adaptive testing KW - item pool KW - linear tests KW - mathematical models KW - statistics KW - Test Construction KW - Test Items AB - Test-item writing efforts typically results in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items with less than optimal information, that violate the content constraints, and/or have unfavorable exposure rates. 
Although at first sight somewhat counterintuitive, it is shown that if the CAT pool is assembled as a set of linear test forms, undesirable correlations can be broken down effectively. It is proposed to assemble such pools using a mixed integer programming model with constraints that guarantee that each test meets all content specifications and an objective function that requires them to have maximal information at a well-chosen set of ability values. An empirical example with a previous master pool from the Law School Admission Test (LSAT) yielded a CAT with nearly uniform bias and mean-squared error functions for the ability estimator and item-exposure rates that satisfied the target for all items in the pool. PB - Sage Publications: US VL - 31 SN - 1076-9986 (Print) ER - TY - JOUR T1 - Comparing methods of assessing differential item functioning in a computerized adaptive testing environment JF - Journal of Educational Measurement Y1 - 2006 A1 - Lei, P-W. A1 - Chen, S-Y. A1 - Yu, L. KW - computerized adaptive testing KW - educational testing KW - item response theory likelihood ratio test KW - logistic regression KW - trait estimation KW - unidirectional & non-unidirectional differential item functioning AB - Mantel-Haenszel and SIBTEST, which have known difficulty in detecting non-unidirectional differential item functioning (DIF), have been adapted with some success for computerized adaptive testing (CAT). This study adapts logistic regression (LR) and the item-response-theory-likelihood-ratio test (IRT-LRT), capable of detecting both unidirectional and non-unidirectional DIF, to the CAT environment in which pretest items are assumed to be seeded in CATs but not used for trait estimation. The proposed adaptation methods were evaluated with simulated data under different sample size ratios and impact conditions in terms of Type I error, power, and specificity in identifying the form of DIF. 
The adapted LR and IRT-LRT procedures are more powerful than the CAT version of SIBTEST for non-unidirectional DIF detection. The good Type I error control provided by IRT-LRT under extremely unequal sample sizes and large impact is encouraging. Implications of these and other findings are discussed. PB - Blackwell Publishing: United Kingdom VL - 43 SN - 0022-0655 (Print) ER - TY - CHAP T1 - Computer-based testing T2 - Handbook of multimethod measurement in psychology Y1 - 2006 A1 - F Drasgow A1 - Chuah, S. C. KW - Adaptive Testing KW - computerized adaptive testing KW - Computer Assisted Testing KW - Experimentation KW - Psychometrics KW - Theories AB - (From the chapter) There has been a proliferation of research designed to explore and exploit opportunities provided by computer-based assessment. This chapter provides an overview of the diverse efforts by researchers in this area. It begins by describing how paper-and-pencil tests can be adapted for administration by computers. Computerization provides the important advantage that items can be selected so they are of appropriate difficulty for each examinee. Some of the psychometric theory needed for computerized adaptive testing is reviewed. Then research on innovative computerized assessments is summarized. These assessments go beyond multiple-choice items by using formats made possible by computerization. Then some hardware and software issues are described, and finally, directions for future work are outlined. (PsycINFO Database Record (c) 2006 APA ) JF - Handbook of multimethod measurement in psychology PB - American Psychological Association CY - Washington D.C. USA VL - xiv N1 - Using Smart Source ParsingHandbook of multimethod measurement in psychology. (pp. 87-100). Washington, DC : American Psychological Association, [URL:http://www.apa.org/books]. 
xiv, 553 pp ER - TY - JOUR T1 - Equating scores from adaptive to linear tests JF - Applied Psychological Measurement Y1 - 2006 A1 - van der Linden, W. J. KW - computerized adaptive testing KW - equipercentile equating KW - local equating KW - score reporting KW - test characteristic function AB - Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test for a population of test takers. The two local methods were generally best. Surprisingly, the TCF method performed slightly worse than the equipercentile method. Both methods showed strong bias and uniformly large inaccuracy, but the TCF method suffered from extra error due to the lower asymptote of the test characteristic function. It is argued that the worse performances of the two methods are a consequence of the fact that they use a single equating transformation for an entire population of test takers and therefore have to compromise between the individual score distributions. PB - Sage Publications: US VL - 30 SN - 0146-6216 (Print) ER - TY - JOUR T1 - Expansion of a physical function item bank and development of an abbreviated form for clinical research JF - Journal of Applied Measurement Y1 - 2006 A1 - Bode, R. K. A1 - Lai, J-S. A1 - Dineen, K. A1 - Heinemann, A. W. A1 - Shevrin, D. A1 - Von Roenn, J. A1 - Cella, D. KW - clinical research KW - computerized adaptive testing KW - performance levels KW - physical function item bank KW - Psychometrics KW - test reliability KW - Test Validity AB - We expanded an existing 33-item physical function (PF) item bank with a sufficient number of items to enable computerized adaptive testing (CAT). 
Ten items were written to expand the bank and the new item pool was administered to 295 people with cancer. For this analysis of the new pool, seven poorly performing items were identified for further examination. This resulted in a bank with items that define an essentially unidimensional PF construct, cover a wide range of that construct, reliably measure the PF of persons with cancer, and distinguish differences in self-reported functional performance levels. We also developed a 5-item (static) assessment form ("BriefPF") that can be used in clinical research to express scores on the same metric as the overall bank. The BriefPF was compared to the PF-10 from the Medical Outcomes Study SF-36. Both short forms significantly differentiated persons across functional performance levels. While the entire bank was more precise across the PF continuum than either short form, there were differences in the area of the continuum in which each short form was more precise: the BriefPF was more precise than the PF-10 at the lower functional levels and the PF-10 was more precise than the BriefPF at the higher levels. Future research on this bank will include the development of a CAT version, the PF-CAT. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Richard M Smith: US VL - 7 SN - 1529-7713 (Print) ER - TY - JOUR T1 - Optimal and nonoptimal computer-based test designs for making pass-fail decisions JF - Applied Measurement in Education Y1 - 2006 A1 - Hambleton, R. K. A1 - Xing, D. KW - adaptive test KW - credentialing exams KW - Decision Making KW - Educational Measurement KW - multistage tests KW - optimal computer-based test designs KW - test form AB - Now that many credentialing exams are being routinely administered by computer, new computer-based test designs, along with item response theory models, are being aggressively researched to identify specific designs that can increase the decision consistency and accuracy of pass-fail decisions. 
The purpose of this study was to investigate the impact of optimal and nonoptimal multistage test (MST) designs, linear parallel-form test designs (LPFT), and computer adaptive test (CAT) designs on the decision consistency and accuracy of pass-fail decisions. Realistic testing situations matching those of one of the large credentialing agencies were simulated to increase the generalizability of the findings. The conclusions were clear: (a) With the LPFTs, matching test information functions (TIFs) to the mean of the proficiency distribution produced slightly better results than matching them to the passing score; (b) all of the test designs worked better than test construction using random selection of items, subject to content constraints only; (c) CAT performed better than the other test designs; and (d) if matching a TIP to the passing score, the MST design produced a bit better results than the LPFT design. If an argument for the MST design is to be made, it can be made on the basis of slight improvements over the LPFT design and better expected item bank utilization, candidate preference, and the potential for improved diagnostic feedback, compared with the feedback that is possible with fixed linear test forms. (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Lawrence Erlbaum: US VL - 19 SN - 0895-7347 (Print); 1532-4818 (Electronic) ER - TY - JOUR T1 - Simulated computerized adaptive test for patients with lumbar spine impairments was efficient and produced valid measures of function JF - Journal of Clinical Epidemiology Y1 - 2006 A1 - Hart, D. L. A1 - Mioduski, J. E. A1 - Werneke, M. W. A1 - Stratford, P. W. 
KW - Back Pain Functional Scale KW - computerized adaptive testing KW - Item Response Theory KW - Lumbar spine KW - Rehabilitation KW - True-score equating AB - Objective: To equate physical functioning (PF) items with Back Pain Functional Scale (BPFS) items, develop a computerized adaptive test (CAT) designed to assess lumbar spine functional status (LFS) in people with lumbar spine impairments, and compare discriminant validity of LFS measures (θIRT) generated using all items analyzed with a rating scale Item Response Theory model (RSM) and measures generated using the simulated CAT (θCAT). Methods: We performed a secondary analysis of retrospective intake rehabilitation data. Results: Unidimensionality and local independence of 25 BPFS and PF items were supported. Differential item functioning was negligible for levels of symptom acuity, gender, age, and surgical history. The RSM fit the data well. A lumbar spine specific CAT was developed that was 72% more efficient than using all 25 items to estimate LFS measures. θIRT and θCAT measures did not discriminate patients by symptom acuity, age, or gender, but discriminated patients by surgical history in similar clinically logical ways. θCAT measures were as precise as θIRT measures. Conclusion: A body part specific simulated CAT developed from an LFS item bank was efficient and produced precise measures of LFS without eroding discriminant validity. VL - 59 ER - TY - JOUR T1 - Simulated computerized adaptive test for patients with shoulder impairments was efficient and produced valid measures of function JF - Journal of Clinical Epidemiology Y1 - 2006 A1 - Hart, D. L. A1 - Cook, K. F. A1 - Mioduski, J. E. A1 - Teal, C. R. A1 - Crane, P. K.
KW - *Computer Simulation KW - *Range of Motion, Articular KW - Activities of Daily Living KW - Adult KW - Aged KW - Aged, 80 and over KW - Factor Analysis, Statistical KW - Female KW - Humans KW - Male KW - Middle Aged KW - Prospective Studies KW - Reproducibility of Results KW - Research Support, N.I.H., Extramural KW - Research Support, U.S. Gov't, Non-P.H.S. KW - Shoulder Dislocation/*physiopathology/psychology/rehabilitation KW - Shoulder Pain/*physiopathology/psychology/rehabilitation KW - Shoulder/*physiopathology KW - Sickness Impact Profile KW - Treatment Outcome AB - BACKGROUND AND OBJECTIVE: To test unidimensionality and local independence of a set of shoulder functional status (SFS) items, develop a computerized adaptive test (CAT) of the items using a rating scale item response theory model (RSM), and compare discriminant validity of measures generated using all items (θIRT) and measures generated using the simulated CAT (θCAT). STUDY DESIGN AND SETTING: We performed a secondary analysis of data collected prospectively during rehabilitation of 400 patients with shoulder impairments who completed 60 SFS items. RESULTS: Factor analytic techniques supported that the 42 SFS items formed a unidimensional scale and were locally independent. Except for five items, which were deleted, the RSM fit the data well. The remaining 37 SFS items were used to generate the CAT. On average, 6 items were needed to estimate precise measures of function using the SFS CAT, compared with all 37 SFS items. The θIRT and θCAT measures were highly correlated (r = .96) and resulted in similar classifications of patients. CONCLUSION: The simulated SFS CAT was efficient and produced precise, clinically relevant measures of functional status with good discriminating ability.
VL - 59 N1 - 0895-4356 (Print); Journal Article; Validation Studies ER - TY - JOUR T1 - Técnicas para detectar patrones de respuesta atípicos [Aberrant patterns detection methods] JF - Anales de Psicología Y1 - 2006 A1 - Núñez, R. M. N. A1 - Pina, J. A. L. KW - aberrant patterns detection KW - Classical Test Theory KW - generalizability theory KW - Item Response KW - Item Response Theory KW - Mathematics KW - methods KW - person-fit KW - Psychometrics KW - psychometry KW - Test Validity KW - test validity analysis KW - Theory AB - Detecting aberrant response patterns is very useful for constructing tests and item banks with sound psychometric properties and for analyzing the validity of tests and items. This review collects the most relevant and recent person-fit methods developed within each of the main areas of psychometrics: Guttman's scalogram, Classical Test Theory (CTT), Generalizability Theory (GT), Item Response Theory (IRT), Nonparametric Item Response Models (NPRM), Order-Restricted Latent Class Models (OR-LCM), and Covariance Structure Analysis (CSA).
VL - 22 SN - 0212-9728 N1 - Spain: Universidad de Murcia ER - TY - CHAP T1 - Applications of item response theory to improve health outcomes assessment: Developing item banks, linking instruments, and computer-adaptive testing T2 - Outcomes assessment in cancer Y1 - 2005 A1 - Hambleton, R. K. ED - C. C. Gotay ED - C. Snyder KW - Computer Assisted Testing KW - Health KW - Item Response Theory KW - Measurement KW - Test Construction KW - Treatment Outcomes AB - (From the chapter) The current chapter builds on Reise's introduction to the basic concepts, assumptions, popular models, and important features of IRT and discusses the applications of item response theory (IRT) modeling to health outcomes assessment. In particular, we highlight the critical role of IRT modeling in: developing an instrument to match a study's population; linking two or more instruments measuring similar constructs on a common metric; and creating item banks that provide the foundation for tailored short-form instruments or for computerized adaptive assessments. (PsycINFO Database Record (c) 2005 APA ) JF - Outcomes assessment in cancer PB - Cambridge University Press CY - Cambridge, UK N1 - Using Smart Source Parsing. Outcomes assessment in cancer: Measures, methods, and applications (pp. 445-464). New York, NY: Cambridge University Press. xiv, 662 pp ER - TY - JOUR T1 - A computer-assisted test design and diagnosis system for use by classroom teachers JF - Journal of Computer Assisted Learning Y1 - 2005 A1 - He, Q. A1 - Tymms, P. KW - Computer Assisted Testing KW - Computer Software KW - Diagnosis KW - Educational Measurement KW - Teachers AB - Computer-assisted assessment (CAA) has become increasingly important in education in recent years. A variety of computer software systems have been developed to help assess the performance of students at various levels.
However, such systems are primarily designed to provide objective assessment of students and analysis of test items, and focus has been mainly placed on higher and further education. Although there are commercial professional systems available for use by primary and secondary educational institutions, such systems are generally expensive and require skilled expertise to operate. In view of the rapid progress made in the use of computer-based assessment for primary and secondary students by education authorities here in the UK and elsewhere, there is a need to develop systems which are economic and easy to use and can provide the necessary information that can help teachers improve students' performance. This paper presents the development of a software system that provides a range of functions including generating items and building item banks, designing tests, conducting tests on computers and analysing test results. Specifically, the system can generate information on the performance of students and test items that can be easily used to identify curriculum areas where students are under performing. A case study based on data collected from five secondary schools in Hong Kong involved in the Curriculum, Evaluation and Management Centre's Middle Years Information System Project, Durham University, UK, has been undertaken to demonstrate the use of the system for diagnostic and performance analysis. (PsycINFO Database Record (c) 2006 APA ) (journal abstract) VL - 21 ER - TY - JOUR T1 - Infeasibility in automated test assembly models: A comparison study of different methods JF - Journal of Educational Measurement Y1 - 2005 A1 - Huitzing, H. A. A1 - Veldkamp, B. P. A1 - Verschoor, A. J. KW - Algorithms KW - Item Content (Test) KW - Models KW - Test Construction AB - Several techniques exist to automatically put together a test meeting a number of specifications. In an item bank, the items are stored with their characteristics. 
A test is constructed by selecting a set of items that fulfills the specifications set by the test assembler. Test assembly problems are often formulated in terms of a model consisting of restrictions and an objective to be maximized or minimized. A problem arises when it is impossible to construct a test from the item pool that meets all specifications, that is, when the model is not feasible. Several methods exist to handle these infeasibility problems. In this article, test assembly models resulting from two practical testing programs were reconstructed to be infeasible. These models were analyzed using methods that forced a solution (Goal Programming, Multiple-Goal Programming, Greedy Heuristic), that analyzed the causes (Relaxed and Ordered Deletion Algorithm (RODA), Integer Randomized Deletion Algorithm (IRDA), Set Covering (SC), and Item Sampling), or that analyzed the causes and used this information to force a solution (Irreducible Infeasible Set-Solver). Specialized methods such as the IRDA and the Irreducible Infeasible Set-Solver performed best. Recommendations about the use of different methods are given. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 42 ER - TY - JOUR T1 - La Validez desde una óptica psicométrica [Validity from a psychometric perspective] JF - Acta Comportamentalia Y1 - 2005 A1 - Muñiz, J. KW - Factor Analysis KW - Measurement KW - Psychometrics KW - Scaling (Testing) KW - Statistical KW - Technology KW - Test Validity AB - The study of validity constitutes the central axis of psychometric analyses of measurement instruments. This paper presents a brief historical sketch of the different ways validity has been conceived over time, comments on current lines of work, and attempts to glimpse possible future directions, taking into account the impact that new computer technologies are having on measurement instruments in psychology and education. Issues such as new multimedia item formats, distance assessment, the intercultural use of tests, the consequences of test use, and computerized adaptive tests demand new ways of evaluating and conceptualizing validity. Some recent proposals concerning the concept of validity are also critically analyzed. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 13 ER - TY - JOUR T1 - Measuring physical function in patients with complex medical and postsurgical conditions: a computer adaptive approach JF - American Journal of Physical Medicine and Rehabilitation Y1 - 2005 A1 - Siebens, H. A1 - Andres, P. L. A1 - Pengsheng, N. A1 - Coster, W. J. A1 - Haley, S. M.
KW - Activities of Daily Living/*classification KW - Adult KW - Aged KW - Cohort Studies KW - Continuity of Patient Care KW - Disability Evaluation KW - Female KW - Health Services Research KW - Humans KW - Male KW - Middle Aged KW - Postoperative Care/*rehabilitation KW - Prognosis KW - Recovery of Function KW - Rehabilitation Centers KW - Rehabilitation/*standards KW - Sensitivity and Specificity KW - Sickness Impact Profile KW - Treatment Outcome AB - OBJECTIVE: To examine whether the range of disability in the medically complex and postsurgical populations receiving rehabilitation is adequately sampled by the new Activity Measure--Post-Acute Care (AM-PAC), and to assess whether computer adaptive testing (CAT) can derive valid patient scores using fewer questions. DESIGN: Observational study of 158 subjects (mean age 67.2 yrs) receiving skilled rehabilitation services in inpatient (acute rehabilitation hospitals, skilled nursing facility units) and community (home health services, outpatient departments) settings for recent-onset or worsening disability from medical (excluding neurological) and surgical (excluding orthopedic) conditions. Measures were interviewer-administered activity questions (all patients) and physical functioning portion of the SF-36 (outpatients) and standardized chart items (11 Functional Independence Measure (FIM), 19 Standardized Outcome and Assessment Information Set (OASIS) items, and 22 Minimum Data Set (MDS) items). Rasch modeling analyzed all data and the relationship between person ability estimates and average item difficulty. CAT assessed the ability to derive accurate patient scores using a sample of questions. RESULTS: The 163-item activity item pool covered the range of physical movement and personal and instrumental activities. CAT analysis showed comparable scores between estimates using 10 items or the total item pool. CONCLUSION: The AM-PAC can assess a broad range of function in patients with complex medical illness. 
CAT achieves valid patient scores using fewer questions. VL - 84 N1 - 0894-9115 (Print); Comparative Study; Journal Article; Research Support, N.I.H., Extramural; Research Support, U.S. Gov't, P.H.S. ER - TY - JOUR T1 - Propiedades psicométricas de un test Adaptativo Informatizado para la medición del ajuste emocional [Psychometric properties of an Emotional Adjustment Computerized Adaptive Test] JF - Psicothema Y1 - 2005 A1 - Aguado, D. A1 - Rubio, V. J. A1 - Hontangas, P. M. A1 - Hernández, J. M. KW - Computer Assisted Testing KW - Emotional Adjustment KW - Item Response KW - Personality Measures KW - Psychometrics KW - Test Validity KW - Theory AB - This paper describes the psychometric properties of a computerized adaptive test (CAT) for measuring emotional adjustment.
A review of the Item Response Theory (IRT) literature indicates that IRT has been used mainly to measure aptitude and ability rather than personality variables, although in recent years several studies have successfully applied IRT to personality assessment instruments. Even so, few studies have examined the properties of an IRT-based computerized adaptive test for measuring a personality trait such as emotional adjustment. Our results show that the CAT assesses emotional adjustment efficiently, providing valid and precise measurement with fewer items than the emotional adjustment scales of well-established instruments. VL - 17 ER - TY - JOUR T1 - A randomized experiment to compare conventional, computerized, and computerized adaptive administration of ordinal polytomous attitude items JF - Applied Psychological Measurement Y1 - 2005 A1 - Hol, A. M. A1 - Vorst, H. C. M. A1 - Mellenbergh, G. J. KW - Computer Assisted Testing KW - Test Administration KW - Test Items AB - A total of 520 high school students were randomly assigned to a paper-and-pencil test (PPT), a computerized standard test (CST), or a computerized adaptive test (CAT) version of the Dutch School Attitude Questionnaire (SAQ), consisting of ordinal polytomous items. The CST administered items in the same order as the PPT. The CAT administered all items of three SAQ subscales in adaptive order using Samejima's graded response model, so that six different stopping rule settings could be applied afterwards. School marks were used as external criteria. Results showed significant but small multivariate administration mode effects on conventional raw scores and small to medium effects on maximum likelihood latent trait estimates. When the precision of CAT latent trait estimates decreased, correlations with grade point average in general decreased.
However, the magnitude of the decrease was not very large as compared to the PPT, the CST, and the CAT without the stopping rule. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 29 ER - TY - JOUR T1 - Test construction for cognitive diagnosis JF - Applied Psychological Measurement Y1 - 2005 A1 - Henson, R. K. A1 - Douglas, J. KW - Cognitive Assessment KW - Item Analysis (Statistical) KW - Profiles (Measurement) KW - Test Construction KW - Test Interpretation KW - Test Items AB - Although cognitive diagnostic models (CDMs) can be useful in the analysis and interpretation of existing tests, little has been developed to specify how one might construct a good test using aspects of the CDMs. This article discusses the derivation of a general CDM index based on Kullback-Leibler information that will serve as a measure of how informative an item is for the classification of examinees. The effectiveness of the index is examined for items calibrated using the deterministic input noisy "and" gate model (DINA) and the reparameterized unified model (RUM) by implementing a simple heuristic to construct a test from an item bank. When compared to randomly constructed tests from the same item bank, the heuristic shows significant improvement in classification rates. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 29 ER - TY - JOUR T1 - Assisted self-adapted testing: A comparative study JF - European Journal of Psychological Assessment Y1 - 2004 A1 - Hontangas, P. A1 - Olea, J. A1 - Ponsoda, V. A1 - Revuelta, J. A1 - Wise, S. L. KW - Adaptive Testing KW - Anxiety KW - Computer Assisted Testing KW - Psychometrics KW - Test AB - A new type of self-adapted test (S-AT), called Assisted Self-Adapted Test (AS-AT), is presented. It differs from an ordinary S-AT in that prior to selecting the difficulty category, the computer advises examinees on their best difficulty category choice, based on their previous performance.
Three tests (computerized adaptive test, AS-AT, and S-AT) were compared regarding both their psychometric (precision and efficiency) and psychological (anxiety) characteristics. Tests were applied in an actual assessment situation, in which test scores determined 20% of term grades. A sample of 173 high school students participated. No differences were found in either posttest anxiety or ability. Concerning precision, AS-AT was as precise as CAT, and both revealed more precision than S-AT. It was concluded that AS-AT acted as a CAT concerning precision. Some hints of, but not conclusive support for, the psychological similarity between AS-AT and S-AT were also found. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 20 ER - TY - JOUR T1 - Computer adaptive testing: a strategy for monitoring stroke rehabilitation across settings JF - Topics in Stroke Rehabilitation Y1 - 2004 A1 - Andres, P. L. A1 - Black-Schaffer, R. M. A1 - Ni, P. A1 - Haley, S. M. KW - *Computer Simulation KW - *User-Computer Interface KW - Adult KW - Aged KW - Aged, 80 and over KW - Cerebrovascular Accident/*rehabilitation KW - Disabled Persons/*classification KW - Female KW - Humans KW - Male KW - Middle Aged KW - Monitoring, Physiologic/methods KW - Severity of Illness Index KW - Task Performance and Analysis AB - Current functional assessment instruments in stroke rehabilitation are often setting-specific and lack precision, breadth, and/or feasibility. Computer adaptive testing (CAT) offers a promising potential solution by providing a quick, yet precise, measure of function that can be used across a broad range of patient abilities and in multiple settings. CAT technology yields a precise score by selecting very few relevant items from a large and diverse item pool based on each individual's responses. We demonstrate the potential usefulness of a CAT assessment model with a cross-sectional sample of persons with stroke from multiple rehabilitation settings.
VL - 11 SN - 1074-9357 (Print) N1 - Andres, Patricia L; Black-Schaffer, Randie M; Ni, Pengsheng; Haley, Stephen M; R01 hd43568/hd/nichd; Evaluation Studies; Research Support, U.S. Gov't, Non-P.H.S.; Research Support, U.S. Gov't, P.H.S.; United States; Topics in Stroke Rehabilitation; Top Stroke Rehabil. 2004 Spring;11(2):33-9. ER - TY - JOUR T1 - Computerized adaptive testing with multiple-form structures JF - Applied Psychological Measurement Y1 - 2004 A1 - Armstrong, R. D. A1 - Jones, D. H. A1 - Koppel, N. B. A1 - Pashley, P. J. KW - computerized adaptive testing KW - Law School Admission Test KW - multiple-form structure KW - testlets AB - A multiple-form structure (MFS) is an ordered collection or network of testlets (i.e., sets of items). An examinee's progression through the network of testlets is dictated by the correctness of an examinee's answers, thereby adapting the test to his or her trait level. The collection of paths through the network yields the set of all possible test forms, allowing test specialists the opportunity to review them before they are administered. Also, limiting the exposure of an individual MFS to a specific period of time can enhance test security. This article provides an overview of methods that have been developed to generate parallel MFSs. The approach is applied to the assembly of an experimental computerized Law School Admission Test (LSAT). (PsycINFO Database Record (c) 2007 APA, all rights reserved) PB - Sage Publications: US VL - 28 SN - 0146-6216 (Print) ER - TY - JOUR T1 - Évaluation et multimédia dans l'apprentissage d'une L2 [Assessment and multimedia in learning an L2] JF - ReCALL Y1 - 2004 A1 - Laurier, M. KW - Adaptive Testing KW - Computer Assisted Instruction KW - Educational KW - Foreign Language Learning KW - Program Evaluation KW - Technology KW - computerized adaptive testing AB - In the first part of this paper different areas where technology may be used for second language assessment are described.
First, item banking operations, which are generally based on Item Response Theory but not necessarily restricted to dichotomously scored items, facilitate assessment task organization and require technological support. Second, technology may help to design more authentic assessment tasks or may be needed in some direct testing situations. Third, the assessment environment may be more adapted and more stimulating when technology is used to give the student more control. The second part of the paper presents different functions of assessment. The monitoring function (often called formative assessment) aims at adapting the classroom activities to students and at providing continuous feedback. Technology may be used to train the teachers in monitoring techniques, to organize data or to produce diagnostic information; electronic portfolios or quizzes that are built into some educational software may also be used for monitoring. The placement function is probably the one in which the application of computer adaptive testing procedures (e.g. French CAPT) is the most appropriate. Automatic scoring devices may also be used for placement purposes. Finally, the certification function requires more valid and more reliable tools. Technology may be used to enhance the testing situation (to make it more authentic) or to facilitate data processing during the construction of a test. Almond et al. (2002) propose a four-component model (Selection, Presentation, Scoring and Response) for designing assessment systems. Each component must be planned taking into account the assessment function. VL - 16 ER - TY - JOUR T1 - Kann die Konfundierung von Konzentrationsleistung und Aktivierung durch adaptives Testen mit dem FAKT vermieden werden? [Avoiding the confounding of concentration performance and activation by adaptive testing with the FACT] JF - Zeitschrift für Differentielle und Diagnostische Psychologie Y1 - 2004 A1 - Frey, A. A1 - Moosbrugger, H.
KW - Adaptive Testing KW - Computer Assisted Testing KW - Concentration KW - Performance KW - Testing KW - computerized adaptive testing AB - The study investigates the effect of computerized adaptive testing strategies on the confounding of concentration performance with activation. A sample of 54 participants was administered 1 out of 3 versions (2 adaptive, 1 non-adaptive) of the computerized Frankfurt Adaptive Concentration Test FACT (Moosbrugger & Heyden, 1997) at three subsequent points in time. During the test administration changes in activation (electrodermal activity) were recorded. The results pinpoint a confounding of concentration performance with activation for the non-adaptive test version, but not for the adaptive test versions (p = .01). Thus, adaptive FACT testing strategies can remove the confounding of concentration performance with activation, thereby increasing the discriminant validity. In conclusion, an attention-focusing hypothesis is formulated to explain the observed effect. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 25 ER - TY - JOUR T1 - Using patterns of summed scores in paper-and-pencil tests and computer-adaptive tests to detect misfitting item score patterns JF - Journal of Educational Measurement Y1 - 2004 A1 - Meijer, R. R. KW - Computer Assisted Testing KW - Item Response Theory KW - Person Fit KW - Test Scores AB - Two new methods have been proposed to determine unexpected sum scores on subtests (testlets) both for paper-and-pencil tests and computer adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted ρ, was compared with a method where the probability for each score combination was calculated using a highest density region (HDR). Furthermore, these methods were compared with the standardized log-likelihood statistic with and without a correction for the estimated latent trait value (denoted as l*_z and l_z, respectively).
Data were simulated on the basis of the one-parameter logistic model, and both parametric and nonparametric logistic regression were used to obtain estimates of the latent trait. Results showed that it is important to take the trait level into account when comparing subtest scores. In a nonparametric item response theory (IRT) context, an adapted version of the HDR method was a powerful alternative to ρ. In a parametric IRT context, results showed that l*_z had the highest power when the data were simulated conditionally on the estimated latent trait level. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 41 ER - TY - CHAP T1 - Assessing question banks T2 - Reusing online resources: A sustainable approach to e-learning Y1 - 2003 A1 - Bull, J. A1 - Dalziel, J. A1 - Vreeland, T. KW - Computer Assisted Testing KW - Curriculum Based Assessment KW - Education KW - Technology KW - computerized adaptive testing AB - In Chapter 14, Joanna Bull and James Dalziel provide a comprehensive treatment of the issues surrounding the use of Question Banks and Computer Assisted Assessment, and provide a number of excellent examples of implementations. In their review of the technologies employed in Computer Assisted Assessment the authors include Computer Adaptive Testing and data generation. The authors reveal significant issues involving the impact of Intellectual Property rights and computer assisted assessment and make important suggestions for strategies to overcome these obstacles. (PsycINFO Database Record (c) 2005 APA ) http://www-jime.open.ac.uk/2003/1/ (journal abstract) JF - Reusing online resources: A sustainable approach to e-learning PB - Kogan Page Ltd. CY - London, UK ER - TY - JOUR T1 - Computerized adaptive rating scales for measuring managerial performance JF - International Journal of Selection and Assessment Y1 - 2003 A1 - Schneider, R. J. A1 - Goff, M. A1 - Anderson, S. A1 - Borman, W. C.
KW - Adaptive Testing KW - Algorithms KW - Associations KW - Citizenship KW - Computer Assisted Testing KW - Construction KW - Contextual KW - Item Response Theory KW - Job Performance KW - Management KW - Management Personnel KW - Rating Scales KW - Test AB - Computerized adaptive rating scales (CARS) had been developed to measure contextual or citizenship performance. This rating format used a paired-comparison protocol, presenting pairs of behavioral statements scaled according to effectiveness levels, and an iterative item response theory algorithm to obtain estimates of ratees' citizenship performance (W. C. Borman et al., 2001). In the present research, we developed CARS to measure the entire managerial performance domain, including task and citizenship performance, thus addressing a major limitation of the earlier CARS. The paper describes this development effort, including an adjustment to the algorithm that reduces substantially the number of item pairs required to obtain almost as much precision in the performance estimates. (PsycINFO Database Record (c) 2005 APA ) VL - 11 ER - TY - JOUR T1 - Optimal stratification of item pools in α-stratified computerized adaptive testing JF - Applied Psychological Measurement Y1 - 2003 A1 - Chang, Hua-Hua A1 - van der Linden, W. J. KW - Adaptive Testing KW - Computer Assisted Testing KW - Item Content (Test) KW - Item Response Theory KW - Mathematical Modeling KW - Test Construction KW - computerized adaptive testing AB - A method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in α-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network flow structure, efficient solutions are possible. The method is applied to a previous item pool from the computerized adaptive testing (CAT) version of the Graduate Record Exams (GRE) Quantitative Test. The results indicate that the new method performs well in practical situations.
It improves item exposure control, reduces the mean squared error in the θ estimates, and increases test reliability. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 27 ER - TY - JOUR T1 - The relationship between item exposure and test overlap in computerized adaptive testing JF - Journal of Educational Measurement Y1 - 2003 A1 - Chen, S-Y. A1 - Ankenmann, R. D. A1 - Spray, J. A. KW - (Statistical) KW - Adaptive Testing KW - Computer Assisted Testing KW - Human Computer KW - Interaction computerized adaptive testing KW - Item Analysis KW - Item Analysis (Test) KW - Test Items AB - The purpose of this article is to present an analytical derivation for the mathematical form of an average between-test overlap index as a function of the item exposure index, for fixed-length computerized adaptive tests (CATs). This algebraic relationship is used to investigate the simultaneous control of item exposure at both the item and test levels. The results indicate that, in fixed-length CATs, control of the average between-test overlap is achieved via the mean and variance of the item exposure rates of the items that constitute the CAT item pool. The mean of the item exposure rates is easily manipulated. Control over the variance of the item exposure rates can be achieved via the maximum item exposure rate (r_max). Therefore, item exposure control methods which implement a specification of r_max (e.g., J. B. Sympson and R. D. Hetter, 1985) provide the most direct control at both the item and test levels. (PsycINFO Database Record (c) 2005 APA ) VL - 40 ER - TY - JOUR T1 - Some alternatives to Sympson-Hetter item-exposure control in computerized adaptive testing JF - Journal of Educational and Behavioral Statistics Y1 - 2003 A1 - van der Linden, W. J. 
KW - Adaptive Testing KW - Computer Assisted Testing KW - Test Items computerized adaptive testing AB - The Hetter and Sympson (1997; 1985) method is a probabilistic item-exposure control method in computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time-consuming, particularly if the parameters have to be set conditional on a realistic set of values for the examinees’ ability parameter. Formal properties of the method are identified that help us explain why this iterative process can be slow and does not guarantee admissibility. In addition, some alternatives to the SH method are introduced. The behavior of these alternatives was estimated for an adaptive test from an item pool from the Law School Admission Test (LSAT). Two of the alternatives showed attractive behavior and converged smoothly to admissibility for all items in a relatively small number of iteration steps. VL - 28 ER - TY - CHAP T1 - Generating abstract reasoning items with cognitive theory T2 - Item generation for test development Y1 - 2002 A1 - Embretson, S. E. ED - P. Kyllonen KW - Cognitive Processes KW - Measurement KW - Reasoning KW - Test Construction KW - Test Items KW - Test Validity KW - Theories AB - (From the chapter) Developed and evaluated a generative system for abstract reasoning items based on cognitive theory. The cognitive design system approach was applied to generate matrix completion problems. Study 1 involved developing the cognitive theory with 191 college students who were administered Set I and Set II of the Advanced Progressive Matrices. Study 2 examined item generation by cognitive theory. Study 3 explored the psychometric properties and construct representation of abstract reasoning test items with 728 young adults. Five structurally equivalent forms of Abstract Reasoning Test (ART) items were prepared from the generated item bank and administered to the Ss. 
In Study 4, the nomothetic span of construct validity of the generated items was examined with 728 young adults who were administered ART items, and 217 young adults who were administered ART items and the Advanced Progressive Matrices. Results indicate the matrix completion items were effectively generated by the cognitive design system approach. (PsycINFO Database Record (c) 2005 APA ) JF - Item generation for test development PB - Lawrence Erlbaum Associates, Inc. CY - Mahwah, N.J. USA N1 - Item generation for test development (pp. 219-250). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers. xxxii, 412 pp ER - TY - JOUR T1 - Hypergeometric family and item overlap rates in computerized adaptive testing JF - Psychometrika Y1 - 2002 A1 - Chang, Hua-Hua A1 - Zhang, J. KW - Adaptive Testing KW - Algorithms KW - Computer Assisted Testing KW - Taking KW - Test KW - Time On Task computerized adaptive testing AB - A computerized adaptive test (CAT) is usually administered to small groups of examinees at frequent time intervals. It is often the case that examinees who take the test earlier share information with examinees who will take the test later, thus increasing the risk that many items may become known. Item overlap rate for a group of examinees refers to the number of overlapping items encountered by these examinees divided by the test length. For a specific item pool, different item selection algorithms may yield different item overlap rates. An important issue in designing a good CAT item selection algorithm is to keep the item overlap rate below a preset level. In doing so, it is important to investigate what the lowest rate could be for all possible item selection algorithms. In this paper we rigorously prove that if every item has an equal probability of being selected from the pool in a fixed-length CAT, the number of overlapping items among any α randomly sampled examinees follows the hypergeometric distribution family for α ≥ 1. 
Thus, the expected values of the number of overlapping items among any α randomly sampled examinees can be calculated precisely. These values may serve as benchmarks in controlling item overlap rates for fixed-length adaptive tests. (PsycINFO Database Record (c) 2005 APA ) VL - 67 ER - TY - JOUR T1 - Information technology and literacy assessment JF - Reading and Writing Quarterly Y1 - 2002 A1 - Balajthy, E. KW - Computer Applications KW - Computer Assisted Testing KW - Information KW - Internet KW - Literacy KW - Models KW - Systems KW - Technology AB - This column discusses information technology and literacy assessment in the past and present. The author also describes computer-based assessments today, including the following topics: computer-scored testing, computer-administered formal assessment, Internet formal assessment, computerized adaptive tests, placement tests, informal assessment, electronic portfolios, information management, and Internet information dissemination. A model of the major present-day applications of information technologies in reading and literacy assessment is also included. (PsycINFO Database Record (c) 2005 APA ) VL - 18 ER - TY - RPRT T1 - Mathematical-programming approaches to test item pool design Y1 - 2002 A1 - Veldkamp, B. P. A1 - van der Linden, W. J. A1 - Ariel, A. KW - Adaptive Testing KW - Computer Assisted KW - Computer Programming KW - Educational Measurement KW - Item Response Theory KW - Mathematics KW - Psychometrics KW - Statistical Rotation computerized adaptive testing KW - Test Items KW - Testing AB - (From the chapter) This paper presents an approach to item pool design that has the potential to improve the quality of current item pools in educational and psychological testing and hence to increase both measurement precision and validity. The approach consists of the application of mathematical programming techniques to calculate optimal blueprints for item pools. 
These blueprints can be used to guide the item-writing process. Three different types of design problems are discussed, namely for item pools for linear tests, item pools for computerized adaptive testing (CAT), and systems of rotating item pools for CAT. The paper concludes with an empirical example of the problem of designing a system of rotating item pools for CAT. PB - University of Twente, Faculty of Educational Science and Technology CY - Twente, The Netherlands SN - 02-09 N1 - Advances in psychology research (Hauppauge, NY: Nova Science Publishers, Inc., [URL:http://www.Novapublishers.com]). vi, 228 pp ER - TY - JOUR T1 - Outlier detection in high-stakes certification testing JF - Journal of Educational Measurement Y1 - 2002 A1 - Meijer, R. R. KW - Adaptive Testing KW - computerized adaptive testing KW - Educational Measurement KW - Goodness of Fit KW - Item Analysis (Statistical) KW - Item Response Theory KW - person Fit KW - Statistical Estimation KW - Statistical Power KW - Test Scores AB - Discusses recent developments of person-fit analysis in computerized adaptive testing (CAT). Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in CAT. Most person-fit research in CAT is restricted to simulated data. In this study, empirical data from a certification test were used. Alternatives are discussed to generate norms so that bounds can be determined to classify an item score pattern as fitting or misfitting. Using bounds determined from a sample of a high-stakes certification test, the empirical analysis showed that different types of misfit can be distinguished. Further applications using statistical process control methods to detect misfitting item score patterns are discussed. 
(PsycINFO Database Record (c) 2005 APA ) VL - 39 ER - TY - JOUR T1 - A structure-based approach to psychological measurement: Matching measurement models to latent structure JF - Assessment Y1 - 2002 A1 - Ruscio, John A1 - Ruscio, Ayelet Meron KW - Adaptive Testing KW - Assessment KW - Classification (Cognitive Process) KW - Computer Assisted KW - Item Response Theory KW - Psychological KW - Scaling (Testing) KW - Statistical Analysis computerized adaptive testing KW - Taxonomies KW - Testing AB - The present article sets forth the argument that psychological assessment should be based on a construct's latent structure. The authors differentiate dimensional (continuous) and taxonic (categorical) structures at the latent and manifest levels and describe the advantages of matching the assessment approach to the latent structure of a construct. A proper match will decrease measurement error, increase statistical power, clarify statistical relationships, and facilitate the location of an efficient cutting score when applicable. Thus, individuals will be placed along a continuum or assigned to classes more accurately. The authors briefly review the methods by which latent structure can be determined and outline a structure-based approach to assessment that builds on dimensional scaling models, such as item response theory, while incorporating classification methods as appropriate. Finally, the authors empirically demonstrate the utility of their approach and discuss its compatibility with traditional assessment methods and with computerized adaptive testing. (PsycINFO Database Record (c) 2005 APA ) (journal abstract) VL - 9 ER - TY - JOUR T1 - Computerized adaptive testing with the generalized graded unfolding model JF - Applied Psychological Measurement Y1 - 2001 A1 - Roberts, J. S. A1 - Lin, Y. A1 - Laughlin, J. E. 
KW - Attitude Measurement KW - College Students computerized adaptive testing KW - Computer Assisted Testing KW - Item Response KW - Models KW - Statistical Estimation KW - Theory AB - Examined the use of the generalized graded unfolding model (GGUM) in computerized adaptive testing. The objective was to minimize the number of items required to produce equiprecise estimates of person locations. Simulations based on real data about college student attitudes toward abortion and on data generated to fit the GGUM were used. It was found that as few as 7 or 8 items were needed to produce accurate and precise person estimates using an expected a posteriori procedure. The number of items in the item bank (20, 40, or 60 items) and their distribution on the continuum (uniform locations or item clusters in moderately extreme locations) had only small effects on the accuracy and precision of the estimates. These results suggest that adaptive testing with the GGUM is a good method for achieving estimates with an approximately uniform level of precision using a small number of items. (PsycINFO Database Record (c) 2005 APA ) VL - 25 ER - TY - JOUR T1 - Differences between self-adapted and computerized adaptive tests: A meta-analysis JF - Journal of Educational Measurement Y1 - 2001 A1 - Pitkin, A. K. A1 - Vispoel, W. P. KW - Adaptive Testing KW - Computer Assisted Testing KW - Scores computerized adaptive testing KW - Test KW - Test Anxiety AB - Self-adapted testing has been described as a variation of computerized adaptive testing that reduces test anxiety and thereby enhances test performance. The purpose of this study was to gain a better understanding of these proposed effects of self-adapted tests (SATs); meta-analysis procedures were used to estimate differences between SATs and computerized adaptive tests (CATs) in proficiency estimates and post-test anxiety levels across studies in which these two types of tests have been compared. 
After controlling for measurement error, the results showed that SATs yielded proficiency estimates that were 0.12 standard deviation units higher and post-test anxiety levels that were 0.19 standard deviation units lower than those yielded by CATs. The authors speculate about possible reasons for these differences and discuss advantages and disadvantages of using SATs in operational settings. (PsycINFO Database Record (c) 2005 APA ) VL - 38 ER - TY - JOUR T1 - Evaluation of an MMPI-A short form: Implications for adaptive testing JF - Journal of Personality Assessment Y1 - 2001 A1 - Archer, R. P. A1 - Tirrell, C. A. A1 - Elkins, D. E. KW - Adaptive Testing KW - Mean KW - Minnesota Multiphasic Personality Inventory KW - Psychometrics KW - Statistical Correlation KW - Statistical Samples KW - Test Forms AB - Reports some psychometric properties of an MMPI-Adolescent version (MMPI-A; J. N. Butcher et al, 1992) short form based on administration of the first 150 items of this test instrument. The authors report results for both the MMPI-A normative sample of 1,620 adolescents (aged 14-18 yrs) and a clinical sample of 565 adolescents (mean age 15.2 yrs) in a variety of treatment settings. The authors summarize results for the MMPI-A basic scales in terms of Pearson product-moment correlations generated between full administration and short-form administration formats and mean T score elevations for the basic scales generated by each approach. In this investigation, the authors also examine single-scale and 2-point congruences found for the MMPI-A basic clinical scales as derived from standard and short-form administrations. The authors present the relative strengths and weaknesses of the MMPI-A short form and discuss the findings in terms of implications for attempts to shorten the item pool through the use of computerized adaptive assessment approaches. 
(PsycINFO Database Record (c) 2005 APA ) VL - 76 ER - TY - JOUR T1 - An examination of the comparative reliability, validity, and accuracy of performance ratings made using computerized adaptive rating scales JF - Journal of Applied Psychology Y1 - 2001 A1 - Borman, W. C. A1 - Buck, D. E. A1 - Hanson, M. A. A1 - Motowidlo, S. J. A1 - Stark, S. A1 - Drasgow, F. KW - *Computer Simulation KW - *Employee Performance Appraisal KW - *Personnel Selection KW - Adult KW - Automatic Data Processing KW - Female KW - Human KW - Male KW - Reproducibility of Results KW - Sensitivity and Specificity KW - Support, U.S. Gov't, Non-P.H.S. KW - Task Performance and Analysis KW - Video Recording AB - This laboratory research compared the reliability, validity, and accuracy of a computerized adaptive rating scale (CARS) format and 2 relatively common and representative rating formats. The CARS is a paired-comparison rating task that uses adaptive testing principles to present pairs of scaled behavioral statements to the rater to iteratively estimate a ratee's effectiveness on 3 dimensions of contextual performance. Videotaped vignettes of 6 office workers were prepared, depicting prescripted levels of contextual performance, and 112 subjects rated these vignettes using the CARS format and one or the other competing format. Results showed 23%-37% lower standard errors of measurement for the CARS format. In addition, validity was significantly higher for the CARS format (d = .18), and Cronbach's accuracy coefficients showed significantly higher accuracy, with a median effect size of .08. The discussion focuses on possible reasons for the results. VL - 86 N1 - Journal Article; Validation Studies ER - TY - CHAP T1 - Item response theory applied to combinations of multiple-choice and constructed-response items--approximation methods for scale scores T2 - Test scoring Y1 - 2001 A1 - Thissen, D. A1 - Nelson, L. A. A1 - Swygert, K. 
KW - Adaptive Testing KW - Item Response Theory KW - Multiple Choice (Testing Method) KW - Scoring (Testing) KW - Statistical Estimation KW - Statistical Weighting KW - Test Items KW - Test Scores AB - (From the chapter) The authors develop approximate methods that replace the scoring tables with weighted linear combinations of the component scores. Topics discussed include: a linear approximation for the extension to combinations of scores; the generalization of two or more scores; potential applications of linear approximations to item response theory in computerized adaptive tests; and evaluation of the pattern-of-summed-scores, and Gaussian approximation, estimates of proficiency. (PsycINFO Database Record (c) 2005 APA ) JF - Test scoring PB - Lawrence Erlbaum Associates CY - Mahwah, N.J. USA N1 - Test scoring (pp. 293-341). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers. xii, 422 pp ER - TY - JOUR T1 - Item selection in computerized adaptive testing: Should more discriminating items be used first? JF - Journal of Educational Measurement Y1 - 2001 A1 - Hau, Kit-Tai A1 - Chang, Hua-Hua KW - ability KW - Adaptive Testing KW - Computer Assisted Testing KW - Estimation KW - Statistical KW - Test Items computerized adaptive testing AB - During computerized adaptive testing (CAT), items are selected continuously according to the test-taker's estimated ability. Test security has become a problem because high-discrimination items are more likely to be selected and become overexposed. So, there seems to be a tradeoff between high efficiency in ability estimations and balanced usage of items. This series of four studies addressed the dilemma by focusing on the notion of whether more or less discriminating items should be used first in CAT. The first study demonstrated that the common maximum information method with J. B. Sympson and R. D. Hetter (1985) control resulted in the use of more discriminating items first. 
The remaining studies showed that using items in the reverse order, as described in H. Chang and Z. Ying's (1999) stratified method, had potential advantages: (a) a more balanced item usage and (b) a relatively stable resultant item pool structure with easy and inexpensive management. This stratified method may have ability-estimation efficiency better than or close to that of other methods. It is argued that the judicious selection of items, as in the stratified method, is a more active control of item exposure. (PsycINFO Database Record (c) 2005 APA ) VL - 38 ER - TY - JOUR T1 - Outlier measures and norming methods for computerized adaptive tests JF - Journal of Educational and Behavioral Statistics Y1 - 2001 A1 - Bradlow, E. T. A1 - Weiss, R. E. KW - Adaptive Testing KW - Computer Assisted Testing KW - Statistical Analysis KW - Test Norms AB - Notes that the problem of identifying outliers has 2 important aspects: the choice of outlier measures and the method to assess the degree of outlyingness (norming) of those measures. Several classes of measures for identifying outliers in Computerized Adaptive Tests (CATs) are introduced. Some of these measures are constructed to take advantage of CATs' sequential choice of items; other measures are taken directly from paper and pencil (P&P) tests and are used for baseline comparisons. Assessing the degree of outlyingness of CAT responses, however, cannot be applied directly from P&P tests because stopping rules associated with CATs yield examinee responses of varying lengths. Standard outlier measures are highly correlated with the varying lengths, which makes comparison across examinees impossible. Therefore, 4 methods are presented and compared which map outlier statistics to a familiar probability scale (a p value). The methods are explored in the context of CAT data from a 1995 Nationally Administered Computerized Examination (NACE). 
(PsycINFO Database Record (c) 2005 APA ) VL - 26 ER - TY - CHAP T1 - Practical issues in setting standards on computerized adaptive tests T2 - Setting performance standards: Concepts, methods, and perspectives Y1 - 2001 A1 - Sireci, S. G. A1 - Clauser, B. E. KW - Adaptive Testing KW - Computer Assisted Testing KW - Performance Tests KW - Testing Methods AB - (From the chapter) Examples of setting standards on computerized adaptive tests (CATs) are hard to find. Some examples of CATs involving performance standards include the registered nurse exam and the Novell systems engineer exam. Although CATs do not require separate standard-setting methods, there are special issues to be addressed by test specialists who set performance standards on CATs. Setting standards on a CAT will typically require modifications of the procedures used with more traditional, fixed-form, paper-and-pencil examinations. The purpose of this chapter is to illustrate why CATs pose special challenges to the standard setter. (PsycINFO Database Record (c) 2005 APA ) JF - Setting performance standards: Concepts, methods, and perspectives PB - Lawrence Erlbaum Associates, Inc. CY - Mahwah, N.J. USA N1 - Setting performance standards: Concepts, methods, and perspectives (pp. 355-369). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers. xiii, 510 pp ER - TY - JOUR T1 - Toepassing van een computergestuurde adaptieve testprocedure op persoonlijkheidsdata [Application of a computerised adaptive test procedure on personality data] JF - Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden Y1 - 2001 A1 - Hol, A. M. A1 - Vorst, H. C. M. A1 - Mellenbergh, G. J. 
KW - Adaptive Testing KW - Computer Applications KW - Computer Assisted Testing KW - Personality Measures KW - Test Reliability computerized adaptive testing AB - Studied the applicability of a computerized adaptive testing procedure to an existing personality questionnaire within the framework of item response theory. The procedure was applied to the scores of 1,143 male and female university students (mean age 21.8 yrs) in the Netherlands on the Neuroticism scale of the Amsterdam Biographical Questionnaire (G. J. Wilde, 1963). The graded response model (F. Samejima, 1969) was used. The quality of the adaptive test scores was measured based on their correlation with test scores for the entire item bank and on their correlation with scores on other scales from the personality test. The results indicate that computerized adaptive testing can be applied to personality scales. (PsycINFO Database Record (c) 2005 APA ) VL - 56 ER - TY - JOUR T1 - The development of a computerized version of Vandenberg's mental rotation test and the effect of visuo-spatial working memory loading JF - Dissertation Abstracts International Section A: Humanities and Social Sciences Y1 - 2000 A1 - Strong, S. D. KW - Computer Assisted Testing KW - Mental Rotation KW - Short Term Memory computerized adaptive testing KW - Test Construction KW - Test Validity KW - Visuospatial Memory AB - This dissertation focused on the generation and evaluation of web-based versions of Vandenberg's Mental Rotation Test. Memory and spatial visualization theory were explored in relation to the addition of a visuo-spatial working memory component. Analysis of the data determined that there was a significant difference between scores on the MRT Computer and MRT Memory test. The addition of a visuo-spatial working memory component did significantly affect results at the .05 alpha level. Reliability and discrimination estimates were higher on the MRT Memory version. 
The computerization of the paper and pencil version of the MRT did not significantly affect scores but did affect the time required to complete the test. The population utilized in the quasi-experiment consisted of 107 university students from eight institutions in engineering graphics related courses. The subjects completed two researcher-developed, Web-based versions of Vandenberg's Mental Rotation Test and the original paper and pencil version of the Mental Rotation Test. One version of the test included a visuo-spatial working memory loading. Significant contributions of this study included developing and evaluating computerized versions of Vandenberg's Mental Rotation Test. Previous versions of Vandenberg's Mental Rotation Test did not take advantage of the ability of the computer to incorporate an interaction factor, such as a visuo-spatial working memory loading, into the test. The addition of an interaction factor results in a more discriminating test, which will lend itself well to computerized adaptive testing practices. Educators in engineering graphics related disciplines should strongly consider the use of spatial visualization tests to aid in establishing the effects of modern computer systems on fundamental design/drafting skills. Regular testing of spatial visualization skills will assist in the creation of a more relevant curriculum. Computerized tests which are valid and reliable will assist in making this task feasible. (PsycINFO Database Record (c) 2005 APA ) VL - 60 ER - TY - JOUR T1 - Emergence of item response modeling in instrument development and data analysis JF - Medical Care Y1 - 2000 A1 - Hambleton, R. K. 
KW - Computer Assisted Testing KW - Health KW - Item Response Theory KW - Measurement KW - Statistical Validity computerized adaptive testing KW - Test Construction KW - Treatment Outcomes VL - 38 ER - TY - JOUR T1 - An examination of the reliability and validity of performance ratings made using computerized adaptive rating scales JF - Dissertation Abstracts International: Section B: The Sciences and Engineering Y1 - 2000 A1 - Buck, D. E. KW - Adaptive Testing KW - Computer Assisted Testing KW - Performance Tests KW - Rating Scales KW - Reliability KW - Test KW - Test Validity AB - This study compared the psychometric properties of performance ratings made using recently-developed computerized adaptive rating scales (CARS) to the psychometric properties of ratings made using more traditional paper-and-pencil rating formats, i.e., behaviorally-anchored and graphic rating scales. Specifically, the reliability, validity and accuracy of the performance ratings from each format were examined. One hundred twelve participants viewed six 5-minute videotapes of office situations and rated the performance of a target person in each videotape on three contextual performance dimensions-Personal Support, Organizational Support, and Conscientious Initiative-using CARS and either behaviorally-anchored or graphic rating scales. Performance rating properties were measured using Shrout and Fleiss's intraclass correlation (2, 1), Borman's differential accuracy measure, and Cronbach's accuracy components as indexes of rating reliability, validity, and accuracy, respectively. Results found that performance ratings made using the CARS were significantly more reliable and valid than performance ratings made using either of the other formats. Additionally, CARS yielded more accurate performance ratings than the paper-and-pencil formats. 
The nature of the CARS system (i.e., its adaptive nature and scaling methodology) and its paired comparison judgment task are offered as possible reasons for the differences found in the psychometric properties of the performance ratings made using the various rating formats. (PsycINFO Database Record (c) 2005 APA ) VL - 61 ER - TY - JOUR T1 - An integer programming approach to item bank design JF - Applied Psychological Measurement Y1 - 2000 A1 - van der Linden, W. J. A1 - Veldkamp, B. P. A1 - Reese, L. M. KW - Aptitude Measures KW - Item Analysis (Test) KW - Item Response Theory KW - Test Construction KW - Test Items AB - An integer programming approach to item bank design is presented that can be used to calculate an optimal blueprint for an item bank, in order to support an existing testing program. The results are optimal in that they minimize the effort involved in producing the items as revealed by current item writing patterns. Also presented is an adaptation of the models, which can be used as a set of monitoring tools in item bank management. The approach is demonstrated empirically for an item bank that was designed for the Law School Admission Test. VL - 24 ER -