00460nas a2200097 4500008004900000245009300049210006900142100001700211700001600228856011800244 In Press English 00aOptimizing cognitive ability measurement with multidimensional computer adaptive testing0 aOptimizing cognitive ability measurement with multidimensional c1 aMakransky, G1 aGlas, C A W uhttp://mail.iacat.org/content/optimizing-cognitive-ability-measurement-multidimensional-computer-adaptive-testing01382nas a2200133 4500008003900000245010400039210006900143300001200212490000700224520093300231100001701164700002201181856004501203 2020 d00aThe Optimal Item Pool Design in Multistage Computerized Adaptive Tests With the p-Optimality Method0 aOptimal Item Pool Design in Multistage Computerized Adaptive Tes a955-9740 v803 aThe present study extended the p-optimality method to the multistage computerized adaptive test (MST) context in developing optimal item pools to support different MST panel designs under different test configurations. Using the Rasch model, simulated optimal item pools were generated with and without practical constraints of exposure control. A total of 72 simulated optimal item pools were generated and evaluated by an overall sample and conditional sample using various statistical measures. Results showed that the optimal item pools built with the p-optimality method provide sufficient measurement accuracy under all simulated MST panel designs. Exposure control affected the item pool size, but not the item distributions and item pool characteristics. This study demonstrated that the p-optimality method can adapt to MST item pool design, facilitate the MST assembly process, and improve its scoring accuracy.1 aYang, Lihong1 aReckase, Mark, D. 
uhttps://doi.org/10.1177/001316441990129201848nas a2200145 4500008003900000245011000039210006900149300001200218490000700230520135600237100001801593700001301611700001501624856006301639 2018 d00aOn-the-Fly Constraint-Controlled Assembly Methods for Multistage Adaptive Testing for Cognitive Diagnosis0 aOntheFly ConstraintControlled Assembly Methods for Multistage Ad a595-6130 v553 aThis study applied the mode of on-the-fly assembled multistage adaptive testing to cognitive diagnosis (CD-OMST). Several module assembly methods for CD-OMST were proposed and compared in terms of measurement precision, test security, and constraint management. The module assembly methods in the study included the maximum priority index method (MPI), the revised maximum priority index (RMPI), the weighted deviation model (WDM), and the two revised Monte Carlo methods (R1-MC, R2-MC). Simulation results showed that on the whole the CD-OMST performs well in that it not only has acceptable attribute pattern correct classification rates but also satisfies both statistical and nonstatistical constraints; the RMPI method was generally better than the MPI method, the R2-MC method was generally better than the R1-MC method, and the two revised Monte Carlo methods performed best in terms of test security and constraint management, whereas the RMPI and WDM methods worked best in terms of measurement precision. 
The study is expected not only to provide information about how to combine MST and CD using an on-the-fly method and how these assembly methods in CD-OMST perform relative to each other, but also to offer guidance for practitioners to assemble modules in CD-OMST with both statistical and nonstatistical constraints.1 aLiu, Shuchang1 aCai, Yan1 aTu, Dongbo uhttps://onlinelibrary.wiley.com/doi/abs/10.1111/jedm.1219401720nas a2200121 4500008003900000245008600039210006900125300001200194490000700206520131800213100001401531856005301545 2016 d00aOnline Calibration of Polytomous Items Under the Generalized Partial Credit Model0 aOnline Calibration of Polytomous Items Under the Generalized Par a434-4500 v403 aOnline calibration is a technology-enhanced architecture for item calibration in computerized adaptive tests (CATs). Many CATs are administered continuously over a long term and rely on large item banks. To ensure test validity, these item banks need to be frequently replenished with new items, and these new items need to be pretested before being used operationally. Online calibration dynamically embeds pretest items in operational tests and calibrates their parameters as response data are gradually obtained through the continuous test administration. This study extends existing formulas, procedures, and algorithms for dichotomous item response theory models to the generalized partial credit model, a popular model for items scored in more than two categories. A simulation study was conducted to investigate the developed algorithms and procedures under a variety of conditions, including two estimation algorithms, three pretest item selection methods, three seeding locations, two numbers of score categories, and three calibration sample sizes. Results demonstrated acceptable estimation accuracy of the two estimation algorithms in some of the simulated conditions. 
A variety of findings were also revealed for the interaction effects of the included factors, and corresponding recommendations were made.1 aZheng, Yi uhttp://apm.sagepub.com/content/40/6/434.abstract01483nas a2200157 4500008003900000245004600039210004600085300001200131490000700143520104600150100001901196700002601215700001201241700001901253856005301272 2016 d00aOptimal Reassembly of Shadow Tests in CAT0 aOptimal Reassembly of Shadow Tests in CAT a469-4850 v403 aEven in the age of abundant and fast computing resources, concurrency requirements for large-scale online testing programs still put an uninterrupted delivery of computer-adaptive tests at risk. In this study, to increase the concurrency for operational programs that use the shadow-test approach to adaptive testing, we explored various strategies aimed at reducing the number of reassembled shadow tests without compromising the measurement quality. Strategies requiring fixed intervals between reassemblies, a certain minimal change in the interim ability estimate since the last assembly before triggering a reassembly, and a hybrid of the two strategies yielded substantial reductions in the number of reassemblies without degradation in the measurement accuracy. The strategies effectively prevented unnecessary reassemblies due to adapting to the noise in the early test stages. They also highlighted the practicality of the shadow-test approach by minimizing the computational load involved in its use of mixed-integer programming.1 aChoi, Seung, W1 aMoellering, Karin, T.1 aLi, Jie1 aLinden, Wim, J uhttp://apm.sagepub.com/content/40/7/469.abstract01322nas a2200145 4500008003900000245005100039210004900090300000900139490000700148520091500155100001801070700001801088700001901106856005101125 2015 d00aOnline Item Calibration for Q-Matrix in CD-CAT0 aOnline Item Calibration for QMatrix in CDCAT a5-150 v393 a
Item replenishment is important for maintaining a large-scale item bank. In this article, the authors consider calibrating new items based on pre-calibrated operational items under the deterministic inputs, noisy-and-gate (DINA) model, the specification of which includes the so-called Q-matrix, as well as the slipping and guessing parameters. Making use of the maximum likelihood and Bayesian estimators for the latent knowledge states, the authors propose two methods for the calibration. These methods are applicable to both traditional paper–pencil–based tests, for which the selection of operational items is prefixed, and computerized adaptive tests, for which the selection of operational items is sequential and random. Extensive simulations are done to assess and to compare the performance of these approaches. Extensions to other diagnostic classification models are also discussed.
1 aChen, Yunxiao1 aLiu, Jingchen1 aYing, Zhiliang uhttp://apm.sagepub.com/content/39/1/5.abstract01517nas a2200133 4500008003900000245005300039210005100092300001200143490000700155520113500162100001401297700001901311856005301330 2015 d00aOn-the-Fly Assembled Multistage Adaptive Testing0 aOntheFly Assembled Multistage Adaptive Testing a104-1180 v393 aRecently, multistage testing (MST) has been adopted by several important large-scale testing programs and become popular among practitioners and researchers. Stemming from the decades of history of computerized adaptive testing (CAT), the rapidly growing MST alleviates several major problems of earlier CAT applications. Nevertheless, MST is only one among all possible solutions to these problems. This article presents a new adaptive testing design, “on-the-fly assembled multistage adaptive testing” (OMST), which combines the benefits of CAT and MST and offsets their limitations. Moreover, OMST also provides some unique advantages over both CAT and MST. A simulation study was conducted to compare OMST with MST and CAT, and the results demonstrated the promising features of OMST. Finally, the “Discussion” section provides suggestions on possible future adaptive testing designs based on the OMST framework, which could provide great flexibility for adaptive tests in the digital future and open an avenue for all types of hybrid designs based on the different needs of specific tests.
1 aZheng, Yi1 aChang, Hua-Hua uhttp://apm.sagepub.com/content/39/2/104.abstract01104nas a2200169 4500008004100000245006600041210006600107260001200173520055300185653002600738653000800764653002100772653001700793653001000810100002200820856009200842 2011 eng d00aOptimal Calibration Designs for Computerized Adaptive Testing0 aOptimal Calibration Designs for Computerized Adaptive Testing c10/20113 aOptimization
How can we exploit the advantages of Balanced Block Design while keeping the logistics manageable?
Homogeneous Designs: Overlap between test booklets as regular as possible
Conclusions:
Computerized adaptive tests (CATs) are individualized tests that, from a measurement point of view, are optimal for each individual, possibly under some practical conditions. In the present study, it is shown that maximum information item selection in CATs using an item bank that is calibrated with the one- or the two-parameter logistic model results in each individual answering about 50% of the items correctly. Two item selection procedures giving easier (or more difficult) tests for students are presented and evaluated. Item selection on probability points of items yields good results only with the one-parameter logistic model and not with the two-parameter logistic model. An alternative selection procedure, based on maximum information at a shifted ability level, gives satisfactory results with both models. Index terms: computerized adaptive testing, item selection, item response theory
1 aEggen, Theo1 aVerschoor, Angela, J uhttp://apm.sagepub.com/content/30/5/379.abstract01622nas a2200217 4500008004100000020002200041245008200063210006900145260002600214300001200240490000700252520088700259653002801146653002501174653002501199653001901224653001601243100001601259700002501275856010401300 2006 eng d a0146-6216 (Print)00aOptimal testing with easy or difficult items in computerized adaptive testing0 aOptimal testing with easy or difficult items in computerized ada bSage Publications: US a379-3930 v303 aComputerized adaptive tests (CATs) are individualized tests that, from a measurement point of view, are optimal for each individual, possibly under some practical conditions. In the present study, it is shown that maximum information item selection in CATs using an item bank that is calibrated with the one- or the two-parameter logistic model results in each individual answering about 50% of the items correctly. Two item selection procedures giving easier (or more difficult) tests for students are presented and evaluated. Item selection on probability points of items yields good results only with the one-parameter logistic model and not with the two-parameter logistic model. An alternative selection procedure, based on maximum information at a shifted ability level, gives satisfactory results with both models. 
(PsycINFO Database Record (c) 2007 APA, all rights reserved)10acomputer adaptive tests10aindividualized tests10aItem Response Theory10aitem selection10aMeasurement1 aEggen, Theo1 aVerschoor, Angela, J uhttp://mail.iacat.org/content/optimal-testing-easy-or-difficult-items-computerized-adaptive-testing01482nas a2200145 4500008003900000245006500039210006500104300001200169490000700181520102700188100002001215700002501235700002301260856005301283 2006 d00aOptimal Testlet Pool Assembly for Multistage Testing Designs0 aOptimal Testlet Pool Assembly for Multistage Testing Designs a204-2150 v303 aComputerized multistage testing (MST) designs require sets of test questions (testlets) to be assembled to meet strict, often competing criteria. Rules that govern testlet assembly may dictate the number of questions on a particular subject or may describe desirable statistical properties for the test, such as measurement precision. In an MST design, testlets of differing difficulty levels must be created. Statistical properties for assembly of the testlets can be expressed using item response theory (IRT) parameters. The testlet test information function (TIF) value can be maximized at a specific point on the IRT ability scale. In practical MST designs, parallel versions of testlets are needed, so sets of testlets with equivalent properties are built according to equivalent specifications. In this project, the authors study the use of a mathematical programming technique to simultaneously assemble testlets to ensure equivalence and fairness to candidates who may be administered different testlets.
1 aAriel, Adelaide1 aVeldkamp, Bernard, P1 aBreithaupt, Krista uhttp://apm.sagepub.com/content/30/3/204.abstract02517nas a2200265 4500008004100000020004100041245013200082210006900214250001500283260000800298300001100306490000700317520154000324653003101864653003701895653003301932653002401965653001101989653002402000653002702024653003302051653003002084100001602114856012102130 2006 eng d a0025-7079 (Print)0025-7079 (Linking)00aOverview of quantitative measurement methods. Equivalence, invariance, and differential item functioning in health applications0 aOverview of quantitative measurement methods Equivalence invaria a2006/10/25 cNov aS39-490 v443 aBACKGROUND: Reviewed in this article are issues relating to the study of invariance and differential item functioning (DIF). The aim of factor analyses and DIF, in the context of invariance testing, is the examination of group differences in item response conditional on an estimate of disability. Discussed are parameters and statistics that are not invariant and cannot be compared validly in crosscultural studies with varying distributions of disability in contrast to those that can be compared (if the model assumptions are met) because they are produced by models such as linear and nonlinear regression. OBJECTIVES: The purpose of this overview is to provide an integrated approach to the quantitative methods used in this special issue to examine measurement equivalence. The methods include classical test theory (CTT), factor analytic, and parametric and nonparametric approaches to DIF detection. Also included in the quantitative section is a discussion of item banking and computerized adaptive testing (CAT). METHODS: Factorial invariance and the articles discussing this topic are introduced. A brief overview of the DIF methods presented in the quantitative section of the special issue is provided together with a discussion of ways in which DIF analyses and examination of invariance using factor models may be complementary. 
CONCLUSIONS: Although factor analytic and DIF detection methods share features, they provide unique information and can be viewed as complementary in informing about measurement equivalence.10a*Cross-Cultural Comparison10aData Interpretation, Statistical10aFactor Analysis, Statistical10aGuidelines as Topic10aHumans10aModels, Statistical10aPsychometrics/*methods10aStatistics as Topic/*methods10aStatistics, Nonparametric1 aTeresi, J A uhttp://mail.iacat.org/content/overview-quantitative-measurement-methods-equivalence-invariance-and-differential-item00543nas a2200109 4500008004100000245012100041210006900162260004000231100001600271700001900287856012700306 2004 eng d00aOptimal testing with easy items in computerized adaptive testing (Measurement and Research Department Report 2004-2)0 aOptimal testing with easy items in computerized adaptive testing aArnhem, The Netherlands: Cito Group1 aEggen, Theo1 aVerschoor, A J uhttp://mail.iacat.org/content/optimal-testing-easy-items-computerized-adaptive-testing-measurement-and-research-department00394nas a2200109 4500008004100000245006000041210006000101260001500161100001100176700001200187856008500199 2003 eng d00aOnline calibration and scale stability of a CAT program0 aOnline calibration and scale stability of a CAT program aChicago IL1 aGuo, F1 aWang, G uhttp://mail.iacat.org/content/online-calibration-and-scale-stability-cat-program00428nas a2200109 4500008004100000245007600041210006900117300001100186490000700197100001400204856010000218 2003 eng d00aAn optimal design approach to criterion-referenced computerized testing0 aoptimal design approach to criterionreferenced computerized test a97-1000 v281 aWiberg, M uhttp://mail.iacat.org/content/optimal-design-approach-criterion-referenced-computerized-testing01437nas a2200205 4500008004100000245008800041210007000129300001200199490000700211520067700218653002100895653003000916653002400946653002500970653002600995653005201021100001901073700002301092856011601115 2003 eng d00aOptimal 
stratification of item pools in α-stratified computerized adaptive testing0 aOptimal stratification of item pools in αstratified computerized a262-2740 v273 aA method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in α-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network flow structure, efficient solutions are possible. The method is applied to a previous item pool from the computerized adaptive testing (CAT) version of the Graduate Record Exams (GRE) Quantitative Test. The results indicate that the new method performs well in practical situations. It improves item exposure control, reduces the mean squared error in the θ estimates, and increases test reliability. (PsycINFO Database Record (c) 2005 APA ) (journal abstract)10aAdaptive Testing10aComputer Assisted Testing10aItem Content (Test)10aItem Response Theory10aMathematical Modeling10aTest Construction computerized adaptive testing1 aChang, Hua-Hua1 avan der Linden, WJ uhttp://mail.iacat.org/content/optimal-stratification-item-pools-%CE%B1-stratified-computerized-adaptive-testing00431nas a2200109 4500008004100000245006900041210006900110260001800179100001600197700001700213856009100230 2003 eng d00aOptimal testing with easy items in computerized adaptive testing0 aOptimal testing with easy items in computerized adaptive testing aManchester UK1 aEggen, Theo1 aVerschoor, A uhttp://mail.iacat.org/content/optimal-testing-easy-items-computerized-adaptive-testing00461nas a2200121 4500008004100000245007300041210006900114260001900183100001400202700001900216700001300235856009100248 2002 eng d00aOptimum number of strata in the a-stratified adaptive testing design0 aOptimum number of strata in the astratified adaptive testing des aNew Orleans LA1 aWen, J -B1 aChang, Hua-Hua1 aHau, K-T uhttp://mail.iacat.org/content/optimum-number-strata-stratified-adaptive-testing-design01632nas a2200241 
4500008004100000245005900041210005800100300001200158490000700170520087100177653002101048653003401069653002801103653002001131653003201151653002501183653001501208653002701223653002201250653001601272100001601288856008601304 2002 eng d00aOutlier detection in high-stakes certification testing0 aOutlier detection in highstakes certification testing a219-2330 v393 aDiscusses recent developments of person-fit analysis in computerized adaptive testing (CAT). Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in CAT. Most person-fit research in CAT is restricted to simulated data. In this study, empirical data from a certification test were used. Alternatives are discussed to generate norms so that bounds can be determined to classify an item score pattern as fitting or misfitting. Using bounds determined from a sample of a high-stakes certification test, the empirical analysis showed that different types of misfit can be distinguished. Further applications using statistical process control methods to detect misfitting item score patterns are discussed. 
(PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10acomputerized adaptive testing10aEducational Measurement10aGoodness of Fit10aItem Analysis (Statistical)10aItem Response Theory10aperson Fit10aStatistical Estimation10aStatistical Power10aTest Scores1 aMeijer, R R uhttp://mail.iacat.org/content/outlier-detection-high-stakes-certification-testing00522nas a2200121 4500008004100000245010800041210006900149260002400218100001100242700001300253700001200266856012200278 2001 eng d00aOn-line Calibration Using PARSCALE Item Specific Prior Method: Changing Test Population and Sample Size0 aOnline Calibration Using PARSCALE Item Specific Prior Method Cha aSeattle, Washington1 aGuo, F1 aStone, E1 aCruz, D uhttp://mail.iacat.org/content/line-calibration-using-parscale-item-specific-prior-method-changing-test-population-and00610nas a2200121 4500008004100000245017700041210006900218260002700287100001600314700001700330700001800347856012300365 2001 eng d00aOnline item parameter recalibration: Application of missing data treatments to overcome the effects of sparse data conditions in a computerized adaptive version of the MCAT0 aOnline item parameter recalibration Application of missing data aUnpublished manuscript1 aHarmes, J C1 aKromrey, J D1 aParshall, C G uhttp://mail.iacat.org/content/online-item-parameter-recalibration-application-missing-data-treatments-overcome-effects01706nas a2200181 4500008004100000245007300041210006900114300001100183490000700194520110100201653002101302653003001323653002501353653001501378100001701393700001501410856009901425 2001 eng d00aOutlier measures and norming methods for computerized adaptive tests0 aOutlier measures and norming methods for computerized adaptive t a85-1040 v263 aNotes that the problem of identifying outliers has 2 important aspects: the choice of outlier measures and the method to assess the degree of outlyingness (norming) of those measures. 
Several classes of measures for identifying outliers in Computerized Adaptive Tests (CATs) are introduced. Some of these measures are constructed to take advantage of CATs' sequential choice of items; other measures are taken directly from paper and pencil (P&P) tests and are used for baseline comparisons. Methods for assessing the degree of outlyingness of CAT responses, however, cannot be applied directly from P&P tests because stopping rules associated with CATs yield examinee responses of varying lengths. Standard outlier measures are highly correlated with the varying lengths, which makes comparison across examinees impossible. Therefore, 4 methods are presented and compared which map outlier statistics to a familiar probability scale (a p value). The methods are explored in the context of CAT data from a 1995 Nationally Administered Computerized Examination (NACE). (PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10aComputer Assisted Testing10aStatistical Analysis10aTest Norms1 aBradlow, E T1 aWeiss, R E uhttp://mail.iacat.org/content/outlier-measures-and-norming-methods-computerized-adaptive-tests00520nas a2200097 4500008004100000245013000041210006900171260004000240100001600280856012600296 2001 eng d00aOverexposure and underexposure of items in computerized adaptive testing (Measurement and Research Department Reports 2001-1)0 aOverexposure and underexposure of items in computerized adaptive aArnhem, The Netherlands: CITO Groep1 aEggen, Theo uhttp://mail.iacat.org/content/overexposure-and-underexposure-items-computerized-adaptive-testing-measurement-and-research00611nas a2200097 4500008004100000245011100041210006900152260014400221100002300365856012500388 2000 eng d00aOptimal stratification of item pools in a-stratified computerized adaptive testing (Research Report 00-07)0 aOptimal stratification of item pools in astratified computerized aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement 
and Data Analysis1 avan der Linden, WJ uhttp://mail.iacat.org/content/optimal-stratification-item-pools-stratified-computerized-adaptive-testing-research-report00868nas a2200145 4500008004100000245006600041210006600107300001200173490000700185520036100192653002100553653004400574100001500618856008900633 2000 eng d00aOverview of the computerized adaptive testing special section0 aOverview of the computerized adaptive testing special section a115-1200 v213 aThis paper provides an overview of the five papers included in the Psicologica special section on computerized adaptive testing. A short introduction to this topic is presented as well. The main results, the links between the five papers and the general research topic to which they are more related are also shown. (PsycINFO Database Record (c) 2005 APA )10aAdaptive Testing10aComputers computerized adaptive testing1 aPonsoda, V uhttp://mail.iacat.org/content/overview-computerized-adaptive-testing-special-section00447nas a2200097 4500008004100000245009500041210006900136260002100205100001500226856010800241 1999 eng d00aOn-the-fly adaptive tests: An application of generative modeling to quantitative reasoning0 aOnthefly adaptive tests An application of generative modeling to aMontreal, Canada1 aBejar, I I uhttp://mail.iacat.org/content/fly-adaptive-tests-application-generative-modeling-quantitative-reasoning02647nas a2200133 4500008004100000245007300041210006900114300000900183490000700192520216800199653003402367100001602401856009602417 1999 eng d00aOptimal design for item calibration in computerized adaptive testing0 aOptimal design for item calibration in computerized adaptive tes a42200 v593 aItem Response Theory is the psychometric model used for standardized tests such as the Graduate Record Examination. A test-taker's response to an item is modelled as a binary response with success probability depending on parameters for both the test-taker and the item. 
Two popular models are the two-parameter logistic (2PL) model and the three-parameter logistic (3PL) model. For the 2PL model, the logit of the probability of a correct response equals ai(θj − bi), where ai and bi are item parameters, while θj is the test-taker's parameter, known as "proficiency." The 3PL model adds a nonzero left asymptote to model random response behavior by low-θ test-takers. Assigning scores to students requires accurate estimation of the θs, while accurate estimation of the θs requires accurate estimation of the item parameters. The operational implementation of Item Response Theory, particularly following the advent of computerized adaptive testing, generally involves handling these two estimation problems separately. This dissertation addresses the optimal design for item parameter estimation. Most current designs calibrate items with a sample drawn from the overall test-taking population. For 2PL models a sequential design based on the D-optimality criterion has been proposed, while no 3PL design is in the literature. In this dissertation, we design the calibration with the ultimate use of the items in mind, namely to estimate test-takers' proficiency parameters. For both the 2PL and 3PL models, this criterion leads to a locally L-optimal design criterion, named the Minimal Information Loss criterion. In turn, this criterion and the General Equivalence Theorem give a two-point design for the 2PL model and a three-point design for the 3PL model. A sequential implementation of this optimal design is presented. For the 2PL model, this design is almost 55% more efficient than the simple random sample approach, and 12% more efficient than the locally D-optimal design. For the 3PL model, the proposed design is 34% more efficient than the simple random sample approach. 
Buyske, S. G. Optimal design for item calibration in computerized adaptive testing. Keywords: computerized adaptive testing. http://mail.iacat.org/content/optimal-design-item-calibration-computerized-adaptive-testing

Stocking, M. L., & Swanson, L. (1998). Optimal design of item pools for computerized adaptive testing. Vol. 22, pp. 271-279. http://mail.iacat.org/content/optimal-design-item-pools-computerized-adaptive-testing

Vos, H. J. (1998). Optimal sequential rules for computer-based instruction. Vol. 19(2), pp. 133-154. http://mail.iacat.org/content/optimal-sequential-rules-computer-based-instruction

van der Linden, W. J. (1998). Optimal test assembly of psychological and educational tests. Vol. 22, pp. 195-211. http://mail.iacat.org/content/optimal-test-assembly-psychological-and-educational-tests

Stahl, J., Shumway, R., Bergstrom, B., & Fisher, A. (1997). On-line performance assessment using rating scales. Vol. 1, pp. 173-191. http://mail.iacat.org/content/line-performance-assessment-using-rating-scales
Abstract: The purpose of this paper is to report on the development of the on-line performance assessment instrument, the Assessment of Motor and Process Skills (AMPS). Issues addressed in the paper include: (a) the establishment of the scoring rubric and its implementation in an extended Rasch model, (b) training of raters, (c) validation of the scoring rubric and procedures for monitoring the internal consistency of raters, and (d) technological implementation of the assessment instrument in a computerized program.
Keywords: Outcome Assessment (Health Care); Rehabilitation; Software; Task Performance and Analysis; Activities of Daily Living; Humans; Microcomputers; Psychometrics; Psychomotor Performance

Cordova, M. J. (1997). Optimization methods in computerized adaptive testing. Unpublished doctoral dissertation, Rutgers University, New Brunswick, NJ. http://mail.iacat.org/content/optimization-methods-computerized-adaptive-testing

Wise, S. L. (1997). Overview of practical issues in a CAT program. Chicago, IL. http://mail.iacat.org/content/overview-practical-issues-cat-program

Pashley, P. (1997). An overview of the LSAC CAT research agenda. Chicago, IL. http://mail.iacat.org/content/overview-lsac-cat-research-agenda

Luecht, R. M., & Nungester, R. J. (1997). Overview of the USMLE Step 2 computerized field test. Chicago, IL. http://mail.iacat.org/content/overview-usmle-step-2-computerized-field-test

Stocking, M. L., & Swanson, L. (1996). Optimal design of item pools for computerized adaptive testing (Research Report 96-34). Princeton, NJ: Educational Testing Service. http://mail.iacat.org/content/optimal-design-item-pools-computerized-adaptive-testing-research-report-96-34

Dodd, B. G., Koch, W. R., & De Ayala, R. J. (1989). Operational characteristics of adaptive testing procedures using the graded response model. Vol. 13, pp. 129-143. http://mail.iacat.org/content/operational-characteristics-adaptive-testing-procedures-using-graded-response-model http://mail.iacat.org/content/operational-characteristics-adaptive-testing-procedures-using-graded-response-model-0

Koch, W. R., & Dodd, B. G. (1986). Operational characteristics of adaptive testing procedures using partial credit scoring. San Francisco, CA. http://mail.iacat.org/content/operational-characteristics-adaptive-testing-procedures-using-partial-credit-scoring

Wolfe, J. H. (1981). Optimal item difficulty for the three-parameter normal ogive model. Vol. 46, pp. 461-464. http://mail.iacat.org/content/optimal-item-difficulty-three-parameter-normal-ogive-model

Patience, W. M., & Reckase, M. D. (1980). Operational characteristics of a one-parameter tailored testing procedure. August 1980, pp. 10, 66 (Ms. No. 2104). http://mail.iacat.org/content/operational-characteristics-one-parameter-tailored-testing-procedure

Patience, W. M., & Reckase, M. D. (1979). Operational characteristics of a Rasch model tailored testing procedure when program parameters and item pool attributes are varied. San Francisco, CA. http://mail.iacat.org/content/operational-characteristics-rasch-model-tailored-testing-procedure-when-program-parameters

Segal, H. (1977). Operational considerations in implementing tailored testing. In D. J. Weiss (Ed.), Proceedings of the 1977 Computerized Adaptive Testing Conference. Minneapolis, MN: University of Minnesota, Department of Psychology, Psychometric Methods Program. http://mail.iacat.org/content/operational-considerations-implementing-tailored-testing

Gorham, W. A. (1976). Opening remarks. In W. H. Gorham (Chair), Computers and testing: Steps toward the inevitable conquest (PS 76-1). Symposium presented at the 83rd annual convention of the APA, Chicago, IL. Washington, DC: U.S. Civil Service Commission, Personnel Research and Development Center. http://mail.iacat.org/content/opening-remarks

Olivier, P. (1973). An overview of tailored testing (unpublished manuscript). Florida State University, Program of Educational Evaluation and Research Design. http://mail.iacat.org/content/overview-tailored-testing-unpublished-manuscript