Yang, Jing; Chang, Hua-Hua; Tao, Jian; Shi, Ningzhong (2020). Stratified Item Selection Methods in Cognitive Diagnosis Computerized Adaptive Testing. Vol. 44, pp. 346-361.
Abstract: Cognitive diagnostic computerized adaptive testing (CD-CAT) aims to obtain more useful diagnostic information by taking advantage of computerized adaptive testing (CAT). Cognitive diagnosis models (CDMs) have been developed to classify examinees into the correct proficiency classes so as to enable more efficient remediation, whereas CAT tailors optimal items to the examinee's mastery profile. The item selection method is the key factor in the CD-CAT procedure. In recent years, a large number of parametric and nonparametric item selection methods have been proposed. In this article, the authors propose a series of stratified item selection methods in CD-CAT, which are combined with the posterior-weighted Kullback–Leibler (PWKL), nonparametric item selection (NPS), and weighted nonparametric item selection (WNPS) methods, and named S-PWKL, S-NPS, and S-WNPS, respectively. Two different types of stratification indices were used: original versus novel. The performances of the proposed item selection methods were evaluated via simulation studies and compared with the PWKL, NPS, and WNPS methods without stratification. Manipulated conditions included calibration sample size, item quality, number of attributes, number of strata, and data generation models. Results indicated that the S-WNPS and S-NPS methods performed similarly, and both outperformed the S-PWKL method. Item selection methods with novel stratification indices performed slightly better than those with original stratification indices, and methods without stratification performed worst.
https://doi.org/10.1177/0146621619893783

Smits, Niels; Paap, Muirne C. S.; Böhnke, Jan R. (2018, April). Some recommendations for developing multidimensional computerized adaptive tests for patient-reported outcomes. Vol. 27, pp. 1055-1063. ISSN 1573-2649.
Abstract: Multidimensional item response theory and computerized adaptive testing (CAT) are increasingly used in mental health, quality of life (QoL), and patient-reported outcome measurement. Although multidimensional assessment techniques hold promise, they are more challenging to apply than unidimensional ones. The authors comment on minimal standards when developing multidimensional CATs.
https://doi.org/10.1007/s11136-018-1821-8

Choe, Edison; Williams, Bruce; Lee, Sung-Hyuck (2017). Scripted On-the-fly Multistage Testing. Niigata, Japan: Niigata Seiryo University, 08/2017.
Abstract:
On-the-fly multistage testing (OMST) was introduced recently as a promising alternative to preassembled MST. A decidedly appealing feature of both is the reviewability of items within the current stage. However, the fundamental difference is that, instead of routing to a preassembled module, OMST adaptively assembles a module at each stage according to an interim ability estimate. This produces more individualized forms with finer measurement precision, but imposing nonstatistical constraints and controlling item exposure become more cumbersome. One recommendation is to use the maximum priority index followed by a remediation step to satisfy content constraints, and the Sympson-Hetter method with a stratified item bank for exposure control.
However, these methods can be computationally expensive, thereby impeding practical implementation. Therefore, this study investigated the script method as a simpler solution to the challenge of strict content balancing and effective item exposure control in OMST. The script method was originally devised as an item selection algorithm for CAT and generally proceeds as follows: For a test with m items, there are m slots to be filled, and an item is selected according to pre-defined rules for each slot. For the first slot, randomly select an item from a designated content area (collection). For each subsequent slot, 1) Discard any enemies of items already administered in previous slots; 2) Draw a designated number of candidate items (selection length) from the designated collection according to the current ability estimate; 3) Randomly select one item from the set of candidates. There are two distinct features of the script method. First, a predetermined sequence of collections guarantees meeting content specifications. The specific ordering may be determined either randomly or deliberately by content experts. Second, steps 2 and 3 depict a method of exposure control, in which selection length balances item usage at the possible expense of ability estimation accuracy. The adaptation of the script method to OMST is straightforward. For the first module, randomly select each item from a designated collection. For each subsequent module, the process is the same as in scripted CAT (SCAT) except the same ability estimate is used for the selection of all items within the module. A series of simulations was conducted to evaluate the performance of scripted OMST (SOMST, with 3 or 4 evenly divided stages) relative to SCAT under various item exposure restrictions. In all conditions, reliability was maximized by programming an optimization algorithm that searches for the smallest possible selection length for each slot within the constraints. Preliminary results indicated that SOMST is certainly a capable design with performance comparable to that of SCAT. The encouraging findings and ease of implementation highly motivate the prospect of operational use for large-scale assessments.
Keywords: CAT; multistage testing; On-the-fly testing.
https://drive.google.com/open?id=1wKuAstITLXo6BM4APf2mPsth1BymNl-y
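For concreteness, the slot-filling loop described in this abstract might look as follows; the item fields, the info() helper, and the fixed theta are illustrative assumptions, not the authors' implementation (in SCAT, theta would be re-estimated before each slot, while in SOMST it is held fixed within a module).

    import random

    def script_select_module(script, bank, theta, administered):
        # script: one (collection, selection_length) pair per slot of the module.
        # bank: item dicts with hypothetical fields 'id', 'collection',
        # 'enemies', and 'info', a callable returning information at theta.
        module = []
        for slot, (collection, sel_len) in enumerate(script):
            used = administered | {i['id'] for i in module}
            enemies = set().union(*(i['enemies'] for i in module)) if module else set()
            pool = [i for i in bank if i['collection'] == collection
                    and i['id'] not in used and i['id'] not in enemies]
            if slot == 0 and not administered:
                module.append(random.choice(pool))  # first slot: pure random draw
                continue
            # Draw the 'selection length' most informative candidates, then
            # randomize among them to balance item usage.
            pool.sort(key=lambda i: i['info'](theta), reverse=True)
            module.append(random.choice(pool[:sel_len]))
        return module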
Yang, Jing; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong (2017). A Simulation Study to Compare Classification Method in Cognitive Diagnosis Computerized Adaptive Testing. Niigata, Japan: Niigata Seiryo University, 08/2017.
Abstract: Cognitive diagnostic computerized adaptive testing (CD-CAT) combines the strengths of both CAT and cognitive diagnosis. Cognitive diagnosis models, which can be viewed as restricted latent class models, have been developed to classify examinees into the correct profile of skills that have been mastered and those that have not, so as to enable more efficient remediation. Chiu and Douglas (2013) introduced a nonparametric procedure that requires only the specification of a Q-matrix and classifies examinees by proximity to ideal response patterns. In this article, we compare the nonparametric procedure with a common profile estimation method, maximum a posteriori (MAP), in CD-CAT. The simulation studies consider a variety of Q-matrix structures, numbers of attributes, ways of generating attribute profiles, and levels of item quality. Results indicate that the nonparametric procedure consistently achieves higher pattern and attribute recovery rates in nearly all conditions.
References
Chiu, C.-Y., & Douglas, J. (2013). A nonparametric approach to cognitive diagnosis by proximity to ideal response patterns. Journal of Classification, 30, 225-250. doi: 10.1007/s00357-013-9132-9
https://drive.google.com/open?id=1jCL3fPZLgzIdwvEk20D-FliZ15OTUtpr

Smits, Niels; Finkelman, Matthew D.; Kelderman, Henk (2016). Stochastic Curtailment of Questionnaires for Three-Level Classification: Shortening the CES-D for Assessing Low, Moderate, and High Risk of Depression. Vol. 40, pp. 22-36.
Abstract: In clinical assessment, efficient screeners are needed to ensure low respondent burden. In this article, Stochastic Curtailment (SC), a method for efficient computerized testing for classification into two classes for observable outcomes, was extended to three classes. In a post hoc simulation study using the item scores on the Center for Epidemiologic Studies-Depression Scale (CES-D) of a large sample, three versions of SC (SC via Empirical Proportions, SC-EP; SC via Simple Ordinal Regression, SC-SOR; and SC via Multiple Ordinal Regression, SC-MOR) were compared on both respondent burden and classification accuracy. All methods were applied under the regular item order of the CES-D and under an ordering that was optimal in terms of the predictive power of the items. Under the regular item ordering, the three methods were equally accurate, but SC-SOR and SC-MOR needed fewer items. Under the optimal ordering, additional gains in efficiency were found, but SC-MOR suffered substantially from capitalization on chance. It was concluded that SC-SOR is an efficient and accurate method for clinical screening. Strengths and weaknesses of the methods are discussed.
http://apm.sagepub.com/content/40/1/22.abstract

Sie, Haskell; Finkelman, Matthew D.; Bartroff, Jay; Thompson, Nathan A. (2015). Stochastic Curtailment in Adaptive Mastery Testing: Improving the Efficiency of Confidence Interval-Based Stopping Rules. Vol. 39, pp. 278-292.
Abstract: A well-known stopping rule in adaptive mastery testing is to terminate the assessment once the examinee's ability confidence interval lies entirely above or below the cut-off score. This article proposes new procedures that seek to improve such a variable-length stopping rule by coupling it with curtailment and stochastic curtailment. Under the new procedures, test termination can occur earlier if the probability is high enough that the current classification decision would remain the same should the test continue. Computation of this probability utilizes normality of an asymptotically equivalent version of the maximum likelihood ability estimate. In two simulation sets, the new procedures showed a substantial reduction in average test length while maintaining classification accuracy similar to the original method.
http://apm.sagepub.com/content/39/4/278.abstract
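The stopping logic shared by the two stochastic-curtailment entries above can be sketched for a sum-score screener as follows; the empirical item-score probabilities and the threshold gamma are illustrative assumptions in the spirit of SC-EP, not the authors' exact algorithms.

    def sc_stop(current_sum, remaining_probs, cutoff, gamma=0.95):
        # remaining_probs: for each unanswered item, a list p where p[s] is the
        # estimated probability of scoring s points (e.g., empirical proportions).
        # Build the distribution of the sum of remaining scores by convolution.
        dist = {0: 1.0}
        for p in remaining_probs:
            new = {}
            for total, q in dist.items():
                for s, ps in enumerate(p):
                    new[total + s] = new.get(total + s, 0.0) + q * ps
            dist = new
        p_reach = sum(q for total, q in dist.items()
                      if current_sum + total >= cutoff)
        if p_reach >= gamma:
            return 'at or above cutoff'   # continuing is unlikely to change this
        if p_reach <= 1 - gamma:
            return 'below cutoff'
        return None                       # decision still uncertain: keep testing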
Nydick, Steven W. (2014). The Sequential Probability Ratio Test and Binary Item Response Models. Vol. 39, pp. 203-230.
Abstract: The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has been previously noted (Spray & Reckase, 1994), the SPRT test statistic is not necessarily monotonic with respect to the classification bound when item response functions have nonzero lower asymptotes. Because of nonmonotonicity, several researchers (including Spray & Reckase, 1994) have recommended selecting items at the classification bound rather than at the current ability estimate when terminating SPRT-based classification tests. Unfortunately, this well-worn advice is a bit too simplistic. Items yielding optimal evidence for classification depend on the IRT model, item parameters, and location of an examinee with respect to the classification bound. The current study illustrates, in depth, the relationship between the SPRT test statistic and classification evidence in binary IRT models. Unlike earlier studies, we examine the form of the SPRT-based log-likelihood ratio while altering the classification bound and item difficulty. These investigations motivate a novel item selection algorithm based on optimizing the expected SPRT criterion given the current ability estimate. The new expected log-likelihood ratio algorithm results in test lengths noticeably shorter than those of current, commonly used algorithms, with no loss in classification accuracy.
http://jeb.sagepub.com/cgi/content/abstract/39/3/203
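A generic SPRT termination rule of the kind this abstract describes, for a 3PL test with an indifference region (theta0, theta1) around the classification bound; this is the textbook comparison, not Nydick's expected log-likelihood-ratio selection algorithm.

    import math

    def p3pl(theta, a, b, c):
        # Three-parameter logistic probability of a correct response.
        return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

    def sprt_decision(responses, items, theta0, theta1, alpha=0.05, beta=0.05):
        # responses: 0/1 scores; items: (a, b, c) tuples for administered items.
        log_lr = 0.0
        for x, (a, b, c) in zip(responses, items):
            p1, p0 = p3pl(theta1, a, b, c), p3pl(theta0, a, b, c)
            log_lr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if log_lr >= math.log((1 - beta) / alpha):
            return 'above'      # classify above the bound
        if log_lr <= math.log(beta / (1 - alpha)):
            return 'below'      # classify below the bound
        return None             # continue testing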
Zhang, Jinming (2014). A Sequential Procedure for Detecting Compromised Items in the Item Pool of a CAT System. Vol. 38, pp. 87-104.
Abstract: To maintain the validity of a continuous testing system, such as computerized adaptive testing (CAT), items should be monitored to ensure that their performance has not gone through any significant changes during their lifetime in an item pool. In this article, the author developed a sequential monitoring procedure based on a series of statistical hypothesis tests to examine whether the statistical characteristics of individual items have changed significantly during test administration. Simulation studies show that under the simulated setting, by choosing an appropriate cutoff point, the procedure can control the rate of Type I errors at any reasonable significance level while maintaining a very low rate of Type II errors.
http://apm.sagepub.com/content/38/2/87.abstract
Kalinowski, Kevin E.; Natesan, Prathiba; Henson, Robin K. (2014). Stratified Item Selection and Exposure Control in Unidimensional Adaptive Testing in the Presence of Two-Dimensional Data. Vol. 38, pp. 563-576.
Abstract: It is not uncommon to use unidimensional item response theory models to estimate ability from multidimensional data in computerized adaptive testing (CAT). The current Monte Carlo study investigated the penalty for this model misspecification in CAT implementations using different item selection methods and exposure control strategies. Three item selection methods, maximum information (MAXI), a-stratification (STRA), and a-stratification with b-blocking (STRB), with and without the Sympson-Hetter (SH) exposure control strategy, were investigated. Calibrating multidimensional items as unidimensional resulted in inaccurate item parameter estimates. Therefore, MAXI performed better than STRA and STRB in estimating the ability parameters, although all three methods had relatively large standard errors. SH exposure control had no impact on the number of overexposed items. Existing unidimensional CAT implementations might consider using MAXI only if recalibration with a multidimensional model is too expensive. Otherwise, building a CAT pool by calibrating multidimensional data as unidimensional is not recommended.
http://apm.sagepub.com/content/38/7/563.abstract
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A. (2013). A Semiparametric Model for Jointly Analyzing Response Times and Accuracy in Computerized Testing. Vol. 38, pp. 381-417.
Abstract: The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current models for RTs mainly focus on parametric models, which have the advantage of conciseness but may suffer from reduced flexibility to fit real data. We propose a semiparametric approach, specifically the Cox proportional hazards model with a latent speed covariate, to model the RTs, embedded within the hierarchical framework proposed by van der Linden to model RTs and response accuracy simultaneously. This semiparametric approach combines the flexibility of nonparametric modeling with the brevity and interpretability of parametric modeling. A Markov chain Monte Carlo method for parameter estimation is given and may be used with sparse data obtained by computerized adaptive testing. Both simulation studies and real data analysis are carried out to demonstrate the applicability of the new model.
http://jeb.sagepub.com/cgi/content/abstract/38/4/381
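As orientation for the entry above, a proportional-hazards response-time model with a latent speed covariate can be written as below; the sign convention and parameterization are assumptions for illustration, not necessarily the authors' exact specification.

    \[
      h_i(t \mid \tau_j) = h_{0i}(t)\,\exp(\tau_j)
    \]

Here h_{0i}(t) is an unspecified (nonparametric) baseline hazard for item i and tau_j is examinee j's latent speed, so a larger tau_j makes response times stochastically shorter; the hierarchical layer then links tau_j to the ability parameter underlying response accuracy.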
van der Linden, Wim J.; Xiong, Xinhui (2013). Speededness and Adaptive Testing. Vol. 38, pp. 418-438.
Abstract: Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain adaptive testing without any constraints. Both types of control are easy to implement and do not require any real-time parameter estimation during the test other than the regular update of the test taker's ability estimate. Evaluation of the two approaches using simulated adaptive testing showed that the STA was especially effective. It guaranteed testing times that differed less than 10 seconds from a reference test across a variety of conditions.
http://jeb.sagepub.com/cgi/content/abstract/38/4/418
Huebner, Alan; Li, Zhushan (2012). A Stochastic Method for Balancing Item Exposure Rates in Computerized Classification Tests. Vol. 36, pp. 181-188.
Abstract: Computerized classification tests (CCTs) classify examinees into categories such as pass/fail, master/nonmaster, and so on. This article proposes the use of stochastic methods from sequential analysis to address item overexposure, a practical concern in operational CCTs. Item overexposure is traditionally dealt with in CCTs by the Sympson-Hetter (SH) method, but this method is unable to restrict the exposure of the most informative items to the desired level. The authors' new method of stochastic item exposure balance (SIEB) works in conjunction with the SH method and is shown to greatly reduce the number of overexposed items in a pool and improve overall exposure balance while maintaining classification accuracy comparable with using the SH method alone. The method is demonstrated using a simulation study.
http://apm.sagepub.com/content/36/3/181.abstract
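The Sympson-Hetter filter that SIEB works alongside is conventionally implemented as a post-selection lottery; a minimal sketch, in which the exposure-control probabilities k are assumed to come from prior simulation and the fallback rule is a simplification:

    import random

    def sh_next_item(candidates, k, theta):
        # candidates: item dicts with 'id' and 'info' (callable); k: dict
        # mapping item id -> exposure-control probability in (0, 1].
        ranked = sorted(candidates, key=lambda i: i['info'](theta), reverse=True)
        for item in ranked:                       # most informative first
            if random.random() <= k[item['id']]:
                return item                       # administer this item
            # otherwise the item is passed over for this examinee only
        return ranked[-1]                         # sketch fallback: administer anyway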
Judd, Wallace (2011). Small-Sample Shadow Testing. Keywords: CAT; shadow test.
http://mail.iacat.org/content/small-sample-shadow-testing

van der Linden, W. J. (2010). Sequencing an Adaptive Test Battery.
http://mail.iacat.org/content/sequencing-adaptive-test-battery

Han, K. T. (2010). SimulCAT: Windows application that simulates computerized adaptive test administration.
http://www.hantest.net/simulcat

Deng, H.; Ansley, T.; Chang, H.-H. (2010). Stratified and maximum information item selection procedures in computer adaptive testing. Vol. 47, pp. 202-226.
http://mail.iacat.org/content/stratified-and-maximum-information-item-selection-procedures-computer-adaptive-testing

Deng, Hui; Ansley, Timothy; Chang, Hua-Hua (2010). Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing. Vol. 47, pp. 202-226. ISSN 1745-3984.
Abstract: In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) procedure (STR), and a refined stratification procedure that allows more items to be selected from the high-a strata and fewer items from the low-a strata (USTR), along with completely random item selection (RAN). The comparisons were with respect to error variances, reliability of ability estimates, and item usage, through CATs simulated under nine test conditions of varying practical constraints and item selection space. The results showed that F had an apparent precision advantage over STR and USTR under unconstrained item selection, but with very poor item usage. USTR reduced error variances relative to STR under various conditions, with small compromises in item usage. Compared with F, USTR enhanced item usage while achieving comparable precision in ability estimates; it achieved a precision level similar to F with improved item usage when items were selected under exposure control and with limited item selection space. The results provide implications for choosing an appropriate item selection procedure in applied settings.
http://dx.doi.org/10.1111/j.1745-3984.2010.00109.x
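The a-stratified designs compared above rest on partitioning the pool by discrimination and matching difficulty to the interim ability estimate; a minimal sketch, with item fields assumed (USTR would differ only in allotting more selections to the high-a strata):

    def build_strata(bank, n_strata):
        # Partition the bank into strata of ascending discrimination a.
        ranked = sorted(bank, key=lambda item: item['a'])
        size = -(-len(ranked) // n_strata)        # ceiling division
        return [ranked[k * size:(k + 1) * size] for k in range(n_strata)]

    def str_select(strata, stage, theta, used):
        # Stage k draws from stratum k (low-a strata first), choosing the item
        # whose difficulty b lies closest to the current ability estimate.
        available = [i for i in strata[stage] if i['id'] not in used]
        return min(available, key=lambda i: abs(i['b'] - theta))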
Arce-Ferrer, Alvaro J.; Martínez Guzmán, Elvira (2009). Studying the Equivalence of Computer-Delivered and Paper-Based Administrations of the Raven Standard Progressive Matrices Test. Vol. 69, pp. 855-867.
Abstract: This study investigates the effect of mode of administration of the Raven Standard Progressive Matrices test on the distribution, accuracy, and meaning of raw scores. A random sample of high school students took counterbalanced paper-and-pencil and computer-based administrations of the test and answered a questionnaire surveying preferences for computer-delivered test administrations. The administration mode effect was studied with repeated measures multivariate analysis of variance, internal consistency reliability estimates, and confirmatory factor analysis. Results show a lack of test mode effect on the distribution, accuracy, and meaning of raw scores. Participants indicated a preference for the computer-delivered administration of the test. The article discusses the findings in light of previous studies of the Raven Standard Progressive Matrices test.
http://epm.sagepub.com/content/69/5/855.abstract
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua (2008). Severity of Organized Item Theft in Computerized Adaptive Testing: A Simulation Study. Vol. 32, pp. 543-558.
Abstract: Criteria have been proposed for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated potential damage caused by organized item theft in computerized adaptive testing (CAT) for two realistic item selection methods, maximum item information and a-stratified with content blocking, using the randomized method as a baseline for comparison. Damage caused by organized item theft was evaluated by the number of compromised items each examinee could encounter and by the impact of the compromised items on examinees' ability estimates. Severity of test security violation was assessed under self-organized and organized item theft simulation scenarios. Results indicated that although item theft could cause severe damage to CAT with either item selection method, the maximum item information method was more vulnerable to the organized item theft simulation than was the a-stratified method.
http://apm.sagepub.com/content/32/7/543.abstract
van der Linden, W. J. (2008). Some new developments in adaptive testing technology. Vol. 216, pp. 3-11.
Abstract: In an ironic twist of history, modern psychological testing has returned to an adaptive format quite common when testing was not yet standardized. Important stimuli to the renewed interest in adaptive testing have been the development of item response theory in psychometrics, which models the responses on test items using separate parameters for the items and test takers, and the use of computers in test administration, which enables us to estimate the parameter for a test taker and select the items in real time. This article reviews a selection of the latest developments in the technology of adaptive testing, such as constrained adaptive item selection, adaptive testing using rule-based item generation, multidimensional adaptive testing, adaptive use of test batteries, and the use of response times in adaptive testing.
Keywords: computerized adaptive testing.
http://mail.iacat.org/content/some-new-developments-adaptive-testing-technology
Davis, L. L.; Dodd, B. G. (2008). Strategies for controlling item exposure in computerized adaptive testing with the partial credit model. Vol. 9, pp. 1-17. ISSN 1529-7713.
Abstract: Exposure control research with polytomous item pools has determined that randomization procedures can be very effective for controlling test security in computerized adaptive testing (CAT). The current study investigated the performance of four procedures for controlling item exposure in a CAT under the partial credit model. In addition to a no-exposure-control baseline condition, the Kingsbury-Zara, modified-within-.10-logits, Sympson-Hetter, and conditional Sympson-Hetter procedures were implemented to control exposure rates. The Kingsbury-Zara and the modified-within-.10-logits procedures were implemented with 3- and 6-item candidate conditions. The results show that the Kingsbury-Zara and modified-within-.10-logits procedures with 6 item candidates performed as well as the conditional Sympson-Hetter in terms of exposure rates, overlap rates, and pool utilization. These two procedures are strongly recommended for use with partial credit CATs due to their simplicity and the strength of their results.
Keywords: Algorithms; Computers; Educational Measurement/statistics & numerical data; Humans; Questionnaires/standards; United States.
http://mail.iacat.org/content/strategies-controlling-item-exposure-computerized-adaptive-testing-partial-credit-model

Lee, Yi-Hsuan; Ip, Edward H.; Fuh, Cheng-Der (2008). A Strategy for Controlling Item Exposure in Multidimensional Computerized Adaptive Testing. Vol. 68, pp. 215-232.
Abstract: Although computerized adaptive tests have enjoyed tremendous growth, solutions for important problems remain unavailable. One problem is the control of item exposure rates. Because adaptive algorithms are designed to select optimal items, they choose items with high discriminating power. Thus, these items are selected more often than others, leading to both overexposure of some items and underutilization of other parts of the item pool. Overused items are often compromised, creating a security problem that could threaten the validity of a test. Building on a previously proposed stratification scheme to control the exposure rate for one-dimensional tests, the authors extend their method to multidimensional tests. A strategy is proposed based on stratification in accordance with a functional of the vector of discrimination parameters, which can be implemented with minimal computational overhead. Both theoretical and empirical validation studies are provided. Empirical results indicate significant improvement over the commonly used method of controlling exposure rates, at the cost of only a reasonable sacrifice in efficiency.
http://epm.sagepub.com/content/68/2/215.abstract
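The stratification functional for the multidimensional case is left general in the entry above; one simple choice, assumed here purely for illustration, is the Euclidean norm of the discrimination vector, mirroring the unidimensional a-stratified design.

    import math

    def stratify_by_norm(bank, n_strata):
        # Order items by f(a) = ||a||, then cut into equal strata; low-norm
        # strata would be used early in the test, high-norm strata later.
        # (math.hypot with an unpacked vector requires Python 3.8+.)
        ranked = sorted(bank, key=lambda item: math.hypot(*item['a']))
        size = -(-len(ranked) // n_strata)
        return [ranked[k * size:(k + 1) * size] for k in range(n_strata)]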
van der Linden, W. J. (2007). The shadow-test approach: A universal framework for implementing adaptive testing. In D. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.
http://mail.iacat.org/content/shadow-test-approach-universal-framework-implementing-adaptive-testing

Lewis, C. (2007). Some thoughts on controlling item exposure in adaptive testing. In D. J. Weiss (Ed.), Proceedings of the 2007 GMAC Conference on Computerized Adaptive Testing.
http://mail.iacat.org/content/some-thoughts-controlling-item-exposure-adaptive-testing

van der Linden, W. J.; Glas, C. A. W. (2007). Statistical aspects of adaptive testing. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics (Vol. 27: Psychometrics) (pp. 801-838). Amsterdam: North-Holland.
http://mail.iacat.org/content/statistical-aspects-adaptive-testing

Chang, C.-H.; Boni-Saenz, A. A.; Durazo-Arvizu, R. A.; DesHarnais, S.; Lau, D. T.; Emanuel, L. L. (2007). A system for interactive assessment and management in palliative care. Vol. 33, pp. 745-755. ISSN 0885-3924.
Abstract: The availability of psychometrically sound and clinically relevant screening, diagnosis, and outcome evaluation tools is essential to high-quality palliative care assessment and management. Such data will enable us to improve patient evaluations, prognoses, and treatment selections, and to increase patient satisfaction and quality of life. To accomplish these goals, medical care needs more precise, efficient, and comprehensive tools for data acquisition, analysis, interpretation, and management. We describe a system for interactive assessment and management in palliative care (SIAM-PC), which is patient centered, model driven, database derived, evidence based, and technology assisted. The SIAM-PC is designed to reliably measure the multiple dimensions of patients' needs for palliative care, and then to provide information to clinicians, patients, and the patients' families to achieve optimal patient care, while improving our capacity for doing palliative care research.
This system is innovative in its application of state-of-the-science approaches, such as item response theory and computerized adaptive testing, to many of the significant clinical problems related to palliative care.
Keywords: Needs Assessment; Humans; Medical Informatics/organization & administration; Palliative Care/organization & administration.
http://mail.iacat.org/content/system-interactive-assessment-and-management-palliative-care

Haley, S. M.; Fragala-Pinkham, M. A.; Ni, P. (2006). Sensitivity of a computer adaptive assessment for measuring functional mobility changes in children enrolled in a community fitness programme. Vol. 20, pp. 616-622.
http://mail.iacat.org/content/sensitivity-computer-adaptive-assessment-measuring-functional-mobility-changes-children

Wiberg, Marie (2006). Sequential Computerized Mastery Tests—Three Simulation Studies. Vol. 6, pp. 41-55.
http://www.tandfonline.com/doi/abs/10.1207/s15327574ijt0601_3

Raîche, G.; Blais, J.-G. (2006). SIMCAT 1.0: A SAS computer program for simulating computer adaptive testing. Vol. 30, pp. 60-61. Sage Publications. ISSN 0146-6216.
Abstract: Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, such Monte Carlo methodologies are not currently supported by the available software programs, and where such programs are available, their flexibility is limited. SIMCAT 1.0 is aimed at the simulation of adaptive testing sessions under different adaptive expected a posteriori (EAP) proficiency-level estimation methods (Blais & Raîche, 2005; Raîche & Blais, 2005) based on the one-parameter Rasch logistic model. These methods are all adaptive in the a priori proficiency-level estimation, the proficiency-level estimation bias correction, the integration interval, or a combination of these factors. The use of these adaptive EAP estimation methods considerably diminishes the shrinking, and therefore biasing, effect on the estimated proficiency level that is encountered when the a priori value is fixed at a constant independent of the previously computed proficiency level. SIMCAT 1.0 also computes empirical and estimated skewness and kurtosis coefficients, as well as the standard error, of the estimated proficiency-level sampling distribution. In this way, the program allows one to compare empirical and estimated properties of the estimated proficiency-level sampling distribution under different variations of the EAP estimation method: standard error and bias, as well as the skewness and kurtosis coefficients.
(PsycINFO Database Record (c) 2007 APA, all rights reserved)
Keywords: computer adaptive testing; computer program; estimated proficiency level; Monte Carlo methodologies; Rasch logistic model.
http://mail.iacat.org/content/simcat-10-sas-computer-program-simulating-computer-adaptive-testing

Hart, D. L.; Mioduski, J. E.; Werneke, M. W.; Stratford, P. W. (2006). Simulated computerized adaptive test for patients with lumbar spine impairments was efficient and produced valid measures of function. Vol. 59, pp. 947-956.
Abstract: Objective: To equate physical functioning (PF) items with Back Pain Functional Scale (BPFS) items, develop a computerized adaptive test (CAT) designed to assess lumbar spine functional status (LFS) in people with lumbar spine impairments, and compare the discriminant validity of LFS measures generated using all items analyzed with a rating scale item response theory model (θIRT) and measures generated using the simulated CAT (θCAT). Methods: We performed a secondary analysis of retrospective intake rehabilitation data. Results: Unidimensionality and local independence of 25 BPFS and PF items were supported. Differential item functioning was negligible for levels of symptom acuity, gender, age, and surgical history. The RSM fit the data well. A lumbar spine specific CAT was developed that was 72% more efficient than using all 25 items to estimate LFS measures. The θIRT and θCAT measures did not discriminate patients by symptom acuity, age, or gender, but discriminated patients by surgical history in similar, clinically logical ways. The θCAT measures were as precise as the θIRT measures. Conclusion: A body-part-specific simulated CAT developed from an LFS item bank was efficient and produced precise measures of LFS without eroding discriminant validity.
Keywords: Back Pain Functional Scale; computerized adaptive testing; Item Response Theory; Lumbar spine; Rehabilitation; True-score equating.
http://mail.iacat.org/content/simulated-computerized-adaptive-test-patients-lumbar-spine-impairments-was-efficient-and-0

Hart, D.; Mioduski, J.; Werneke, M.; Stratford, P. (2006). Simulated computerized adaptive test for patients with lumbar spine impairments was efficient and produced valid measures of function. Vol. 59, pp. 947-956.
http://mail.iacat.org/content/simulated-computerized-adaptive-test-patients-lumbar-spine-impairments-was-efficient-and

Hart, D. L., et al. (2006). Simulated computerized adaptive test for patients with shoulder impairments was efficient and produced valid measures of function. Vol. 59, pp. 290-298.
Abstract:
Background and Objective: To test the unidimensionality and local independence of a set of shoulder functional status (SFS) items, develop a computerized adaptive test (CAT) of the items using a rating scale item response theory model (RSM), and compare the discriminant validity of measures generated using all items (θIRT) and measures generated using the simulated CAT (θCAT). Study Design and Setting: We performed a secondary analysis of data collected prospectively during the rehabilitation of 400 patients with shoulder impairments who completed 60 SFS items. Results: Factor analytic techniques supported that the 42 SFS items formed a unidimensional scale and were locally independent. Except for five items, which were deleted, the RSM fit the data well. The remaining 37 SFS items were used to generate the CAT. On average, 6 items were needed to estimate precise measures of function using the SFS CAT, compared with all 37 SFS items. The θIRT and θCAT measures were highly correlated (r = .96) and resulted in similar classifications of patients. Conclusion: The simulated SFS CAT was efficient and produced precise, clinically relevant measures of functional status with good discriminating ability.
Davis, Laurie Laughlin (2004). Strategies for controlling item exposure in computerized adaptive testing with the generalized partial credit model. Vol. 28, pp. 165-185. ISSN 0146-6216.
Abstract: Choosing a strategy for controlling item exposure has become an integral part of test development for computerized adaptive testing (CAT). This study investigated the performance of six procedures for controlling item exposure in a series of simulated CATs under the generalized partial credit model. In addition to a no-exposure-control baseline condition, the randomesque, modified-within-.10-logits, Sympson-Hetter, conditional Sympson-Hetter, a-stratified with multiple-stratification, and enhanced a-stratified with multiple-stratification procedures were implemented to control exposure rates. Two variations of the randomesque and modified-within-.10-logits procedures were examined, which varied the size of the item group from which the next item to be administered was randomly selected. The results indicate that although the conditional Sympson-Hetter provides somewhat lower maximum exposure rates, the randomesque and modified-within-.10-logits procedures with the six-item group variation have great utility for controlling overlap rates and increasing pool utilization and should be given further consideration. (PsycINFO Database Record (c) 2007 APA, all rights reserved)
Keywords: computerized adaptive testing; generalized partial credit model; item exposure.
http://apm.sagepub.com/content/28/3/165.abstract
http://mail.iacat.org/content/strategies-controlling-item-exposure-computerized-adaptive-testing-generalized-partial
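The randomesque procedure that performs well in this and the related Davis entries randomizes among the most informative candidates; a minimal sketch with an assumed info() helper, using the six-item group variation:

    import random

    def randomesque(candidates, theta, group_size=6):
        # Pick at random among the group_size most informative available items.
        top = sorted(candidates, key=lambda i: i['info'](theta),
                     reverse=True)[:group_size]
        return random.choice(top)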
Boyd, Aimee Michelle (2004). Strategies for controlling testlet exposure rates in computerized adaptive testing systems. Vol. 64, p. 5835.
Abstract: Exposure control procedures in computerized adaptive testing (CAT) systems protect item pools from being compromised; however, this affects measurement precision. Previous research indicates that exposure control procedures perform differently for dichotomously scored versus polytomously scored CAT systems. For dichotomously scored CATs, conditional selection procedures are often the optimal choice, while randomization procedures perform best for polytomously scored CATs. CAT systems modeled with testlet response theory have not been examined to determine optimal exposure control procedures. This dissertation examined various exposure control procedures in testlet-based CAT systems using the three-parameter logistic testlet response theory model and the partial credit model. The exposure control procedures were the randomesque procedure, the modified-within-.10-logits procedure, two levels of the progressive restricted procedure, and two levels of the Sympson-Hetter procedure. Each of these was compared to a baseline no-exposure-control procedure, maximum information. The testlets were reading passages with six to ten multiple-choice items. The CAT systems consisted of maximum information testlet selection contingent on an exposure control procedure and content balancing for passage type and the number of items per passage; expected a posteriori ability estimation; and a fixed-length stopping rule of seven testlets totaling fifty multiple-choice items. Measurement precision and exposure rates were examined to evaluate the effectiveness of the exposure control procedures for each measurement model. The exposure control procedures yielded similar results for measurement precision within the models. The exposure rates distinguished which exposure control procedures were most effective. The Sympson-Hetter conditions, which are conditional procedures, maintained the pre-specified maximum exposure rate but performed very poorly in terms of pool utilization. The randomization procedures, randomesque and modified within .10 logits, yielded low maximum exposure rates but used only about 70% of the testlet pool. Surprisingly, the progressive restricted procedure, which combines a conditional and a randomization procedure, yielded the best results in its ability to maintain and control the maximum exposure rate, and it used the entire testlet pool. The progressive restricted conditions were the optimal procedures for both the partial credit CAT systems and the three-parameter logistic testlet response theory CAT systems. (PsycINFO Database Record (c) 2004 APA, all rights reserved)
http://mail.iacat.org/content/strategies-controlling-testlet-exposure-rates-computerized-adaptive-testing-systems

Armstrong, R. D.; Edmonds, J. (2004). A study of multiple stage adaptive test designs. San Diego, CA.
http://mail.iacat.org/content/study-multiple-stage-adaptive-test-designs

van der Linden, W. J.; Mead, Alan D. (2003). A sequential Bayes procedure for item calibration in multi-stage testing. Manuscript in preparation.
http://mail.iacat.org/content/sequential-bayes-procedure-item-calibration-multi-stage-testing

Xu, X.; Chang, Hua-Hua; Douglas, J. (2003). A simulation study to compare CAT strategies for cognitive diagnosis. Chicago, IL.
http://mail.iacat.org/content/simulation-study-compare-cat-strategies-cognitive-diagnosis

Swaminathan, H.; Hambleton, R. K.; Sireci, S. G.; Xing, D.; Rizavi, S. M. (2003). Small sample estimation in dichotomous item response models: Effect of priors based on judgmental information on the accuracy of item parameter estimates. Vol. 27, pp. 27-51.
Abstract: Large item banks with properly calibrated test items are essential for ensuring the validity of computer-based tests. At the same time, item calibrations with small samples are desirable to minimize the amount of pretesting and limit item exposure. Bayesian estimation procedures show considerable promise with small examinee samples.
The purposes of the study were (a) to examine how prior information for Bayesian item parameter estimation can be specified and (b) to investigate the relationship between sample size and the specification of prior information on the accuracy of item parameter estimates. The results of the simulation study were clear: estimation of item response theory (IRT) model item parameters can be improved considerably. Improvements in the one-parameter model were modest; considerable improvements were observed with the two- and three-parameter models. Both the study of different forms of priors and ways to improve the judgmental data used in forming the priors appear to be promising directions for future research.
http://mail.iacat.org/content/small-sample-estimation-dichotomous-item-response-models-effect-priors-based-judgmental

van der Linden, W. J. (2003). Some alternatives to Sympson-Hetter item-exposure control in computerized adaptive testing. Vol. 28, pp. 249-265.
Abstract: The Hetter and Sympson (1997, 1985) method is a method of probabilistic item-exposure control in computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time consuming, particularly if the parameters have to be set conditional on a realistic set of values for the examinees' ability parameter. Formal properties of the method are identified that help explain why this iterative process can be slow and does not guarantee admissibility. In addition, some alternatives to the SH method are introduced. The behavior of these alternatives was estimated for an adaptive test from an item pool from the Law School Admission Test (LSAT). Two of the alternatives showed attractive behavior and converged smoothly to admissibility for all items in a relatively small number of iteration steps.
Keywords: Adaptive Testing; Computer Assisted Testing; Test Items; computerized adaptive testing.
http://mail.iacat.org/content/some-alternatives-sympson-hetter-item-exposure-control-computerized-adaptive-testing

Gushta, M. M. (2003). Standard-setting issues in computerized-adaptive testing. Halifax, Nova Scotia, May 30th, 2003.
http://mail.iacat.org/content/standard-setting-issues-computerized-adaptive-testing

Feng, X. (2003). Statistical detection and estimation of differential item functioning in computerized adaptive testing. Vol. 64, p. 2736.
Abstract: Differential item functioning (DIF) is an important issue in large-scale standardized testing. DIF refers to an unexpected difference in item performance among groups of equally proficient examinees, usually classified by ethnicity or gender. Its presence could seriously affect the validity of inferences drawn from a test. Various statistical methods have been proposed to detect and estimate DIF.
This dissertation addresses DIF analysis in the context of computerized adaptive testing (CAT), whose item selection algorithm adapts to the ability level of each individual examinee. In a CAT, a DIF item may be more consequential and more detrimental because fewer items are administered in a CAT than in a traditional paper-and-pencil test and because the remaining sequence of items presented to examinees depends in part on their responses to the DIF item. Consequently, an efficient, stable, and flexible method to detect and estimate CAT DIF becomes necessary and increasingly important. We propose simultaneous implementations of online calibration and DIF testing. The idea is to perform online calibration of an item of interest separately in the focal and reference groups. Under any specific parametric IRT model, we can use the (online) estimated latent traits as covariates and fit a nonlinear regression model to each of the two groups. Because of the use of the estimated, not the true, latent traits, the regression fit has to adjust for the covariate "measurement errors". It turns out that this situation fits nicely into the framework of nonlinear errors-in-variables modeling, which has been extensively studied in the statistical literature. We develop two bias-correction methods using asymptotic expansion and conditional score theory. After correcting the bias caused by measurement error, one can perform a significance test to detect DIF with the parameter estimates for the different groups. This dissertation also discusses some general techniques to handle measurement error modeling with different IRT models, including the three-parameter normal ogive model and polytomous response models. Several methods of estimating DIF are studied as well. Large-sample properties are established to justify the proposed methods. Extensive simulation studies show that the resulting methods perform well in terms of Type I error rate control, accuracy in estimating DIF, and power against both unidirectional and crossing DIF. (PsycINFO Database Record (c) 2004 APA, all rights reserved)
http://mail.iacat.org/content/statistical-detection-and-estimation-differential-item-functioning-computerized-adaptive

Davis, L. L. (2003). Strategies for controlling item exposure in computerized adaptive testing with polytomously scored items. Vol. 64, p. 458.
Abstract: Choosing a strategy for controlling the exposure of items to examinees has become an integral part of test development for computerized adaptive testing (CAT). Item exposure can be controlled through the use of a variety of algorithms that modify the CAT item selection process. This may be done through a randomization, conditional selection, or stratification approach. The effectiveness of each procedure, as well as the degree to which measurement precision is sacrificed, has been extensively studied with dichotomously scored item pools. However, only recently have researchers begun to examine these procedures in polytomously scored item pools. The current study investigated the performance of six different exposure control mechanisms under three polytomous IRT models in terms of measurement precision, test security, and ease of implementation. The three models examined in the current study were the partial credit, generalized partial credit, and graded response models.
In addition to a no-exposure-control baseline condition, the randomesque, within-.10-logits, Sympson-Hetter, conditional Sympson-Hetter, a-stratified, and enhanced a-stratified procedures were implemented to control item exposure rates. The a-stratified and enhanced a-stratified procedures were not evaluated with the partial credit model. Two variations of the randomesque and within-.10-logits procedures were also examined, which varied the size of the item group from which the next item to be administered was randomly selected. The results of this study were remarkably similar for all three models and indicated that the randomesque and within-.10-logits procedures, when implemented with the six-item group variation, provide the best option for controlling exposure rates when impact on measurement precision and ease of implementation are considered. The three-item group variations of the procedures were, however, ineffective in controlling exposure, overlap, and pool utilization rates to desired levels. The Sympson-Hetter and conditional Sympson-Hetter procedures were difficult and time consuming to implement, and while they did control exposure rates to the target level, their performance in terms of item overlap (for the Sympson-Hetter) and pool utilization was disappointing. The a-stratified and enhanced a-stratified procedures both turned in surprisingly poor performances across all variables. (PsycINFO Database Record (c) 2004 APA, all rights reserved)
http://mail.iacat.org/content/strategies-controlling-item-exposure-computerized-adaptive-testing-polytomously-scored-items

Davis, L. L. (2003). Strategies for controlling item exposure in computerized adaptive testing with the generalized partial credit model. Chicago, IL.
http://mail.iacat.org/content/strategies-controlling-item-exposure-computerized-adaptive-testing-generalized-partial-0

Boyd, A. M. (2003). Strategies for controlling testlet exposure rates in computerized adaptive testing systems. Unpublished Ph.D. dissertation, The University of Texas at Austin.
http://mail.iacat.org/content/strategies-controlling-testlet-exposure-rates-computerized-adaptive-testing-systems-0

Heift, T.; Schulze, M. (2003). Student modeling and ab initio language learning. Vol. 31, pp. 519-535.
Abstract: Provides examples of student modeling techniques that have been employed in computer-assisted language learning over the past decade. Describes two systems for learning German, "German Tutor" and "Geroline." Shows how a student model can support computerized adaptive language testing for diagnostic purposes in a Web-based language learning environment that does not rely on parsing technology.
1 aHeift, T1 aSchulze, M uhttp://mail.iacat.org/content/student-modeling-and-ab-initio-language-learning00581nas a2200145 4500008004100000245012200041210006900163300001300232490000800245100001700253700001500270700001400285700001200299856012400311 2003 eng d00aA study of the feasibility of Internet administration of a computerized health survey: The Headache Impact Test (HIT)0 astudy of the feasibility of Internet administration of a compute a 953-9610 v 121 aBayliss, M S1 aDewey, J E1 aDunlap, I1 aet al. uhttp://mail.iacat.org/content/study-feasibility-internet-administration-computerized-health-survey-headache-impact-test00373nas a2200133 4500008004100000245003800041210003600079300001200115490000700127100001400134700001500148700001200163856006400175 2002 eng d00aSelf-adapted testing: An overview0 aSelfadapted testing An overview a107-1220 v121 aWise, S L1 aPonsoda, V1 aOlea, J uhttp://mail.iacat.org/content/self-adapted-testing-overview00511nas a2200097 4500008004100000245014600041210006900187100001500256700001500271856012700286 2002 eng d00aSome features of the estimated sampling distribution of the ability estimate in computerized adaptive testing according to two stopping rules0 aSome features of the estimated sampling distribution of the abil1 aRaîche, G1 aBlais, J G uhttp://mail.iacat.org/content/some-features-estimated-sampling-distribution-ability-estimate-computerized-adaptive-testing00532nas a2200109 4500008004100000245013600041210006900177260002000246100001500266700001400281856012700295 2002 eng d00aSome features of the sampling distribution of the ability estimate in computerized adaptive testing according to two stopping rules0 aSome features of the sampling distribution of the ability estima aNew Orleans, LA1 aBlais, J-G1 aRaiche, G uhttp://mail.iacat.org/content/some-features-sampling-distribution-ability-estimate-computerized-adaptive-testing-according00450nas a2200097 4500008004100000245007500041210006900116260003300185100003000218856010400248 2002 eng d00aSTAR Math 2 Computer-Adaptive Math Test and Database: Technical Manual0 aSTAR Math 2 ComputerAdaptive Math Test and Database Technical Ma aWisconsin Rapids, WI: Author1 aRenaissance-Learning-Inc. 
uhttp://mail.iacat.org/content/star-math-2-computer-adaptive-math-test-and-database-technical-manual00505nas a2200121 4500008004100000245009700041210006900138260001900207100001100226700001000237700001300247856012300260 2002 eng d00aStatistical indexes for monitoring item behavior under computer adaptive testing environment0 aStatistical indexes for monitoring item behavior under computer aNew Orleans LA1 aZhu, R1 aYu, F1 aLiu, S M uhttp://mail.iacat.org/content/statistical-indexes-monitoring-item-behavior-under-computer-adaptive-testing-environment00525nam a2200097 4500008004100000245010900041210006900150260006700219100001400286856012700300 2002 eng d00aStrategies for controlling item exposure in computerized adaptive testing with polytomously scored items0 aStrategies for controlling item exposure in computerized adaptiv aUnpublished doctoral dissertation, University of Texas, Austin1 aDavis, LL uhttp://mail.iacat.org/content/strategies-controlling-item-exposure-computerized-adaptive-testing-polytomously-scored-ite-000563nas a2200121 4500008004100000245009500041210006900136260008200205100001300287700001200300700001300312856011600325 2002 eng d00aA strategy for controlling item exposure in multidimensional computerized adaptive testing0 astrategy for controlling item exposure in multidimensional compu aAvailable from http://www3.stat.sinica.edu.tw/library/c_tec_rep/c-2002-11.pdf1 aLee, Y H1 aIp, E H1 aFuh, C D uhttp://mail.iacat.org/content/strategy-controlling-item-exposure-multidimensional-computerized-adaptive-testing02035nas a2200253 4500008004100000245010900041210006900150300000900219490000600228520114100234653002101375653001501396653003901411653002201450653002501472653001801497653002201515653005501537653001501592653001201607100001701619700002501636856012001661 2002 eng d00aA structure-based approach to psychological measurement: Matching measurement models to latent structure0 astructurebased approach to psychological measurement Matching me a4-160 v93 aThe present article sets forth the argument that psychological assessment should be based on a construct's latent structure. The authors differentiate dimensional (continuous) and taxonic (categorical) structures at the latent and manifest levels and describe the advantages of matching the assessment approach to the latent structure of a construct. A proper match will decrease measurement error, increase statistical power, clarify statistical relationships, and facilitate the location of an efficient cutting score when applicable. Thus, individuals will be placed along a continuum or assigned to classes more accurately. The authors briefly review the methods by which latent structure can be determined and outline a structure-based approach to assessment that builds on dimensional scaling models, such as item response theory, while incorporating classification methods as appropriate. Finally, the authors empirically demonstrate the utility of their approach and discuss its compatibility with traditional assessment methods and with computerized adaptive testing. 
10aAdaptive Testing10aAssessment10aClassification (Cognitive Process)10aComputer Assisted10aItem Response Theory10aPsychological10aScaling (Testing)10aStatistical Analysis10acomputerized adaptive testing10aTaxonomies10aTesting1 aRuscio, John1 aRuscio, Ayelet Meron uhttp://mail.iacat.org/content/structure-based-approach-psychological-measurement-matching-measurement-models-latent00532nas a2200121 4500008004100000245009200041210006900133260004600202100001300248700001700261700001600278856011600294 2001 eng d00aScoring alternatives for incomplete computerized adaptive tests (Research Report 01-20)0 aScoring alternatives for incomplete computerized adaptive tests aPrinceton NJ: Educational Testing Service1 aWay, W D1 aGawlick, L A1 aEignor, D R uhttp://mail.iacat.org/content/scoring-alternatives-incomplete-computerized-adaptive-tests-research-report-01-2000464nas a2200097 4500008004100000245008200041210006900123260003300192100003000225856011100255 2001 eng d00aSTAR Early Literacy Computer-Adaptive Diagnostic Assessment: Technical Manual0 aSTAR Early Literacy ComputerAdaptive Diagnostic Assessment Techn aWisconsin Rapids, WI: Author1 aRenaissance-Learning-Inc. uhttp://mail.iacat.org/content/star-early-literacy-computer-adaptive-diagnostic-assessment-technical-manual00318nas a2200097 4500008004100000245004500041210004100086260001500127100001600142856006200158 2001 eng d00aA system for on-the-fly adaptive testing0 asystem for onthefly adaptive testing aSeattle WA1 aWagner, M E uhttp://mail.iacat.org/content/system-fly-adaptive-testing00595nas a2200133 4500008004100000245013300041210006900174260003400243100000900277700001600286700001600302700001700318856012600335 2000 eng d00aA selection procedure for polytomous items in computerized adaptive testing (Measurement and Research Department Reports 2000-5)0 aselection procedure for polytomous items in computerized adaptiv aArnhem, The Netherlands: Cito1 aRijn1 aEggen, Theo1 aHemker, B T1 aSanders, P F uhttp://mail.iacat.org/content/selection-procedure-polytomous-items-computerized-adaptive-testing-measurement-and-research00497nas a2200121 4500008004100000245008600041210006900127260002100196100001500217700001900232700001300251856011100264 2000 eng d00aSolving complex constraints in a-stratified computerized adaptive testing designs0 aSolving complex constraints in astratified computerized adaptive aNew Orleans, USA1 aLeung, C-K1 aChang, Hua-Hua1 aHau, K-T uhttp://mail.iacat.org/content/solving-complex-constraints-stratified-computerized-adaptive-testing-designs00507nas a2200097 4500008004100000245014600041210006900187260001900256100001600275856011800291 2000 eng d00aSome considerations for improving accuracy of estimation of item characteristic curves in online calibration of computerized adaptive testing0 aSome considerations for improving accuracy of estimation of item aNew Orleans LA1 aSamejima, F uhttp://mail.iacat.org/content/some-considerations-improving-accuracy-estimation-item-characteristic-curves-online00400nas a2200109 4500008004100000245006100041210006100102260001600163100001300179700001100192856008700203 2000 eng d00aSpecific information item selection for adaptive testing0 aSpecific information item selection for adaptive testing aNew Orleans1 aDavey, T1 aFan, M uhttp://mail.iacat.org/content/specific-information-item-selection-adaptive-testing00462nas a2200097 4500008004100000245008100041210006900122260003300191100003000224856011000254 2000 eng d00aSTAR Reading 2 Computer-Adaptive 
Reading Test and Database: Technical Manual0 aSTAR Reading 2 ComputerAdaptive Reading Test and Database Techni aWisconsin Rapids, WI: Author1 aRenaissance-Learning-Inc. uhttp://mail.iacat.org/content/star-reading-2-computer-adaptive-reading-test-and-database-technical-manual00561nas a2200121 4500008004100000245013100041210006900172260001900241100001800260700001700278700001700295856012700312 2000 eng d00aSufficient simplicity or comprehensive complexity? A comparison of probabilistic and stratification methods of exposure control0 aSufficient simplicity or comprehensive complexity A comparison o aNew Orleans LA1 aParshall, C G1 aKromrey, J D1 aHogarty, K Y uhttp://mail.iacat.org/content/sufficient-simplicity-or-comprehensive-complexity-comparison-probabilitic-and-stratification00355nas a2200085 4500008004100000245006300041210006300104100001200167856009000179 1999 eng d00aSome relationship among issues in CAT item pool management0 aSome relationship among issues in CAT item pool management1 aWang, T uhttp://mail.iacat.org/content/some-relationship-among-issues-cat-item-pool-management01183nas a2200133 4500008004100000245006300041210006300104300001100167490000700178520073600185100002000921700001900941856008900960 1999 eng d00aSome reliability estimates for computerized adaptive tests0 aSome reliability estimates for computerized adaptive tests a239-470 v233 aThree reliability estimates are derived for the Bayes modal estimate (BME) and the maximum likelihood estimate (MLE) of θ in computerized adaptive tests (CAT). Each reliability estimate is a function of test information. Two of the estimates are shown to be upper bounds to true reliability. The three reliability estimates and the true reliabilities of both MLE and BME were computed for seven simulated CATs. Results showed that the true reliabilities for MLE and BME were nearly identical in all seven tests. The three reliability estimates never differed from the true reliabilities by more than .02 (.01 in most cases). A simple implementation of one reliability estimate was found to accurately estimate reliability in CATs. 1 aNicewander, W A1 aThomasson, G L uhttp://mail.iacat.org/content/some-reliability-estimates-computerized-adaptive-tests00404nas a2200097 4500008004100000245006700041210006700108260002100175100001900196856009100215 1999 eng d00aStandard errors of proficiency estimates in stratum scored CAT0 aStandard errors of proficiency estimates in stratum scored CAT aMontreal, Canada1 aKingsbury, G G uhttp://mail.iacat.org/content/standard-errors-proficiency-estimates-stratum-scored-cat00489nas a2200121 4500008004100000245008200041210006900123260002100192100001700213700001900230700001500249856010300264 1999 eng d00aStudy of methods to detect aberrant response patterns in computerized testing0 aStudy of methods to detect aberrant response patterns in compute aMontreal, Canada1 aIwamoto, C K1 aNungester, R J1 aLuecht, RM uhttp://mail.iacat.org/content/study-methods-detect-aberrant-response-patterns-computerized-testing00560nas a2200109 4500008004100000245008700041210006900128260011900197100001000316700001400326856011000340 1998 eng d00aSimulating nonmodel-fitting responses in a CAT Environment (Research Report 98-10)0 aSimulating nonmodelfitting responses in a CAT Environment Resear aIowa City IA: ACT Inc. (Also presented at National Council on Measurement in Education, 1999: ERIC No. 
ED 427 042)1 aYi, Q1 aNering, L uhttp://mail.iacat.org/content/simulating-nonmodel-fitting-responses-cat-environment-research-report-98-1000655nas a2200109 4500008004100000245012200041210006900163260014400232100001600376700002700392856012600419 1998 eng d00aSimulating the null distribution of person-fit statistics for conventional and adaptive tests (Research Report 98-02)0 aSimulating the null distribution of personfit statistics for con aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aMeijer, R R1 aKrimpen-Stoop, E M L A uhttp://mail.iacat.org/content/simulating-null-distribution-person-fit-statistics-conventional-and-adaptive-tests-research01299nas a2200157 4500008004100000245007500041210006900116300001000185490000700195520076100202653003400963100001800997700001401015700001701029856009501046 1998 eng d00aSimulating the use of disclosed items in computerized adaptive testing0 aSimulating the use of disclosed items in computerized adaptive t a48-680 v353 aRegular use of questions previously made available to the public (i.e., disclosed items) may provide one way to meet the requirement for large numbers of questions in a continuous testing environment, that is, an environment in which testing is offered at test taker convenience throughout the year rather than on a few prespecified test dates. First it must be shown that such use has effects on test scores small enough to be acceptable. In this study simulations are used to explore the use of disclosed items under a worst-case scenario which assumes that disclosed items are always answered correctly. Some item pool and test designs were identified in which the use of disclosed items produces effects on test scores that may be viewed as negligible.10acomputerized adaptive testing1 aStocking, M L1 aWard, W C1 aPotenza, M T uhttp://mail.iacat.org/content/simulating-use-disclosed-items-computerized-adaptive-testing00436nas a2200085 4500008004100000245010200041210006900143100001600212856012200228 1998 eng d00aSome considerations for eliminating biases in ability estimation in computerized adaptive testing0 aSome considerations for eliminating biases in ability estimation1 aSamejima, F uhttp://mail.iacat.org/content/some-considerations-eliminating-biases-ability-estimation-computerized-adaptive-testing00494nas a2200097 4500008004100000245013400041210006900175260001500244100001500259856012200274 1998 eng d00aSome item response theory to provide scale scores based on linear combinations of testlet scores, for computerized adaptive tests0 aSome item response theory to provide scale scores based on linea aUrbana, IL1 aThissen, D uhttp://mail.iacat.org/content/some-item-response-theory-provide-scale-scores-based-linear-combinations-testlet-scores00456nas a2200121 4500008004100000245007200041210006900113300001200182490000700194100001500201700001900216856009900235 1998 eng d00aSome practical examples of computerized adaptive sequential testing0 aSome practical examples of computerized adaptive sequential test a229-2490 v351 aLuecht, RM1 aNungester, R J uhttp://mail.iacat.org/content/some-practical-examples-computerized-adaptive-sequential-testing00423nas a2200109 4500008004100000245006400041210006400105260001500169100002000184700001900204856009000223 1998 eng d00aSome reliability estimators for computerized adaptive tests0 aSome reliability estimators for computerized adaptive tests aUrbana, IL1 aNicewander, W A1 aThomasson, G L 
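The two Nicewander-Thomasson entries above derive reliability estimates for CAT as functions of test information, using the fact that the error variance of a maximum-likelihood trait estimate is approximately 1/I(θ). One standard estimate in this family is the marginal reliability, the ratio of true-score variance to total variance; the sketch below illustrates that generic idea and is not necessarily one of the paper's three estimators.

    import statistics

    def marginal_reliability(theta_hats, standard_errors):
        # theta_hats: trait estimates for a sample of simulated examinees;
        # standard_errors: their SEs, with SE = 1 / sqrt(I(theta_hat)).
        total_var = statistics.variance(theta_hats)
        error_var = statistics.mean(se * se for se in standard_errors)
        true_var = max(total_var - error_var, 0.0)
        return true_var / (true_var + error_var)

Because a CAT usually stops at a fixed standard error, the error variance is nearly constant across examinees, which is one reason information-based estimates can track true reliability as closely as the abstract reports.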
uhttp://mail.iacat.org/content/some-reliability-estimators-computerized-adaptive-tests00651nas a2200121 4500008004100000245009700041210006900138260014500207100001600352700001600368700002700384856011800411 1998 eng d00aStatistical tests for person misfit in computerized adaptive testing (Research Report 98-01)0 aStatistical tests for person misfit in computerized adaptive tes aEnschede, The Netherlands: University of Twente, Faculty of Educational Science and Technology, Department of Measurement and Data Analysis1 aGlas, C A W1 aMeijer, R R1 aKrimpen-Stoop, E M L A uhttp://mail.iacat.org/content/statistical-tests-person-misfit-computerized-adaptive-testing-research-report-98-0100597nas a2200145 4500008004100000020001000041245007300051210006900124260010000193300000700293100001600300700001600316700002300332856009600355 1998 eng d a98-0100aStatistical tests for person misfit in computerized adaptive testing0 aStatistical tests for person misfit in computerized adaptive tes aEnschede, The NetherlandsbFaculty of Educational Science and Technology, University of Twente a281 aGlas, C A W1 aMeijer, R R1 aKrimpen-Stoop, E M uhttp://mail.iacat.org/content/statistical-tests-person-misfit-computerized-adaptive-testing00493nas a2200109 4500008004100000245010500041210006900146300001200215490000700227100002300234856012600257 1998 eng d00aStochastic order in dichotomous item response models for fixed, adaptive, and multidimensional tests0 aStochastic order in dichotomous item response models for fixed a a211-2260 v631 avan der Linden, WJ uhttp://mail.iacat.org/content/stochastic-order-dichotomous-item-response-models-fixed-adaptive-and-multidimensional-tests00524nas a2200121 4500008004100000245012000041210006900161300001200230490000600242100001600248700001700264856012100281 1998 eng d00aSwedish Enlistment Battery: Construct validity and latent variable estimation of cognitive abilities by the CAT-SEB0 aSwedish Enlistment Battery Construct validity and latent variabl a107-1140 v61 aMardberg, B1 aCarlstedt, B uhttp://mail.iacat.org/content/swedish-enlistment-battery-construct-validity-and-latent-variable-estimation-cognitive01523nas a2200121 4500008004100000245008800041210006900129300001100198490001000209520105600219100001501275856011101290 1997 eng d00aSelf-adapted testing: Improving performance by modifying tests instead of examinees0 aSelfadapted testing Improving performance by modifying tests ins a83-1040 v10(1)3 aThis paper describes self-adapted testing and some of the evidence concerning its effects, presents possible theoretical explanations for those effects, and discusses some of the practical concerns regarding self-adapted testing. Self-adapted testing is a variant of computerized adaptive testing in which the examinee makes dynamic choices about the difficulty of the items he or she attempts. Self-adapted testing generates scores that are, in contrast to computerized adaptive tests and fixed-item tests, uncorrelated with a measure of trait test anxiety. This lack of correlation with an irrelevant attribute of the examinee is evidence of an improvement in the construct validity of the scores. This improvement comes at the cost of a decrease in testing efficiency. The interaction between test anxiety and test administration mode is more consistent with an interference theory of test anxiety than a deficit theory. 
Some of the practical concerns regarding self-adapted testing can be ruled out logically, but others await empirical investigation.1 aRocklin, T uhttp://mail.iacat.org/content/self-adapted-testing-improving-performance-modifying-tests-instead-examinees00543nas a2200121 4500008004100000245009900041210006900140260004600209100001800255700001400273700001700287856011700304 1997 eng d00aSimulating the use of disclosed items in computerized adaptive testing (Research Report 97-10)0 aSimulating the use of disclosed items in computerized adaptive t aPrinceton NJ: Educational Testing Service1 aStocking, M L1 aWard, W C1 aPotenza, M T uhttp://mail.iacat.org/content/simulating-use-disclosed-items-computerized-adaptive-testing-research-report-97-1000385nas a2200121 4500008004100000245004400041210004400085260001800129100001400147700001800161700001300179856007100192 1997 eng d00aSimulation of realistic ability vectors0 aSimulation of realistic ability vectors aGatlinburg TN1 aNering, M1 aThompson, T D1 aDavey, T uhttp://mail.iacat.org/content/simulation-realistic-ability-vectors00548nas a2200121 4500008004100000245013100041210006900172260001500241100001400256700001500270700001700285856012400302 1997 eng d00aA simulation study of the use of the Mantel-Haenszel and logistic regression procedures for assessing DIF in a CAT environment0 asimulation study of the use of the MantelHaenszel and logistic r aChicago IL1 aRoss, L P1 aNandakumar1 aClauser, B E uhttp://mail.iacat.org/content/simulation-study-use-mantel-haenszel-and-logistic-regression-procedures-assessing-dif-cat00415nas a2200121 4500008004100000245005800041210005800099300001200157490000700169100001300176700001800189856008600207 1997 eng d00aSome new item selection criteria for adaptive testing0 aSome new item selection criteria for adaptive testing a203-2260 v221 aVeerkamp1 aBerger, M P F uhttp://mail.iacat.org/content/some-new-item-selection-criteria-adaptive-testing-000413nas a2200121 4500008004100000245005800041210005800099300001200157490000700169100001100176700002000187856008400207 1997 eng d00aSome new item selection criteria for adaptive testing0 aSome new item selection criteria for adaptive testing a203-2260 v221 aBerger1 aVeerkamp, W J J uhttp://mail.iacat.org/content/some-new-item-selection-criteria-adaptive-testing00465nas a2200097 4500008004100000245010700041210006900148260001500217100001900232856011600251 1997 eng d00aSome questions that must be addressed to develop and maintain an item pool for use in an adaptive test0 aSome questions that must be addressed to develop and maintain an aChicago IL1 aKingsbury, G G uhttp://mail.iacat.org/content/some-questions-must-be-addressed-develop-and-maintain-item-pool-use-adaptive-test00446nam a2200097 4500008004100000245005800041210005800099260008700157100002000244856008400264 1997 eng d00aStatistical methods for computerized adaptive testing0 aStatistical methods for computerized adaptive testing aUnpublished doctoral dissertation, University of Twente, Enschede, The Netherlands1 aVeerkamp, W J J uhttp://mail.iacat.org/content/statistical-methods-computerized-adaptive-testing00510nas a2200109 4500008004100000245012400041210006900165260001300234100001300247700001300260856012700273 1996 eng d00aA search procedure to determine sets of decision points when using testlet-based Bayesian sequential testing procedures0 asearch procedure to determine sets of decision points when using aNew York1 aSmith, R1 aLewis, C 
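The two Veerkamp-Berger entries above generalize the classic maximum-information rule, which administers the unused item with the largest Fisher information at the current trait estimate; their criteria replace the point evaluation with interval- and likelihood-weighted variants to hedge against early-test estimation error. A minimal sketch of the baseline rule for the two-parameter logistic model, with illustrative names:

    import math

    def info_2pl(theta, a, b):
        # Fisher information of a 2PL item at ability theta:
        # I(theta) = a^2 * P(theta) * (1 - P(theta)).
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        return a * a * p * (1.0 - p)

    def select_max_info(theta_hat, pool, administered):
        # pool: item_id -> (a, b); administered: set of used item_ids.
        available = ((i, ab) for i, ab in pool.items()
                     if i not in administered)
        return max(available, key=lambda x: info_2pl(theta_hat, *x[1]))[0]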
uhttp://mail.iacat.org/content/search-procedure-determine-sets-decision-points-when-using-testlet-based-bayesian-sequential00513nas a2200109 4500008004100000245009000041210006900131260005400200100001500254700001900269856011500288 1996 eng d00aSome practical examples of computerized adaptive sequential testing (Internal Report)0 aSome practical examples of computerized adaptive sequential test aPhiladelphia: National Board of Medical Examiners1 aLuecht, RM1 aNungester, R J uhttp://mail.iacat.org/content/some-practical-examples-computerized-adaptive-sequential-testing-internal-report00433nas a2200121 4500008004100000245006500041210006500106260001400171100001300185700001200198700001300210856008800223 1996 eng d00aStrategies for managing item pools to maximize item security0 aStrategies for managing item pools to maximize item security aSan Diego1 aWay, W D1 aZara, A1 aLeahy, J uhttp://mail.iacat.org/content/strategies-managing-item-pools-maximize-item-security00370nas a2200085 4500008004100000245006700041210006700108100001800175856009100193 1995 eng d00aShortfall of questions curbs use of computerized graduate exam0 aShortfall of questions curbs use of computerized graduate exam1 aJacobson, R L uhttp://mail.iacat.org/content/shortfall-questions-curbs-use-computerized-graduate-exam00444nas a2200097 4500008004100000245006900041210006700110260005700177100001500234856009700249 1995 eng d00aSome alternative CAT item selection heuristics (Internal report)0 aSome alternative CAT item selection heuristics Internal report aPhiladelphia PA: National Board of Medical Examiners1 aLuecht, RM uhttp://mail.iacat.org/content/some-alternative-cat-item-selection-heuristics-internal-report00401nas a2200109 4500008004100000245005800041210005800099260001900157100001600176700001500192856008400207 1995 eng d00aSome new methods for content balancing adaptive tests0 aSome new methods for content balancing adaptive tests aMinneapolis MN1 aSegall, D O1 aDavey, T C uhttp://mail.iacat.org/content/some-new-methods-content-balancing-adaptive-tests01698nas a2200229 4500008004100000020004100041245006400082210006200146250001500208260000800223300001100231490000700242520102000249653003101269653002501300653001001325653001101335653001101346653000901357100001601366856008601382 1995 jpn d a0021-5236 (Print)0021-5236 (Linking)00aA study of psychologically optimal level of item difficulty0 astudy of psychologically optimal level of item difficulty a1995/02/01 cFeb a446-530 v653 aFor the purpose of selecting items in a test, this study presented a viewpoint on the psychologically optimal difficulty level, as well as the measurement efficiency, of items. A paper-and-pencil test (P & P) composed of hard, moderate and easy subtests was administered to 298 students at a university. A computerized adaptive test (CAT) was also administered to 79 students. The items of both tests were selected from Shiba's Word Meaning Comprehension Test, for which the estimates of the parameters of the two-parameter item response model were available. The results of P & P research showed that the psychologically optimal success level would be such that the proportion of right answers is somewhere between .75 and .85. A similar result was obtained from CAT research, where a proportion of about .8 might be desirable. Traditionally, a success rate of .5 has been recommended in adaptive testing. 
In this study, however, it was suggested that items at such a level would be too hard psychologically for many examinees.10a*Adaptation, Psychological10a*Psychological Tests10aAdult10aFemale10aHumans10aMale1 aFujimori, S uhttp://mail.iacat.org/content/study-psychologically-optimal-level-item-difficulty00447nas a2200109 4500008004100000245008200041210006900123260001900192100001700211700001500228856009400243 1994 eng d00aThe selection of test items for decision making with a computer adaptive test0 aselection of test items for decision making with a computer adap aNew Orleans LA1 aReckase, M D1 aSpray, J A uhttp://mail.iacat.org/content/selection-test-items-decision-making-computer-adaptive-test00449nas a2200109 4500008004100000245008200041210006900123260001900192100001500211700001700226856009600243 1994 eng d00aThe selection of test items for decision making with a computer adaptive test0 aselection of test items for decision making with a computer adap aNew Orleans LA1 aSpray, J A1 aReckase, M D uhttp://mail.iacat.org/content/selection-test-items-decision-making-computer-adaptive-test-000287nas a2200109 4500008004100000245002500041210002400066300000900090490000600099100001700105856005500122 1994 eng d00aSelf-adapted testing0 aSelfadapted testing a3-140 v71 aRocklin, T R uhttp://mail.iacat.org/content/self-adapted-testing00436nas a2200097 4500008004100000245006800041210006600109260005100175100002000226856009200246 1994 eng d00aA simple and fast item selection procedure for adaptive testing0 asimple and fast item selection procedure for adaptive testing a(Research Report 94-13). University of Twente.1 aVeerkamp, W J J uhttp://mail.iacat.org/content/simple-and-fast-item-selection-procedure-adaptive-testing00549nas a2200133 4500008004500000245010900045210006900154300001200223490000700235100001300242700001600255700001700271856012700288 1994 eng d00aA Simulation Study of Methods for Assessing Differential Item Functioning in Computerized Adaptive Tests0 aSimulation Study of Methods for Assessing Differential Item Func a121-1400 v181 aZwick, R1 aThayer, D T1 aWingersky, M uhttp://mail.iacat.org/content/simulation-study-methods-assessing-differential-item-functioning-computerized-adaptive-tests00518nas a2200097 4500008004100000245012400041210006900165260004600234100001300280856012700293 1994 eng d00aA simulation study of the Mantel-Haenszel procedure for detecting DIF with the NCLEX using CAT (Technical Report xx-xx)0 asimulation study of the MantelHaenszel procedure for detecting D aPrinceton NJ: Educational Testing Service1 aWay, W D uhttp://mail.iacat.org/content/simulation-study-mantel-haenszel-procedure-detecting-dif-nclex-using-cat-technical-report-xx00546nas a2200109 4500008004100000245007800041210006900119260011000188100001800298700001800316856010200334 1994 eng d00aSome new item selection criteria for adaptive testing (Research Rep 94-6)0 aSome new item selection criteria for adaptive testing Research R aEnschede, The Netherlands: University of Twente, Department of Educational Measurement and Data Analysis.1 aVeerkamp, W J1 aBerger, M P F uhttp://mail.iacat.org/content/some-new-item-selection-criteria-adaptive-testing-research-rep-94-600527nas a2200121 4500008004100000245011500041210006900156260001200225100001800237700001700255700001400272856011900286 1993 eng d00aA simulated comparison of testlets and a content balancing procedure for an adaptive certification examination0 asimulated comparison of testlets and a content balancing procedu aAtlanta1 aReshetar, R A1 aNorcini, J J1 
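The Fujimori entry above contrasts the statistically optimal success rate (about .5 under maximum-information selection) with the psychologically comfortable rate near .8 that its examinees preferred. Under the Rasch model the trade-off is explicit: hitting a target success probability p means offsetting item difficulty by ln(p/(1-p)) below the ability estimate, at an information cost of p(1-p) against the maximum of .25. A small worked sketch, illustrative only:

    import math

    def difficulty_for_success(theta, p_target):
        # Rasch: P(correct) = 1 / (1 + exp(-(theta - b))); solve for b.
        return theta - math.log(p_target / (1.0 - p_target))

    def relative_efficiency(p_target):
        # Rasch item information is P(1 - P), maximized at 0.25 when P = .5.
        return p_target * (1.0 - p_target) / 0.25

    # Targeting p = .8 means items about 1.39 logits easier than the
    # current ability estimate, at 64% of the efficiency of p = .5 items.
    print(difficulty_for_success(0.0, 0.8))   # -> -1.386...
    print(relative_efficiency(0.8))           # -> 0.64

So the comfortable .8 success level costs roughly a third of the statistical efficiency, the price the study weighs against the psychological burden of .5-targeted items.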
aShea, J A uhttp://mail.iacat.org/content/simulated-comparison-testlets-and-content-balancing-procedure-adaptive-certification00564nas a2200121 4500008004100000245014400041210006900185260001200254100001800266700001700284700001400301856012700315 1993 eng d00aA simulated comparison of two content balancing and maximum information item selection procedures for an adaptive certification examination0 asimulated comparison of two content balancing and maximum inform aAtlanta1 aReshetar, R A1 aNorcini, J J1 aShea, J A uhttp://mail.iacat.org/content/simulated-comparison-two-content-balancing-and-maximum-information-item-selection-procedures00606nas a2200121 4500008004100000245016000041210006900201260004700270100001300317700001400330700001700344856012300361 1993 eng d00aA simulation study of methods for assessing differential item functioning in computer-adaptive tests (Educational Testing Service Research Rep No RR 93-11)0 asimulation study of methods for assessing differential item func aPrinceton NJ: Educational Testing Service.1 aZwick, R1 aThayer, D1 aWingersky, M uhttp://mail.iacat.org/content/simulation-study-methods-assessing-differential-item-functioning-computer-adaptive-tests00435nas a2200097 4500008004100000245008800041210006900129260001700198100001300215856010900228 1993 eng d00aSome initial experiments with adaptive survey designs for structured questionnaires0 aSome initial experiments with adaptive survey designs for struct aCambridge MA1 aSingh, J uhttp://mail.iacat.org/content/some-initial-experiments-adaptive-survey-designs-structured-questionnaires00476nas a2200109 4500008004100000245010100041210006900142300001000211490001100221100001100232856012300243 1993 eng d00aSome practical considerations when converting a linearly administered test to an adaptive format0 aSome practical considerations when converting a linearly adminis a15-200 v12 (1)1 aWainer uhttp://mail.iacat.org/content/some-practical-considerations-when-converting-linearly-administered-test-adaptive-format00454nas a2200109 4500008004100000245007800041210006900119260001900188100002100207700001500228856010100243 1992 eng d00aScaling of two-stage adaptive test configurations for achievement testing0 aScaling of twostage adaptive test configurations for achievement aNew Orleans LA1 aHendrickson, A B1 aKolen, M J uhttp://mail.iacat.org/content/scaling-two-stage-adaptive-test-configurations-achievement-testing00522nas a2200097 4500008004100000245013200041210006900173260004600242100001100288856012500299 1992 eng d00aSome practical considerations when converting a linearly administered test to an adaptive format (Research Report 92-21 or 13?)0 aSome practical considerations when converting a linearly adminis aPrinceton NJ: Educational Testing Service1 aWainer uhttp://mail.iacat.org/content/some-practical-considerations-when-converting-linearly-administered-test-adaptive-format-000458nas a2200121 4500008004100000245006700041210006600108260002100174100001300195700001700208700001400225856009700239 1992 eng d00aStudent attitudes toward computer-adaptive test administration0 aStudent attitudes toward computeradaptive test administration aSan Francisco CA1 aBaghi, H1 aFerrara, S F1 aGabrys, R uhttp://mail.iacat.org/content/student-attitudes-toward-computer-adaptive-test-administration00470nas a2200109 4500008004100000245007800041210006900119260005300188100001700241700001300258856008900271 1991 eng d00aA simulation study of some simple approaches to the study of DIF for CATs0 asimulation study of some simple approaches to the study 
of DIF f aInternal memorandum, Educational Testing Service1 aHolland, P W1 aZwick, R uhttp://mail.iacat.org/content/simulation-study-some-simple-approaches-study-dif-cats00523nas a2200121 4500008004100000245007700041210006900118260007500187100001100262700001400273700001300287856010100300 1991 eng d00aSome empirical guidelines for building testlets (Technical Report 91-56)0 aSome empirical guidelines for building testlets Technical Report aPrinceton NJ: Educational Testing Service, Program Statistics Research1 aWainer1 aKaplan, B1 aLewis, C uhttp://mail.iacat.org/content/some-empirical-guidelines-building-testlets-technical-report-91-5600383nas a2200109 4500008003900000245006100039210006100100300001000161490000700171100001200178856008300190 1990 d00aSequential item response models with an ordered response0 aSequential item response models with an ordered response a39-550 v431 aTutz, G uhttp://mail.iacat.org/content/sequential-item-response-models-ordered-response01509nas a2200157 4500008004100000245008900041210006900130300001200199490000700211520093700218653003401155100002001189700001401209700001401223856011401237 1990 eng d00aA simulation and comparison of flexilevel and Bayesian computerized adaptive testing0 asimulation and comparison of flexilevel and Bayesian computerize a227-2390 v273 aComputerized adaptive testing (CAT) is a testing procedure that adapts an examination to an examinee's ability by administering only items of appropriate difficulty for the examinee. In this study, the authors compared Lord's flexilevel testing procedure (flexilevel CAT) with an item response theory-based CAT using Bayesian estimation of ability (Bayesian CAT). Three flexilevel CATs, which differed in test length (36, 18, and 11 items), and three Bayesian CATs were simulated; the Bayesian CATs differed from one another in the standard error of estimate (SEE) used for terminating the test (0.25, 0.10, and 0.05). Results showed that the flexilevel 36- and 18-item CATs produced ability estimates that may be considered as accurate as those of the Bayesian CAT with SEE = 0.10 and comparable to the Bayesian CAT with SEE = 0.05. The authors discuss the implications for classroom testing and for item response theory-based CAT.10acomputerized adaptive testing1 ade Ayala, R. 
J.1 aDodd, B G1 aKoch, W R uhttp://mail.iacat.org/content/simulation-and-comparison-flexilevel-and-bayesian-computerized-adaptive-testing00378nas a2200109 4500008004100000245005500041210005400096300001000150490000600160100001800166856008400184 1990 eng d00aSoftware review: MicroCAT Testing System Version 30 aSoftware review MicroCAT Testing System Version 3 a82-880 v71 aPatience, W M uhttp://mail.iacat.org/content/software-review-microcat-testing-system-version-300471nas a2200109 4500008004100000245008900041210006900130260001500199100002400214700001400238856010900252 1990 eng d00aThe stability of Rasch pencil and paper item calibrations on computer adaptive tests0 astability of Rasch pencil and paper item calibrations on compute aChicago IL1 aBergstrom, Betty, A1 aLunz, M E uhttp://mail.iacat.org/content/stability-rasch-pencil-and-paper-item-calibrations-computer-adaptive-tests00409nas a2200121 4500008004100000245005300041210005300094300001200147490001000159100002300169700001600192856007900208 1989 eng d00aSome procedures for computerized ability testing0 aSome procedures for computerized ability testing a175-1870 v13(2)1 avan der Linden, WJ1 aZwarts, M A uhttp://mail.iacat.org/content/some-procedures-computerized-ability-testing00429nas a2200097 4500008004100000245007000041210006400111260004600175100001800221856009200239 1988 eng d00aScale drift in on-line calibration (Research Report RR-88-28-ONR)0 aScale drift in online calibration Research Report RR8828ONR aPrinceton NJ: Educational Testing Service1 aStocking, M L uhttp://mail.iacat.org/content/scale-drift-line-calibration-research-report-rr-88-28-onr00428nas a2200097 4500008004100000245006900041210006400110260004900174100001800223856008900241 1988 eng d00aScale drift in on-line calibration (Tech Rep. No. ERIC ED389710)0 aScale drift in online calibration Tech Rep No ERIC ED389710 aEducational Testing Service, Princeton, N.J.1 aStocking, M L uhttp://mail.iacat.org/content/scale-drift-line-calibration-tech-rep-no-eric-ed38971000400nas a2200097 4500008004100000245006800041210006500109260001900174100001700193856009200210 1988 eng d00aSimple and effective algorithms [for] computer-adaptive testing0 aSimple and effective algorithms for computeradaptive testing aNew Orleans LA1 aLinacre, J M uhttp://mail.iacat.org/content/simple-and-effective-algorithms-computer-adaptive-testing00481nas a2200097 4500008004100000245009200041210006900133260004600202100001800248856011700266 1988 eng d00aSome considerations in maintaining adaptive test item pools (Research Report 88-33-ONR)0 aSome considerations in maintaining adaptive test item pools Rese aPrinceton NJ: Educational Testing Service1 aStocking, M L uhttp://mail.iacat.org/content/some-considerations-maintaining-adaptive-test-item-pools-research-report-88-33-onr00486nas a2200097 4500008004100000245009400041210006900135260004900204100001800253856011700271 1988 eng d00aSome considerations in maintaining adaptive test item pools (Tech Rep. No. 
ERIC ED391814)0 aSome considerations in maintaining adaptive test item pools Tech aEducational Testing Service, Princeton, N.J.1 aStocking, M L uhttp://mail.iacat.org/content/some-considerations-maintaining-adaptive-test-item-pools-tech-rep-no-eric-ed39181400494nas a2200121 4500008004100000245009300041210006900134300001200203490000700215100001200222700002100234856011700255 1987 eng d00aSelf-adapted testing: A performance-improving variation of computerized adaptive testing0 aSelfadapted testing A performance improving variation of compute a315-3190 v791 aRocklin1 aO’Donnell, A M uhttp://mail.iacat.org/content/self-adapted-testing-performance-improving-variation-computerized-adaptive-testing00463nas a2200109 4500008004500000245008500045210006900130300001200199490000700211100002400218856011100242 1986 eng d00aSome Applications of Optimization Algorithms in Test Design and Adaptive Testing0 aSome Applications of Optimization Algorithms in Test Design and a381-3890 v101 aTheunissen, T J J M uhttp://mail.iacat.org/content/some-applications-optimization-algorithms-test-design-and-adaptive-testing-000457nas a2200109 4500008004100000245008500041210006900126300001200195490000700207100002400214856010900238 1986 eng d00aSome applications of optimization algorithms in test design and adaptive testing0 aSome applications of optimization algorithms in test design and a381-3890 v101 aTheunissen, T J J M uhttp://mail.iacat.org/content/some-applications-optimization-algorithms-test-design-and-adaptive-testing00381nam a2200097 4500008004100000245005600041210005500097260003000152100001600182856008500198 1985 eng d00aSequential analysis: Tests and confidence intervals0 aSequential analysis Tests and confidence intervals aNew York: Springer-Verlag1 aSiegmund, D uhttp://mail.iacat.org/content/sequential-analysis-tests-and-confidence-intervals01655nas a2200121 4500008004100000245007900041210006900120300001200189490000700201520121400208100001401422856009701436 1985 eng d00aA structural comparison of conventional and adaptive versions of the ASVAB0 astructural comparison of conventional and adaptive versions of t a305-3220 v203 aExamined several structural models of similarity between the Armed Services Vocational Aptitude Battery (ASVAB) and a battery of computerized adaptive tests designed to measure the same aptitudes. 12 plausible models were fitted to sample data in a double cross-validation design. 1,411 US Navy recruits completed 10 ASVAB subtests. A computerized adaptive test version of the ASVAB subtests was developed on item pools of approximately 200 items each. The items were pretested using applicants from military entrance processing stations across the US, resulting in a total calibration sample size of approximately 60,000 for the computerized adaptive tests. Three of the 12 models provided reasonable summaries of the data. One model with a multiplicative structure (M. W. Browne; see record 1984-24964-001) performed quite well. This model provides an estimate of the disattenuated method correlation between conventional testing and adaptive testing. In the present data, this correlation was estimated to be 0.97 and 0.98 in the 2 halves of the data. Results support computerized adaptive tests as replacements for conventional tests. 
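The disattenuated method correlation in the Cudeck abstract above is the classical correction for attenuation: an observed cross-mode correlation divided by the geometric mean of the two reliabilities. A one-function sketch with made-up numbers, not those of the ASVAB study:

    import math

    def disattenuated_correlation(r_xy, rel_x, rel_y):
        # Estimated correlation between the latent, error-free variables:
        # r_true = r_observed / sqrt(rel_x * rel_y).
        return r_xy / math.sqrt(rel_x * rel_y)

    # An observed correlation of .85 between tests with reliabilities
    # .90 and .85 implies a latent correlation near .97.
    print(round(disattenuated_correlation(0.85, 0.90, 0.85), 2))   # -> 0.97

A value near 1.0, as reported in the abstract, indicates that the adaptive and conventional batteries rank examinees almost identically once measurement error is set aside.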
1 aCudeck, R uhttp://mail.iacat.org/content/structural-comparison-conventional-and-adaptive-versions-asvab00437nas a2200109 4500008004100000245007700041210006900118260001900187100001500206700001700221856008900238 1984 eng d00aThe selection of items for decision making with a computer adaptive test0 aselection of items for decision making with a computer adaptive aNew Orleans LA1 aSpray, J A1 aReckase, M D uhttp://mail.iacat.org/content/selection-items-decision-making-computer-adaptive-test00370nas a2200121 4500008004100000245003400041210003400075260003800109300001000147100001300157700001400170856006400184 1983 eng d00aSmall N justifies Rasch model0 aSmall N justifies Rasch model aNew York, NY. USAbAcademic Press a51-611 aLord, FM1 aBock, R D uhttp://mail.iacat.org/content/small-n-justifies-rasch-model00443nam a2200109 4500008004100000245006600041210006200107260004200169100001800211700001500229856008900244 1983 eng d00aThe stochastic modeling of elementary psychological processes0 astochastic modeling of elementary psychological processes aCambridge: Cambridge University Press1 aTownsend, J T1 aAshby, G F uhttp://mail.iacat.org/content/stochastic-modeling-elementary-psychological-processes00511nas a2200097 4500008004100000245007700041210006900118260010900187100001400296856010300310 1983 eng d00aThe stratified adaptive computerized ability test (Research Report 73-3)0 astratified adaptive computerized ability test Research Report 73 aMinneapolis: University of Minnesota, Department of Psychology, Computerized Adaptive Testing Laboratory1 aWeiss, DJ uhttp://mail.iacat.org/content/stratified-adaptive-computerized-ability-test-research-report-73-3-000324nas a2200109 4500008004100000245003700041210003700078300001200115490000600127100001800133856006300151 1982 eng d00aSequential testing for selection0 aSequential testing for selection a337-3510 v61 aWeitzman, R A uhttp://mail.iacat.org/content/sequential-testing-selection00330nas a2200109 4500008004500000245003700045210003700082300001200119490000600131100001800137856006500155 1982 eng d00aSequential Testing for Selection0 aSequential Testing for Selection a337-3510 v61 aWeitzman, R A uhttp://mail.iacat.org/content/sequential-testing-selection-000370nas a2200133 4500008003900000245003800039210003600077300001200113490000700125100001400132700001300146700001400159856006300173 1980 d00aA simple form of tailored testing0 asimple form of tailored testing a301-3030 v501 aNisbet, J1 aAdams, M1 aArthur, J uhttp://mail.iacat.org/content/simple-form-tailored-testing00595nas a2200097 4500008004100000245005900041210005900100260024100159100001700400856008000417 1980 eng d00aSome decision procedures for use with tailored testing0 aSome decision procedures for use with tailored testing aD. J. Weiss (Ed.), Proceedings of the 1979 Computerized Adaptive Testing Conference (pp. 79-100). Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program, Computerized Adaptive Testing Laboratory.1 aReckase, M D uhttp://mail.iacat.org/content/some-decision-procedures-use-tailored-testing00590nas a2200097 4500008004100000245005400041210005400095260025000149100001300399856008000412 1980 eng d00aSome how and which for practical tailored testing0 aSome how and which for practical tailored testing aL. J. T. van der Kamp, W. F. Langerak and D.N.M. de Gruijter (Eds): Psychometrics for educational debates (pp. 189-206). New York: John Wiley and Sons. 
1 aLord, FM uhttp://mail.iacat.org/content/some-how-and-which-practical-tailored-testing00592nas a2200109 4500008004100000245010700041210006900148260010300217100001800320700001700338856012700355 1980 eng d00aA successful application of latent trait theory to tailored achievement testing (Research Report 80-1)0 asuccessful application of latent trait theory to tailored achiev aUniversity of Missouri, Department of Educational Psychology, Tailored Testing Research Laboratory1 aMcKinley, R L1 aReckase, M D uhttp://mail.iacat.org/content/successful-application-latent-trait-theory-tailored-achievement-testing-research-report-80-100400nas a2200097 4500008004100000245007100041210006900112260001300181100001700194856009100211 1979 eng d00aStudent reaction to computerized adaptive testing in the classroom0 aStudent reaction to computerized adaptive testing in the classro aNew York1 aJohnson, M J uhttp://mail.iacat.org/content/student-reaction-computerized-adaptive-testing-classroom00473nas a2200121 4500008004100000245008900041210006900130300001200199490000600211100001400217700001400231856010600245 1978 eng d00aThe stratified adaptive ability test as a tool for personnel selection and placement0 astratified adaptive ability test as a tool for personnel selecti a135-1510 v81 aVale, C D1 aWeiss, DJ uhttp://mail.iacat.org/content/stratified-adaptive-ability-test-tool-personnel-selection-and-placement00404nas a2200133 4500008003900000245004900039210004700088300001200135490000700147100001300154700001500167700001400182856007400196 1978 d00aA stratified adaptive test of verbal ability0 astratified adaptive test of verbal ability a229-2380 v261 aShiba, S1 aNoguchi, H1 aHaebra, T uhttp://mail.iacat.org/content/stratified-adaptive-test-verbal-ability00415nas a2200109 4500008004100000245006800041210006800109300001200177490000600189100001700195856009300212 1977 eng d00aSome properties of a Bayesian adaptive ability testing strategy0 aSome properties of a Bayesian adaptive ability testing strategy a121-1400 v11 aMcBride, J R uhttp://mail.iacat.org/content/some-properties-bayesian-adaptive-ability-testing-strategy00417nas a2200109 4500008004100000245006800041210006800109300001200177490000600189100001700195856009500212 1977 eng d00aSome Properties of a Bayesian Adaptive Ability Testing Strategy0 aSome Properties of a Bayesian Adaptive Ability Testing Strategy a121-1400 v11 aMcBride, J R uhttp://mail.iacat.org/content/some-properties-bayesian-adaptive-ability-testing-strategy-000537nas a2200109 4500008004100000245004600041210004600087260018600133100001400319700001800333856007600351 1977 eng d00aStudent attitudes toward tailored testing0 aStudent attitudes toward tailored testing aD. J. Weiss (Ed.), Proceedings of the 1977 Computerized Adaptive Testing Conference. 
Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program.1 aKoch, W R1 aPatience, W M uhttp://mail.iacat.org/content/student-attitudes-toward-tailored-testing00466nam a2200097 4500008004100000245006900041210006800110260008000178100001700258856009300275 1976 eng d00aSimulation studies of adaptive testing: A comparative evaluation0 aSimulation studies of adaptive testing A comparative evaluation aUnpublished doctoral dissertation, University of Minnesota, Minneapolis, MN1 aMcBride, J R uhttp://mail.iacat.org/content/simulation-studies-adaptive-testing-comparative-evaluation00500nas a2200097 4500008004100000245005600041210005600097260015300153100001300306856008300319 1976 eng d00aSome likelihood functions found in tailored testing0 aSome likelihood functions found in tailored testing aC. K. Clark (Ed.), Proceedings of the First Conference on Computerized Adaptive Testing (pp. 79-81). Washington DC: U.S. Government Printing Office.1 aLord, FM uhttp://mail.iacat.org/content/some-likelihood-functions-found-tailored-testing00543nas a2200109 4500008004100000245009100041210006900132260008700201100001700288700001400305856011400319 1976 eng d00aSome properties of a Bayesian adaptive ability testing strategy (Research Report 76-1)0 aSome properties of a Bayesian adaptive ability testing strategy aMinneapolis MN: Department of Psychology, Computerized Adaptive Testing Laboratory1 aMcBride, J R1 aWeiss, DJ uhttp://mail.iacat.org/content/some-properties-bayesian-adaptive-ability-testing-strategy-research-report-76-100486nas a2200097 4500008004100000245002700041210002700068260021900095100001700314856005700331 1975 eng d00aScoring adaptive tests0 aScoring adaptive tests aD. J. Weiss (Ed.), Computerized adaptive trait measurement: Problems and Prospects (Research Report 75-5), pp. 17-25. Minneapolis MN: University of Minnesota, Department of Psychology, Psychometric Methods Program.1 aMcBride, J R uhttp://mail.iacat.org/content/scoring-adaptive-tests00377nas a2200109 4500008004100000245005600041210005600097300001000153490000600163100001600169856008200185 1975 eng d00aSequential testing for instructional classification0 aSequential testing for instructional classification a92-990 v11 aThomas, D B uhttp://mail.iacat.org/content/sequential-testing-instructional-classification00522nas a2200109 4500008004100000245007700041210006900118260009700187100001400284700001400298856010000312 1975 eng d00aA simulation study of stradaptive ability testing (Research Report 75-6)0 asimulation study of stradaptive ability testing Research Report aMinneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aVale, C D1 aWeiss, DJ uhttp://mail.iacat.org/content/simulation-study-stradaptive-ability-testing-research-report-75-600539nas a2200097 4500008004100000245004900041210004900090260021500139100001400354856007300368 1975 eng d00aStrategies of branching through an item pool0 aStrategies of branching through an item pool aD. J. Weiss (Ed.), Computerized adaptive trait measurement: Problems and Prospects (Research Report 75-5), pp. 1-16. 
Minneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program.1 aVale, C D uhttp://mail.iacat.org/content/strategies-branching-through-item-pool00544nas a2200109 4500008004100000245008800041210006900129260009700198100001400295700001400309856011100323 1975 eng d00aA study of computer-administered stradaptive ability testing (Research Report 75-4)0 astudy of computeradministered stradaptive ability testing Resear aMinneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aVale, C D1 aWeiss, DJ uhttp://mail.iacat.org/content/study-computer-administered-stradaptive-ability-testing-research-report-75-400495nas a2200109 4500008004100000245007500041210006900116260007200185100001400257700001400271856010000285 1974 eng d00aSimulation studies of two-stage ability testing (Research Report 74-4)0 aSimulation studies of twostage ability testing Research Report 7 aMinneapolis: Department of Psychology, Psychometric Methods Program1 aBetz, N E1 aWeiss, DJ uhttp://mail.iacat.org/content/simulation-studies-two-stage-ability-testing-research-report-74-400482nas a2200097 4500008004100000245007000041210006700111260009700178100001400275856009500289 1974 eng d00aStrategies of adaptive ability measurement (Research Report 74-5)0 aStrategies of adaptive ability measurement Research Report 745 aMinneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aWeiss, DJ uhttp://mail.iacat.org/content/strategies-adaptive-ability-measurement-research-report-74-500497nas a2200097 4500008004100000245007700041210006900118260009700187100001400284856010100298 1973 eng d00aThe stratified adaptive computerized ability test (Research Report 73-3)0 astratified adaptive computerized ability test Research Report 73 aMinneapolis: University of Minnesota, Department of Psychology, Psychometric Methods Program1 aWeiss, DJ uhttp://mail.iacat.org/content/stratified-adaptive-computerized-ability-test-research-report-73-300567nas a2200181 4500008004100000245005100041210004900092300001100141490000700152653000900159653004900168653004100217653000900258100001300267700001400280700001600294856007500310 1972 eng d00aSequential testing for dichotomous decisions. 0 aSequential testing for dichotomous decisions a85-95.0 v3210aCCAT10aCLASSIFICATION10aComputerized Adaptive Testing10asequential probability ratio testing10aSPRT1 aLinn, RL1 aRock, D A1 aCleary, T A uhttp://mail.iacat.org/content/sequential-testing-dichotomous-decisions00314nas a2200109 4500008004100000245003700041210003200078300001200110490000600122100001300128856006300141 1971 eng d00aThe self-scoring flexilevel test0 aselfscoring flexilevel test a147-1510 v81 aLord, FM uhttp://mail.iacat.org/content/self-scoring-flexilevel-test00355nas a2200097 4500008004100000245004700041210003900088260004600127100001300173856007100186 1970 eng d00aThe self-scoring flexilevel test (RB-7043)0 aselfscoring flexilevel test RB7043 aPrinceton NJ: Educational Testing Service1 aLord, FM uhttp://mail.iacat.org/content/self-scoring-flexilevel-test-rb-704300623nas a2200121 4500008004100000245017800041210006900219260004700288100001300335700001400348700001600362856012300378 1970 eng d00aSequential testing for dichotomous decisions. 
College Entrance Examination Board Research and Development Report (RDR 69-70, No. 3, and Educational Testing Service RB-70-31)0 aSequential testing for dichotomous decisions College Entrance Ex aPrinceton NJ: Educational Testing Service.1 aLinn, RL1 aRock, D A1 aCleary, T A uhttp://mail.iacat.org/content/sequential-testing-dichotomous-decisions-college-entrance-examination-board-research-and00423nas a2200097 4500008004100000245004200041210004200083260011900125100001300244856006800257 1970 eng d00aSome test theory for tailored testing0 aSome test theory for tailored testing aW. H. Holtzman (Ed.), Computer-assisted instruction, testing, and guidance (pp.139-183). New York: Harper and Row.1 aLord, FM uhttp://mail.iacat.org/content/some-test-theory-tailored-testing00335nas a2200097 4500008004100000245003600041210003200077260004600109100001800155856006400173 1969 eng d00aShort tailored tests (RB-69-63)0 aShort tailored tests RB6963 aPrinceton NJ: Educational Testing Service1 aStocking, M L uhttp://mail.iacat.org/content/short-tailored-tests-rb-69-6300321nas a2200121 4500008004100000245002900041210002500070300000800095490000600103100001900109700001600128856005500144 1956 eng d00aThe sequential item test0 asequential item test a4190 v21 aKrathwohl, D R1 aHuyser, R J uhttp://mail.iacat.org/content/sequential-item-test00499nas a2200109 4500008004100000245011900041210006900160300001200229490000700241100001600248856012500264 1950 eng d00aSequential analysis with more than two alternative hypotheses, and its relation to discriminant function analysis0 aSequential analysis with more than two alternative hypotheses an a137-1440 v121 aArmitage, P uhttp://mail.iacat.org/content/sequential-analysis-more-two-alternative-hypotheses-and-its-relation-discriminant-function00479nas a2200109 4500008004100000245010500041210006900146300001200215490000700227100001600234856011900250 1950 eng d00aSome empirical aspects of the sequential analysis technique as applied to an achievement examination0 aSome empirical aspects of the sequential analysis technique as a a195-2070 v181 aMoonan, W J uhttp://mail.iacat.org/content/some-empirical-aspects-sequential-analysis-technique-applied-achievement-examination
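Several of the oldest entries above (Linn, Rock, and Cleary, 1970 and 1972; Armitage, 1950) build classification testing on Wald's sequential probability ratio test: keep administering items, accumulate the log-likelihood ratio of two competing hypotheses about the examinee, and stop as soon as it crosses an error-rate-controlled bound. Below is a minimal sketch for dichotomous items, simplified so that every item has the same success probabilities p0 and p1 under the two hypotheses; IRT-based versions let these vary by item.

    import math

    def sprt(responses, p0, p1, alpha=0.05, beta=0.05):
        # Wald bounds controlling the two misclassification rates.
        upper = math.log((1.0 - beta) / alpha)   # decide for H1 (master)
        lower = math.log(beta / (1.0 - alpha))   # decide for H0 (non-master)
        llr = 0.0
        for n, u in enumerate(responses, start=1):
            llr += (u * math.log(p1 / p0)
                    + (1 - u) * math.log((1.0 - p1) / (1.0 - p0)))
            if llr >= upper:
                return "master", n
            if llr <= lower:
                return "non-master", n
        return "undecided", len(responses)

    print(sprt([1] * 9, p0=0.5, p1=0.8))   # -> ('master', 7)

The property these early papers document is the one later CATs exploited: examinees far from the cut score are classified after very few items, while borderline examinees keep testing.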