03184nas a2200157 4500008004100000245010000041210006900141260005500210520260800265653002002873653001602893653001102909100001902920700001602939856007102955 2017 eng d00aThe Implementation of Nationwide High Stakes Computerized (adaptive) Testing in the Netherlands0 aImplementation of Nationwide High Stakes Computerized adaptive T aNiigata, JapanbNiigata Seiryo Universityc08/20173 a
In this presentation the challenges of implementing (adaptive) digital testing in the Facet system in the Netherlands are discussed. The Netherlands has a long tradition of implementing adaptive testing in educational settings. Since the late 1990s, adaptive testing has been used mostly in low-stakes settings: several CATs were implemented in student monitoring systems for primary education and in the general subjects of language and arithmetic in vocational education. The only nationwide high-stakes CAT is the WISCAT-pabo, an arithmetic test for students in the first year of primary school teacher colleges. The psychometric advantages of item-level adaptive testing are obvious, for example efficiency and high measurement precision. But there are also some disadvantages, such as the impossibility of reviewing items during and after the test: the student is not in control of his own test and can, e.g., only navigate forward to the next item. This is one of the reasons that other methods of testing, such as multistage testing, with adaptivity not at the item level but at the subtest level, have become more popular in high-stakes testing.
A main challenge of computerized (adaptive) testing is the implementation of the item bank and the test workflow in a digital system. In 2014 a new nationwide digital system (Facet) was introduced in the Netherlands, with connections to the digital systems of different parties based on international standards (LTI and QTI). The first nationwide tests in the Facet system were flexible exams in Dutch and arithmetic for vocational (and secondary) education, taken as item response theory-based equated linear multiple-form tests administered during five periods a year. There are now several implementations of different methods of (multistage) adaptive testing in the same Facet system (DTT and Acet).
At this conference, other presenters from Cito will elaborate on the psychometric characteristics of these adaptive testing methods. In this contribution, the system architecture and interoperability of the Facet system will be explained. The emphasis is on the implementation and the problems to be solved in using this digital system in all phases of the (adaptive) testing process: item banking, test construction, test design, publication, test taking, analysis, and reporting to the student. An evaluation of the use of the system will be presented.
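The efficiency and measurement-precision advantages of item-level CAT mentioned in this abstract stem from one basic loop: administer the most informative item at the current ability estimate, re-estimate, and stop once the score is precise enough. A minimal sketch of that loop for a 2PL model with EAP scoring (the item parameters, the SE cutoff, and the deterministic response function are illustrative assumptions, not the configuration of WISCAT-pabo or Facet):

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct/endorsed response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def eap(responses, items, grid=None):
    """EAP estimate and posterior SD under a standard-normal prior.
    responses: list of (item_index, 0/1); items: list of (a, b)."""
    if grid is None:
        grid = [g / 10.0 for g in range(-40, 41)]  # theta grid -4 .. 4
    post = []
    for th in grid:
        w = math.exp(-0.5 * th * th)  # N(0, 1) prior kernel
        for idx, u in responses:
            p = p2pl(th, *items[idx])
            w *= p if u else (1.0 - p)
        post.append(w)
    total = sum(post)
    mean = sum(th * w for th, w in zip(grid, post)) / total
    var = sum((th - mean) ** 2 * w for th, w in zip(grid, post)) / total
    return mean, math.sqrt(var)

def run_cat(items, answer, se_stop=0.32, max_items=20):
    """Maximum-information CAT loop with an SE-based stopping rule."""
    used, responses = set(), []
    theta, se = 0.0, float("inf")
    while se > se_stop and len(responses) < max_items and len(used) < len(items):
        # pick the unused item with the largest information at theta
        nxt = max((i for i in range(len(items)) if i not in used),
                  key=lambda i: info(theta, *items[i]))
        used.add(nxt)
        responses.append((nxt, answer(nxt)))
        theta, se = eap(responses, items)
    return theta, se, len(responses)
```

Running this against a simulated examinee shows the familiar pattern behind the efficiency claim: administered items cluster around the provisional estimate, and the posterior SE shrinks with each response, so a well-targeted bank reaches a given precision in far fewer items than a fixed-length test.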
10aHigh stakes CAT10aNetherlands10aWISCAT1 avan Boxel, Mia1 aEggen, Theo uhttps://drive.google.com/open?id=1Kn1PvgioUYaOJ5pykq-_XWnwDU15rRsf00689nas a2200193 4500008004500000022001400045245012100059210006900180300001000249490000600259653003100265653002300296653002200319653003200341653002500373653001800398100001500416856006400431 2014 eng d a2165-659200aDetecting Item Preknowledge in Computerized Adaptive Testing Using Information Theory and Combinatorial Optimization0 aDetecting Item Preknowledge in Computerized Adaptive Testing Usi a37-580 v210acombinatorial optimization10ahypothesis testing10aitem preknowledge10aKullback-Leibler divergence10asimulated annealing10atest security1 aBelov, D I uhttp://www.iacat.org/jcat/index.php/jcat/article/view/36/1802777nas a2200229 4500008004100000022001400041245015400055210006900209260000900278300000800287490000700295520193000302653001802232653003702250653001102287653002702298653002302325653003702348100002002385700001902405856012302424 2012 eng d a1471-228800aComparison of two Bayesian methods to detect mode effects between paper-based and computerized adaptive assessments: a preliminary Monte Carlo study.0 aComparison of two Bayesian methods to detect mode effects betwee c2012 a1240 v123 aBACKGROUND: Computerized adaptive testing (CAT) is being applied to health outcome measures developed as paper-and-pencil (P&P) instruments. Differences in how respondents answer items administered by CAT vs. P&P can increase error in CAT-estimated measures if not identified and corrected.
METHOD: Two methods for detecting item-level mode effects are proposed using Bayesian estimation of posterior distributions of item parameters: (1) a modified robust Z (RZ) test, and (2) 95% credible intervals (CrI) for the CAT-P&P difference in item difficulty. A simulation study was conducted under the following conditions: (1) data-generating model (one- vs. two-parameter IRT model); (2) moderate vs. large DIF sizes; (3) percentage of DIF items (10% vs. 30%), and (4) mean difference in θ estimates across modes of 0 vs. 1 logits. This resulted in a total of 16 conditions with 10 generated datasets per condition.
RESULTS: Both methods evidenced good to excellent false-positive control, with RZ providing better control of false positives and CrI slightly higher power, irrespective of measurement model. False positives increased when items were very easy to endorse and when there were mode differences in mean trait level. True positives were predicted by CAT item usage, absolute item difficulty, and item discrimination. RZ outperformed CrI, due to better control of false-positive DIF.
CONCLUSIONS: Whereas false positives were well controlled, particularly for RZ, power to detect DIF was suboptimal. Research is needed to examine the robustness of these methods under varying prior assumptions concerning the distribution of item and person parameters and when data fail to conform to prior assumptions. False identification of DIF when items were very easy to endorse is a problem warranting additional investigation.
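The modified robust Z statistic in this abstract builds on a resistant standardization of the per-item difficulty differences between the two calibrations. A bare-bones sketch of the classical robust-Z flagging rule (the paper's version is Bayesian, applied to posterior distributions of item parameters, which this sketch does not reproduce; the 1.96 cutoff and the example differences are assumptions):

```python
import statistics

def robust_z(diffs):
    """Robust Z for item-difficulty drift between two calibrations
    (e.g., CAT-based vs. paper-and-pencil item difficulties).
    diffs: per-item differences such as b_CAT - b_P&P."""
    med = statistics.median(diffs)
    q1, _, q3 = statistics.quantiles(diffs, n=4)  # quartiles
    iqr = q3 - q1
    if iqr == 0:
        raise ValueError("zero IQR: differences are too concentrated")
    # 0.74 * IQR estimates the SD robustly under normality
    return [(d - med) / (0.74 * iqr) for d in diffs]

def flag_dif(diffs, cutoff=1.96):
    """Flag items whose robust Z exceeds the cutoff in absolute value."""
    return [abs(z) > cutoff for z in robust_z(diffs)]
```

For example, `flag_dif([0.0, 0.1, -0.1, 0.05, -0.05, 2.0])` flags only the last item, whose difficulty shifted by two logits between modes; median/IQR scaling keeps that outlier from masking itself.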
10aBayes Theorem10aData Interpretation, Statistical10aHumans10aMathematical Computing10aMonte Carlo Method10aOutcome Assessment (Health Care)1 aRiley, Barth, B1 aCarle, Adam, C uhttp://mail.iacat.org/content/comparison-two-bayesian-methods-detect-mode-effects-between-paper-based-and-computerized00539nas a2200121 4500008004100000245011800041210006900159653000800228653002400236653001100260100002000271856012600291 2011 eng d00aHigh-throughput Health Status Measurement using CAT in the Era of Personal Genomics: Opportunities and Challenges0 aHighthroughput Health Status Measurement using CAT in the Era of10aCAT10ahealth applications10aPROMIS1 aKrishnan, Eswar uhttp://mail.iacat.org/content/high-throughput-health-status-measurement-using-cat-era-personal-genomics-opportunities-and03050nas a2200241 4500008004100000020004100041245017000082210007100252250001500323300001000338490000700348520214300355653001402498653006602512653001102578653001302589100001402602700001202616700001402628700001502642700001702657856013402674 2010 spa d a0214-9915 (Print)0214-9915 (Linking)00aDeterioro de parámetros de los ítems en tests adaptativos informatizados: estudio con eCAT [Item parameter drift in computerized adaptive testing: Study with eCAT]0 aDeterioro de parámetros de los ítems en tests adaptativos inform a2010/04/29 a340-70 v223 aEn el presente trabajo se muestra el análisis realizado sobre un Test Adaptativo Informatizado (TAI) diseñado para la evaluación del nivel de inglés, denominado eCAT, con el objetivo de estudiar el deterioro de parámetros (parameter drift) producido desde la calibración inicial del banco de ítems. Se ha comparado la calibración original desarrollada para la puesta en servicio del TAI (N= 3224) y la calibración actual obtenida con las aplicaciones reales del TAI (N= 7254). 
Se ha analizado el Funcionamiento Diferencial de los Ítems (FDI) en función de los parámetros utilizados y se ha simulado el impacto que sobre el nivel de rasgo estimado tiene la variación en los parámetros. Los resultados muestran que se produce especialmente un deterioro de los parámetros a y c, que hay un importante número de ítems del banco para los que existe FDI y que la variación de los parámetros produce un impacto moderado en la estimación de θ de los evaluados con nivel de inglés alto. Se concluye que los parámetros de los ítems se han deteriorado y deben ser actualizados. Item parameter drift in computerized adaptive testing: Study with eCAT. This study describes the parameter drift analysis conducted on eCAT (a Computerized Adaptive Test to assess the written English level of Spanish speakers). The original calibration of the item bank (N = 3224) was compared to a new calibration obtained from the data provided by most eCAT operative administrations (N = 7254). A Differential Item Functioning (DIF) study was conducted between the original and the new calibrations. The impact that the new parameters have on the trait level estimates was obtained by simulation. Results show that parameter drift is found especially for a and c parameters, an important number of bank items show DIF, and the parameter change has a moderate impact on high-level-English θ estimates. It is then recommended to replace the original estimates by the new set.
10a*Software10aEducational Measurement/*methods/*statistics & numerical data10aHumans10aLanguage1 aAbad, F J1 aOlea, J1 aAguado, D1 aPonsoda, V1 aBarrada, J R uhttp://mail.iacat.org/content/deterioro-de-par%C3%A1metros-de-los-%C3%ADtems-en-tests-adaptativos-informatizados-estudio-con-ecat03104nas a2200445 4500008004100000020004100041245012000082210006900202250001500271260001000286300001100296490000700307520175400314653003802068653002102106653001002127653000902137653002202146653002802168653003302196653001102229653001102240653000902251653001602260653001802276653001902294653003102313653003102344653001602375100001602391700001002407700001402417700001502431700001402446700001502460700001802475700002402493700001802517856012302535 2010 eng d a0161-8105 (Print)0161-8105 (Linking)00aDevelopment and validation of patient-reported outcome measures for sleep disturbance and sleep-related impairments0 aDevelopment and validation of patientreported outcome measures f a2010/06/17 cJun 1 a781-920 v333 aSTUDY OBJECTIVES: To develop an archive of self-report questions assessing sleep disturbance and sleep-related impairments (SRI), to develop item banks from this archive, and to validate and calibrate the item banks using classic validation techniques and item response theory analyses in a sample of clinical and community participants. DESIGN: Cross-sectional self-report study. SETTING: Academic medical center and participant homes. PARTICIPANTS: One thousand nine hundred ninety-three adults recruited from an Internet polling sample and 259 adults recruited from medical, psychiatric, and sleep clinics. INTERVENTIONS: None. MEASUREMENTS AND RESULTS: This study was part of PROMIS (Patient-Reported Outcomes Information System), a National Institutes of Health Roadmap initiative. Self-report item banks were developed through an iterative process of literature searches, collecting and sorting items, expert content review, qualitative patient research, and pilot testing. 
Internal consistency, convergent validity, and exploratory and confirmatory factor analysis were examined in the resulting item banks. Factor analyses identified 2 preliminary item banks, sleep disturbance and SRI. Item response theory analyses and expert content review narrowed the item banks to 27 and 16 items, respectively. Validity of the item banks was supported by moderate to high correlations with existing scales and by significant differences in sleep disturbance and SRI scores between participants with and without sleep disorders. CONCLUSIONS: The PROMIS sleep disturbance and SRI item banks have excellent measurement properties and may prove to be useful for assessing general aspects of sleep and SRI with various groups of patients and interventions.10a*Outcome Assessment (Health Care)10a*Self Disclosure10aAdult10aAged10aAged, 80 and over10aCross-Sectional Studies10aFactor Analysis, Statistical10aFemale10aHumans10aMale10aMiddle Aged10aPsychometrics10aQuestionnaires10aReproducibility of Results10aSleep Disorders/*diagnosis10aYoung Adult1 aBuysse, D J1 aYu, L1 aMoul, D E1 aGermain, A1 aStover, A1 aDodds, N E1 aJohnston, K L1 aShablesky-Cade, M A1 aPilkonis, P A uhttp://mail.iacat.org/content/development-and-validation-patient-reported-outcome-measures-sleep-disturbance-and-sleep02750nas a2200409 4500008004100000020004600041245009400087210006900181250001500250260000800265300001200273490000700285520144500292653001501737653002001752653003101772653003001803653002001833653001901853653002601872653001101898653001101909653000901920653001601929653002601945653003701971653003002008653004402038653001802082653002002100653002802120100002002148700002302168700001602191700001702207856011602224 2009 eng d a1528-8447 (Electronic)1526-5900 (Linking)00aDevelopment and preliminary testing of a computerized adaptive assessment of chronic pain0 aDevelopment and preliminary testing of a computerized adaptive a a2009/07/15 cSep a932-9430 v103 aThe aim of this article is to report the 
development and preliminary testing of a prototype computerized adaptive test of chronic pain (CHRONIC PAIN-CAT) conducted in 2 stages: (1) evaluation of various item selection and stopping rules through real data-simulated administrations of CHRONIC PAIN-CAT; (2) a feasibility study of the actual prototype CHRONIC PAIN-CAT assessment system conducted in a pilot sample. Item calibrations developed from a US general population sample (N = 782) were used to program a pain severity and impact item bank (k = 45), and real data simulations were conducted to determine a CAT stopping rule. The CHRONIC PAIN-CAT was programmed on a tablet PC using QualityMetric's Dynamic Health Assessment (DYNHA) software and administered to a clinical sample of pain sufferers (n = 100). The CAT was completed in significantly less time than the static (full item bank) assessment (P < .001). On average, 5.6 items were dynamically administered by CAT to achieve a precise score. Scores estimated from the 2 assessments were highly correlated (r = .89), and both assessments discriminated across pain severity levels (P < .001, RV = .95). Patients' evaluations of the CHRONIC PAIN-CAT were favorable. PERSPECTIVE: This report demonstrates that the CHRONIC PAIN-CAT is feasible for administration in a clinic.
The application has the potential to improve pain assessment and help clinicians manage chronic pain.10a*Computers10a*Questionnaires10aActivities of Daily Living10aAdaptation, Psychological10aChronic Disease10aCohort Studies10aDisability Evaluation10aFemale10aHumans10aMale10aMiddle Aged10aModels, Psychological10aOutcome Assessment (Health Care)10aPain Measurement/*methods10aPain, Intractable/*diagnosis/psychology10aPsychometrics10aQuality of Life10aUser-Computer Interface1 aAnatchkova, M D1 aSaris-Baglama, R N1 aKosinski, M1 aBjorner, J B uhttp://mail.iacat.org/content/development-and-preliminary-testing-computerized-adaptive-assessment-chronic-pain02882nas a2200493 4500008004100000020004100041245014100082210006900223250001500292260000800307300001100315490000700326520125100333653003001584653001001614653000901624653004601633653003301679653001101712653003101723653001101754653000901765653003301774653001601807653002401823653004601847653005501893653005501948653004602003653001902049653003102068653001402099100001602113700001502129700001302144700001402157700001502171700001702186700001502203700001702218700001502235700001302250856012502263 2009 eng d a0090-5550 (Print)0090-5550 (Linking)00aDevelopment of an item bank for the assessment of depression in persons with mental illnesses and physical diseases using Rasch analysis0 aDevelopment of an item bank for the assessment of depression in a2009/05/28 cMay a186-970 v543 aOBJECTIVE: The calibration of item banks provides the basis for computerized adaptive testing that ensures high diagnostic precision and minimizes participants' test burden. The present study aimed at developing a new item bank that allows for assessing depression in persons with mental and persons with somatic diseases. 
METHOD: The sample consisted of 161 participants treated for a depressive syndrome, and 206 participants with somatic illnesses (103 cardiologic, 103 otorhinolaryngologic; overall mean age = 44.1 years, SD =14.0; 44.7% women) to allow for validation of the item bank in both groups. Persons answered a pool of 182 depression items on a 5-point Likert scale. RESULTS: Evaluation of Rasch model fit (infit < 1.3), differential item functioning, dimensionality, local independence, item spread, item and person separation (>2.0), and reliability (>.80) resulted in a bank of 79 items with good psychometric properties. CONCLUSIONS: The bank provides items with a wide range of content coverage and may serve as a sound basis for computerized adaptive testing applications. It might also be useful for researchers who wish to develop new fixed-length scales for the assessment of depression in specific rehabilitation settings.10aAdaptation, Psychological10aAdult10aAged10aDepressive Disorder/*diagnosis/psychology10aDiagnosis, Computer-Assisted10aFemale10aHeart Diseases/*psychology10aHumans10aMale10aMental Disorders/*psychology10aMiddle Aged10aModels, Statistical10aOtorhinolaryngologic Diseases/*psychology10aPersonality Assessment/statistics & numerical data10aPersonality Inventory/*statistics & numerical data10aPsychometrics/statistics & numerical data10aQuestionnaires10aReproducibility of Results10aSick Role1 aForkmann, T1 aBoecker, M1 aNorra, C1 aEberle, N1 aKircher, T1 aSchauerte, P1 aMischke, K1 aWesthofen, M1 aGauggel, S1 aWirtz, M uhttp://mail.iacat.org/content/development-item-bank-assessment-depression-persons-mental-illnesses-and-physical-diseases02752nas a2200433 
4500008004100000020004600041245012800087210006900215250001500284300001200299490000700311520139300318653003401711653001501745653001001760653000901770653002201779653002501801653001101826653001101837653000901848653001601857653001501873653003801888653001901926653003101945653002801976653004802004653002202052100002002074700001202094700001402106700001602120700001402136700001702150700001502167700001502182856012102197 2009 eng d a1878-5921 (Electronic)0895-4356 (Linking)00aAn evaluation of patient-reported outcomes found computerized adaptive testing was efficient in assessing stress perception0 aevaluation of patientreported outcomes found computerized adapti a2008/07/22 a278-2870 v623 aOBJECTIVES: This study aimed to develop and evaluate a first computerized adaptive test (CAT) for the measurement of stress perception (Stress-CAT), in terms of the two dimensions: exposure to stress and stress reaction. STUDY DESIGN AND SETTING: Item response theory modeling was performed using a two-parameter model (Generalized Partial Credit Model). The evaluation of the Stress-CAT comprised a simulation study and real clinical application. A total of 1,092 psychosomatic patients (N1) were studied. Two hundred simulees (N2) were generated for a simulated response data set. Then the Stress-CAT was given to n=116 inpatients, (N3) together with established stress questionnaires as validity criteria. RESULTS: The final banks included n=38 stress exposure items and n=31 stress reaction items. In the first simulation study, CAT scores could be estimated with a high measurement precision (SE<0.32; rho>0.90) using 7.0+/-2.3 (M+/-SD) stress reaction items and 11.6+/-1.7 stress exposure items. The second simulation study reanalyzed real patients data (N1) and showed an average use of items of 5.6+/-2.1 for the dimension stress reaction and 10.0+/-4.9 for the dimension stress exposure. Convergent validity showed significantly high correlations. 
CONCLUSIONS: The Stress-CAT is short and precise, potentially lowering the response burden of patients in clinical decision making.10a*Diagnosis, Computer-Assisted10aAdolescent10aAdult10aAged10aAged, 80 and over10aConfidence Intervals10aFemale10aHumans10aMale10aMiddle Aged10aPerception10aQuality of Health Care/*standards10aQuestionnaires10aReproducibility of Results10aSickness Impact Profile10aStress, Psychological/*diagnosis/psychology10aTreatment Outcome1 aKocalevent, R D1 aRose, M1 aBecker, J1 aWalter, O B1 aFliege, H1 aBjorner, J B1 aKleiber, D1 aKlapp, B F uhttp://mail.iacat.org/content/evaluation-patient-reported-outcomes-found-computerized-adaptive-testing-was-efficient01387nas a2200241 4500008004100000020002700041245005000068210005000118250001500168300001000183490000600193520067500199653002600874653001100900653004200911653002400953653001800977653002000995653001901015100001501034700001601049856008001065 2009 eng d a1548-5951 (Electronic)00aItem response theory and clinical measurement0 aItem response theory and clinical measurement a2008/11/04 a27-480 v53 aIn this review, we examine studies that use item response theory (IRT) to explore the psychometric properties of clinical measures. Next, we consider how IRT has been used in clinical research for: scale linking, computerized adaptive testing, and differential item functioning analysis. Finally, we consider the scale properties of IRT trait scores. We conclude that there are notable differences between cognitive and clinical measures that have relevance for IRT modeling. 
Future research should be directed toward a better understanding of the metric of the latent trait and the psychological processes that lead to individual differences in item response behaviors.10a*Psychological Theory10aHumans10aMental Disorders/diagnosis/psychology10aPsychological Tests10aPsychometrics10aQuality of Life10aQuestionnaires1 aReise, S P1 aWaller, N G uhttp://mail.iacat.org/content/item-response-theory-and-clinical-measurement01655nas a2200289 4500008004100000020004100041245011100082210006900193250001500262260000800277300001100285490000700296520053700303653004800840653006200888653005700950653001101007653002701018653002401045653005101069653004701120653003101167653001301198100001301211700001901224856012201243 2009 eng d a0007-1102 (Print)0007-1102 (Linking)00aThe maximum priority index method for severely constrained item selection in computerized adaptive testing0 amaximum priority index method for severely constrained item sele a2008/06/07 cMay a369-830 v623 aThis paper introduces a new heuristic approach, the maximum priority index (MPI) method, for severely constrained item selection in computerized adaptive testing. Our simulation study shows that it is able to accommodate various non-statistical constraints simultaneously, such as content balancing, exposure control, answer key balancing, and so on. 
Compared with the weighted deviation modelling method, it leads to fewer constraint violations and better exposure control while maintaining the same level of measurement precision.10aAptitude Tests/*statistics & numerical data10aDiagnosis, Computer-Assisted/*statistics & numerical data10aEducational Measurement/*statistics & numerical data10aHumans10aMathematical Computing10aModels, Statistical10aPersonality Tests/*statistics & numerical data10aPsychometrics/*statistics & numerical data10aReproducibility of Results10aSoftware1 aCheng, Y1 aChang, Hua-Hua uhttp://mail.iacat.org/content/maximum-priority-index-method-severely-constrained-item-selection-computerized-adaptive03148nas a2200457 4500008004100000020004100041245015500082210006900237250001500306260000800321300001200329490000700341520174100348653002502089653001902114653002502133653003002158653001502188653003602203653001002239653002102249653003302270653001102303653001102314653000902325653001802334653001702352653001902369653001602388100001502404700001002419700001502429700002502444700001802469700001702487700001602504700001602520700001402536700001602550856012402566 2009 eng d a0962-9343 (Print)0962-9343 (Linking)00aMeasuring global physical health in children with cerebral palsy: Illustration of a multidimensional bi-factor model and computerized adaptive testing0 aMeasuring global physical health in children with cerebral palsy a2009/02/18 cApr a359-3700 v183 aPURPOSE: The purposes of this study were to apply a bi-factor model for the determination of test dimensionality and a multidimensional CAT using computer simulations of real data for the assessment of a new global physical health measure for children with cerebral palsy (CP). METHODS: Parent respondents of 306 children with cerebral palsy were recruited from four pediatric rehabilitation hospitals and outpatient clinics. 
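The maximum priority index of the preceding abstract weights an item's Fisher information by the remaining quota of every constraint the item touches, so items from exhausted content areas or over-exposed pools drop out of contention while informative, still-eligible items rise to the top. A toy sketch under assumed data structures (the constraint encoding and function names are illustrative, not from the paper):

```python
def priority_index(info_i, constraints, item_flags, weights=None):
    """Priority index for one item in MPI-style constrained selection.

    info_i      : Fisher information of the item at the current theta
    constraints : dict k -> (administered_so_far, max_allowed)
    item_flags  : dict k -> 1 if the item is relevant to constraint k
    weights     : optional dict k -> importance weight (default 1.0)

    The index multiplies information by the scaled remaining quota of
    every constraint the item touches; exhausted constraints drive it
    to zero, removing the item from contention.
    """
    pi = info_i
    for k, (used, cap) in constraints.items():
        if item_flags.get(k, 0):
            w = 1.0 if weights is None else weights.get(k, 1.0)
            f = (cap - used) / cap  # fraction of the quota remaining
            pi *= max(w * f, 0.0)
    return pi

def select_item(infos, flags, constraints):
    """Pick the item with the largest priority index."""
    return max(range(len(infos)),
               key=lambda i: priority_index(infos[i], constraints, flags[i]))
```

With a content area already at its cap, a highly informative item from that area gets index zero and a less informative item from an open area is selected instead, which is exactly how the method trades a little information for constraint satisfaction.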
We compared confirmatory factor analysis results across four models: (1) one-factor unidimensional; (2) two-factor multidimensional (MIRT); (3) bi-factor MIRT with fixed slopes; and (4) bi-factor MIRT with varied slopes. We tested whether the general and content (fatigue and pain) person score estimates could discriminate across severity and types of CP, and whether score estimates from a simulated CAT were similar to estimates based on the total item bank, and whether they correlated as expected with external measures. RESULTS: Confirmatory factor analysis suggested separate pain and fatigue sub-factors; all 37 items were retained in the analyses. From the bi-factor MIRT model with fixed slopes, the full item bank scores discriminated across levels of severity and types of CP, and compared favorably to external instruments. CAT scores based on 10- and 15-item versions accurately captured the global physical health scores. CONCLUSIONS: The bi-factor MIRT CAT application, especially the 10- and 15-item versions, yielded accurate global physical health scores that discriminated across known severity groups and types of CP, and correlated as expected with concurrent measures. 
The CATs have potential for collecting complex data on the physical health of children with CP in an efficient manner.10a*Computer Simulation10a*Health Status10a*Models, Statistical10aAdaptation, Psychological10aAdolescent10aCerebral Palsy/*physiopathology10aChild10aChild, Preschool10aFactor Analysis, Statistical10aFemale10aHumans10aMale10aMassachusetts10aPennsylvania10aQuestionnaires10aYoung Adult1 aHaley, S M1 aNi, P1 aDumas, H M1 aFragala-Pinkham, M A1 aHambleton, RK1 aMontpetit, K1 aBilodeau, N1 aGorton, G E1 aWatson, K1 aTucker, C A uhttp://mail.iacat.org/content/measuring-global-physical-health-children-cerebral-palsy-illustration-multidimensional-bi02905nas a2200289 4500008004100000020004100041245011100082210006900193250001500262260000800277300001400285490000700299520193300306653002702239653003802266653004102304653001902345653001102364653001402375653003102389100001502420700001302435700001202448700001602460700001302476856012602489 2009 eng d a0315-162X (Print)0315-162X (Linking)00aProgress in assessing physical function in arthritis: PROMIS short forms and computerized adaptive testing0 aProgress in assessing physical function in arthritis PROMIS shor a2009/09/10 cSep a2061-20660 v363 aOBJECTIVE: Assessing self-reported physical function/disability with the Health Assessment Questionnaire Disability Index (HAQ) and other instruments has become central in arthritis research. Item response theory (IRT) and computerized adaptive testing (CAT) techniques can increase reliability and statistical power. IRT-based instruments can improve measurement precision substantially over a wider range of disease severity. These modern methods were applied and the magnitude of improvement was estimated. 
METHODS: A 199-item physical function/disability item bank was developed by distilling 1865 items to 124, including Legacy Health Assessment Questionnaire (HAQ) and Physical Function-10 items, and improving precision through qualitative and quantitative evaluation in over 21,000 subjects, which included about 1500 patients with rheumatoid arthritis and osteoarthritis. Four new instruments, (A) Patient-Reported Outcomes Measurement Information (PROMIS) HAQ, which evolved from the original (Legacy) HAQ; (B) "best" PROMIS 10; (C) 20-item static (short) forms; and (D) simulated PROMIS CAT, which sequentially selected the most informative item, were compared with the HAQ. RESULTS: Online and mailed administration modes yielded similar item and domain scores. The HAQ and PROMIS HAQ 20-item scales yielded greater information content versus other scales in patients with more severe disease. The "best" PROMIS 20-item scale outperformed the other 20-item static forms over a broad range of 4 standard deviations. The 10-item simulated PROMIS CAT outperformed all other forms. CONCLUSION: Improved items and instruments yielded better information. The PROMIS HAQ is currently available and considered validated. The new PROMIS short forms, after validation, are likely to represent further improvement. 
CAT-based physical function/disability assessment offers superior performance over static forms of equal length.10a*Disability Evaluation10a*Outcome Assessment (Health Care)10aArthritis/diagnosis/*physiopathology10aHealth Surveys10aHumans10aPrognosis10aReproducibility of Results1 aFries, J F1 aCella, D1 aRose, M1 aKrishnan, E1 aBruce, B uhttp://mail.iacat.org/content/progress-assessing-physical-function-arthritis-promis-short-forms-and-computerized-adaptive02598nas a2200337 4500008004100000020004600041245012800087210006900215250001500284300000700299490000600306520149200312653003201804653002301836653002501859653003401884653001101918653001101929653000901940653002601949653003101975653002702006653001102033653001802044100001502062700001202077700001402089700001802103700001202121856012702133 2009 eng d a1477-7525 (Electronic)1477-7525 (Linking)00aReduction in patient burdens with graphical computerized adaptive testing on the ADL scale: tool development and simulation0 aReduction in patient burdens with graphical computerized adaptiv a2009/05/07 a390 v73 aBACKGROUND: The aim of this study was to verify the effectiveness and efficacy of saving time and reducing burden for patients, nurses, and even occupational therapists through computer adaptive testing (CAT). 
METHODS: Based on an item bank of the Barthel Index (BI) and the Frenchay Activities Index (FAI) for assessing comprehensive activities of daily living (ADL) function in stroke patients, we developed a Visual Basic for Applications (VBA)-Excel CAT module, and (1) investigated through simulation whether the average test length via CAT is shorter than that of the traditional all-item-answered non-adaptive testing (NAT) approach, (2) illustrated the CAT multimedia on a tablet PC showing data collection and response errors of ADL clinical functional measures in stroke patients, and (3) demonstrated quality control of scale endorsement with fit statistics to detect response errors, which are immediately reconfirmed by technicians once the patient ends the CAT assessment. RESULTS: The results show that test length could be shorter on CAT (M = 13.42 items) than on NAT (M = 23), a 41.64% saving in test length. However, average ability estimates revealed no significant differences between CAT and NAT.
CONCLUSION: This study found that mobile nursing services placed at the patient's bedside could, through the programmed VBA-Excel CAT module, reduce the burden on patients and save time compared with traditional NAT paper-and-pencil assessment.10a*Activities of Daily Living10a*Computer Graphics10a*Computer Simulation10a*Diagnosis, Computer-Assisted10aFemale10aHumans10aMale10aPoint-of-Care Systems10aReproducibility of Results10aStroke/*rehabilitation10aTaiwan10aUnited States1 aChien, T W1 aWu, H M1 aWang, W-C1 aCastillo, R V1 aChou, W uhttp://mail.iacat.org/content/reduction-patient-burdens-graphical-computerized-adaptive-testing-adl-scale-tool-development02436nas a2200385 4500008004100000020004100041245009300082210006900175250001500244260000800259300001100267490000700278520128100285653003201566653002701598653002001625653002901645653001001674653000901684653001901693653003401712653001101746653001101757653000901768653001601777653004601793100001501839700001001854700001501864700001101879700001201890700001401902700001601916856011801932 2009 eng d a0962-9343 (Print)0962-9343 (Linking)00aReplenishing a computerized adaptive test of patient-reported daily activity functioning0 aReplenishing a computerized adaptive test of patientreported dai a2009/03/17 cMay a461-710 v183 aPURPOSE: Computerized adaptive testing (CAT) item banks may need to be updated, but before new items can be added, they must be linked to the previous CAT. The purpose of this study was to evaluate 41 pretest items prior to including them into an operational CAT. METHODS: We recruited 6,882 patients with spine, lower extremity, upper extremity, and nonorthopedic impairments who received outpatient rehabilitation in one of 147 clinics across 13 states of the USA. Forty-one new Daily Activity (DA) items were administered along with the Activity Measure for Post-Acute Care Daily Activity CAT (DA-CAT-1) in five separate waves.
We compared the scoring consistency with the full item bank, test information function (TIF), person standard errors (SEs), and content range of the DA-CAT-1 to the new CAT (DA-CAT-2) with the pretest items by real data simulations. RESULTS: We retained 29 of the 41 pretest items. Scores from the DA-CAT-2 were more consistent (ICC = 0.90 versus 0.96) than DA-CAT-1 when compared with the full item bank. TIF and person SEs were improved for persons with higher levels of DA functioning, and ceiling effects were reduced from 16.1% to 6.1%. CONCLUSIONS: Item response theory and online calibration methods were valuable in improving the DA-CAT.10a*Activities of Daily Living10a*Disability Evaluation10a*Questionnaires10a*User-Computer Interface10aAdult10aAged10aCohort Studies10aComputer-Assisted Instruction10aFemale10aHumans10aMale10aMiddle Aged10aOutcome Assessment (Health Care)/*methods1 aHaley, S M1 aNi, P1 aJette, A M1 aTao, W1 aMoed, R1 aMeyers, D1 aLudlow, L H uhttp://mail.iacat.org/content/replenishing-computerized-adaptive-test-patient-reported-daily-activity-functioning02866nas a2200325 4500008004100000020002700041245007400068210006900142250001500211260000800226300001100234490000700245520188300252653003202135653003202167653002502199653002302224653004802247653001102295653001102306653000902317653001602326653001902342653002702361100001502388700001502403700001002418700001202428856010002440 2008 eng d a1537-7385 (Electronic)00aAdaptive short forms for outpatient rehabilitation outcome assessment0 aAdaptive short forms for outpatient rehabilitation outcome asses a2008/09/23 cOct a842-520 v873 aOBJECTIVE: To develop outpatient Adaptive Short Forms for the Activity Measure for Post-Acute Care item bank for use in outpatient therapy settings. DESIGN: A convenience sample of 11,809 adults with spine, lower limb, upper limb, and miscellaneous orthopedic impairments who received outpatient rehabilitation in 1 of 127 outpatient rehabilitation clinics in the United States. 
We identified optimal items for use in developing outpatient Adaptive Short Forms based on the Basic Mobility and Daily Activities domains of the Activity Measure for Post-Acute Care item bank. Patient scores were derived from the Activity Measure for Post-Acute Care computerized adaptive testing program. Items were selected for inclusion on the Adaptive Short Forms based on functional content, range of item coverage, measurement precision, item exposure rate, and data collection burden. RESULTS: Two outpatient Adaptive Short Forms were developed: (1) an 18-item Basic Mobility Adaptive Short Form and (2) a 15-item Daily Activities Adaptive Short Form, derived from the same item bank used to develop the Activity Measure for Post-Acute Care computerized adaptive testing program. Both Adaptive Short Forms achieved acceptable psychometric properties. CONCLUSIONS: In outpatient postacute care settings where computerized adaptive testing outcome applications are currently not feasible, item response theory-derived Adaptive Short Forms provide an efficient means of monitoring patients' functional outcomes. 
The development of Adaptive Short Form functional outcome instruments linked by a common, calibrated item bank has the potential to create a bridge to outcome monitoring across postacute care settings and can make the eventual transition from Adaptive Short Forms to computerized adaptive testing applications easier and more acceptable to the rehabilitation community.10a*Activities of Daily Living10a*Ambulatory Care Facilities10a*Mobility Limitation10a*Treatment Outcome10aDisabled Persons/psychology/*rehabilitation10aFemale10aHumans10aMale10aMiddle Aged10aQuestionnaires10aRehabilitation Centers1 aJette, A M1 aHaley, S M1 aNi, P1 aMoed, R uhttp://mail.iacat.org/content/adaptive-short-forms-outpatient-rehabilitation-outcome-assessment00718nas a2200229 4500008004100000020004100041245005200082210005100134250001500185260000800200300000800208490000700216653003400223653005000257653001100307653003200318653001300350100001500363700001500378700001800393856007700411 2008 eng d a1075-2730 (Print)1075-2730 (Linking)00aAre we ready for computerized adaptive testing?0 aAre we ready for computerized adaptive testing a2008/04/02 cApr a3690 v5910a*Attitude of Health Personnel10a*Diagnosis, Computer-Assisted/instrumentation10aHumans10aMental Disorders/*diagnosis10aSoftware1 aUnick, G J1 aShumway, M1 aHargreaves, W uhttp://mail.iacat.org/content/are-we-ready-computerized-adaptive-testing03437nas a2200481 4500008004100000020004600041245013800087210006900225250001500294260000800309300001200317490000700329520191400336653002702250653002302277653003102300653001502331653001602346653001002362653002102372653002402393653002302417653003802440653001102478653002202489653001102511653001102522653000902533653003702542653002102579653003102600653002602631653001702657653003202674653001602706653002802722100001602750700001502766700001002781700001502791700002502806856012402831 2008 eng d a1532-821X (Electronic)0003-9993 (Linking)00aAssessing self-care and social function using a computer adaptive 
testing version of the pediatric evaluation of disability inventory0 aAssessing selfcare and social function using a computer adaptive a2008/04/01 cApr a622-6290 v893 aOBJECTIVE: To examine score agreement, validity, precision, and response burden of a prototype computer adaptive testing (CAT) version of the self-care and social function scales of the Pediatric Evaluation of Disability Inventory compared with the full-length version of these scales. DESIGN: Computer simulation analysis of cross-sectional and longitudinal retrospective data; cross-sectional prospective study. SETTING: Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics; community-based day care, preschool, and children's homes. PARTICIPANTS: Children with disabilities (n=469) and 412 children with no disabilities (analytic sample); 38 children with disabilities and 35 children without disabilities (cross-validation sample). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Summary scores from prototype CAT applications of each scale using 15-, 10-, and 5-item stopping rules; scores from the full-length self-care and social function scales; time (in seconds) to complete assessments and respondent ratings of burden. RESULTS: Scores from both computer simulations and field administration of the prototype CATs were highly consistent with scores from full-length administration (r range, .94-.99). Using computer simulation of retrospective data, discriminant validity, and sensitivity to change of the CATs closely approximated that of the full-length scales, especially when the 15- and 10-item stopping rules were applied. In the cross-validation study the time to administer both CATs was 4 minutes, compared with over 16 minutes to complete the full-length scales. 
CONCLUSIONS: Self-care and social function score estimates from CAT administration are highly comparable with those obtained from full-length scale administration, with small losses in validity and precision and substantial decreases in administration time.10a*Disability Evaluation10a*Social Adjustment10aActivities of Daily Living10aAdolescent10aAge Factors10aChild10aChild, Preschool10aComputer Simulation10aCross-Over Studies10aDisabled Children/*rehabilitation10aFemale10aFollow-Up Studies10aHumans10aInfant10aMale10aOutcome Assessment (Health Care)10aReference Values10aReproducibility of Results10aRetrospective Studies10aRisk Factors10aSelf Care/*standards/trends10aSex Factors10aSickness Impact Profile1 aCoster, W J1 aHaley, S M1 aNi, P1 aDumas, H M1 aFragala-Pinkham, M A uhttp://mail.iacat.org/content/assessing-self-care-and-social-function-using-computer-adaptive-testing-version-pediatric02712nas a2200217 4500008004100000020004100041245010800082210006900190250001500259300001100274490000600285520189600291653003802187653002902225653005702254653001102311653001302322653002402335100001302359856012202372 2008 eng d a1529-7713 (Print)1529-7713 (Linking)00aBinary items and beyond: a simulation of computer adaptive testing using the Rasch partial credit model0 aBinary items and beyond a simulation of computer adaptive testin a2008/01/09 a81-1040 v93 aPast research on Computer Adaptive Testing (CAT) has focused almost exclusively on the use of binary items and minimizing the number of items to be administrated. To address this situation, extensive computer simulations were performed using partial credit items with two, three, four, and five response categories. Other variables manipulated include the number of available items, the number of respondents used to calibrate the items, and various manipulations of respondents' true locations. 
Three item selection strategies were used: the theoretically optimal Maximum Information method was compared to random item selection and Bayesian Maximum Falsification approaches. The Rasch partial credit model proved to be quite robust to various imperfections, and systematic distortions occurred mainly in the absence of sufficient numbers of items located near the trait or performance levels of interest. The findings further indicate that having small numbers of items is more problematic in practice than having small numbers of respondents to calibrate these items. Most importantly, increasing the number of response categories consistently improved CAT's efficiency as well as the general quality of the results. In fact, increasing the number of response categories proved to have a greater positive impact than did the choice of item selection method, as the Maximum Information approach performed only slightly better than the Maximum Falsification approach. Accordingly, issues related to the efficiency of item selection methods are far less important than is commonly suggested in the literature. However, because these findings rest on computer simulations alone, they presume that actual respondents behave according to the Rasch model. 
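For polytomous Rasch models such as the partial credit model, the Maximum Information strategy discussed above selects the unadministered item whose score variance (which equals Fisher information under the PCM) is largest at the current trait estimate. A minimal sketch, with a hypothetical five-category bank:

```python
import numpy as np

def pcm_probs(theta, taus):
    # Partial credit model: category k (k = 0..m) has log-weight
    # sum_{j<=k} (theta - tau_j), with the empty sum for k = 0.
    cums = np.concatenate(([0.0], np.cumsum(theta - np.asarray(taus, dtype=float))))
    e = np.exp(cums - cums.max())          # stabilised softmax
    return e / e.sum()

def pcm_info(theta, taus):
    # Under the Rasch PCM, item information equals the variance
    # of the item score at theta.
    p = pcm_probs(theta, taus)
    k = np.arange(len(p))
    return float(np.sum(p * k ** 2) - np.sum(p * k) ** 2)

# Hypothetical bank of 20 five-category items (four step difficulties each).
rng = np.random.default_rng(1)
bank = [np.sort(rng.uniform(-2.0, 2.0, 4)) for _ in range(20)]

def pick_max_info(theta, administered):
    # Maximum Information selection over the items not yet administered.
    avail = [i for i in range(len(bank)) if i not in administered]
    return max(avail, key=lambda i: pcm_info(theta, bank[i]))
```

A dichotomous item is the special case of a single step difficulty, for which `pcm_info` reduces to the familiar p(1-p) Rasch information.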
CAT research could thus benefit from empirical studies aimed at determining whether, and if so, how, selection strategies impact performance.10a*Data Interpretation, Statistical10a*User-Computer Interface10aEducational Measurement/*statistics & numerical data10aHumans10aIllinois10aModels, Statistical1 aLange, R uhttp://mail.iacat.org/content/binary-items-and-beyond-simulation-computer-adaptive-testing-using-rasch-partial-credit02583nas a2200241 4500008004100000020002200041245009000063210006900153250001500222260000800237300001100245490000700256520178300263653001502046653001502061653002502076653002902101653005002130653001102180100001602191700001902207856011502226 2008 eng d a1554-351X (Print)00aCombining computer adaptive testing technology with cognitively diagnostic assessment0 aCombining computer adaptive testing technology with cognitively a2008/08/14 cAug a808-210 v403 aA major advantage of computerized adaptive testing (CAT) is that it allows the test to home in on an examinee's ability level in an interactive manner. The aim of the new area of cognitive diagnosis is to provide information about specific content areas in which an examinee needs help. The goal of this study was to combine the benefit of specific feedback from cognitively diagnostic assessment with the advantages of CAT. In this study, three approaches to combining these were investigated: (1) item selection based on the traditional ability level estimate (theta), (2) item selection based on the attribute mastery feedback provided by cognitively diagnostic assessment (alpha), and (3) item selection based on both the traditional ability level estimate (theta) and the attribute mastery feedback provided by cognitively diagnostic assessment (alpha). The results from these three approaches were compared for theta estimation accuracy, attribute mastery estimation accuracy, and item exposure control. 
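The attribute-mastery feedback (alpha) referred to above comes from a cognitive diagnosis model. As an illustration only (the abstract does not specify which diagnostic model was used), a DINA-type posterior over attribute-mastery patterns can be computed by enumerating the 2^K patterns; the Q-matrix, slip, and guess values below are hypothetical:

```python
import itertools
import numpy as np

# DINA-type attribute-mastery posterior (illustrative model choice; the
# Q-matrix, slip, and guess values are hypothetical).
K = 3                                                  # number of attributes
patterns = list(itertools.product([0, 1], repeat=K))   # the 2^K mastery patterns
Q = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])                              # item-by-attribute requirements
slip, guess = 0.1, 0.2

def p_correct(alpha, q):
    # Correct with prob 1-slip if all required attributes are mastered, else guess.
    masters_all = all(a >= r for a, r in zip(alpha, q))
    return (1.0 - slip) if masters_all else guess

def posterior(responses):
    # Uniform prior over patterns; multiply in each item's likelihood.
    post = np.ones(len(patterns))
    for x, q in zip(responses, Q):
        for j, alpha in enumerate(patterns):
            p = p_correct(alpha, q)
            post[j] *= p if x else (1.0 - p)
    return post / post.sum()

post = posterior([1, 1, 1, 0])   # right on the a1/a2 items, wrong on the a3 item
map_pattern = patterns[int(np.argmax(post))]
```

The alpha-based selection strategies in the study pick the next item according to this kind of mastery estimate rather than (or in addition to) the theta estimate.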
The theta- and alpha-based condition outperformed the alpha-based condition regarding theta estimation, attribute mastery pattern estimation, and item exposure control. Both the theta-based condition and the theta- and alpha-based condition performed similarly with regard to theta estimation, attribute mastery estimation, and item exposure control, but the theta- and alpha-based condition has an additional advantage in that it uses the shadow test method, which allows the administrator to incorporate additional constraints in the item selection process, such as content balancing, item type constraints, and so forth, and also to select items on the basis of both the current theta and alpha estimates, which can be built on top of existing 3PL testing programs.10a*Cognition10a*Computers10a*Models, Statistical10a*User-Computer Interface10aDiagnosis, Computer-Assisted/*instrumentation10aHumans1 aMcGlohen, M1 aChang, Hua-Hua uhttp://mail.iacat.org/content/combining-computer-adaptive-testing-technology-cognitively-diagnostic-assessment03042nas a2200481 4500008004100000020004600041245012200087210006900209250001500278260000800293300001200301490000700313520155700320653003201877653003101909653002201940653002001962653001001982653000901992653002202001653002802023653003302051653001102084653001102095653002502106653000902131653001602140653004602156653002202202653002402224653003002248653002902278100001502307700001402322700001502336700002402351700001802375700001102393700001602404700001002420700001502430856011502445 2008 eng d a1532-821X (Electronic)0003-9993 (Linking)00aComputerized adaptive testing for follow-up after discharge from inpatient rehabilitation: II. 
Participation outcomes0 aComputerized adaptive testing for followup after discharge from a2008/01/30 cFeb a275-2830 v893 aOBJECTIVES: To measure participation outcomes with a computerized adaptive test (CAT) and compare CAT and traditional fixed-length surveys in terms of score agreement, respondent burden, discriminant validity, and responsiveness. DESIGN: Longitudinal, prospective cohort study of patients interviewed approximately 2 weeks after discharge from inpatient rehabilitation and 3 months later. SETTING: Follow-up interviews conducted in patient's home setting. PARTICIPANTS: Adults (N=94) with diagnoses of neurologic, orthopedic, or medically complex conditions. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Participation domains of mobility, domestic life, and community, social, & civic life, measured using a CAT version of the Participation Measure for Postacute Care (PM-PAC-CAT) and a 53-item fixed-length survey (PM-PAC-53). RESULTS: The PM-PAC-CAT showed substantial agreement with PM-PAC-53 scores (intraclass correlation coefficient, model 3,1, .71-.81). On average, the PM-PAC-CAT was completed in 42% of the time and with only 48% of the items as compared with the PM-PAC-53. Both formats discriminated across functional severity groups. The PM-PAC-CAT had modest reductions in sensitivity and responsiveness to patient-reported change over a 3-month interval as compared with the PM-PAC-53. 
CONCLUSIONS: Although continued evaluation is warranted, accurate estimates of participation status and responsiveness to change for group-level analyses can be obtained from CAT administrations, with a sizeable reduction in respondent burden.10a*Activities of Daily Living10a*Adaptation, Physiological10a*Computer Systems10a*Questionnaires10aAdult10aAged10aAged, 80 and over10aChi-Square Distribution10aFactor Analysis, Statistical10aFemale10aHumans10aLongitudinal Studies10aMale10aMiddle Aged10aOutcome Assessment (Health Care)/*methods10aPatient Discharge10aProspective Studies10aRehabilitation/*standards10aSubacute Care/*standards1 aHaley, S M1 aGandek, B1 aSiebens, H1 aBlack-Schaffer, R M1 aSinclair, S J1 aTao, W1 aCoster, W J1 aNi, P1 aJette, A M uhttp://mail.iacat.org/content/computerized-adaptive-testing-follow-after-discharge-inpatient-rehabilitation-ii03314nas a2200433 4500008004100000020004600041245007700087210006900164250001500233260001100248300001200259490000700271520203200278653002702310653003002337653002102367653001002388653000902398653001502407653003602422653002102458653004402479653002402523653001102547653001102558653001302569653000902582653001602591653003002607653003002637653003102667100001502698700001302713700001502726700001402741700001502755700001402770856009602784 2008 eng d a1528-1159 (Electronic)0362-2436 (Linking)00aComputerized adaptive testing in back pain: Validation of the CAT-5D-QOL0 aComputerized adaptive testing in back pain Validation of the CAT a2008/05/23 cMay 20 a1384-900 v333 aSTUDY DESIGN: We have conducted an outcome instrument validation study. OBJECTIVE: Our objective was to develop a computerized adaptive test (CAT) to measure 5 domains of health-related quality of life (HRQL) and assess its feasibility, reliability, validity, and efficiency. SUMMARY OF BACKGROUND DATA: Kopec and colleagues have recently developed item response theory based item banks for 5 domains of HRQL relevant to back pain and suitable for CAT applications. 
The domains are Daily Activities (DAILY), Walking (WALK), Handling Objects (HAND), Pain or Discomfort (PAIN), and Feelings (FEEL). METHODS: An adaptive algorithm was implemented in a web-based questionnaire administration system. The questionnaire included CAT-5D-QOL (5 scales), Modified Oswestry Disability Index (MODI), Roland-Morris Disability Questionnaire (RMDQ), SF-36 Health Survey, and standard clinical and demographic information. Participants were outpatients treated for mechanical back pain at a referral center in Vancouver, Canada. RESULTS: A total of 215 patients completed the questionnaire and 84 completed a retest. On average, patients answered 5.2 items per CAT-5D-QOL scale. Reliability ranged from 0.83 (FEEL) to 0.92 (PAIN) and was 0.92 for the MODI, RMDQ, and Physical Component Summary (PCS-36). The ceiling effect was 0.5% for PAIN compared with 2% for MODI and 5% for RMQ. The CAT-5D-QOL scales correlated as anticipated with other measures of HRQL and discriminated well according to the level of satisfaction with current symptoms, duration of the last episode, sciatica, and disability compensation. The average relative discrimination index was 0.87 for PAIN, 0.67 for DAILY and 0.62 for WALK, compared with 0.89 for MODI, 0.80 for RMDQ, and 0.59 for PCS-36. CONCLUSION: The CAT-5D-QOL is feasible, reliable, valid, and efficient in patients with back pain. 
This methodology can be recommended for use in back pain research and should improve outcome assessment, facilitate comparisons across studies, and reduce patient burden.10a*Disability Evaluation10a*Health Status Indicators10a*Quality of Life10aAdult10aAged10aAlgorithms10aBack Pain/*diagnosis/psychology10aBritish Columbia10aDiagnosis, Computer-Assisted/*standards10aFeasibility Studies10aFemale10aHumans10aInternet10aMale10aMiddle Aged10aPredictive Value of Tests10aQuestionnaires/*standards10aReproducibility of Results1 aKopec, J A1 aBadii, M1 aMcKenna, M1 aLima, V D1 aSayre, E C1 aDvorak, M uhttp://mail.iacat.org/content/computerized-adaptive-testing-back-pain-validation-cat-5d-qol01923nas a2200217 4500008004100000020004100041245009100082210006900173250001500242260000800257300001100265490000700276520119200283653004001475653002701515653001101542100001401553700001301567700001401580856011101594 2008 eng d a0007-1102 (Print)0007-1102 (Linking)00aControlling item exposure and test overlap on the fly in computerized adaptive testing0 aControlling item exposure and test overlap on the fly in compute a2007/07/26 cNov a471-920 v613 aThis paper proposes an on-line version of the Sympson and Hetter procedure with test overlap control (SHT) that can provide item exposure control at both the item and test levels on the fly without iterative simulations. The on-line procedure is similar to the SHT procedure in that exposure parameters are used for simultaneous control of item exposure rates and test overlap rate. The exposure parameters for the on-line procedure, however, are updated sequentially on the fly, rather than through iterative simulations conducted prior to operational computerized adaptive tests (CATs). Unlike the SHT procedure, the on-line version can control item exposure rate and test overlap rate without time-consuming iterative simulations even when item pools or examinee populations have been changed. 
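The sequential, on-the-fly updating of exposure parameters can be sketched as follows. This is an illustrative Sympson-Hetter-style scheme, not the paper's exact SHT update equations: each item carries an exposure parameter k that gates administration after selection, and k is re-estimated from running counts at the end of each test instead of through offline iterative simulations.

```python
import random

class OnlineExposureControl:
    """Sketch of on-the-fly Sympson-Hetter-style exposure control.
    The update rule below is illustrative, not the published SHT equations."""

    def __init__(self, n_items, r_max=0.2):
        self.r_max = r_max                   # target maximum exposure rate
        self.k = [1.0] * n_items             # exposure-control parameters
        self.selected = [0] * n_items        # times each item was selected
        self.administered = [0] * n_items    # times each item was actually given
        self.examinees = 0

    def try_administer(self, item):
        # After selection, administer the item with probability k[item].
        self.selected[item] += 1
        ok = random.random() < self.k[item]
        if ok:
            self.administered[item] += 1
        return ok

    def end_of_test(self):
        # Re-estimate k sequentially from observed rates - no offline simulations.
        self.examinees += 1
        for i in range(len(self.k)):
            rate = self.administered[i] / self.examinees
            if rate > self.r_max and self.selected[i] > 0:
                self.k[i] = max(0.05, self.r_max * self.examinees / self.selected[i])
            else:
                self.k[i] = min(1.0, self.k[i] * 1.05)  # relax back toward 1
```

Because the parameters adapt from live counts, the same object can keep controlling exposure after the item pool or examinee population changes, which is the property the abstract emphasizes.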
Moreover, the on-line procedure was found to perform better than the SHT procedure in controlling item exposure and test overlap for examinees who take tests earlier. Compared with two other on-line alternatives, this proposed on-line method provided the best all-around test security control. Thus, it would be an efficient procedure for controlling item exposure and test overlap in CATs.10a*Decision Making, Computer-Assisted10a*Models, Psychological10aHumans1 aChen, S-Y1 aLei, P W1 aLiao, W H uhttp://mail.iacat.org/content/controlling-item-exposure-and-test-overlap-fly-computerized-adaptive-testing02561nas a2200313 4500008004100000020004100041245011500082210006900197250001500266300001100281490000700292520149300299653002701792653001001819653001401829653005301843653001501896653001101911653003701922653001801959653003101977653002602008653001402034653003202048100001502080700001002095700001502105856012702120 2008 eng d a0963-8288 (Print)0963-8288 (Linking)00aEfficiency and sensitivity of multidimensional computerized adaptive testing of pediatric physical functioning0 aEfficiency and sensitivity of multidimensional computerized adap a2008/02/26 a479-840 v303 aPURPOSE: Computerized adaptive tests (CATs) have efficiency advantages over fixed-length tests of physical functioning but may lose sensitivity when administering extremely low numbers of items. Multidimensional CATs may efficiently improve sensitivity by capitalizing on correlations between functional domains. Using a series of empirical simulations, we assessed the efficiency and sensitivity of multidimensional CATs compared to a longer fixed-length test. METHOD: Parent responses to the Pediatric Evaluation of Disability Inventory before and after intervention for 239 children at a pediatric rehabilitation hospital provided the data for this retrospective study. 
Reliability, effect size, and standardized response mean were compared between full-length self-care and mobility subscales and simulated multidimensional CATs with stopping rules at 40, 30, 20, and 10 items. RESULTS: Reliability was lowest in the 10-item CAT condition for the self-care (r = 0.85) and mobility (r = 0.79) subscales; all other conditions had high reliabilities (r > 0.94). All multidimensional CAT conditions had equivalent levels of sensitivity compared to the full set condition for both domains. CONCLUSIONS: Multidimensional CATs efficiently retain the sensitivity of longer fixed-length measures even with 5 items per dimension (10-item CAT condition). Measuring physical functioning with multidimensional CATs could enhance sensitivity following intervention while minimizing response burden.10a*Disability Evaluation10aChild10aComputers10aDisabled Children/*classification/rehabilitation10aEfficiency10aHumans10aOutcome Assessment (Health Care)10aPsychometrics10aReproducibility of Results10aRetrospective Studies10aSelf Care10aSensitivity and Specificity1 aAllen, D D1 aNi, P1 aHaley, S M uhttp://mail.iacat.org/content/efficiency-and-sensitivity-multidimensional-computerized-adaptive-testing-pediatric-physical03233nas a2200397 4500008004100000020002700041245014200068210006900210250001500279260001100294300001200305490000700317520193600324653002702260653003002287653001002317653000902327653002202336653003602358653001602394653002402410653004402434653001102478653001602489653002602505653003002531653003002561653003102591100001302622700001402635700001502649700001402664700001702678700001502695856012502710 2008 eng d a1528-1159 (Electronic)00aLetting the CAT out of the bag: Comparing computer adaptive tests and an 11-item short form of the Roland-Morris Disability Questionnaire0 aLetting the CAT out of the bag Comparing computer adaptive tests a2008/05/23 cMay 20 a1378-830 v333 aSTUDY DESIGN: A post hoc simulation of a computer adaptive administration of the items of 
a modified version of the Roland-Morris Disability Questionnaire. OBJECTIVE: To evaluate the effectiveness of adaptive administration of back pain-related disability items compared with a fixed 11-item short form. SUMMARY OF BACKGROUND DATA: Short form versions of the Roland-Morris Disability Questionnaire have been developed. An alternative to paper-and-pencil short forms is to administer items adaptively so that items are presented based on a person's responses to previous items. Theoretically, this allows precise estimation of back pain disability with administration of only a few items. MATERIALS AND METHODS: Data were gathered from 2 previously conducted studies of persons with back pain. An item response theory model was used to calibrate scores based on all items, items of a paper-and-pencil short form, and several computer adaptive tests (CATs). RESULTS: Correlations between each CAT condition and scores based on a 23-item version of the Roland-Morris Disability Questionnaire ranged from 0.93 to 0.98. Compared with an 11-item short form, an 11-item CAT produced scores that were significantly more highly correlated with scores based on the 23-item scale. CATs with even fewer items also produced scores that were highly correlated with scores based on all items. For example, scores from a 5-item CAT had a correlation of 0.93 with full scale scores. Seven- and 9-item CATs correlated at 0.95 and 0.97, respectively. A CAT with a standard-error-based stopping rule produced scores that correlated at 0.95 with full scale scores. CONCLUSION: A CAT-based back pain-related disability measure may be a valuable tool for use in clinical and research contexts. 
Use of CAT for other common measures in back pain research, such as other functional scales or measures of psychological distress, may offer similar advantages.10a*Disability Evaluation10a*Health Status Indicators10aAdult10aAged10aAged, 80 and over10aBack Pain/*diagnosis/psychology10aCalibration10aComputer Simulation10aDiagnosis, Computer-Assisted/*standards10aHumans10aMiddle Aged10aModels, Psychological10aPredictive Value of Tests10aQuestionnaires/*standards10aReproducibility of Results1 aCook, KF1 aChoi, S W1 aCrane, P K1 aDeyo, R A1 aJohnson, K L1 aAmtmann, D uhttp://mail.iacat.org/content/letting-cat-out-bag-comparing-computer-adaptive-tests-and-11-item-short-form-roland-morris03429nas a2200385 4500008004100000020004100041245010600082210006900188250001500257260001200272300001000284490000700294520220300301653002702504653001502531653001002546653002102556653002402577653002802601653003802629653001102667653001102678653001102689653003902700653000902739653002402748653003102772653004002803100001802843700001502861700001302876700001702889700001402906856012302920 2008 eng d a0271-6798 (Print)0271-6798 (Linking)00aMeasuring physical functioning in children with spinal impairments with computerized adaptive testing0 aMeasuring physical functioning in children with spinal impairmen a2008/03/26 cApr-May a330-50 v283 aBACKGROUND: The purpose of this study was to assess the utility of measuring current physical functioning status of children with scoliosis and kyphosis by applying computerized adaptive testing (CAT) methods. Computerized adaptive testing uses a computer interface to administer the most optimal items based on previous responses, reducing the number of items needed to obtain a scoring estimate. METHODS: This was a prospective study of 77 subjects (0.6-19.8 years) who were seen by a spine surgeon during a routine clinic visit for progressive spine deformity. 
Using a multidimensional version of the Pediatric Evaluation of Disability Inventory CAT program (PEDI-MCAT), we evaluated content range, accuracy and efficiency, known-group validity, concurrent validity with the Pediatric Outcomes Data Collection Instrument, and test-retest reliability in a subsample (n = 16) within a 2-week interval. RESULTS: We found the PEDI-MCAT to have sufficient item coverage in both self-care and mobility content for this sample, although most patients tended to score at the higher ends of both scales. Both the accuracy of PEDI-MCAT scores as compared with a fixed format of the PEDI (r = 0.98 for both mobility and self-care) and test-retest reliability were very high [self-care: intraclass correlation (3,1) = 0.98, mobility: intraclass correlation (3,1) = 0.99]. The PEDI-MCAT took an average of 2.9 minutes for the parents to complete. The PEDI-MCAT detected expected differences between patient groups, and scores on the PEDI-MCAT correlated in expected directions with scores from the Pediatric Outcomes Data Collection Instrument domains. CONCLUSIONS: Use of the PEDI-MCAT to assess the physical functioning status, as perceived by parents of children with complex spinal impairments, seems to be feasible and achieves accurate and efficient estimates of self-care and mobility function. Additional item development will be needed at the higher functioning end of the scale to avoid ceiling effects for older children. 
LEVEL OF EVIDENCE: This is a level II prospective study designed to establish the utility of computer adaptive testing as an evaluation method in a busy pediatric spine practice.10a*Disability Evaluation10aAdolescent10aChild10aChild, Preschool10aComputer Simulation10aCross-Sectional Studies10aDisabled Children/*rehabilitation10aFemale10aHumans10aInfant10aKyphosis/*diagnosis/rehabilitation10aMale10aProspective Studies10aReproducibility of Results10aScoliosis/*diagnosis/rehabilitation1 aMulcahey, M J1 aHaley, S M1 aDuffy, T1 aPengsheng, N1 aBetz, R R uhttp://mail.iacat.org/content/measuring-physical-functioning-children-spinal-impairments-computerized-adaptive-testing02142nas a2200289 4500008004100000020004600041245007200087210006400159250001500223260001100238300000700249490000700256520118400263653002901447653003501476653002601511653002601537653001101563653006101574653001801635653004501653653001301698100001601711700001301727700001801740856009401758 2008 eng d a1553-6467 (Electronic)0002-9459 (Linking)00aThe NAPLEX: evolution, purpose, scope, and educational implications0 aNAPLEX evolution purpose scope and educational implications a2008/05/17 cApr 15 a330 v723 aSince 2004, passing the North American Pharmacist Licensure Examination (NAPLEX) has been a requirement for earning initial pharmacy licensure in all 50 United States. The creation and evolution from 1952-2005 of the particular pharmacy competency testing areas and quantities of questions are described for the former paper-and-pencil National Association of Boards of Pharmacy Licensure Examination (NABPLEX) and the current candidate-specific computer adaptive NAPLEX pharmacy licensure examinations. A 40% increase in the weighting of NAPLEX Blueprint Area 2 in May 2005, compared to that in the preceding 1997-2005 Blueprint, has implications for candidates' NAPLEX performance and associated curricular content and instruction. 
New pharmacy graduates' scores on the NAPLEX are neither intended nor validated to serve as a criterion for assessing or judging the quality or effectiveness of pharmacy curricula and instruction. The newest cycle of NAPLEX Blueprint revision, a continual process to ensure representation of nationwide contemporary practice, began in early 2008. It may take up to 2 years, including surveying several thousand national pharmacists, to complete.10a*Educational Measurement10aEducation, Pharmacy/*standards10aHistory, 20th Century10aHistory, 21st Century10aHumans10aLicensure, Pharmacy/history/*legislation & jurisprudence10aNorth America10aPharmacists/*legislation & jurisprudence10aSoftware1 aNewton, D W1 aBoyle, M1 aCatizone, C A uhttp://mail.iacat.org/content/naplex-evolution-purpose-scope-and-educational-implications01948nas a2200277 4500008004100000020004100041245007300082210006900155250001500224260000800239300001000247490000700257520099700264653001601261653002901277653004801306653006201354653001101416653002401427653004601451653003101497653001301528100001401541700001501555856010001570 2008 eng d a0007-1102 (Print)0007-1102 (Linking)00aPredicting item exposure parameters in computerized adaptive testing0 aPredicting item exposure parameters in computerized adaptive tes a2008/05/17 cMay a75-910 v613 aThe purpose of this study is to find a formula that describes the relationship between item exposure parameters and item parameters in computerized adaptive tests by using genetic programming (GP) - a biologically inspired artificial intelligence technique. Based on the formula, item exposure parameters for new parallel item pools can be predicted without conducting additional iterative simulations. Results show that an interesting formula between item exposure parameters and item parameters in a pool can be found by using GP. 
The item exposure parameters predicted from the formula were close to those observed from the Sympson and Hetter (1985) procedure and performed well in controlling item exposure rates. Similar results were observed for the Stocking and Lewis (1998) multinomial model for item selection and for the Sympson and Hetter procedure with content balancing. The proposed GP approach provides a knowledge-based solution for finding item exposure parameters.
Keywords: Algorithms; Artificial Intelligence; Aptitude Tests/statistics & numerical data; Diagnosis, Computer-Assisted/statistics & numerical data; Humans; Models, Statistical; Psychometrics/statistics & numerical data; Reproducibility of Results; Software
Authors: Chen, S.-Y.; Doong, S. H.
URL: http://mail.iacat.org/content/predicting-item-exposure-parameters-computerized-adaptive-testing

Rotating item banks versus restriction of maximum exposure rates in computerized adaptive testing. (2008). Vol. 11, pp. 618-625. ISSN 1138-7416.
Abstract: If examinees were to know, beforehand, part of the content of a computerized adaptive test, their estimated trait levels would then have a marked positive bias. One of the strategies to avoid this consists of dividing a large item bank into several sub-banks and rotating the sub-bank employed (Ariel, Veldkamp & van der Linden, 2004). This strategy permits substantial improvements in exposure control at little cost to measurement accuracy. However, we do not know whether this option provides better results than using the master bank with greater restriction of the maximum exposure rates (Sympson & Hetter, 1985).
In order to investigate this issue, we worked with several simulated banks of 2,100 items, comparing them, in terms of RMSE and overlap rate, with the same banks divided into two, three, and up to seven sub-banks. By means of extensive manipulation of the maximum exposure rate in each bank, we found that the option of rotating banks slightly outperformed the option of restricting the maximum exposure rate of the master bank by means of the Sympson-Hetter method.
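Several of the surrounding entries benchmark against the Sympson and Hetter (1985) exposure-control procedure. The following Python sketch is illustrative only, not code from any of the cited papers: each item must pass a probabilistic lottery with parameter k_i before it may be administered, and the k_i are tuned over repeated simulation cycles so that no item's administration rate stays far above a target r_max. The Rasch item bank, the multiplicative update rule, and the crude ability-update step are all simplifying assumptions.

```python
import math
import random

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def fisher_info(theta, b):
    """Item information under the Rasch model: p * (1 - p)."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def simulate_cycle(bank_b, k, thetas, test_len, rng):
    """Administer a CAT to each simulated examinee, applying the exposure lottery.
    Returns per-item administration counts."""
    n = len(bank_b)
    admin = [0] * n
    for theta in thetas:
        theta_hat, seen = 0.0, set()
        for _ in range(test_len):
            # candidate items, most informative first at the current ability estimate
            cands = sorted((i for i in range(n) if i not in seen),
                           key=lambda i: fisher_info(theta_hat, bank_b[i]),
                           reverse=True)
            if not cands:
                break
            chosen = None
            for i in cands:
                if rng.random() <= k[i]:   # Sympson-Hetter exposure lottery
                    chosen = i
                    break
                seen.add(i)                # failed the lottery: skipped for this examinee
            if chosen is None:
                chosen = cands[-1]         # pool exhausted by the lottery: administer anyway
            seen.add(chosen)
            admin[chosen] += 1
            # crude ability update from a simulated response (sketch only)
            correct = rng.random() < p_correct(theta, bank_b[chosen])
            theta_hat += 0.5 if correct else -0.5
    return admin

def sympson_hetter(bank_b, r_max=0.4, test_len=8, n_examinees=200, n_cycles=4, seed=7):
    """Iteratively tune exposure-control parameters k so that no item's
    administration rate stays far above r_max. Returns the final k values and
    the per-cycle exposure rates."""
    rng = random.Random(seed)
    k = [1.0] * len(bank_b)
    rates_history = []
    for _ in range(n_cycles):
        thetas = [rng.gauss(0.0, 1.0) for _ in range(n_examinees)]
        admin = simulate_cycle(bank_b, k, thetas, test_len, rng)
        rates = [a / n_examinees for a in admin]
        rates_history.append(rates)
        # multiplicative adjustment (a simplified variant of the original update)
        for i, r in enumerate(rates):
            if r > r_max:
                k[i] = max(0.05, k[i] * r_max / r)
            else:
                k[i] = min(1.0, k[i] * 1.1)   # slowly relax control on cool items
    return k, rates_history

if __name__ == "__main__":
    bank = [(-2.0 + 4.0 * i / 39) for i in range(40)]   # 40 items, b in [-2, 2]
    k, hist = sympson_hetter(bank)
    print("uncontrolled max exposure rate:", max(hist[0]))
    print("controlled max exposure rate:", max(hist[-1]))
```

In the published procedure the adjustment targets the conditional probability of administration given selection and iterates until exposure rates stabilize; the multiplicative update used here is a simplified stand-in for that fixed-point iteration.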
Keywords: Character; Databases; Software Design; Aptitude Tests/statistics & numerical data; Bias (Epidemiology); Computing Methodologies; Diagnosis, Computer-Assisted/statistics & numerical data; Educational Measurement/statistics & numerical data; Humans; Mathematical Computing; Psychometrics/statistics & numerical data
Authors: Barrada, J. R.; Olea, J.; Abad, F. J.
URL: http://mail.iacat.org/content/rotating-item-banks-versus-restriction-maximum-exposure-rates-computerized-adaptive-testing

Strategies for controlling item exposure in computerized adaptive testing with the partial credit model. (2008). Vol. 9, pp. 1-17. ISSN 1529-7713.
Abstract: Exposure control research with polytomous item pools has determined that randomization procedures can be very effective for controlling test security in computerized adaptive testing (CAT). The current study investigated the performance of four procedures for controlling item exposure in a CAT under the partial credit model. In addition to a no-exposure-control baseline condition, the Kingsbury-Zara, modified-within-.10-logits, Sympson-Hetter, and conditional Sympson-Hetter procedures were implemented to control exposure rates. The Kingsbury-Zara and the modified-within-.10-logits procedures were implemented with 3- and 6-item candidate conditions. The results show that the Kingsbury-Zara and modified-within-.10-logits procedures with 6 item candidates performed as well as the conditional Sympson-Hetter in terms of exposure rates, overlap rates, and pool utilization.
These two procedures are strongly recommended for use with partial credit CATs due to the simplicity and strength of their results.
Keywords: Algorithms; Computers; Educational Measurement/statistics & numerical data; Humans; Questionnaires/standards; United States
Authors: Davis, L. L.; Dodd, B. G.
URL: http://mail.iacat.org/content/strategies-controlling-item-exposure-computerized-adaptive-testing-partial-credit-model

Using computerized adaptive testing to reduce the burden of mental health assessment. (2008). Vol. 59 (Apr), pp. 361-8. ISSN 1075-2730.
Abstract: OBJECTIVE: This study investigated the combination of item response theory and computerized adaptive testing (CAT) for psychiatric measurement as a means of reducing the burden of research and clinical assessments. METHODS: Data were from 800 participants in outpatient treatment for a mood or anxiety disorder; they completed 616 items of the 626-item Mood and Anxiety Spectrum Scales (MASS) at two times. The first administration was used to design and evaluate a CAT version of the MASS by using post hoc simulation. The second confirmed the functioning of CAT in live testing. RESULTS: Tests of competing models based on item response theory supported the scale's bifactor structure, consisting of a primary dimension and four group factors (mood, panic-agoraphobia, obsessive-compulsive, and social phobia).
Both simulated and live CAT showed a 95% average reduction (585 items) in items administered (24 and 30 items, respectively) compared with administration of the full MASS. The correlation between scores on the full MASS and the CAT version was .93. For the mood disorder subscale, differences in scores between two groups of depressed patients (one with bipolar disorder and one without) on the full scale and on the CAT showed effect sizes of .63 (p<.003) and 1.19 (p<.001) standard deviation units, respectively, indicating better discriminant validity for CAT. CONCLUSIONS: Instead of using small fixed-length tests, clinicians can create item banks with a large item pool, and a small set of the items most relevant for a given individual can be administered with no loss of information, yielding a dramatic reduction in administration time and patient and clinician burden.
Keywords: Diagnosis, Computer-Assisted; Questionnaires; Adolescent; Adult; Aged; Agoraphobia/diagnosis; Anxiety Disorders/diagnosis; Bipolar Disorder/diagnosis; Female; Humans; Male; Mental Disorders/diagnosis; Middle Aged; Mood Disorders/diagnosis; Obsessive-Compulsive Disorder/diagnosis; Panic Disorder/diagnosis; Phobic Disorders/diagnosis; Reproducibility of Results; Time Factors
Authors: Gibbons, R. D.; Weiss, D. J.; Kupfer, D. J.; Frank, E.; Fagiolini, A.; Grochocinski, V. J.; Bhaumik, D. K.; Stover, A.; Bock, R. D.; Immekus, J. C.
URL: http://mail.iacat.org/content/using-computerized-adaptive-testing-reduce-burden-mental-health-assessment

Computerized adaptive personality testing: A review and illustration with the MMPI-2 Computerized Adaptive Version. (2007). Vol. 19 (Mar), pp. 14-24. ISSN 1040-3590.
Abstract: Computerized adaptive testing in personality assessment can improve efficiency by significantly reducing the number of items administered to answer an assessment question. Two approaches have been explored for adaptive testing in computerized personality assessment: item response theory and the countdown method. In this article, the authors review the literature on each and report the results of an investigation designed to explore the utility, in terms of item and time savings, and validity, in terms of correlations with external criterion measures, of an expanded countdown-method-based research version of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2), the MMPI-2 Computerized Adaptive Version (MMPI-2-CA). Participants were 433 undergraduate college students (170 men and 263 women). Results indicated considerable item savings and corresponding time savings for the adaptive testing modalities compared with a conventional computerized MMPI-2 administration. Furthermore, computerized adaptive administration yielded results comparable to computerized conventional administration of the MMPI-2 in terms of both test scores and their validity.
Future directions for computerized adaptive personality testing are discussed.
Keywords: Adolescent; Adult; Diagnosis, Computer-Assisted/statistics & numerical data; Female; Humans; Male; MMPI/statistics & numerical data; Personality Assessment/statistics & numerical data; Psychometrics/statistics & numerical data; Reference Values; Reproducibility of Results
Authors: Forbey, J. D.; Ben-Porath, Y. S.
URL: http://mail.iacat.org/content/computerized-adaptive-personality-testing-review-and-illustration-mmpi-2-computerized

Computerized adaptive testing for measuring development of young children. (2007). Vol. 26 (Jun 15), pp. 2629-38. ISSN 0277-6715.
Abstract: Developmental indicators that are used for routine measurement in The Netherlands are usually chosen to optimally identify delayed children. Measurements on the majority of children without problems are therefore quite imprecise. This study explores the use of computerized adaptive testing (CAT) to monitor the development of young children. CAT is expected to improve the measurement precision of the instrument. We conducted two simulation studies, one with real data and one with simulated data, to evaluate the usefulness of CAT.
It is shown that CAT selects developmental indicators that maximally match the individual child, so that all children can be measured to the same precision.
Keywords: Child Development; Models, Statistical; Child, Preschool; Diagnosis, Computer-Assisted/statistics & numerical data; Humans; Netherlands
Authors: Jacobusse, G.; Buuren, S.
URL: http://mail.iacat.org/content/computerized-adaptive-testing-measuring-development-young-children

Developing tailored instruments: item banking and computerized adaptive assessment. (2007). Vol. 16, pp. 95-108. ISSN 0962-9343.
Abstract: Item banks and computerized adaptive testing (CAT) have the potential to greatly improve the assessment of health outcomes. This review describes the unique features of item banks and CAT and discusses how to develop item banks. In CAT, a computer selects the items from an item bank that are most relevant for and informative about the particular respondent, thus optimizing test relevance and precision. Item response theory (IRT) provides the foundation for selecting the items that are most informative for the particular respondent and for scoring responses on a common metric. The development of an item bank is a multi-stage process that requires a clear definition of the construct to be measured, good items, a careful psychometric analysis of the items, and a clear specification of the final CAT.
The psychometric analysis needs to evaluate the assumptions of the IRT model, such as unidimensionality and local independence; whether the items function the same way in different subgroups of the population; and whether there is an adequate fit between the data and the chosen item response models. Also, interpretation guidelines need to be established to support the clinical application of the assessment. Although medical research can draw upon expertise from educational testing in the development of item banks and CAT, the medical field also encounters unique opportunities and challenges.
Keywords: Health Status; Health Status Indicators; Mental Health; Outcome Assessment (Health Care); Quality of Life; Questionnaires; Software; Algorithms; Factor Analysis, Statistical; Humans; Models, Statistical; Psychometrics
Authors: Bjorner, J. B.; Chang, C.-H.; Thissen, D.; Reeve, B. B.
URL: http://mail.iacat.org/content/developing-tailored-instruments-item-banking-and-computerized-adaptive-assessment

Improving patient reported outcomes using item response theory and computerized adaptive testing. (2007). Vol. 34 (Jun), pp. 1426-31. ISSN 0315-162X.
Abstract: OBJECTIVE: Patient reported outcomes (PRO) are considered central outcome measures for both clinical trials and observational studies in rheumatology. More sophisticated statistical models, including item response theory (IRT) and computerized adaptive testing (CAT), will enable critical evaluation and reconstruction of currently utilized PRO instruments to improve measurement precision while reducing item burden on the individual patient.
METHODS: We developed a domain hierarchy encompassing the latent trait of physical function/disability, from the more general to the most specific. Items collected from 165 English-language instruments were evaluated by a structured process including trained raters, modified Delphi expert consensus, and then patient evaluation. Each item in the refined data bank will undergo extensive analysis using IRT to evaluate response functions and measurement precision. CAT will allow for real-time questionnaires of potentially smaller numbers of questions tailored directly to each individual's level of physical function. RESULTS: The physical function/disability domain comprises four subdomains: upper extremity, trunk, lower extremity, and complex activities. Expert and patient review led to consensus favoring present-tense "capability" questions using a 4- or 5-item Likert response construct over past-tense "performance" items. Floor and ceiling effects, attribution of disability, and standardization of response categories were also addressed.
CONCLUSION: By applying statistical techniques of IRT through the use of CAT, existing PRO instruments may be improved to reduce questionnaire burden on individual patients while increasing measurement precision, which may ultimately lead to reduced sample size requirements for costly clinical trials.
Keywords: Rheumatic Diseases/physiopathology/psychology; Clinical Trials; Data Interpretation, Statistical; Disability Evaluation; Health Surveys; Humans; International Cooperation; Outcome Assessment (Health Care)/methods; Patient Participation/methods; Research Design/trends; Software
Authors: Chakravarty, E. F.; Bjorner, J. B.; Fries, J. F.
URL: http://mail.iacat.org/content/improving-patient-reported-outcomes-using-item-response-theory-and-computerized-adaptive

IRT health outcomes data analysis project: an overview and summary. (2007). Vol. 16, pp. 121-132. ISSN 0962-9343.
Abstract: BACKGROUND: In June 2004, the National Cancer Institute and the Drug Information Association co-sponsored the conference "Improving the Measurement of Health Outcomes through the Applications of Item Response Theory (IRT) Modeling: Exploration of Item Banks and Computer-Adaptive Assessment." A component of the conference was the presentation of a psychometric and content analysis of a secondary dataset. OBJECTIVES: A thorough psychometric and content analysis was conducted of two primary domains within a cancer health-related quality of life (HRQOL) dataset.
RESEARCH DESIGN: HRQOL scales were evaluated using factor analysis for categorical data, IRT modeling, and differential item functioning analyses. In addition, computerized adaptive administration of HRQOL item banks was simulated, and various IRT models were applied and compared. SUBJECTS: The original data were collected as part of the NCI-funded Quality of Life Evaluation in Oncology (Q-Score) Project. A total of 1,714 patients with cancer or HIV/AIDS were recruited from 5 clinical sites. MEASURES: Items from 4 HRQOL instruments were evaluated: the Cancer Rehabilitation Evaluation System-Short Form, the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire, the Functional Assessment of Cancer Therapy, and the Medical Outcomes Study Short-Form Health Survey. RESULTS AND CONCLUSIONS: Four lessons learned from the project are discussed: the importance of good developmental item banks, the ambiguity of model fit results, the limits of our knowledge regarding the practical implications of model misfit, and the importance of construct definition in the measurement of HRQOL. With respect to these lessons, areas for future research are suggested.
The feasibility of developing item banks for broad definitions of health is discussed.
Keywords: Data Interpretation, Statistical; Health Status; Quality of Life; Questionnaires; Software; Female; HIV Infections/psychology; Humans; Male; Neoplasms/psychology; Outcome Assessment (Health Care)/methods; Psychometrics; Stress, Psychological
Authors: Cook, K. F.; Teal, C. R.; Bjorner, J. B.; Cella, D.; Chang, C.-H.; Crane, P. K.; Gibbons, L. E.; Hays, R. D.; McHorney, C. A.; Ocepek-Welikson, K.; Raczek, A. E.; Teresi, J. A.; Reeve, B. B.
URL: http://mail.iacat.org/content/irt-health-outcomes-data-analysis-project-overview-and-summary

Patient-reported outcomes measurement and management with innovative methodologies and technologies. (2007). Vol. 16 Suppl 1, pp. 157-66. ISSN 0962-9343.
Abstract: Successful integration of modern psychometrics and advanced informatics in patient-reported outcomes (PRO) measurement and management can potentially maximize the value of health outcomes research and optimize the delivery of quality patient care. Unlike the traditional labor-intensive paper-and-pencil data collection method, item response theory-based computerized adaptive testing methodologies coupled with novel technologies provide an integrated environment to collect, analyze, and present ready-to-use PRO data for informed and shared decision-making.
This article describes the needs, challenges, and solutions for accurate, efficient, and cost-effective PRO data acquisition and dissemination, in order to provide the critical and timely PRO information necessary to actively support and enhance routine patient care in busy clinical settings.
Keywords: Health Status; Outcome Assessment (Health Care); Quality of Life; Software; Computer Systems/trends; Health Insurance Portability and Accountability Act; Humans; Patient Satisfaction; Questionnaires; United States
Authors: Chang, C.-H.
URL: http://mail.iacat.org/content/patient-reported-outcomes-measurement-and-management-innovative-methodologies-and

Psychometric evaluation and calibration of health-related quality of life item banks: plans for the Patient-Reported Outcomes Measurement Information System (PROMIS). (2007). Vol. 45 (May), pp. S22-31. ISSN 0025-7079.
Abstract: BACKGROUND: The construction and evaluation of item banks to measure unidimensional constructs of health-related quality of life (HRQOL) is a fundamental objective of the Patient-Reported Outcomes Measurement Information System (PROMIS) project. OBJECTIVES: Item banks will be used as the foundation for developing short-form instruments and enabling computerized adaptive testing.
The PROMIS Steering Committee selected 5 HRQOL domains for initial focus: physical functioning, fatigue, pain, emotional distress, and social role participation. This report provides an overview of the methods used in the PROMIS item analyses and the proposed calibration of item banks. ANALYSES: Analyses include evaluation of data quality (e.g., logic and range checking, spread of the response distribution within an item), descriptive statistics (e.g., frequencies, means), item response theory model assumptions (unidimensionality, local independence, monotonicity), model fit, differential item functioning, and item calibration for banking. RECOMMENDATIONS: Key analytic issues are summarized, and recommendations are provided for future evaluations of item banks in HRQOL assessment.
Keywords: Health Status; Information Systems; Quality of Life; Self Disclosure; Adolescent; Adult; Aged; Calibration; Databases as Topic; Evaluation Studies as Topic; Female; Humans; Male; Middle Aged; Outcome Assessment (Health Care)/methods; Psychometrics; Questionnaires/standards; United States
Authors: Reeve, B. B.; Hays, R. D.; Bjorner, J. B.; Cook, K. F.; Crane, P. K.; Teresi, J. A.; Thissen, D.; Revicki, D. A.; Weiss, D. J.; Hambleton, R. K.; Liu, H.; Gershon, R. C.; Reise, S. P.; Lai, J. S.; Cella, D.
URL: http://mail.iacat.org/content/psychometric-evaluation-and-calibration-health-related-quality-life-item-banks-plans-patient

A system for interactive assessment and management in palliative care. (2007). Vol. 33, pp. 745-55. ISSN 0885-3924.
Abstract: The availability of psychometrically sound and clinically relevant screening, diagnosis, and outcome evaluation tools is essential to high-quality palliative care assessment and management. Such data will enable us to improve patient evaluations, prognoses, and treatment selections, and to increase patient satisfaction and quality of life. To accomplish these goals, medical care needs more precise, efficient, and comprehensive tools for data acquisition, analysis, interpretation, and management. We describe a system for interactive assessment and management in palliative care (SIAM-PC), which is patient centered, model driven, database derived, evidence based, and technology assisted. The SIAM-PC is designed to reliably measure the multiple dimensions of patients' needs for palliative care, and then to provide information to clinicians, patients, and the patients' families to achieve optimal patient care, while improving our capacity for doing palliative care research. This system is innovative in its application of state-of-the-science approaches, such as item response theory and computerized adaptive testing, to many of the significant clinical problems related to palliative care.
Keywords: Needs Assessment; Humans; Medical Informatics/organization & administration; Palliative Care/organization & administration
Authors: Chang, C.-H.; Boni-Saenz, A. A.; Durazo-Arvizu, R. A.; DesHarnais, S.; Lau, D. T.; Emanuel, L. L.
URL: http://mail.iacat.org/content/system-interactive-assessment-and-management-palliative-care

Computer adaptive testing improved accuracy and precision of scores over random item selection in a physical functioning item bank. (2006). ISSN 0895-4356.
Vol. 59 (Nov), pp. 1174-82.
Abstract: BACKGROUND AND OBJECTIVE: Measuring physical functioning (PF) within and across postacute settings is critical for monitoring outcomes of rehabilitation; however, most current instruments lack sufficient breadth and feasibility for widespread use. Computer adaptive testing (CAT), in which item selection is tailored to the individual patient, holds promise for reducing response burden, yet maintaining measurement precision. We calibrated a PF item bank via item response theory (IRT), administered items with a post hoc CAT design, and determined whether CAT would improve accuracy and precision of score estimates over random item selection. METHODS: 1,041 adults were interviewed during postacute care rehabilitation episodes in either hospital or community settings. Responses for 124 PF items were calibrated using IRT methods to create a PF item bank. We examined the accuracy and precision of CAT-based scores compared to a random selection of items. RESULTS: CAT-based scores had higher correlations with the IRT-criterion scores, especially with short tests, and resulted in narrower confidence intervals than scores based on a random selection of items; gains, as expected, were especially large for low- and high-performing adults.
CONCLUSION: The CAT design may have important precision and efficiency advantages for point-of-care functional assessment in rehabilitation practice settings.
Keywords: Recovery of Function; Activities of Daily Living; Adolescent; Adult; Aged; Aged, 80 and over; Confidence Intervals; Factor Analysis, Statistical; Female; Humans; Male; Middle Aged; Outcome Assessment (Health Care)/methods; Rehabilitation/standards; Reproducibility of Results; Software
Authors: Haley, S. M.; Ni, P.; Hambleton, R. K.; Slavin, M. D.; Jette, A. M.
URL: http://mail.iacat.org/content/computer-adaptive-testing-improved-accuracy-and-precision-scores-over-random-item-selectio-0

Computerized adaptive testing for follow-up after discharge from inpatient rehabilitation: I. Activity outcomes. (2006). Vol. 87 (Aug), pp. 1033-42. ISSN 0003-9993.
Abstract: OBJECTIVE: To examine score agreement, precision, validity, efficiency, and responsiveness of a computerized adaptive testing (CAT) version of the Activity Measure for Post-Acute Care (AM-PAC-CAT) in a prospective, 3-month follow-up sample of inpatient rehabilitation patients recently discharged home. DESIGN: Longitudinal, prospective one-group cohort study of patients followed approximately 2 weeks after hospital discharge and then 3 months after the initial home visit. SETTING: Follow-up visits conducted in patients' home settings.
PARTICIPANTS: Ninety-four adults recently discharged from inpatient rehabilitation, with diagnoses of neurologic, orthopedic, and medically complex conditions. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Summary scores from the AM-PAC-CAT, including 3 activity domains (movement and physical, personal care and instrumental, and applied cognition), were compared with scores from a traditional fixed-length version of the AM-PAC with 66 items (AM-PAC-66). RESULTS: AM-PAC-CAT scores were in good agreement (intraclass correlation coefficient model 3,1 range, .77-.86) with scores from the AM-PAC-66. On average, the CAT programs required 43% of the time and 33% of the items compared with the AM-PAC-66. Both formats discriminated across functional severity groups. The standardized response mean (SRM) was greater for the movement and physical fixed form than for the CAT; the effect size and SRM of the 2 other AM-PAC domains showed similar sensitivity between CAT and fixed formats. Using patients' own report as an anchor-based measure of change, the CAT and fixed-length formats were comparable in responsiveness to patient-reported change over a 3-month interval.
CONCLUSIONS: Accurate estimates of functional activity group-level changes can be obtained from CAT administrations, with a considerable reduction in administration time.
Keywords: Activities of Daily Living; Adaptation, Physiological; Computer Systems; Questionnaires; Adult; Aged; Aged, 80 and over; Chi-Square Distribution; Factor Analysis, Statistical; Female; Humans; Longitudinal Studies; Male; Middle Aged; Outcome Assessment (Health Care)/methods; Patient Discharge; Prospective Studies; Rehabilitation/standards; Subacute Care/standards
Authors: Haley, S. M.; Siebens, H.; Coster, W. J.; Tao, W.; Black-Schaffer, R. M.; Gandek, B.; Sinclair, S. J.; Ni, P.
URL: http://mail.iacat.org/content/computerized-adaptive-testing-follow-after-discharge-inpatient-rehabilitation-i-activity

Computerized adaptive testing of diabetes impact: a feasibility study of Hispanics and non-Hispanics in an active clinic population. (2006). Vol. 15 (Nov), pp. 1503-18. ISSN 0962-9343.
Abstract: BACKGROUND: Diabetes is a leading cause of death and disability in the US and is twice as common among Hispanic Americans as non-Hispanics. The societal costs of diabetes provide an impetus for developing tools that can improve patient care and delay or prevent diabetes complications.
METHODS: We implemented a feasibility study of a Computerized Adaptive Test (CAT) to measure diabetes impact using a sample of 103 English- and 97 Spanish-speaking patients (mean age = 56.5, 66.5% female) in a community medical center with a high proportion of minority patients (28% African-American). The 37 items of the Diabetes Impact Survey were translated using forward-backward translation and cognitive debriefing. Participants were randomized to receive either the full-length tool or the Diabetes-CAT first, in the patient's native language. RESULTS: The number of items and the amount of time to complete the survey for the CAT was reduced to one-sixth the amount for the full-length tool in both languages, across disease severity. Confirmatory Factor Analysis confirmed that the Diabetes Impact Survey is unidimensional. The Diabetes-CAT demonstrated acceptable internal consistency reliability, construct validity, and discriminant validity in the overall sample, although subgroup analyses suggested that the English sample data evidenced higher levels of reliability and validity than the Spanish sample and issues with discriminant validity in the Spanish sample. Differential Item Function analysis revealed differences in responses tendencies by language group in 3 of the 37 items. Participant interviews suggested that the Spanish-speaking patients generally preferred the paper survey to the computer-assisted tool, and were twice as likely to experience difficulties understanding the items. 
CONCLUSIONS: While the Diabetes-CAT demonstrated clear advantages in reducing respondent burden as compared to the full-length tool, simplifying the item bank will be necessary for enhancing the feasibility of the Diabetes-CAT for use with low literacy patients.10a*Computers10a*Hispanic Americans10a*Quality of Life10aAdult10aAged10aData Collection/*methods10aDiabetes Mellitus/*psychology10aFeasibility Studies10aFemale10aHumans10aLanguage10aMale10aMiddle Aged1 aSchwartz, C1 aWelch, G1 aSantiago-Kelley, P1 aBode, R1 aSun, X uhttp://mail.iacat.org/content/computerized-adaptive-testing-diabetes-impact-feasibility-study-hispanics-and-non-hispanics02161nas a2200289 4500008004100000245010000041210006900141260000800210300001200218490000700230520127700237653003401514653002101548653000901569653001201578653002201590653001101612653001101623653000901634653001601643653002901659653001901688100001301707700001501720700001301735856012301748 2006 eng d00aFactor analysis techniques for assessing sufficient unidimensionality of cancer related fatigue0 aFactor analysis techniques for assessing sufficient unidimension cSep a1179-900 v153 aBACKGROUND: Fatigue is the most common unrelieved symptom experienced by people with cancer. The purpose of this study was to examine whether cancer-related fatigue (CRF) can be summarized using a single score, that is, whether CRF is sufficiently unidimensional for measurement approaches that require or assume unidimensionality. We evaluated this question using factor analysis techniques including the theory-driven bi-factor model. METHODS: Five hundred and fifty five cancer patients from the Chicago metropolitan area completed a 72-item fatigue item bank, covering a range of fatigue-related concerns including intensity, frequency and interference with physical, mental, and social activities. Dimensionality was assessed using exploratory and confirmatory factor analysis (CFA) techniques. 
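As a rough counterpart to the dimensionality checks described above, a common first-pass screen for "sufficient unidimensionality" compares the leading eigenvalues of the inter-item correlation matrix. A minimal sketch; the first-to-second eigenvalue ratio (often read against a rule of thumb of about 3) is an informal heuristic assumed here, not the bi-factor method the study actually used:

```python
import numpy as np

def eigenvalue_ratio(responses: np.ndarray) -> float:
    """Ratio of first to second eigenvalue of the item correlation matrix.

    `responses` is (n_respondents, n_items). A large ratio is taken as
    informal evidence that one dominant factor underlies the items.
    """
    corr = np.corrcoef(responses, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending
    return eigvals[0] / eigvals[1]
```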
RESULTS: Exploratory factor analysis (EFA) techniques identified from 1 to 17 factors. The bi-factor model suggested that CRF was sufficiently unidimensional. CONCLUSIONS: CRF can be considered sufficiently unidimensional for applications that require unidimensionality. One such application, item response theory (IRT), will facilitate the development of short-form and computer-adaptive testing. This may further enable practical and accurate clinical assessment of CRF.10a*Factor Analysis, Statistical10a*Quality of Life10aAged10aChicago10aFatigue/*etiology10aFemale10aHumans10aMale10aMiddle Aged10aNeoplasms/*complications10aQuestionnaires1 aLai, J-S1 aCrane, P K1 aCella, D uhttp://mail.iacat.org/content/factor-analysis-techniques-assessing-sufficient-unidimensionality-cancer-related-fatigue03120nas a2200277 4500008004100000020002200041245010900063210006900172250001500241260000800256300001200264490000700276520221700283653002902500653002002529653002502549653002102574653001502595653002802610653001102638653002502649100001702674700001502691700001202706856012402718 2006 eng d a0214-9915 (Print)00aMaximum information stratification method for controlling item exposure in computerized adaptive testing0 aMaximum information stratification method for controlling item e a2007/02/14 cFeb a156-1590 v183 aThe proposal for increasing the security in Computerized Adaptive Tests that has received most attention in recent years is the a-stratified method (AS - Chang and Ying, 1999): at the beginning of the test only items with low discrimination parameters (a) can be administered, with the values of the a parameters increasing as the test goes on. With this method, distribution of the exposure rates of the items is less skewed, while efficiency is maintained in trait-level estimation. The pseudo-guessing parameter (c), present in the three-parameter logistic model, is considered irrelevant, and is not used in the AS method. 
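The a-stratified (AS) selection scheme described above can be sketched compactly: sort the bank by discrimination, partition it into strata, and restrict early stages to low-a strata. The item fields, equal-size stratification, and within-stratum b-matching rule below are illustrative assumptions, not the authors' implementation:

```python
def a_stratified_pick(bank, used, theta, stage, n_strata=4):
    """Pick the next item under the a-stratified (AS) design.

    `bank` is a list of (a, b) tuples. Items are sorted by discrimination
    a and split into `n_strata` equal strata; early stages draw from
    low-a strata, later stages from high-a strata. Within the active
    stratum, the unused item whose difficulty b is closest to the current
    theta estimate is selected.
    """
    order = sorted(range(len(bank)), key=lambda i: bank[i][0])
    size = len(bank) // n_strata
    stratum = order[stage * size:(stage + 1) * size]
    candidates = [i for i in stratum if i not in used]
    return min(candidates, key=lambda i: abs(bank[i][1] - theta))
```

Deferring high-a items this way evens out exposure rates, because the most discriminating items are no longer consumed at the start of every examinee's test.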
The Maximum Information Stratified (MIS) model incorporates the c parameter in the stratification of the bank and in the item-selection rule, improving accuracy by comparison with the AS, for item banks with a and b parameters correlated and uncorrelated. For both kinds of banks, the blocking b methods (Chang, Qian and Ying, 2001) improve the security of the item bank.
10a*Artificial Intelligence10a*Microcomputers10a*Psychological Tests10a*Software Design10aAlgorithms10aChi-Square Distribution10aHumans10aLikelihood Functions1 aBarrada, J R1 aMazuela, P1 aOlea, J uhttp://mail.iacat.org/content/maximum-information-stratification-method-controlling-item-exposure-computerized-adaptive02568nas a2200349 4500008004100000020002200041245016600063210006900229250001500298260000800313300001100321490000700332520142900339653002701768653001601795653001501811653001001826653002101836653001401857653005201871653001501923653001101938653001101949653003701960653001801997653001402015100001502029700001002044700001602054700002502070856012302095 2006 eng d a0003-9993 (Print)00aMeasurement precision and efficiency of multidimensional computer adaptive testing of physical functioning using the pediatric evaluation of disability inventory0 aMeasurement precision and efficiency of multidimensional compute a2006/08/29 cSep a1223-90 v873 aOBJECTIVE: To compare the measurement efficiency and precision of a multidimensional computer adaptive testing (M-CAT) application to a unidimensional CAT (U-CAT) comparison using item bank data from 2 of the functional skills scales of the Pediatric Evaluation of Disability Inventory (PEDI). DESIGN: Using existing PEDI mobility and self-care item banks, we compared the stability of item calibrations and model fit between unidimensional and multidimensional Rasch models and compared the efficiency and precision of the U-CAT- and M-CAT-simulated assessments to a random draw of items. SETTING: Pediatric rehabilitation hospital and clinics. PARTICIPANTS: Clinical and normative samples. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Not applicable. 
RESULTS: The M-CAT had greater levels of precision and efficiency than the separate mobility and self-care U-CAT versions when using a similar number of items for each PEDI subdomain. Equivalent estimation of mobility and self-care scores can be achieved with a 25% to 40% item reduction with the M-CAT compared with the U-CAT. CONCLUSIONS: M-CAT applications appear to have both precision and efficiency advantages compared with separate U-CAT assessments when content subdomains have a high correlation. Practitioners may also realize interpretive advantages of reporting test score information for each subdomain when separate clinical inferences are desired.10a*Disability Evaluation10a*Pediatrics10aAdolescent10aChild10aChild, Preschool10aComputers10aDisabled Persons/*classification/rehabilitation10aEfficiency10aHumans10aInfant10aOutcome Assessment (Health Care)10aPsychometrics10aSelf Care1 aHaley, S M1 aNi, P1 aLudlow, L H1 aFragala-Pinkham, M A uhttp://mail.iacat.org/content/measurement-precision-and-efficiency-multidimensional-computer-adaptive-testing-physical02406nas a2200337 4500008004100000020002200041245010800063210006900171250001500240260000800255300001100263490000700274520139300281653002101674653002101695653001001716653001101726653001801737653001101755653000901766653001601775653003001791653002801821100001801849700001701867700001801884700001401902700001701916700001701933856011801950 2006 eng d a0962-9343 (Print)00aMultidimensional computerized adaptive testing of the EORTC QLQ-C30: basic developments and evaluations0 aMultidimensional computerized adaptive testing of the EORTC QLQC a2006/03/21 cApr a315-290 v153 aOBJECTIVE: Self-report questionnaires are widely used to measure health-related quality of life (HRQOL). Ideally, such questionnaires should be adapted to the individual patient and at the same time scores should be directly comparable across patients. This may be achieved using computerized adaptive testing (CAT). 
Usually, CAT is carried out for a single domain at a time. However, many HRQOL domains are highly correlated. Multidimensional CAT may utilize these correlations to improve measurement efficiency. We investigated the possible advantages and difficulties of multidimensional CAT. STUDY DESIGN AND SETTING: We evaluated multidimensional CAT of three scales from the EORTC QLQ-C30: the physical functioning, emotional functioning, and fatigue scales. Analyses utilised a database with 2958 European cancer patients. RESULTS: It was possible to obtain scores for the three domains with five to seven items administered using multidimensional CAT that were very close to the scores obtained using all 12 items and with no or little loss of measurement precision. CONCLUSION: The findings suggest that multidimensional CAT may significantly improve measurement precision and efficiency and encourage further research into multidimensional CAT. Particularly, the estimation of the model underlying the multidimensional CAT and the conceptual aspects need further investigations.10a*Quality of Life10a*Self Disclosure10aAdult10aFemale10aHealth Status10aHumans10aMale10aMiddle Aged10aQuestionnaires/*standards10aUser-Computer Interface1 aPetersen, M A1 aGroenvold, M1 aAaronson, N K1 aFayers, P1 aSprangers, M1 aBjorner, J B uhttp://mail.iacat.org/content/multidimensional-computerized-adaptive-testing-eortc-qlq-c30-basic-developments-and02517nas a2200265 4500008004100000020004100041245013200082210006900214250001500283260000800298300001100306490000700317520154000324653003101864653003701895653003301932653002401965653001101989653002402000653002702024653003302051653003002084100001602114856012102130 2006 eng d a0025-7079 (Print)0025-7079 (Linking)00aOverview of quantitative measurement methods. 
Equivalence, invariance, and differential item functioning in health applications0 aOverview of quantitative measurement methods Equivalence invaria a2006/10/25 cNov aS39-490 v443 aBACKGROUND: Reviewed in this article are issues relating to the study of invariance and differential item functioning (DIF). The aim of factor analyses and DIF, in the context of invariance testing, is the examination of group differences in item response conditional on an estimate of disability. Discussed are parameters and statistics that are not invariant and cannot be compared validly in crosscultural studies with varying distributions of disability in contrast to those that can be compared (if the model assumptions are met) because they are produced by models such as linear and nonlinear regression. OBJECTIVES: The purpose of this overview is to provide an integrated approach to the quantitative methods used in this special issue to examine measurement equivalence. The methods include classical test theory (CTT), factor analytic, and parametric and nonparametric approaches to DIF detection. Also included in the quantitative section is a discussion of item banking and computerized adaptive testing (CAT). METHODS: Factorial invariance and the articles discussing this topic are introduced. A brief overview of the DIF methods presented in the quantitative section of the special issue is provided together with a discussion of ways in which DIF analyses and examination of invariance using factor models may be complementary. 
CONCLUSIONS: Although factor analytic and DIF detection methods share features, they provide unique information and can be viewed as complementary in informing about measurement equivalence.10a*Cross-Cultural Comparison10aData Interpretation, Statistical10aFactor Analysis, Statistical10aGuidelines as Topic10aHumans10aModels, Statistical10aPsychometrics/*methods10aStatistics as Topic/*methods10aStatistics, Nonparametric1 aTeresi, J A uhttp://mail.iacat.org/content/overview-quantitative-measurement-methods-equivalence-invariance-and-differential-item02654nas a2200409 4500008004100000245013400041210006900175300001000244490000700254520123100261653002501492653003201517653003101549653001001580653000901590653002201599653003301621653001101654653001101665653000901676653001601685653002401701653003101725653004101756653004501797653006801842653006101910653003001971653002802001653002202029100001402051700001302065700001802078700001402096700001502110856011902125 2006 eng d00aSimulated computerized adaptive test for patients with shoulder impairments was efficient and produced valid measures of function0 aSimulated computerized adaptive test for patients with shoulder a290-80 v593 aBACKGROUND AND OBJECTIVE: To test unidimensionality and local independence of a set of shoulder functional status (SFS) items, develop a computerized adaptive test (CAT) of the items using a rating scale item response theory model (RSM), and compare discriminant validity of measures generated using all items (theta(IRT)) and measures generated using the simulated CAT (theta(CAT)). STUDY DESIGN AND SETTING: We performed a secondary analysis of data collected prospectively during rehabilitation of 400 patients with shoulder impairments who completed 60 SFS items. RESULTS: Factor analytic techniques supported that the 42 SFS items formed a unidimensional scale and were locally independent. Except for five items, which were deleted, the RSM fit the data well. 
The remaining 37 SFS items were used to generate the CAT. On average, 6 items were needed to estimate precise measures of function using the SFS CAT, compared with all 37 SFS items. The theta(IRT) and theta(CAT) measures were highly correlated (r = .96) and resulted in similar classifications of patients. CONCLUSION: The simulated SFS CAT was efficient and produced precise, clinically relevant measures of functional status with good discriminating ability.10a*Computer Simulation10a*Range of Motion, Articular10aActivities of Daily Living10aAdult10aAged10aAged, 80 and over10aFactor Analysis, Statistical10aFemale10aHumans10aMale10aMiddle Aged10aProspective Studies10aReproducibility of Results10aResearch Support, N.I.H., Extramural10aResearch Support, U.S. Gov't, Non-P.H.S.10aShoulder Dislocation/*physiopathology/psychology/rehabilitation10aShoulder Pain/*physiopathology/psychology/rehabilitation10aShoulder/*physiopathology10aSickness Impact Profile10aTreatment Outcome1 aHart, D L1 aCook, KF1 aMioduski, J E1 aTeal, C R1 aCrane, P K uhttp://mail.iacat.org/content/simulated-computerized-adaptive-test-patients-shoulder-impairments-was-efficient-and01470nas a2200217 4500008004100000245015400041210006900195260004600264300001200310520063100322653003000953653001100983653002500994653001601019653002201035653002301057100001801080700001501098700001401113856012501127 2005 eng d00aApplications of item response theory to improve health outcomes assessment: Developing item banks, linking instruments, and computer-adaptive testing0 aApplications of item response theory to improve health outcomes aCambridge, UKbCambridge University Press a445-4643 a(From the chapter) The current chapter builds on Reise's introduction to the basic concepts, assumptions, popular models, and important features of IRT and discusses the applications of item response theory (IRT) modeling to health outcomes assessment. 
In particular, we highlight the critical role of IRT modeling in: developing an instrument to match a study's population; linking two or more instruments measuring similar constructs on a common metric; and creating item banks that provide the foundation for tailored short-form instruments or for computerized adaptive assessments. (PsycINFO Database Record (c) 2005 APA )10aComputer Assisted Testing10aHealth10aItem Response Theory10aMeasurement10aTest Construction10aTreatment Outcomes1 aHambleton, RK1 aGotay, C C1 aSnyder, C uhttp://mail.iacat.org/content/applications-item-response-theory-improve-health-outcomes-assessment-developing-item-banks03124nas a2200385 4500008004100000020002200041245012900063210006900192250001500261260000800276300001000284490000700294520188400301653002502185653002702210653001502237653001002252653002102262653002802283653003802311653001102349653001102360653001102371653000902382653004602391653002702437653003002464653003202494100001502526700001602541700001602557700001502573700002502588856012502613 2005 eng d a0003-9993 (Print)00aAssessing mobility in children using a computer adaptive testing version of the pediatric evaluation of disability inventory0 aAssessing mobility in children using a computer adaptive testing a2005/05/17 cMay a932-90 v863 aOBJECTIVE: To assess score agreement, validity, precision, and response burden of a prototype computerized adaptive testing (CAT) version of the Mobility Functional Skills Scale (Mob-CAT) of the Pediatric Evaluation of Disability Inventory (PEDI) as compared with the full 59-item version (Mob-59). DESIGN: Computer simulation analysis of cross-sectional and longitudinal retrospective data; and cross-sectional prospective study. SETTING: Pediatric rehabilitation hospital, including inpatient acute rehabilitation, day school program, outpatient clinics, community-based day care, preschool, and children's homes. 
PARTICIPANTS: Four hundred sixty-nine children with disabilities and 412 children with no disabilities (analytic sample); 41 children without disabilities and 39 with disabilities (cross-validation sample). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Summary scores from a prototype Mob-CAT application and versions using 15-, 10-, and 5-item stopping rules; scores from the Mob-59; and number of items and time (in seconds) to administer assessments. RESULTS: Mob-CAT scores from both computer simulations (intraclass correlation coefficient [ICC] range, .94-.99) and field administrations (ICC=.98) were in high agreement with scores from the Mob-59. Using computer simulations of retrospective data, discriminant validity, and sensitivity to change of the Mob-CAT closely approximated that of the Mob-59, especially when using the 15- and 10-item stopping rule versions of the Mob-CAT. The Mob-CAT used no more than 15% of the items for any single administration, and required 20% of the time needed to administer the Mob-59. 
CONCLUSIONS: Comparable score estimates for the PEDI mobility scale can be obtained from CAT administrations, with losses in validity and precision for shorter forms, but with a considerable reduction in administration time.10a*Computer Simulation10a*Disability Evaluation10aAdolescent10aChild10aChild, Preschool10aCross-Sectional Studies10aDisabled Children/*rehabilitation10aFemale10aHumans10aInfant10aMale10aOutcome Assessment (Health Care)/*methods10aRehabilitation Centers10aRehabilitation/*standards10aSensitivity and Specificity1 aHaley, S M1 aRaczek, A E1 aCoster, W J1 aDumas, H M1 aFragala-Pinkham, M A uhttp://mail.iacat.org/content/assessing-mobility-children-using-computer-adaptive-testing-version-pediatric-evaluation-002243nas a2200205 4500008004100000020004600041245009500087210006900182260002700251300001200278490000700290520149400297653002701791653003001818653001701848653002501865100001901890700001001909856011801919 2005 eng d a1560-4292 (Print); 1560-4306 (Electronic)00aA Bayesian student model without hidden nodes and its comparison with item response theory0 aBayesian student model without hidden nodes and its comparison w bIOS Press: Netherlands a291-3230 v153 aThe Bayesian framework offers a number of techniques for inferring an individual's knowledge state from evidence of mastery of concepts or skills. A typical application where such a technique can be useful is Computer Adaptive Testing (CAT). A Bayesian modeling scheme, POKS, is proposed and compared to the traditional Item Response Theory (IRT), which has been the prevalent CAT approach for the last three decades. POKS is based on the theory of knowledge spaces and constructs item-to-item graph structures without hidden nodes. It aims to offer an effective knowledge assessment method with an efficient algorithm for learning the graph structure from data. We review the different Bayesian approaches to modeling student ability assessment and discuss how POKS relates to them. 
The performance of POKS is compared to the IRT two parameter logistic model. Experimental results over a 34 item Unix test and a 160 item French language test show that both approaches can classify examinees as master or non-master effectively and efficiently, with relatively comparable performance. However, more significant differences are found in favor of POKS for a second task that consists in predicting individual question item outcome. Implications of these results for adaptive testing and student modeling are discussed, as well as the limitations and advantages of POKS, namely the issue of integrating concepts into its structure. (PsycINFO Database Record (c) 2007 APA, all rights reserved)10aBayesian Student Model10acomputer adaptive testing10ahidden nodes10aItem Response Theory1 aDesmarais, M C1 aPu, X uhttp://mail.iacat.org/content/bayesian-student-model-without-hidden-nodes-and-its-comparison-item-response-theory01606nas a2200253 4500008004100000020002200041245003000063210003000093250001500123300001100138490000600149520095200155653001401107653002501121653002901146653001801175653001901193653001101212653001401223653001901237653002001256100001601276856006001292 2005 eng d a1529-7713 (Print)00aComputer adaptive testing0 aComputer adaptive testing a2005/02/11 a109-270 v63 aThe creation of item response theory (IRT) and Rasch models, inexpensive accessibility to high speed desktop computers, and the growth of the Internet, has led to the creation and growth of computerized adaptive testing or CAT. This form of assessment is applicable for both high stakes tests such as certification or licensure exams, as well as health related quality of life surveys. This article discusses the historical background of CAT including its many advantages over conventional (typically paper and pencil) alternatives. The process of CAT is then described including descriptions of the specific differences of using CAT based upon 1-, 2- and 3-parameter IRT and various Rasch models. 
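The 1-, 2- and 3-parameter logistic models mentioned above differ only in which item parameters are freed. A minimal sketch of the 3PL item response function, where fixing c=0 recovers the 2PL and additionally fixing a=1 recovers the 1PL/Rasch case (the function name is illustrative):

```python
import math

def p_correct(theta, b, a=1.0, c=0.0):
    """3PL probability of a correct response.

    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    b is item difficulty, a discrimination, c the pseudo-guessing
    lower asymptote. c=0 gives the 2PL; a=1 and c=0 give the Rasch/1PL.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

Under the Rasch case, an examinee whose ability equals the item difficulty answers correctly with probability 0.5; the c parameter raises that floor for items that can be guessed.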
Numerous specific topics on CAT in practice are then described, including: initial item selection, content balancing, test difficulty, test length and stopping rules. The article concludes with the author's reflections regarding the future of CAT.10a*Internet10a*Models, Statistical10a*User-Computer Interface10aCertification10aHealth Surveys10aHumans10aLicensure10aMicrocomputers10aQuality of Life1 aGershon, RC uhttp://mail.iacat.org/content/computer-adaptive-testing02792nas a2200469 4500008004100000020002200041245010400063210006900167250001500236260000800251300001200259490000700271520132800278653002201606653003101628653001501659653001601674653001001690653003401700653002101734653002401755653002501779653001501804653001101819653005301830653002901883653001101912653001101923653002001934653000901954653003101963653004601994653003102040653001402071653003202085100001502117700001002132700002502142700001702167700001302184856012502197 2005 eng d a0012-1622 (Print)00aA computer adaptive testing approach for assessing physical functioning in children and adolescents0 acomputer adaptive testing approach for assessing physical functi a2005/02/15 cFeb a113-1200 v473 aThe purpose of this article is to demonstrate: (1) the accuracy and (2) the reduction in amount of time and effort in assessing physical functioning (self-care and mobility domains) of children and adolescents using computer-adaptive testing (CAT). A CAT algorithm selects questions directly tailored to the child's ability level, based on previous responses. Using a CAT algorithm, a simulation study determined the number of items necessary to approximate the score of a full-length assessment. 
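The practical elements listed above (item selection, test length, stopping rules) combine into a simple adaptive loop. A hedged sketch under the 2PL, using maximum-information item selection, one Newton step of maximum-likelihood scoring per response, and dual stopping rules (item cap or standard-error threshold); this is an illustration of the generic scheme, not any specific operational system:

```python
import math

def item_info(theta, a, b):
    """2PL Fisher information for one item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def run_cat(bank, answer, theta=0.0, max_items=20, se_stop=0.3):
    """Administer a 2PL CAT.

    `bank` is a list of (a, b) tuples; `answer(item_index)` supplies the
    0/1 response. Each round: pick the most informative unused item,
    take one Newton-Raphson step on the log-likelihood, then stop once
    the standard error falls below `se_stop` or `max_items` is reached.
    """
    used, responses = [], []
    for _ in range(max_items):
        i = max((j for j in range(len(bank)) if j not in used),
                key=lambda j: item_info(theta, *bank[j]))
        used.append(i)
        responses.append(answer(i))
        grad = sum(a * (x - 1.0 / (1.0 + math.exp(-a * (theta - b))))
                   for (a, b), x in zip((bank[j] for j in used), responses))
        info = sum(item_info(theta, *bank[j]) for j in used)
        theta += grad / info                  # one Newton step
        if 1.0 / math.sqrt(info) < se_stop:   # SE stopping rule
            break
    return theta, used
```

Content balancing and exposure control would slot into the selection step; a production scorer would also guard the all-correct/all-incorrect patterns for which the ML estimate is unbounded.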
We built simulated CAT (5-, 10-, 15-, and 20-item versions) for self-care and mobility domains and tested their accuracy in a normative sample (n=373; 190 males, 183 females; mean age 6y 11mo [SD 4y 2m], range 4mo to 14y 11mo) and a sample of children and adolescents with Pompe disease (n=26; 21 males, 5 females; mean age 6y 1mo [SD 3y 10mo], range 5mo to 14y 10mo). Results indicated that comparable score estimates (based on computer simulations) to the full-length tests can be achieved in a 20-item CAT version for all age ranges and for normative and clinical samples. No more than 13 to 16% of the items in the full-length tests were needed for any one administration. These results support further consideration of using CAT programs for accurate and efficient clinical assessments of physical functioning.10a*Computer Systems10aActivities of Daily Living10aAdolescent10aAge Factors10aChild10aChild Development/*physiology10aChild, Preschool10aComputer Simulation10aConfidence Intervals10aDemography10aFemale10aGlycogen Storage Disease Type II/physiopathology10aHealth Status Indicators10aHumans10aInfant10aInfant, Newborn10aMale10aMotor Activity/*physiology10aOutcome Assessment (Health Care)/*methods10aReproducibility of Results10aSelf Care10aSensitivity and Specificity1 aHaley, S M1 aNi, P1 aFragala-Pinkham, M A1 aSkrinar, A M1 aCorzo, D uhttp://mail.iacat.org/content/computer-adaptive-testing-approach-assessing-physical-functioning-children-and-adolescents02220nas a2200241 4500008004100000020004100041245009600082210006900178250001500247260000800262300001100270490000700281520139600288653002701684653003701711653002701748653001101775653002601786100001501812700001901827700001301846856011901859 2005 eng d a0007-1102 (Print)0007-1102 (Linking)00aComputerized adaptive testing: a mixture item selection approach for constrained situations0 aComputerized adaptive testing a mixture item selection approach a2005/11/19 cNov a239-570 v583 aIn computerized adaptive testing (CAT), 
traditionally the most discriminating items are selected to provide the maximum information so as to attain the highest efficiency in trait (theta) estimation. The maximum information (MI) approach typically results in unbalanced item exposure and hence high item-overlap rates across examinees. Recently, Yi and Chang (2003) proposed the multiple stratification (MS) method to remedy the shortcomings of MI. In MS, items are first sorted according to content, then difficulty and finally discrimination parameters. As discriminating items are used strategically, MS offers a better utilization of the entire item pool. However, for testing with imposed non-statistical constraints, this new stratification approach may not maintain its high efficiency. Through a series of simulation studies, this research explored the possible benefits of a mixture item selection approach (MS-MI), integrating the MS and MI approaches, in testing with non-statistical constraints. In all simulation conditions, MS consistently outperformed the other two competing approaches in item pool utilization, while the MS-MI and the MI approaches yielded higher measurement efficiency and offered better conformity to the constraints. 
Furthermore, the MS-MI approach was shown to perform better than MI on all evaluation criteria when control of item exposure was imposed.10a*Computer-Aided Design10a*Educational Measurement/methods10a*Models, Psychological10aHumans10aPsychometrics/methods1 aLeung, C K1 aChang, Hua-Hua1 aHau, K T uhttp://mail.iacat.org/content/computerized-adaptive-testing-mixture-item-selection-approach-constrained-situations02124nas a2200253 4500008004100000245007900041210006900120300001200189490000700201520113300208653002701341653004601368653005201414653002901466653001101495653005601506653002501562653004101587653004501628653006201673100001501735700001501750856010501765 2005 eng d00aContemporary measurement techniques for rehabilitation outcomes assessment0 aContemporary measurement techniques for rehabilitation outcomes a339-3450 v373 aIn this article, we review the limitations of traditional rehabilitation functional outcome instruments currently in use within the rehabilitation field to assess Activity and Participation domains as defined by the International Classification of Function, Disability, and Health. These include a narrow scope of functional outcomes, data incompatibility across instruments, and the precision vs feasibility dilemma. Following this, we illustrate how contemporary measurement techniques, such as item response theory methods combined with computer adaptive testing methodology, can be applied in rehabilitation to design functional outcome instruments that are comprehensive in scope, accurate, allow for compatibility across instruments, and are sensitive to clinically important change without sacrificing their feasibility. 
Finally, we present some of the pressing challenges that need to be overcome to provide effective dissemination and training assistance to ensure that current and future generations of rehabilitation professionals are familiar with and skilled in the application of contemporary outcomes measurement.10a*Disability Evaluation10aActivities of Daily Living/classification10aDisabled Persons/classification/*rehabilitation10aHealth Status Indicators10aHumans10aOutcome Assessment (Health Care)/*methods/standards10aRecovery of Function10aResearch Support, N.I.H., Extramural10aResearch Support, U.S. Gov't, Non-P.H.S.10aSensitivity and Specificity computerized adaptive testing1 aJette, A M1 aHaley, S M uhttp://mail.iacat.org/content/contemporary-measurement-techniques-rehabilitation-outcomes-assessment02215nas a2200373 4500008004100000245011500041210006900156300001100225490000700236520104400243653002101287653002001308653001001328653000901338653002801347653001101375653003801386653000901424653001601433653004101449653001801490653003701508653003001545100001401575700001301589700001301602700001501615700001701630700001501647700001901662700001601681700001801697856012601715 2005 eng d00aData pooling and analysis to build a preliminary item bank: an example using bowel function in prostate cancer0 aData pooling and analysis to build a preliminary item bank an ex a142-590 v283 aAssessing bowel function (BF) in prostate cancer can help determine therapeutic trade-offs. We determined the components of BF commonly assessed in prostate cancer studies as an initial step in creating an item bank for clinical and research application. We analyzed six archived data sets representing 4,246 men with prostate cancer. Thirty-one items from validated instruments were available for analysis. Items were classified into domains (diarrhea, rectal urgency, pain, bleeding, bother/distress, and other) then subjected to conventional psychometric and item response theory (IRT) analyses. 
Items fit the IRT model if the ratio between observed and expected item variance was between 0.60 and 1.40. Four of 31 items had inadequate fit in at least one analysis. Poorly fitting items included bleeding (2), rectal urgency (1), and bother/distress (1). A fifth item assessing hemorrhoids was poorly correlated with other items. Our analyses supported four related components of BF: diarrhea, rectal urgency, pain, and bother/distress.10a*Quality of Life10a*Questionnaires10aAdult10aAged10aData Collection/methods10aHumans10aIntestine, Large/*physiopathology10aMale10aMiddle Aged10aProstatic Neoplasms/*physiopathology10aPsychometrics10aResearch Support, Non-U.S. Gov't10aStatistics, Nonparametric1 aEton, D T1 aLai, J S1 aCella, D1 aReeve, B B1 aTalcott, J A1 aClark, J A1 aMcPherson, C P1 aLitwin, M S1 aMoinpour, C M uhttp://mail.iacat.org/content/data-pooling-and-analysis-build-preliminary-item-bank-example-using-bowel-function-prostate02444nas a2200373 4500008004100000020004100041245008200082210006900164250001500233260000800248300001000256490000700266520136700273653001001640653000901650653002201659653003301681653003301714653001101747653001101758653000901769653001601778653004001794653001801834653001901852100001301871700001301884700001401897700001201911700001701923700001601940700001501956856009901971 2005 eng d a0895-4356 (Print)0895-4356 (Linking)00aAn item bank was created to improve the measurement of cancer-related fatigue0 aitem bank was created to improve the measurement of cancerrelate a2005/02/01 cFeb a190-70 v583 aOBJECTIVE: Cancer-related fatigue (CRF) is one of the most common unrelieved symptoms experienced by patients. CRF is underrecognized and undertreated due to a lack of clinically sensitive instruments that integrate easily into clinics. Modern computerized adaptive testing (CAT) can overcome these obstacles by enabling precise assessment of fatigue without requiring the administration of a large number of questions. 
A working item bank is essential for development of a CAT platform. The present report describes the building of an operational item bank for use in clinical settings with the ultimate goal of improving CRF identification and treatment. STUDY DESIGN AND SETTING: The sample included 301 cancer patients. Psychometric properties of items were examined by using Rasch analysis, an Item Response Theory (IRT) model. RESULTS AND CONCLUSION: The final bank includes 72 items. These 72 unidimensional items explained 57.5% of the variance, based on factor analysis results. Excellent internal consistency (alpha=0.99) and acceptable item-total correlation were found (range: 0.51-0.85). The 72 items covered a reasonable range of the fatigue continuum. No significant ceiling effects, floor effects, or gaps were found. A sample short form was created for demonstration purposes. The resulting bank is amenable to the development of a CAT platform.10aAdult10aAged10aAged, 80 and over10aFactor Analysis, Statistical10aFatigue/*etiology/psychology10aFemale10aHumans10aMale10aMiddle Aged10aNeoplasms/*complications/psychology10aPsychometrics10aQuestionnaires1 aLai, J-S1 aCella, D1 aDineen, K1 aBode, R1 aVon Roenn, J1 aGershon, RC1 aShevrin, D uhttp://mail.iacat.org/content/item-bank-was-created-improve-measurement-cancer-related-fatigue02903nas a2200409 4500008004100000245012300041210006900164260000800233300001000241490000700251520159700258653004701855653001001902653000901912653001901921653003101940653002601971653001101997653002902008653001102037653000902048653001602057653003902073653001402112653002502126653002702151653003002178653003202208653002802240653002202268100001502290700001602305700001702321700001602338700001502354856012402369 2005 eng d00aMeasuring physical function in patients with complex medical and postsurgical conditions: a computer adaptive approach0 aMeasuring physical function in patients with complex medical and cOct a741-80 v843 aOBJECTIVE: To examine whether the range of 
disability in the medically complex and postsurgical populations receiving rehabilitation is adequately sampled by the new Activity Measure--Post-Acute Care (AM-PAC), and to assess whether computer adaptive testing (CAT) can derive valid patient scores using fewer questions. DESIGN: Observational study of 158 subjects (mean age 67.2 yrs) receiving skilled rehabilitation services in inpatient (acute rehabilitation hospitals, skilled nursing facility units) and community (home health services, outpatient departments) settings for recent-onset or worsening disability from medical (excluding neurological) and surgical (excluding orthopedic) conditions. Measures were interviewer-administered activity questions (all patients) and physical functioning portion of the SF-36 (outpatients) and standardized chart items (11 Functional Independence Measure (FIM), 19 Standardized Outcome and Assessment Information Set (OASIS) items, and 22 Minimum Data Set (MDS) items). Rasch modeling analyzed all data and the relationship between person ability estimates and average item difficulty. CAT assessed the ability to derive accurate patient scores using a sample of questions. RESULTS: The 163-item activity item pool covered the range of physical movement and personal and instrumental activities. CAT analysis showed comparable scores between estimates using 10 items or the total item pool. CONCLUSION: The AM-PAC can assess a broad range of function in patients with complex medical illness. 
CAT achieves valid patient scores using fewer questions.10aActivities of Daily Living/*classification10aAdult10aAged10aCohort Studies10aContinuity of Patient Care10aDisability Evaluation10aFemale10aHealth Services Research10aHumans10aMale10aMiddle Aged10aPostoperative Care/*rehabilitation10aPrognosis10aRecovery of Function10aRehabilitation Centers10aRehabilitation/*standards10aSensitivity and Specificity10aSickness Impact Profile10aTreatment Outcome1 aSiebens, H1 aAndres, P L1 aPengsheng, N1 aCoster, W J1 aHaley, S M uhttp://mail.iacat.org/content/measuring-physical-function-patients-complex-medical-and-postsurgical-conditions-computer02721nas a2200373 4500008004100000245017500041210006900216300001100285490000700296520137800303653003001681653003101711653001501742653001001757653000901767653002201776653003201798653004201830653001101872653003001883653001101913653005101924653003101975653003702006653000902043653001602052653004102068653004102109653002602150100001402176700001802190700001902208856012002227 2005 eng d00aSimulated computerized adaptive tests for measuring functional status were efficient with good discriminant validity in patients with hip, knee, or foot/ankle impairments0 aSimulated computerized adaptive tests for measuring functional s a629-380 v583 aBACKGROUND AND OBJECTIVE: To develop computerized adaptive tests (CATs) designed to assess lower extremity functional status (FS) in people with lower extremity impairments using items from the Lower Extremity Functional Scale and compare discriminant validity of FS measures generated using all items analyzed with a rating scale Item Response Theory model (theta(IRT)) and measures generated using the simulated CATs (theta(CAT)). METHODS: Secondary analysis of retrospective intake rehabilitation data. RESULTS: Unidimensionality of items was strong, and local independence of items was adequate. 
Differential item functioning (DIF) affected item calibration related to body part, that is, hip, knee, or foot/ankle, but DIF did not affect item calibration for symptom acuity, gender, age, or surgical history. Therefore, patients were separated into three body part specific groups. The rating scale model fit all three data sets well. Three body part specific CATs were developed: each was 70% more efficient than using all LEFS items to estimate FS measures. theta(IRT) and theta(CAT) measures discriminated patients by symptom acuity, age, and surgical history in similar ways. theta(CAT) measures were as precise as theta(IRT) measures. CONCLUSION: Body part-specific simulated CATs were efficient and produced precise measures of FS with good discriminant validity.10a*Health Status Indicators10aActivities of Daily Living10aAdolescent10aAdult10aAged10aAged, 80 and over10aAnkle Joint/physiopathology10aDiagnosis, Computer-Assisted/*methods10aFemale10aHip Joint/physiopathology10aHumans10aJoint Diseases/physiopathology/*rehabilitation10aKnee Joint/physiopathology10aLower Extremity/*physiopathology10aMale10aMiddle Aged10aResearch Support, N.I.H., Extramural10aResearch Support, U.S. 
Gov't, P.H.S.10aRetrospective Studies1 aHart, D L1 aMioduski, J E1 aStratford, P W uhttp://mail.iacat.org/content/simulated-computerized-adaptive-tests-measuring-functional-status-were-efficient-good03708nas a2200481 4500008004100000245005200041210005200093300001200145490000700157520221100164653001902375653002902394653005802423653001002481653005302491653000902544653001102553653002502564653002602589653003302615653001102648653001002659653000902669653001602678653002402694653007402718653001802792653002902810653005802839653003102897653003202928653003602960653003202996100001503028700001603043700001603059700001603075700001003091700001403101700001803115700001503133856007803148 2004 eng d00aActivity outcome measurement for postacute care0 aActivity outcome measurement for postacute care aI49-1610 v423 aBACKGROUND: Efforts to evaluate the effectiveness of a broad range of postacute care services have been hindered by the lack of conceptually sound and comprehensive measures of outcomes. It is critical to determine a common underlying structure before employing current methods of item equating across outcome instruments for future item banking and computer-adaptive testing applications. OBJECTIVE: To investigate the factor structure, reliability, and scale properties of items underlying the Activity domains of the International Classification of Functioning, Disability and Health (ICF) for use in postacute care outcome measurement. METHODS: We developed a 41-item Activity Measure for Postacute Care (AM-PAC) that assessed an individual's execution of discrete daily tasks in his or her own environment across major content domains as defined by the ICF. 
We evaluated the reliability and discriminant validity of the prototype AM-PAC in 477 individuals in active rehabilitation programs across 4 rehabilitation settings using factor analyses, tests of item scaling, internal consistency reliability analyses, Rasch item response theory modeling, residual component analysis, and modified parallel analysis. RESULTS: Results from an initial exploratory factor analysis produced 3 distinct, interpretable factors that accounted for 72% of the variance: Applied Cognition (44%), Personal Care & Instrumental Activities (19%), and Physical & Movement Activities (9%); these 3 activity factors were verified by a confirmatory factor analysis. Scaling assumptions were met for each factor in the total sample and across diagnostic groups. Internal consistency reliability was high for the total sample (Cronbach alpha = 0.92 to 0.94), and for specific diagnostic groups (Cronbach alpha = 0.90 to 0.95). Rasch scaling, residual factor, differential item functioning, and modified parallel analyses supported the unidimensionality and goodness of fit of each unique activity domain. CONCLUSIONS: This 3-factor model of the AM-PAC can form the conceptual basis for common-item equating and computer-adaptive applications, leading to a comprehensive system of outcome instruments for postacute care settings.10a*Self Efficacy10a*Sickness Impact Profile10aActivities of Daily Living/*classification/psychology10aAdult10aAftercare/*standards/statistics & numerical data10aAged10aBoston10aCognition/physiology10aDisability Evaluation10aFactor Analysis, Statistical10aFemale10aHuman10aMale10aMiddle Aged10aMovement/physiology10aOutcome Assessment (Health Care)/*methods/statistics & numerical data10aPsychometrics10aQuestionnaires/standards10aRehabilitation/*standards/statistics & numerical data10aReproducibility of Results10aSensitivity and Specificity10aSupport, U.S. Gov't, Non-P.H.S.10aSupport, U.S. 
Gov't, P.H.S.1 aHaley, S M1 aCoster, W J1 aAndres, P L1 aLudlow, L H1 aNi, P1 aBond, T L1 aSinclair, S J1 aJette, A M uhttp://mail.iacat.org/content/activity-outcome-measurement-postacute-care01885nas a2200241 4500008004100000245006000041210005900101260004800160300001200208520112200220653001501342653003401357653002201391653002101413653001901434653001601453653001701469653001301486653002801499100001301527700001601540856008701556 2004 eng d00aAdaptive computerized educational systems: A case study0 aAdaptive computerized educational systems A case study aSan Diego, CA. USAbElsevier Academic Press a143-1693 a(Created by APA) Adaptive instruction describes adjustments typical of one-on-one tutoring as discussed in the college tutorial scenario. So computerized adaptive instruction refers to the use of computer software--almost always incorporating artificially intelligent services--which has been designed to adjust both the presentation of information and the form of questioning to meet the current needs of an individual learner. This chapter describes a system for Internet-delivered adaptive instruction. The author attempts to demonstrate a sharp difference between the teaching that takes place outside of the classroom in universities and the kind that is at least afforded, if not taken advantage of by many, students in a more personalized educational setting such as those in the small liberal arts colleges. The author describes a computer-based technology that allows that gap to be bridged with the advantage of at least having more highly prepared learners sitting in college classrooms. A limited range of emerging research that supports that proposition is cited. 
10aArtificial10aComputer Assisted Instruction10aComputer Software10aHigher Education10aIndividualized10aInstruction10aIntelligence10aInternet10aUndergraduate Education1 aRay, R D1 aMalott, R W uhttp://mail.iacat.org/content/adaptive-computerized-educational-systems-case-study02777nas a2200421 4500008004100000020004600041245011200087210006900199250001500268260001000283300000700293490000600300520144100306653002701747653003001774653004701804653001001851653000901861653002201870653002801892653001101920653001101931653002001942653000901962653001601971653001601987653001902003653001602022653003502038653002902073653004002102653003002142100001402172700001702186700001702203700001402220856012102234 2004 eng d a1477-7525 (Electronic)1477-7525 (Linking)00aThe AMC Linear Disability Score project in a population requiring residential care: psychometric properties0 aAMC Linear Disability Score project in a population requiring re a2004/08/05 cAug 3 a420 v23 aBACKGROUND: Currently there is a lot of interest in the flexible framework offered by item banks for measuring patient relevant outcomes, including functional status. However, there are few item banks, which have been developed to quantify functional status, as expressed by the ability to perform activities of daily life. METHOD: This paper examines the psychometric properties of the AMC Linear Disability Score (ALDS) project item bank using an item response theory model and full information factor analysis. Data were collected from 555 respondents on a total of 160 items. RESULTS: Following the analysis, 79 items remained in the item bank. The remaining 81 items were excluded because of: difficulties in presentation (1 item); low levels of variation in response pattern (28 items); significant differences in measurement characteristics for males and females or for respondents under or over 85 years old (26 items); or lack of model fit to the data at item level (26 items). 
CONCLUSIONS: It is conceivable that the item bank will have different measurement characteristics for other patient or demographic populations. However, these results indicate that the ALDS item bank has sound psychometric properties for respondents in residential care settings and could form a stable base for measuring functional status in a range of situations, including the implementation of computerised adaptive testing of functional status.10a*Disability Evaluation10a*Health Status Indicators10aActivities of Daily Living/*classification10aAdult10aAged10aAged, 80 and over10aData Collection/methods10aFemale10aHumans10aLogistic Models10aMale10aMiddle Aged10aNetherlands10aPilot Projects10aProbability10aPsychometrics/*instrumentation10aQuestionnaires/standards10aResidential Facilities/*utilization10aSeverity of Illness Index1 aHolman, R1 aLindeboom, R1 aVermeulen, M1 aHaan, R J uhttp://mail.iacat.org/content/amc-linear-disability-score-project-population-requiring-residential-care-psychometric01802nas a2200361 4500008004100000020002200041245009500063210006900158250001500227260001100242300001000253490000700263520066300270653002500933653002900958653001000987653000900997653002201006653004501028653003701073653001101110653001101121653000901132653001601141653003601157653003001193653003401223100001601257700002401273700001001297700001501307856011801322 2004 eng d a1074-9357 (Print)00aComputer adaptive testing: a strategy for monitoring stroke rehabilitation across settings0 aComputer adaptive testing a strategy for monitoring stroke rehab a2004/05/01 cSpring a33-390 v113 aCurrent functional assessment instruments in stroke rehabilitation are often setting-specific and lack precision, breadth, and/or feasibility. Computer adaptive testing (CAT) offers a promising potential solution by providing a quick, yet precise, measure of function that can be used across a broad range of patient abilities and in multiple settings. 
CAT technology yields a precise score by selecting very few relevant items from a large and diverse item pool based on each individual's responses. We demonstrate the potential usefulness of a CAT assessment model with a cross-sectional sample of persons with stroke from multiple rehabilitation settings.10a*Computer Simulation10a*User-Computer Interface10aAdult10aAged10aAged, 80 and over10aCerebrovascular Accident/*rehabilitation10aDisabled Persons/*classification10aFemale10aHumans10aMale10aMiddle Aged10aMonitoring, Physiologic/methods10aSeverity of Illness Index10aTask Performance and Analysis1 aAndres, P L1 aBlack-Schaffer, R M1 aNi, P1 aHaley, S M uhttp://mail.iacat.org/content/computer-adaptive-testing-strategy-monitoring-stroke-rehabilitation-across-settings01905nas a2200217 4500008004100000245010000041210006900141260000800210300001100218490000700229520117300236653002201409653001501431653003701446653003101483653001101514653001901525100001201544700001501556856011601571 2004 eng d00aA computerized adaptive knowledge test as an assessment tool in general practice: a pilot study0 acomputerized adaptive knowledge test as an assessment tool in ge cMar a178-830 v263 aAdvantageous to assessment in many fields, CAT (computerized adaptive testing) use in general practice has been scarce. In adapting CAT to general practice, the basic assumptions of item response theory and the case specificity must be taken into account. In this context, this study first evaluated the feasibility of converting written extended matching tests into CAT. Second, it questioned the content validity of CAT. A stratified sample of students was invited to participate in the pilot study. The items used in this test, together with their parameters, originated from the written test. The detailed test paths of the students were retained and analysed thoroughly. Using the predefined pass-fail standard, one student failed the test. 
There was a positive correlation between the number of items and the candidate's ability level. The majority of students were presented with questions in seven of the 10 existing domains. Although proved to be a feasible test format, CAT cannot substitute for the existing high-stakes large-scale written test. It may provide a reliable instrument for identifying candidates who are at risk of failing in the written test.10a*Computer Systems10aAlgorithms10aEducational Measurement/*methods10aFamily Practice/*education10aHumans10aPilot Projects1 aRoex, A1 aDegryse, J uhttp://mail.iacat.org/content/computerized-adaptive-knowledge-test-assessment-tool-general-practice-pilot-study02594nas a2200469 4500008004100000245007200041210006900113300001000182490000600192520108600198653002501284653001001309653001501319653002101334653002201355653005901377653007001436653003301506653001101539653001101550653001301561653000901574653002701583653002201610653005501632653001901687653001501706653006601721653001801787653003701805653004101842653003001883653001301913100001501926700001301941700001801954700001501972700001401987700001402001700001302015856009602028 2004 eng d00aComputerized adaptive measurement of depression: A simulation study0 aComputerized adaptive measurement of depression A simulation stu a13-230 v43 aBackground: Efficient, accurate instruments for measuring depression are increasingly important in clinical practice. We developed a computerized adaptive version of the Beck Depression Inventory (BDI). We examined its efficiency and its usefulness in identifying Major Depressive Episodes (MDE) and in measuring depression severity. Methods: Subjects were 744 participants in research studies in which each subject completed both the BDI and the SCID. In addition, 285 patients completed the Hamilton Depression Rating Scale. Results: The adaptive BDI had an AUC as an indicator of a SCID diagnosis of MDE of 88%, equivalent to the full BDI. 
The adaptive BDI asked fewer questions than the full BDI (5.6 versus 21 items). The adaptive latent depression score correlated r = .92 with the BDI total score and the latent depression score correlated more highly with the Hamilton (r = .74) than the BDI total score did (r = .70). Conclusions: Adaptive testing for depression may provide greatly increased efficiency without loss of accuracy in identifying MDE or in measuring depression severity.10a*Computer Simulation10aAdult10aAlgorithms10aArea Under Curve10aComparative Study10aDepressive Disorder/*diagnosis/epidemiology/psychology10aDiagnosis, Computer-Assisted/*methods/statistics & numerical data10aFactor Analysis, Statistical10aFemale10aHumans10aInternet10aMale10aMass Screening/methods10aPatient Selection10aPersonality Inventory/*statistics & numerical data10aPilot Projects10aPrevalence10aPsychiatric Status Rating Scales/*statistics & numerical data10aPsychometrics10aResearch Support, Non-U.S. Gov't10aResearch Support, U.S. Gov't, P.H.S.10aSeverity of Illness Index10aSoftware1 aGardner, W1 aShear, K1 aKelleher, K J1 aPajer, K A1 aMammen, O1 aBuysse, D1 aFrank, E uhttp://mail.iacat.org/content/computerized-adaptive-measurement-depression-simulation-study02844nas a2200349 4500008004100000020004600041245011400087210006900201250001500270260001100285300000700296490000600303520169400309653002702003653002002030653002102050653002002071653004702091653003702138653001802175653001102193653001902204653001602223653002002239653003002259100001402289700001402303700001702317700002002334700001402354856012602368 2004 eng d a1477-7525 (Electronic)1477-7525 (Linking)00aPractical methods for dealing with 'not applicable' item responses in the AMC Linear Disability Score project0 aPractical methods for dealing with not applicable item responses a2004/06/18 cJun 16 a290 v23 aBACKGROUND: Whenever questionnaires are used to collect data on constructs, such as functional status or health related quality of life, it is unlikely that all 
respondents will respond to all items. This paper examines ways of dealing with responses in a 'not applicable' category to items included in the AMC Linear Disability Score (ALDS) project item bank. METHODS: The data examined in this paper come from the responses of 392 respondents to 32 items and form part of the calibration sample for the ALDS item bank. The data are analysed using the one-parameter logistic item response theory model. The four practical strategies for dealing with this type of response are: cold deck imputation; hot deck imputation; treating the missing responses as if these items had never been offered to those individual patients; and using a model which takes account of the 'tendency to respond to items'. RESULTS: The item and respondent population parameter estimates were very similar for the strategies involving hot deck imputation; treating the missing responses as if these items had never been offered to those individual patients; and using a model which takes account of the 'tendency to respond to items'. The estimates obtained using the cold deck imputation method were substantially different. CONCLUSIONS: The cold deck imputation method was not considered suitable for use in the ALDS item bank. The other three methods described can be usefully implemented in the ALDS item bank, depending on the purpose of the data analysis to be carried out. 
These three methods may be useful for other data sets examining similar constructs, when item response theory based methods are used.10a*Disability Evaluation10a*Health Surveys10a*Logistic Models10a*Questionnaires10aActivities of Daily Living/*classification10aData Interpretation, Statistical10aHealth Status10aHumans10aPilot Projects10aProbability10aQuality of Life10aSeverity of Illness Index1 aHolman, R1 aGlas, C A1 aLindeboom, R1 aZwinderman, A H1 aHaan, R J uhttp://mail.iacat.org/content/practical-methods-dealing-not-applicable-item-responses-amc-linear-disability-score-project01775nas a2200217 4500008004100000245007700041210006900118300001100187490000600198520108400204653001501288653002501303653001601328653001001344653001801354653002101372653003101393100001901424700001501443856009901458 2004 eng d00aPre-equating: a simulation study based on a large scale assessment model0 aPreequating a simulation study based on a large scale assessment a301-180 v53 aAlthough post-equating (PE) has proven to be an acceptable method in the scaling and equating of items and forms, there are times when the turn-around period for equating and converting raw scores to scale scores is so small that PE cannot be undertaken within the prescribed time frame. In such cases, pre-equating (PrE) could be considered as an acceptable alternative. Assessing the feasibility of using item calibrations from the item bank (as in PrE) is conditioned on the equivalency of the calibrations and the errors associated with it vis a vis the results obtained via PE. This paper creates item banks over three periods of item introduction into the banks and uses the Rasch model in examining data with respect to the recovery of item parameters, the measurement error, and the effect cut-points have on examinee placement in both the PrE and PE situations. 
Results indicate that PrE is a viable solution to PE provided the stability of the item calibrations are enhanced by using large sample sizes (perhaps as large as full-population) in populating the item bank.10a*Databases10a*Models, Theoretical10aCalibration10aHuman10aPsychometrics10aReference Values10aReproducibility of Results1 aTaherbhai, H M1 aYoung, M J uhttp://mail.iacat.org/content/pre-equating-simulation-study-based-large-scale-assessment-model04033nas a2200433 4500008004100000245012300041210006900164260000800233300001200241490000700253520252400260653001902784653002902803653005802832653001002890653000902900653002202909653002602931653003302957653001102990653001103001653000903012653001603021653007403037653003003111653003603141653005803177653003103235653004503266653004103311653003203352100001603384700001503400700001603415700001603431700001403447700001203461856012603473 2004 eng d00aRefining the conceptual basis for rehabilitation outcome measurement: personal care and instrumental activities domain0 aRefining the conceptual basis for rehabilitation outcome measure cJan aI62-1720 v423 aBACKGROUND: Rehabilitation outcome measures routinely include content on performance of daily activities; however, the conceptual basis for item selection is rarely specified. These instruments differ significantly in format, number, and specificity of daily activity items and in the measurement dimensions and type of scale used to specify levels of performance. We propose that a requirement for upper limb and hand skills underlies many activities of daily living (ADL) and instrumental activities of daily living (IADL) items in current instruments, and that items selected based on this definition can be placed along a single functional continuum. 
OBJECTIVE: To examine the dimensional structure and content coverage of a Personal Care and Instrumental Activities item set and to examine the comparability of items from existing instruments and a set of new items as measures of this domain. METHODS: Participants (N = 477) from 3 different disability groups and 4 settings representing the continuum of postacute rehabilitation care were administered the newly developed Activity Measure for Post-Acute Care (AM-PAC), the SF-8, and an additional setting-specific measure: FIM (in-patient rehabilitation); MDS (skilled nursing facility); MDS-PAC (postacute settings); OASIS (home care); or PF-10 (outpatient clinic). Rasch (partial-credit model) analyses were conducted on a set of 62 items covering the Personal Care and Instrumental domain to examine item fit, item functioning, and category difficulty estimates and unidimensionality. RESULTS: After removing 6 misfitting items, the remaining 56 items fit acceptably along the hypothesized continuum. Analyses yielded different difficulty estimates for the maximum score (eg, "Independent performance") for items with comparable content from different instruments. Items showed little differential item functioning across age, diagnosis, or severity groups, and 92% of the participants fit the model. CONCLUSIONS: ADL and IADL items from existing rehabilitation outcomes instruments that depend on skilled upper limb and hand use can be located along a single continuum, along with the new personal care and instrumental items of the AM-PAC addressing gaps in content. 
Results support the validity of the proposed definition of the Personal Care and Instrumental Activities dimension of function as a guide for future development of rehabilitation outcome instruments, such as linked, setting-specific short forms and computerized adaptive testing approaches.10a*Self Efficacy10a*Sickness Impact Profile10aActivities of Daily Living/*classification/psychology10aAdult10aAged10aAged, 80 and over10aDisability Evaluation10aFactor Analysis, Statistical10aFemale10aHumans10aMale10aMiddle Aged10aOutcome Assessment (Health Care)/*methods/statistics & numerical data10aQuestionnaires/*standards10aRecovery of Function/physiology10aRehabilitation/*standards/statistics & numerical data10aReproducibility of Results10aResearch Support, U.S. Gov't, Non-P.H.S.10aResearch Support, U.S. Gov't, P.H.S.10aSensitivity and Specificity1 aCoster, W J1 aHaley, S M1 aAndres, P L1 aLudlow, L H1 aBond, T L1 aNi, P S uhttp://mail.iacat.org/content/refining-conceptual-basis-rehabilitation-outcome-measurement-personal-care-and-instrumental02889nas a2200301 4500008004100000020002200041245013700063210006900200250001500269260000800284300001000292490000700302520186600309653001102175653003302186653001102219653004602230653002402276653002902300653003002329653002902359100001502388700001602403700001602419700001602435700001002451856012602461 2004 eng d a0003-9993 (Print)00aScore comparability of short forms and computerized adaptive testing: Simulation study with the activity measure for post-acute care0 aScore comparability of short forms and computerized adaptive tes a2004/04/15 cApr a661-60 v853 aOBJECTIVE: To compare simulated short-form and computerized adaptive testing (CAT) scores to scores obtained from complete item sets for each of the 3 domains of the Activity Measure for Post-Acute Care (AM-PAC). DESIGN: Prospective study. 
SETTING: Six postacute health care networks in the greater Boston metropolitan area, including inpatient acute rehabilitation, transitional care units, home care, and outpatient services. PARTICIPANTS: A convenience sample of 485 adult volunteers who were receiving skilled rehabilitation services. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Inpatient and community-based short forms and CAT applications were developed for each of 3 activity domains (physical & mobility, personal care & instrumental, applied cognition) using item pools constructed from new items and items from existing postacute care instruments. RESULTS: Simulated CAT scores correlated highly with score estimates from the total item pool in each domain (4- and 6-item CAT r range, .90-.95; 10-item CAT r range, .96-.98). Scores on the 10-item short forms constructed for inpatient and community settings also provided good estimates of the AM-PAC item pool scores for the physical & movement and personal care & instrumental domains, but were less consistent in the applied cognition domain. Confidence intervals around individual scores were greater in the short forms than for the CATs. CONCLUSIONS: Accurate scoring estimates for AM-PAC domains can be obtained with either the setting-specific short forms or the CATs. The strong relationship between CAT and item pool scores can be attributed to the CAT's ability to select specific items to match individual responses. 
The CAT may have additional advantages over short forms in practicality, efficiency, and the potential for providing more precise scoring estimates for individuals.10aBoston10aFactor Analysis, Statistical10aHumans10aOutcome Assessment (Health Care)/*methods10aProspective Studies10aQuestionnaires/standards10aRehabilitation/*standards10aSubacute Care/*standards1 aHaley, S M1 aCoster, W J1 aAndres, P L1 aKosinski, M1 aNi, P uhttp://mail.iacat.org/content/score-comparability-short-forms-and-computerized-adaptive-testing-simulation-study-activity02655nas a2200385 4500008004100000245014400041210006900185300001200254490000700266520138100273653002101654653003301675653002901708653001501737653001001752653000901762653002201771653002601793653003301819653002501852653001901877653001001896653002501906653001601931653002401947653002601971653002701997653003202024653001302056653002802069100001702097700001602114700001402130856012502144 2003 eng d00aCalibration of an item pool for assessing the burden of headaches: an application of item response theory to the Headache Impact Test (HIT)0 aCalibration of an item pool for assessing the burden of headache a913-9330 v123 aBACKGROUND: Measurement of headache impact is important in clinical trials, case detection, and the clinical monitoring of patients. Computerized adaptive testing (CAT) of headache impact has potential advantages over traditional fixed-length tests in terms of precision, relevance, real-time quality control and flexibility. OBJECTIVE: To develop an item pool that can be used for a computerized adaptive test of headache impact. METHODS: We analyzed responses to four well-known tests of headache impact from a population-based sample of recent headache sufferers (n = 1016). We used confirmatory factor analysis for categorical data and analyses based on item response theory (IRT). 
RESULTS: In factor analyses, we found very high correlations between the factors hypothesized by the original test constructors, both within and between the original questionnaires. These results suggest that a single score of headache impact is sufficient. We established a pool of 47 items which fitted the generalized partial credit IRT model. By simulating a computerized adaptive health test, we showed that an adaptive test of only five items had a very high concordance with the score based on all items and that different worst-case item selection scenarios did not lead to bias. CONCLUSION: We have established a headache impact item pool that can be used in CAT of headache impact.10a*Cost of Illness10a*Decision Support Techniques10a*Sickness Impact Profile10aAdolescent10aAdult10aAged10aComparative Study10aDisability Evaluation10aFactor Analysis, Statistical10aHeadache/*psychology10aHealth Surveys10aHuman10aLongitudinal Studies10aMiddle Aged10aMigraine/psychology10aModels, Psychological10aPsychometrics/*methods10aQuality of Life/*psychology10aSoftware10aSupport, Non-U.S. Gov't1 aBjorner, J B1 aKosinski, M1 aWare, Jr. uhttp://mail.iacat.org/content/calibration-item-pool-assessing-burden-headaches-application-item-response-theory-headache01805nas a2200277 4500008004100000245007600041210006900117300001100186490000600197520090300203653001501106653002901121653003001150653002001180653001101200653005001211653001801261653003201279653004101311653001801352100001401370700001301384700001301397700001901410856009801429 2003 eng d00aDeveloping an initial physical function item bank from existing sources0 aDeveloping an initial physical function item bank from existing a124-360 v43 aThe objective of this article is to illustrate incremental item banking using health-related quality of life data collected from two samples of patients receiving cancer treatment. The kinds of decisions one faces in establishing an item bank for computerized adaptive testing are also illustrated. 
Pre-calibration procedures include: identifying common items across databases; creating a new database with data from each pool; reverse-scoring "negative" items; identifying rating scales used in items; identifying pivot points in each rating scale; pivot anchoring items at comparable rating scale categories; and identifying items in each instrument that measure the construct of interest. A series of calibrations was conducted in which a small proportion of new items were added to the common core and misfitting items were identified and deleted until an initial item bank was developed.10a*Databases10a*Sickness Impact Profile10aAdaptation, Psychological10aData Collection10aHumans10aNeoplasms/*physiopathology/psychology/therapy10aPsychometrics10aQuality of Life/*psychology10aResearch Support, U.S. Gov't, P.H.S.10aUnited States1 aBode, R K1 aCella, D1 aLai, J S1 aHeinemann, A W uhttp://mail.iacat.org/content/developing-initial-physical-function-item-bank-existing-sources01817nas a2200253 4500008004100000245013100041210006900172300001000241490000600251520095700257653001501214653002901229653002501258653001501283653002001298653001101318653003101329100001401360700001601374700001401390700001401404700002101418856012401439 2003 eng d00aAn examination of exposure control and content balancing restrictions on item selection in CATs using the partial credit model0 aexamination of exposure control and content balancing restrictio a24-420 v43 aThe purpose of the present investigation was to systematically examine the effectiveness of the Sympson-Hetter technique and rotated content balancing relative to no exposure control and no content rotation conditions in a computerized adaptive testing system (CAT) based on the partial credit model. A series of simulated fixed and variable length CATs were run using two data sets generated to multiple content areas for three sizes of item pools. 
The 2 (exposure control) X 2 (content rotation) X 2 (test length) X 3 (item pool size) X 2 (data sets) design yielded a total of 48 conditions. Results show that while both procedures can be used with no deleterious effect on measurement precision, the gains in exposure control, pool utilization, and item overlap appear quite modest. Difficulties involved with setting the exposure control parameters in small item pools make questionable the utility of the Sympson-Hetter technique with similar item pools.10a*Computers10a*Educational Measurement10a*Models, Theoretical10aAutomation10aDecision Making10aHumans10aReproducibility of Results1 aDavis, LL1 aPastor, D A1 aDodd, B G1 aChiang, C1 aFitzpatrick, S J uhttp://mail.iacat.org/content/examination-exposure-control-and-content-balancing-restrictions-item-selection-cats-using03167nas a2200361 4500008004100000245012500041210006900166300001200235490000700247520200400254653002902258653001502287653001002302653000902312653002202321653002002343653003302363653002402396653001102420653001002431653000902441653001602450653002502466653002602491653004302517653003202560653001902592653002802611100001702639700001602656700001402672856011902686 2003 eng d00aThe feasibility of applying item response theory to measures of migraine impact: a re-analysis of three clinical studies0 afeasibility of applying item response theory to measures of migr a887-9020 v123 aBACKGROUND: Item response theory (IRT) is a powerful framework for analyzing multi-item scales and is central to the implementation of computerized adaptive testing. OBJECTIVES: To explain the use of IRT to examine measurement properties and to apply IRT to a questionnaire for measuring migraine impact--the Migraine Specific Questionnaire (MSQ). METHODS: Data from three clinical studies that employed the MSQ-version 1 were analyzed by confirmatory factor analysis for categorical data and by IRT modeling. 
RESULTS: Confirmatory factor analyses showed very high correlations between the factors hypothesized by the original test constructions. Further, high item loadings on one common factor suggest that migraine impact may be adequately assessed by only one score. IRT analyses of the MSQ were feasible and provided several suggestions as to how to improve the items and in particular the response choices. Out of 15 items, 13 showed adequate fit to the IRT model. In general, IRT scores were strongly associated with the scores proposed by the original test developers and with the total item sum score. Analysis of response consistency showed that more than 90% of the patients answered consistently according to a unidimensional IRT model. For the remaining patients, scores on the dimension of emotional function were less strongly related to the overall IRT scores that mainly reflected role limitations. Such response patterns can be detected easily using response consistency indices. Analysis of test precision across score levels revealed that the MSQ was most precise at one standard deviation worse than the mean impact level for migraine patients that are not in treatment. Thus, gains in test precision can be achieved by developing items aimed at less severe levels of migraine impact. CONCLUSIONS: IRT proved useful for analyzing the MSQ. The approach warrants further testing in a more comprehensive item pool for headache impact that would enable computerized adaptive testing.10a*Sickness Impact Profile10aAdolescent10aAdult10aAged10aComparative Study10aCost of Illness10aFactor Analysis, Statistical10aFeasibility Studies10aFemale10aHuman10aMale10aMiddle Aged10aMigraine/*psychology10aModels, Psychological10aPsychometrics/instrumentation/*methods10aQuality of Life/*psychology10aQuestionnaires10aSupport, Non-U.S. Gov't1 aBjorner, J B1 aKosinski, M1 aWare, Jr. 
uhttp://mail.iacat.org/content/feasibility-applying-item-response-theory-measures-migraine-impact-re-analysis-three02749nas a2200349 4500008004100000245015800041210006900199260000800268300001200276490000700288520160300295653003001898653002001928653001001948653003201958653001101990653001102001653000902012653001602021653002802037653001802065653003702083653004102120653002802161100001302189700001502202700001302217700001502230700001402245700001902259856012102278 2003 eng d00aItem banking to improve, shorten and computerized self-reported fatigue: an illustration of steps to create a core item bank from the FACIT-Fatigue Scale0 aItem banking to improve shorten and computerized selfreported fa cAug a485-5010 v123 aFatigue is a common symptom among cancer patients and the general population. Due to its subjective nature, fatigue has been difficult to effectively and efficiently assess. Modern computerized adaptive testing (CAT) can enable precise assessment of fatigue using a small number of items from a fatigue item bank. CAT enables brief assessment by selecting questions from an item bank that provide the maximum amount of information given a person's previous responses. This article illustrates steps to prepare such an item bank, using 13 items from the Functional Assessment of Chronic Illness Therapy Fatigue Subscale (FACIT-F) as the basis. Samples included 1022 cancer patients and 1010 people from the general population. An Item Response Theory (IRT)-based rating scale model, a polytomous extension of the Rasch dichotomous model was utilized. Nine items demonstrating acceptable psychometric properties were selected and positioned on the fatigue continuum. The fatigue levels measured by these nine items along with their response categories covered 66.8% of the general population and 82.6% of the cancer patients. 
Although the operational CAT algorithms to handle polytomously scored items are still in progress, we illustrated how CAT may work by using nine core items to measure level of fatigue. Using this illustration, a fatigue measure comparable to its full-length 13-item scale administration was obtained using four items. The resulting item bank can serve as a core to which will be added a psychometrically sound and operational item bank covering the entire fatigue continuum.10a*Health Status Indicators10a*Questionnaires10aAdult10aFatigue/*diagnosis/etiology10aFemale10aHumans10aMale10aMiddle Aged10aNeoplasms/complications10aPsychometrics10aResearch Support, Non-U.S. Gov't10aResearch Support, U.S. Gov't, P.H.S.10aSickness Impact Profile1 aLai, J-S1 aCrane, P K1 aCella, D1 aChang, C-H1 aBode, R K1 aHeinemann, A W uhttp://mail.iacat.org/content/item-banking-improve-shorten-and-computerized-self-reported-fatigue-illustration-steps01802nas a2200241 4500008004100000245009300041210006900134300001200203490000700215520098200222653001801204653002101222653003001243653001901273653004601292653001801338653002501356653001501381100001401396700001901410700001501429856011601444 2003 eng d00aThe relationship between item exposure and test overlap in computerized adaptive testing0 arelationship between item exposure and test overlap in computeri a129-1450 v403 aThe purpose of this article is to present an analytical derivation for the mathematical form of an average between-test overlap index as a function of the item exposure index, for fixed-length computerized adaptive tests (CATs). This algebraic relationship is used to investigate the simultaneous control of item exposure at both the item and test levels. The results indicate that, in fixed-length CATs, control of the average between-test overlap is achieved via the mean and variance of the item exposure rates of the items that constitute the CAT item pool. The mean of the item exposure rates is easily manipulated. 
Control over the variance of the item exposure rates can be achieved via the maximum item exposure rate (r_max). Therefore, item exposure control methods which implement a specification of r_max (e.g., J. B. Sympson and R. D. Hetter, 1985) provide the most direct control at both the item and test levels. (PsycINFO Database Record (c) 2005 APA)10a(Statistical)10aAdaptive Testing10aComputer Assisted Testing10aHuman Computer10aInteraction computerized adaptive testing10aItem Analysis10aItem Analysis (Test)10aTest Items1 aChen, S-Y1 aAnkenmann, R D1 aSpray, J A uhttp://mail.iacat.org/content/relationship-between-item-exposure-and-test-overlap-computerized-adaptive-testing01602nas a2200205 4500008004100000245009400041210006900135260001000204300001200214490000800226520086400234653003001098653000901128653003401137653001101171653003501182653004501217100001801262856011601280 2003 eng d00aTen recommendations for advancing patient-centered outcomes measurement for older persons0 aTen recommendations for advancing patientcentered outcomes measu cSep 2 a403-4090 v1393 aThe past 50 years have seen great progress in the measurement of patient-based outcomes for older populations. Most of the measures now used were created under the umbrella of a set of assumptions and procedures known as classical test theory. A recent alternative for health status assessment is item response theory. Item response theory is superior to classical test theory because it can eliminate test dependency and achieve more precise measurement through computerized adaptive testing. Computerized adaptive testing reduces test administration times and allows varied and precise estimates of ability. Several key challenges must be met before computerized adaptive testing becomes a productive reality. 
I discuss these challenges for the health assessment of older persons in the form of 10 "Ds": things we need to deliberate, debate, decide, and do.10a*Health Status Indicators10aAged10aGeriatric Assessment/*methods10aHumans10aPatient-Centered Care/*methods10aResearch Support, U.S. Gov't, Non-P.H.S.1 aMcHorney, C A uhttp://mail.iacat.org/content/ten-recommendations-advancing-patient-centered-outcomes-measurement-older-persons02860nas a2200265 4500008004100000245006600041210006600107260000800173300000900181490000700190520208800197653002102285653002902306653003002335653001202365653001102377653001302388653003102401653001902432100001302451700001502464700001302479700001502492856008702507 2002 eng d00aAdvances in quality of life measurements in oncology patients0 aAdvances in quality of life measurements in oncology patients cJun a60-80 v293 aAccurate assessment of the quality of life (QOL) of patients can provide important clinical information to physicians, especially in the area of oncology. Changes in QOL are important indicators of the impact of a new cytotoxic therapy, can affect a patient's willingness to continue treatment, and may aid in defining response in the absence of quantifiable endpoints such as tumor regression. Because QOL is becoming an increasingly important aspect in the management of patients with malignant disease, it is vital that the instruments used to measure QOL are reliable and accurate. Assessment of QOL involves a multidimensional approach that includes physical, functional, social, and emotional well-being, and the most comprehensive instruments measure at least three of these domains. Instruments to measure QOL can be generic (eg, the Nottingham Health Profile), targeted toward specific illnesses (eg, Functional Assessment of Cancer Therapy - Lung), or be a combination of generic and targeted. 
Two of the most widely used examples of the combination, or hybrid, instruments are the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 Items and the Functional Assessment of Chronic Illness Therapy. A consequence of the increasing international collaboration in clinical trials has been the growing necessity for instruments that are valid across languages and cultures. To assure the continuing reliability and validity of QOL instruments in this regard, item response theory can be applied. Techniques such as item response theory may be used in the future to construct QOL item banks containing large sets of validated questions that represent various levels of QOL domains. As QOL becomes increasingly important in understanding and approaching the overall management of cancer patients, the tools available to clinicians and researchers to assess QOL will continue to evolve. While the instruments currently available provide reliable and valid measurement, further improvements in precision and application are anticipated.10a*Quality of Life10a*Sickness Impact Profile10aCross-Cultural Comparison10aCulture10aHumans10aLanguage10aNeoplasms/*physiopathology10aQuestionnaires1 aCella, D1 aChang, C-H1 aLai, J S1 aWebster, K uhttp://mail.iacat.org/content/advances-quality-life-measurements-oncology-patients01983nas a2200289 4500008004100000245007600041210006900117260000800186300001200194490000700206520114900213653002401362653001301386653002101399653002001420653001501440653001001455653001001465653001101475653001101486653000901497653002401506653002601530100001601556700001501572856010601587 2002 eng d00aAssessing tobacco beliefs among youth using item response theory models0 aAssessing tobacco beliefs among youth using item response theory cNov aS21-S390 v683 aSuccessful intervention research programs to prevent adolescent smoking require well-chosen, psychometrically sound instruments for assessing smoking prevalence and attitudes. 
Twelve thousand eight hundred and ten adolescents were surveyed about their smoking beliefs as part of the Teenage Attitudes and Practices Survey project, a prospective cohort study of predictors of smoking initiation among US adolescents. Item response theory (IRT) methods are used to frame a discussion of questions that a researcher might ask when selecting an optimal item set. IRT methods are especially useful for choosing items during instrument development, trait scoring, evaluating item functioning across groups, and creating optimal item subsets for use in specialized applications such as computerized adaptive testing. Data analytic steps for IRT modeling are reviewed for evaluating item quality and differential item functioning across subgroups of gender, age, and smoking status. Implications and challenges in the use of these methods for tobacco onset research and for assessing the developmental trajectories of smoking among youth are discussed.10a*Attitude to Health10a*Culture10a*Health Behavior10a*Questionnaires10aAdolescent10aAdult10aChild10aFemale10aHumans10aMale10aModels, Statistical10aSmoking/*epidemiology1 aPanter, A T1 aReeve, B B uhttp://mail.iacat.org/content/assessing-tobacco-beliefs-among-youth-using-item-response-theory-models02895nas a2200349 4500008004100000245008300041210006900124260000800193300001100201490000700212520176600219653003001985653002802015653001502043653001002058653000902068653002202077653001102099653001902110653001102129653000902140653001602149653006202165653006102227653003302288653003602321653003102357653002602388100001402414700001602428856010102444 2002 eng d00aDevelopment of an index of physical functional health status in rehabilitation0 aDevelopment of an index of physical functional health status in cMay a655-650 v833 aOBJECTIVE: To describe (1) the development of an index of physical functional health status (FHS) and (2) its hierarchical structure, unidimensionality, reproducibility of item calibrations, and practical 
application. DESIGN: Rasch analysis of existing data sets. SETTING: A total of 715 acute, orthopedic outpatient centers and 62 long-term care facilities in 41 states participating with Focus On Therapeutic Outcomes, Inc. PATIENTS: A convenience sample of 92,343 patients (40% male; mean age +/- standard deviation [SD], 48+/-17y; range, 14-99y) seeking rehabilitation between 1993 and 1999. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Patients completed self-report health status surveys at admission and discharge. The Medical Outcomes Study 36-Item Short-Form Health Survey's physical functioning scale (PF-10) is the foundation of the physical FHS. The Oswestry Low Back Pain Disability Questionnaire, Neck Disability Index, Lysholm Knee Questionnaire, items pertinent to patients with upper-extremity impairments, and items pertinent to patients with more involved neuromusculoskeletal impairments were cocalibrated into the PF-10. RESULTS: The final FHS item bank contained 36 items (patient separation, 2.3; root mean square measurement error, 5.9; mean square +/- SD infit, 0.9+/-0.5; outfit, 0.9+/-0.9). Analyses supported empirical item hierarchy, unidimensionality, reproducibility of item calibrations, and content and construct validity of the FHS-36. CONCLUSIONS: Results support the reliability and validity of FHS-36 measures in the present sample. 
Analyses show the potential for a dynamic, computer-controlled, adaptive survey for FHS assessment applicable for group analysis and clinical decision making for individual patients.10a*Health Status Indicators10a*Rehabilitation Centers10aAdolescent10aAdult10aAged10aAged, 80 and over10aFemale10aHealth Surveys10aHumans10aMale10aMiddle Aged10aMusculoskeletal Diseases/*physiopathology/*rehabilitation10aNervous System Diseases/*physiopathology/*rehabilitation10aPhysical Fitness/*physiology10aRecovery of Function/physiology10aReproducibility of Results10aRetrospective Studies1 aHart, D L1 aWright, B D uhttp://mail.iacat.org/content/development-index-physical-functional-health-status-rehabilitation02416nas a2200277 4500008004100000245012200041210006900163260000800232300001000240490000700250520148700257653002101744653002101765653002001786653001001806653002201816653002901838653001101867653001801878653001901896653004101915653003201956100001301988700001802001856011902019 2002 eng d00aMeasuring quality of life in chronic illness: the functional assessment of chronic illness therapy measurement system0 aMeasuring quality of life in chronic illness the functional asse cDec aS10-70 v833 aWe focus on quality of life (QOL) measurement as applied to chronic illness. There are 2 major types of health-related quality of life (HRQOL) instruments-generic health status and targeted. Generic instruments offer the opportunity to compare results across patient and population cohorts, and some can provide normative or benchmark data from which to interpret results. Targeted instruments ask questions that focus more on the specific condition or treatment under study and, as a result, tend to be more responsive to clinically important changes than generic instruments. Each type of instrument has a place in the assessment of HRQOL in chronic illness, and consideration of the relative advantages and disadvantages of the 2 options best drives choice of instrument. 
The Functional Assessment of Chronic Illness Therapy (FACIT) system of HRQOL measurement is a hybrid of the 2 approaches. The FACIT system combines a core general measure with supplemental measures targeted toward specific diseases, conditions, or treatments. Thus, it capitalizes on the strengths of each type of measure. Recently, FACIT questionnaires were administered to a representative sample of the general population with results used to derive FACIT norms. These normative data can be used for benchmarking and to better understand changes in HRQOL that are often seen in clinical trials. Future directions in HRQOL assessment include test equating, item banking, and computerized adaptive testing.10a*Chronic Disease10a*Quality of Life10a*Rehabilitation10aAdult10aComparative Study10aHealth Status Indicators10aHumans10aPsychometrics10aQuestionnaires10aResearch Support, U.S. Gov't, P.H.S.10aSensitivity and Specificity1 aCella, D1 aNowinski, C J uhttp://mail.iacat.org/content/measuring-quality-life-chronic-illness-functional-assessment-chronic-illness-therapy03063nas a2200325 4500008004100000020004100041245008100082210006900163250001500232260000800247300001100255490000700266520201300273653001502286653001002301653004002311653005702351653003302408653001102441653001102452653001802463653000902481653002802490653001202518653005502530100001502585700001802600700001502618856010402633 2002 eng d a0025-7079 (Print)0025-7079 (Linking)00aMultidimensional adaptive testing for mental health problems in primary care0 aMultidimensional adaptive testing for mental health problems in a2002/09/10 cSep a812-230 v403 aOBJECTIVES: Efficient and accurate instruments for assessing child psychopathology are increasingly important in clinical practice and research. For example, screening in primary care settings can identify children and adolescents with disorders that may otherwise go undetected. 
However, primary care offices are notorious for the brevity of visits, and screening must not burden patients or staff with long questionnaires. One solution is to shorten assessment instruments, but dropping questions typically makes an instrument less accurate. An alternative is adaptive testing, in which a computer selects the items to be asked of a patient based on the patient's previous responses. This research used a simulation to test a child mental health screen based on this technology. RESEARCH DESIGN: Using half of a large sample of data, a computerized version of the Pediatric Symptom Checklist (PSC), a parental-report psychosocial problem screen, was developed. With the unused data, a simulation was conducted to determine whether the Adaptive PSC can reproduce the results of the full PSC with greater efficiency. SUBJECTS: PSCs were completed by parents on 21,150 children seen in a national sample of primary care practices. RESULTS: Four latent psychosocial problem dimensions were identified through factor analysis: internalizing problems, externalizing problems, attention problems, and school problems. A simulated adaptive test measuring these traits asked an average of 11.6 questions per patient, and asked five or fewer questions for 49% of the sample. There was high agreement between the adaptive test and the full (35-item) PSC: only 1.3% of screening decisions were discordant (kappa = 0.93). This agreement was higher than that obtained using a comparable-length (12-item) short-form PSC (3.2% of decisions discordant; kappa = 0.84). 
CONCLUSIONS: Multidimensional adaptive testing may be an accurate and efficient technology for screening for mental health problems in primary care settings.10aAdolescent10aChild10aChild Behavior Disorders/*diagnosis10aChild Health Services/*organization & administration10aFactor Analysis, Statistical10aFemale10aHumans10aLinear Models10aMale10aMass Screening/*methods10aParents10aPrimary Health Care/*organization & administration1 aGardner, W1 aKelleher, K J1 aPajer, K A uhttp://mail.iacat.org/content/multidimensional-adaptive-testing-mental-health-problems-primary-care02102nas a2200337 4500008004100000245014400041210006900185300001200254490000700266520096600273653002501239653003601264653002501300653001001325653003001335653001101365653001001376653000901386653003101395653003201426653003601458653003401494653002001528100001601548700001401564700001601578700001901594700001301613700001501626856012301641 2001 eng d00aAn examination of the comparative reliability, validity, and accuracy of performance ratings made using computerized adaptive rating scales0 aexamination of the comparative reliability validity and accuracy a965-9730 v863 aThis laboratory research compared the reliability, validity, and accuracy of a computerized adaptive rating scale (CARS) format and 2 relatively common and representative rating formats. The CARS is a paired-comparison rating task that uses adaptive testing principles to present pairs of scaled behavioral statements to the rater to iteratively estimate a ratee's effectiveness on 3 dimensions of contextual performance. Videotaped vignettes of 6 office workers were prepared, depicting prescripted levels of contextual performance, and 112 subjects rated these vignettes using the CARS format and one or the other competing format. Results showed 23%-37% lower standard errors of measurement for the CARS format. 
In addition, validity was significantly higher for the CARS format (d = .18), and Cronbach's accuracy coefficients showed significantly higher accuracy, with a median effect size of .08. The discussion focuses on possible reasons for the results.10a*Computer Simulation10a*Employee Performance Appraisal10a*Personnel Selection10aAdult10aAutomatic Data Processing10aFemale10aHuman10aMale10aReproducibility of Results10aSensitivity and Specificity10aSupport, U.S. Gov't, Non-P.H.S.10aTask Performance and Analysis10aVideo Recording1 aBorman, W C1 aBuck, D E1 aHanson, M A1 aMotowidlo, S J1 aStark, S1 aDrasgow, F uhttp://mail.iacat.org/content/examination-comparative-reliability-validity-and-accuracy-performance-ratings-made-using02032nas a2200253 4500008004100000245007700041210006900118260001200187300001200199490000700211520125800218653003901476653002901515653001501544653001001559653001101569653001101580653000901591653003001600653001301630100001601643700002001659856009901679 2001 eng d00aNCLEX-RN performance: predicting success on the computerized examination0 aNCLEXRN performance predicting success on the computerized exami cJul-Aug a158-1650 v173 aSince the adoption of the Computerized Adaptive Testing (CAT) format of the National Certification Licensure Examination for Registered Nurses (NCLEX-RN), no studies have been reported in the literature on predictors of successful performance by baccalaureate nursing graduates on the licensure examination. In this study, a discriminant analysis was used to identify which of 21 variables can be significant predictors of success on the CAT NCLEX-RN. The convenience sample consisted of 289 individuals who graduated from a baccalaureate nursing program between 1995 and 1998. Seven significant predictor variables were identified. The total number of C+ or lower grades earned in nursing theory courses was the best predictor, followed by grades in several individual nursing courses. 
More than 93 per cent of graduates were correctly classified. Ninety-four per cent of NCLEX "passes" were correctly classified, as were 92 per cent of NCLEX failures. This degree of accuracy in classifying CAT NCLEX-RN failures represents a marked improvement over results reported in previous studies of licensure examinations, and suggests the discriminant function will be helpful in identifying future students in danger of failure. J Prof Nurs 17:158-165, 2001.10a*Education, Nursing, Baccalaureate10a*Educational Measurement10a*Licensure10aAdult10aFemale10aHumans10aMale10aPredictive Value of Tests10aSoftware1 aBeeman, P B1 aWaterhouse, J K uhttp://mail.iacat.org/content/nclex-rn-performance-predicting-success-computerized-examination01588nas a2200241 4500008004100000245005800041210005800099300001200157490000600169520081000175653001400985653001400999653004801013653005701061653001101118653001801129653003101147653003701178100001301215700001701228700001601245856008501261 2000 eng d00aCAT administration of language placement examinations0 aCAT administration of language placement examinations a292-3020 v13 aThis article describes the development of a computerized adaptive test for Cegep de Jonquiere, a community college located in Quebec, Canada. Computerized language proficiency testing allows the simultaneous presentation of sound stimuli as the question is being presented to the test-taker. With a properly calibrated bank of items, the language proficiency test can be offered in an adaptive framework. By adapting the test to the test-taker's level of ability, an assessment can be made with significantly fewer items. We also describe our initial attempt to detect instances in which "cheating low" is occurring. 
In the "cheating low" situation, test-takers deliberately answer questions incorrectly, questions that they are fully capable of answering correctly had they been taking the test honestly.10a*Language10a*Software10aAptitude Tests/*statistics & numerical data10aEducational Measurement/*statistics & numerical data10aHumans10aPsychometrics10aReproducibility of Results10aResearch Support, Non-U.S. Gov't1 aStahl, J1 aBergstrom, B1 aGershon, RC uhttp://mail.iacat.org/content/cat-administration-language-placement-examinations01564nas a2200229 4500008004100000245006400041210006300105300001100168490000600179520083800185653002701023653001501050653001501065653004201080653001101122653002601133653002601159653003101185100001501216700001601231856008701247 2000 eng d00aComputerization and adaptive administration of the NEO PI-R0 aComputerization and adaptive administration of the NEO PIR a347-640 v73 aThis study asks, how well does an item response theory (IRT) based computerized adaptive NEO PI-R work? To explore this question, real-data simulations (N = 1,059) were used to evaluate a maximum information item selection computerized adaptive test (CAT) algorithm. Findings indicated satisfactory recovery of full-scale facet scores with the administration of around four items per facet scale. Thus, the NEO PI-R could be reduced in half with little loss in precision by CAT administration. However, results also indicated that the CAT algorithm was not necessary. We found that for many scales, administering the "best" four items per facet scale would have produced similar results. 
In the conclusion, we discuss the future of computerized personality assessment and describe the role IRT methods might play in such assessments.10a*Personality Inventory10aAlgorithms10aCalifornia10aDiagnosis, Computer-Assisted/*methods10aHumans10aModels, Psychological10aPsychometrics/methods10aReproducibility of Results1 aReise, S P1 aHenson, J M uhttp://mail.iacat.org/content/computerization-and-adaptive-administration-neo-pi-r00717nas a2200193 4500008004100000245008400041210006900125300001400194490000700208653003000215653001100245653002500256653001600281653005500297653002200352653002300374100001800397856010800415 2000 eng d00aEmergence of item response modeling in instrument development and data analysis0 aEmergence of item response modeling in instrument development an aII60-II650 v3810aComputer Assisted Testing10aHealth10aItem Response Theory10aMeasurement10aStatistical Validity computerized adaptive testing10aTest Construction10aTreatment Outcomes1 aHambleton, RK uhttp://mail.iacat.org/content/emergence-item-response-modeling-instrument-development-and-data-analysis01775nas a2200289 4500008004100000245007700041210006900118300001400187490000700201520080000208653002501008653003101033653003701064653003801101653001901139653001001158653002701168653004601195653002001241653002801261653003201289653001801321100001401339700001701353700001501370856010001385 2000 eng d00aItem response theory and health outcomes measurement in the 21st century0 aItem response theory and health outcomes measurement in the 21st aII28-II420 v383 aItem response theory (IRT) has a number of potential advantages over classical test theory in assessing self-reported health outcomes. IRT models yield invariant item and latent trait estimates (within a linear transformation), standard errors conditional on trait level, and trait estimates anchored to item content. 
IRT also facilitates evaluation of differential item functioning, inclusion of items with different response formats in the same scale, and assessment of person fit and is ideally suited for implementing computer adaptive testing. Finally, IRT methods can be helpful in developing better health outcome measures and in assessing change over time. These issues are reviewed, along with a discussion of some of the methodological and practical challenges in applying IRT methods.10a*Models, Statistical10aActivities of Daily Living10aData Interpretation, Statistical10aHealth Services Research/*methods10aHealth Surveys10aHuman10aMathematical Computing10aOutcome Assessment (Health Care)/*methods10aResearch Design10aSupport, Non-U.S. Gov't10aSupport, U.S. Gov't, P.H.S.10aUnited States1 aHays, R D1 aMorales, L S1 aReise, S P uhttp://mail.iacat.org/content/item-response-theory-and-health-outcomes-measurement-21st-century01773nas a2200265 4500008004100000245004900041210004800090300001000138490000600148520092600154653002501080653005701105653001501162653001201177653001001189653002101199653006401220653001101284653002201295653001101317653000901328653007801337100001701415856007501432 1999 eng d00aCompetency gradient for child-parent centers0 aCompetency gradient for childparent centers a35-520 v33 aThis report describes an implementation of the Rasch model during the longitudinal evaluation of a federally-funded early childhood preschool intervention program. An item bank is described for operationally defining a psychosocial construct called community life-skills competency, an expected teenage outcome of the preschool intervention. This analysis examined the position of teenage students on this scale structure, and investigated a pattern of cognitive operations necessary for students to pass community life-skills test items. 
Then this scale structure was correlated with nationally standardized reading and math achievement scores, teacher ratings, and school records to assess its validity as a measure of the community-related outcome goal for this intervention. The results show a functional relationship between years of early intervention and magnitude of effect on the life-skills competency variable.10a*Models, Statistical10aActivities of Daily Living/classification/psychology10aAdolescent10aChicago10aChild10aChild, Preschool10aEarly Intervention (Education)/*statistics & numerical data10aFemale10aFollow-Up Studies10aHumans10aMale10aOutcome and Process Assessment (Health Care)/*statistics & numerical data1 aBezruczko, N uhttp://mail.iacat.org/content/competency-gradient-child-parent-centers02156nas a2200277 4500008004100000020002200041245009600063210006900159250001500228260000800243300001100251490000700262520122800269653001601497653003901513653003701552653001101589653003301600653002501633653002701658653003101685100001701716700001601733700001701749856011201766 1999 eng d a1040-2446 (Print)00aEvaluating the usefulness of computerized adaptive testing for medical in-course assessment0 aEvaluating the usefulness of computerized adaptive testing for m a1999/10/28 cOct a1125-80 v743 aPURPOSE: This study investigated the feasibility of converting an existing computer-administered, in-course internal medicine test to an adaptive format. METHOD: A 200-item internal medicine extended matching test was used for this research. Parameters were estimated with commercially available software with responses from 621 examinees. A specially developed simulation program was used to retrospectively estimate the efficiency of the computer-adaptive exam format. RESULTS: It was found that the average test length could be shortened by almost half with measurement precision approximately equal to that of the full 200-item paper-and-pencil test. 
However, computer-adaptive testing with this item bank provided little advantage for examinees at the upper end of the ability continuum. An examination of classical item statistics and IRT item statistics suggested that adding more difficult items might extend the advantage to this group of examinees. CONCLUSIONS: Medical item banks presently used for in-course assessment might be advantageously employed in adaptive testing. However, it is important to evaluate the match between the items and the measurement objective of the test before implementing this format.10a*Automation10a*Education, Medical, Undergraduate10aEducational Measurement/*methods10aHumans10aInternal Medicine/*education10aLikelihood Functions10aPsychometrics/*methods10aReproducibility of Results1 aKreiter, C D1 aFerguson, K1 aGruppen, L D uhttp://mail.iacat.org/content/evaluating-usefulness-computerized-adaptive-testing-medical-course-assessment01911nas a2200229 4500008004100000245008600041210006900127300001000196490000700206520111400213653003201327653003701359653001001396653003401406653003001440653002901470653003201499100001601531700001801547700001301565856010301578 1999 eng d00aThe use of Rasch analysis to produce scale-free measurement of functional ability0 ause of Rasch analysis to produce scalefree measurement of functi a83-900 v533 aInnovative applications of Rasch analysis can lead to solutions for traditional measurement problems and can produce new assessment applications in occupational therapy and health care practice. First, Rasch analysis is a mechanism that translates scores across similar functional ability assessments, thus enabling the comparison of functional ability outcomes measured by different instruments. This will allow for the meaningful tracking of functional ability outcomes across the continuum of care.
Second, once the item-difficulty order of an instrument or item bank is established by Rasch analysis, computerized adaptive testing can be used to target items to the patient's ability level, reducing assessment length by as much as one half. More importantly, Rasch analysis can provide the foundation for "equiprecise" measurement or the potential to have precise measurement across all levels of functional ability. The use of Rasch analysis to create scale-free measurement of functional ability demonstrates how this methodology can be used in practical applications of clinical and outcome assessment.10a*Activities of Daily Living10aDisabled Persons/*classification10aHuman10aOccupational Therapy/*methods10aPredictive Value of Tests10aQuestionnaires/standards10aSensitivity and Specificity1 aVelozo, C A1 aKielhofner, G1 aLai, J-S uhttp://mail.iacat.org/content/use-rasch-analysis-produce-scale-free-measurement-functional-ability01754nas a2200217 4500008004100000245015600041210006900197300001100266490000600277520090700283653004001190653002201230653002401252653002301276653003701299653001001336653002401346653002701370100001801397856012101415 1998 eng d00aThe effect of item pool restriction on the precision of ability measurement for a Rasch-based CAT: comparisons to traditional fixed length examinations0 aeffect of item pool restriction on the precision of ability meas a97-1220 v23 aThis paper describes a method for examining the precision of a computerized adaptive test with a limited item pool. Standard errors of measurement ascertained in the testing of simulees with a CAT using a restricted pool were compared to the results obtained in a live paper-and-pencil achievement testing of 4494 nursing students on four versions of an examination of calculations of drug administration. CAT measures of precision were considered when the simulated examinee pools were uniform and normal.
Precision indices were also considered in terms of the number of CAT items required to reach the precision of the traditional tests. Results suggest that regardless of the size of the item pool, CAT provides greater precision in measurement with a smaller number of items administered even when the choice of items is limited but fails to achieve equiprecision along the entire ability continuum.10a*Decision Making, Computer-Assisted10aComparative Study10aComputer Simulation10aEducation, Nursing10aEducational Measurement/*methods10aHuman10aModels, Statistical10aPsychometrics/*methods1 aHalkitis, P N uhttp://mail.iacat.org/content/effect-item-pool-restriction-precision-ability-measurement-rasch-based-cat-comparisons02025nas a2200289 4500008004100000245012700041210006900168300001300237490000800250520105800258653003401316653003301350653002301383653001501406653001001421653002601431653001001457653001501467653001801482653003101500100001501531700001601546700001401562700001701576700001601593856012601609 1997 eng d00aA computerized adaptive testing system for speech discrimination measurement: The Speech Sound Pattern Discrimination Test0 acomputerized adaptive testing system for speech discrimination m a2289-2980 v1013 aA computerized, adaptive test-delivery system for the measurement of speech discrimination, the Speech Sound Pattern Discrimination Test, is described and evaluated. Using a modified discrimination task, the testing system draws on a pool of 130 items spanning a broad range of difficulty to estimate an examinee's location along an underlying continuum of speech processing ability, yet does not require the examinee to possess a high level of English language proficiency. The system is driven by a mathematical measurement model which selects only test items which are appropriate in difficulty level for a given examinee, thereby individualizing the testing experience. 
Test items were administered to a sample of young deaf adults, and the adaptive testing system evaluated in terms of respondents' sensory and perceptual capabilities, acoustic and phonetic dimensions of speech, and theories of speech perception. Data obtained in this study support the validity, reliability, and efficiency of this test as a measure of speech processing ability.10a*Diagnosis, Computer-Assisted10a*Speech Discrimination Tests10a*Speech Perception10aAdolescent10aAdult10aAudiometry, Pure-Tone10aHuman10aMiddle Age10aPsychometrics10aReproducibility of Results1 aBochner, J1 aGarrison, W1 aPalmer, L1 aMacKenzie, D1 aBraveman, A uhttp://mail.iacat.org/content/computerized-adaptive-testing-system-speech-discrimination-measurement-speech-sound-pattern02399nas a2200253 4500008004100000020002200041245012400063210006900187250001500256260000800271300001200279490000600291520152400297653001901821653003001840653002101870653003301891653002401924653001101948653002701959100001701986700001502003856012702018 1997 eng d a0962-9343 (Print)00aHealth status assessment for the twenty-first century: item response theory, item banking and computer adaptive testing0 aHealth status assessment for the twentyfirst century item respon a1997/08/01 cAug a595-6000 v63 aHealth status assessment is frequently used to evaluate the combined impact of human immunodeficiency virus (HIV) disease and its treatment on functioning and well-being from the patient's perspective. No single health status measure can efficiently cover the range of problems in functioning and well-being experienced across HIV disease stages. Item response theory (IRT), item banking and computer adaptive testing (CAT) provide a solution to measuring health-related quality of life (HRQoL) across different stages of HIV disease. IRT allows us to examine the response characteristics of individual items and the relationship between responses to individual items and the responses to each other item in a domain. 
With information on the response characteristics of a large number of items covering a HRQoL domain (e.g. physical function, and psychological well-being), and information on the interrelationships between all pairs of these items and the total scale, we can construct more efficient scales. Item banks consist of large sets of questions representing various levels of a HRQoL domain that can be used to develop brief, efficient scales for measuring the domain. CAT is the application of IRT and item banks to the tailored assessment of HRQoL domains specific to individual patients. Given the results of IRT analyses and computer-assisted test administration, more efficient and brief scales can be used to measure multiple domains of HRQoL for clinical trials and longitudinal observational studies.10a*Health Status10a*HIV Infections/diagnosis10a*Quality of Life10aDiagnosis, Computer-Assisted10aDisease Progression10aHumans10aPsychometrics/*methods1 aRevicki, D A1 aCella, D F uhttp://mail.iacat.org/content/health-status-assessment-twenty-first-century-item-response-theory-item-banking-and-computer01319nas a2200265 4500008004100000245005500041210005400096300001200150490000600162520053000168653003800698653002000736653001400756653003500770653003100805653001100836653001900847653001800866653002800884100001300912700001500925700001700940700001400957856008200971 1997 eng d00aOn-line performance assessment using rating scales0 aOnline performance assessment using rating scales a173-1910 v13 aThe purpose of this paper is to report on the development of the on-line performance assessment instrument--the Assessment of Motor and Process Skills (AMPS). 
Issues that will be addressed in the paper include: (a) the establishment of the scoring rubric and its implementation in an extended Rasch model, (b) training of raters, (c) validation of the scoring rubric and procedures for monitoring the internal consistency of raters, and (d) technological implementation of the assessment instrument in a computerized program.10a*Outcome Assessment (Health Care)10a*Rehabilitation10a*Software10a*Task Performance and Analysis10aActivities of Daily Living10aHumans10aMicrocomputers10aPsychometrics10aPsychomotor Performance1 aStahl, J1 aShumway, R1 aBergstrom, B1 aFisher, A uhttp://mail.iacat.org/content/line-performance-assessment-using-rating-scales00865nas a2200205 4500008004100000245004600041210004600087260001200133300000800145490000600153520029600159653002900455653001500484653001100499653001800510653002400528653001800552100001700570856007200587 1996 eng d00aDispelling myths about the new NCLEX exam0 aDispelling myths about the new NCLEX exam cJan-Feb a6-70 v93 aThe new computerized NCLEX system is working well. Most new candidates, employers, and board of nursing representatives like the computerized adaptive testing system and the fast report of results. 
But, among the candidates themselves some myths have grown which cause them needless anxiety.10a*Educational Measurement10a*Licensure10aHumans10aNursing Staff10aPersonnel Selection10aUnited States1 aJohnson, S H uhttp://mail.iacat.org/content/dispelling-myths-about-new-nclex-exam01529nas a2200229 4500008004100000020004100041245010500082210006900187250001500256260001200271300000900283490000700292520069800299653002500997653002501022653004301047653003701090653001101127100001601138700001801154856012701172 1996 eng d a0363-3624 (Print)0363-3624 (Linking)00aMethodologic trends in the healthcare professions: computer adaptive and computer simulation testing0 aMethodologic trends in the healthcare professions computer adapt a1996/07/01 cJul-Aug a13-40 v213 aAssessing knowledge and performance on computer is rapidly becoming a common phenomenon in testing and measurement. Computer adaptive testing presents an individualized test format in accordance with the examinee's ability level. The efficiency of the testing process enables a more precise estimate of performance, often with fewer items than traditional paper-and-pencil testing methodologies. Computer simulation testing involves performance-based, or authentic, assessment of the examinee's clinical decision-making abilities. 
The authors discuss the trends in assessing performance through computerized means and the application of these methodologies to community-based nursing practice.10a*Clinical Competence10a*Computer Simulation10aComputer-Assisted Instruction/*methods10aEducational Measurement/*methods10aHumans1 aForker, J E1 aMcDonald, M E uhttp://mail.iacat.org/content/methodologic-trends-healthcare-professions-computer-adaptive-and-computer-simulation-testing01698nas a2200229 4500008004100000020004100041245006400082210006200146250001500208260000800223300001100231490000700242520102000249653003101269653002501300653001001325653001101335653001101346653000901357100001601366856008601382 1995 jpn d a0021-5236 (Print)0021-5236 (Linking)00aA study of psychologically optimal level of item difficulty0 astudy of psychologically optimal level of item difficulty a1995/02/01 cFeb a446-530 v653 aFor the purpose of selecting items in a test, this study presented a viewpoint of psychologically optimal difficulty level, as well as measurement efficiency, of items. A paper-and-pencil test (P & P) composed of hard, moderate and easy subtests was administered to 298 students at a university. A computerized adaptive test (CAT) was also administered to 79 students. The items of both tests were selected from Shiba's Word Meaning Comprehension Test, for which the estimates of parameters of two-parameter item response model were available. The results of P & P research showed that the psychologically optimal success level would be such that the proportion of right answers is somewhere between .75 and .85. A similar result was obtained from CAT research, where the proportion of about .8 might be desirable. Traditionally a success rate of .5 has been recommended in adaptive testing. 
In this study, however, it was suggested that items at such a level would be too hard psychologically for many examinees.10a*Adaptation, Psychological10a*Psychological Tests10aAdult10aFemale10aHumans10aMale1 aFujimori, S uhttp://mail.iacat.org/content/study-psychologically-optimal-level-item-difficulty00734nas a2200241 4500008004100000020002200041245005700063210005600120250001500176260000800191300001100199490000700210653003500217653002400252653002900276653001900305653001100324653002700335653001800362100001800380700001500398856007900413 1993 eng d a0276-5284 (Print)00aComputerized adaptive testing: the future is upon us0 aComputerized adaptive testing the future is upon us a1993/09/01 cSep a378-850 v1410a*Computer-Assisted Instruction10a*Education, Nursing10a*Educational Measurement10a*Reaction Time10aHumans10aPharmacology/education10aPsychometrics1 aHalkitis, P N1 aLeahy, J M uhttp://mail.iacat.org/content/computerized-adaptive-testing-future-upon-us