Anglia Ruskin Research Online (ARRO)

Comparison of methods for estimating premorbid intelligence

journal contribution
posted on 2023-07-26, 14:16 authored by Peter Bright, Ian van der Linde
To evaluate the impact of neurological injury on cognitive performance, it is typically necessary to derive a baseline (or ‘premorbid’) estimate of a patient’s general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded ‘hold/no hold’ tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed on the full Wechsler Adult Intelligence Scale – Fourth Edition (WAIS-IV; Wechsler, 2008) and on two widely used word reading tests: the National Adult Reading Test (NART; Nelson, 1982; Nelson & Willison, 1991) and the Wechsler Test of Adult Reading (WTAR; Wechsler, 2001). Our findings indicate that reading tests provide the most reliable and precise estimates of WAIS-IV full-scale IQ, although the addition of demographic data provides a modest improvement. Nevertheless, we observed considerable variability in correlations between NART/WTAR scores and individual WAIS-IV indices, which indicated particular usefulness in estimating more crystallised premorbid abilities (as represented by the verbal comprehension and general ability indices) relative to fluid abilities (working memory and perceptual reasoning indices). We discuss and encourage the development of new methods for improving premorbid estimates of cognitive abilities in neurological patients.

History

Refereed

  • Yes

Volume

30

Issue number

1

Page range

1-14

Publication title

Neuropsychological Rehabilitation

ISSN

1464-0694

Publisher

Taylor & Francis

File version

  • Published version

Language

  • eng

Legacy posted date

2018-02-26

Legacy creation date

2018-02-21

Legacy Faculty/School/Department

ARCHIVED Faculty of Science & Technology (until September 2018)
