ISSN: 2574-1241

Impact Factor: 0.548


Mini Review | Open Access

Assessment of Cognitive Impairment in Older Research Participants

Volume 1 - Issue 3

Robbie Ingram1*, Robert Fieo1 and Stephen D Anton1,2

  • 1Department of Aging and Geriatric Research, University of Florida, USA
  • 2Department of Clinical and Health Psychology, University of Florida, USA

Received: August 15, 2017;   Published: August 31, 2017

Corresponding author: Robbie Ingram, Department of Aging and Geriatric Research, Institute on Aging, Department of Clinical and Health Psychology, University of Florida, Gainesville, FL 32610, USA

DOI: 10.26717/BJSTR.2017.01.000316

Abstract


In any clinical trial or observational study, a certain level of cognitive function is required in order to ensure participant safety and ability to comply with the study protocol. Participants should adequately understand what is involved in their participation, as well as any potential risks or benefits of participation. The costs of using a specific screening method, however, need to be weighed against the information obtained. This prevents the inefficient use of available resources, such as avoiding assessments that require an extremely long administration time and unnecessarily increase participant burden. Due to the inherent challenges in obtaining a valid assessment of cognitive function in a time-efficient manner, all potential assessment measures should be evaluated with a “best fit” approach. Keeping this “best fit” approach in mind, we reviewed and evaluated a number of screening tests for assessing cognitive impairment based on recommendations provided by experts in the field. Based on our review, we provide recommendations about existing methods for screening cognitive function in older research participants. Using the “best fit” approach, we recommend the Mini-Mental State Examination (MMSE) to screen for cognitive impairment in older research participants. Other measures, such as the Cognitive Abilities Screening Instrument (CASI) and Modified Mini-Mental State (3MS), are indeed promising, but the MMSE can be administered in a time-efficient manner and has the largest body of supporting literature.

Keywords: Cognitive Impairment; Screening; Elderly; Research; Participation

Abbreviations: MMSE: Mini-Mental State Examination; CASI: Cognitive Abilities Screening Instrument; 3MS: Modified Mini-Mental State; SASSI: Short and Sweet Screening Instrument; STMS: Short Test of Mental Status; ACE-R: Addenbrooke’s Cognitive Examination-Revised; MCI: Mild Cognitive Impairment; DIF: Differential Item Functioning


In any clinical trial or observational study, a certain level of cognitive function is required in order to ensure participant safety and ability to comply with the study protocol. Participants should adequately understand what is involved in their participation, as well as any potential risks or benefits of participation. Although studies involving simple surveys or questionnaires may not require screening as careful as that for potentially higher-risk studies (e.g., pharmacological interventions) [1], it is always necessary to ensure the participant fully understands the potential risks involved before agreeing to participate in the study.

Age has been found to be significantly associated with cognitive decline [2,3]; thus, cognitive function and understanding of study risks and procedures should be carefully assessed in studies involving adults who are 60 years and older [4]. Careful assessment of cognitive function in older research participants, as well as screening for mild cognitive impairment, could help to prevent incidents such as an accidental overdose caused by a participant taking a second dose after forgetting an earlier one. Ensuring that the participant is capable of adhering to study protocols can prevent inappropriate requirements from being placed on the participant. Additionally, screening for cognitive impairment benefits the scientific community at large; ensuring that participants fully understand what they are agreeing to is vital for ethically sound science.

In a research setting, a number of considerations determine what information an investigator will need to obtain from screening assessments. At the same time, participant burden should be reduced as much as is reasonably possible. Therefore, being able to effectively assess cognitive function in a time-efficient manner is an important consideration in the evaluation of potential screening tools. More specifically, the costs of using a specific screening method should be weighed against the information obtained. This prevents the inefficient use of available resources, such as avoiding assessments that require an extremely long time to administer and unnecessarily increase participant burden.

The challenge of cognitive assessment, particularly in older adults, is to strike a balance between comprehensiveness and brief administration time, with the aim of reducing subject burden and time constraints: in short, administration time vs. accuracy. Here we attempt to outline the most valid yet brief instruments that can be used to ensure the safety of older research participants through appropriate cognitive screening. Due to the inherent challenges in obtaining a valid assessment of cognitive function in a time-efficient manner, all potential assessment measures should be evaluated with a “best fit” approach. Keeping this “best fit” approach in mind, we reviewed and evaluated the screening tests for assessing cognitive impairment proposed in “A Review of Screening Tests for Cognitive Impairment” [5]. Based on an evaluation of empirical findings, we provide recommendations about existing methods for screening cognitive function in older research participants.

The review by Cullen et al. [5] evaluated screening tests for their overall effectiveness in three areas: brief assessments in a physician’s office, large-scale screening in the community, and guiding differential diagnoses with more specific needs. With respect to screening for cognitive function in older research participants, evaluations for brief assessments in a physician’s office and large-scale screening in the community share relevant features and similar requirements, including the need to minimize time involved yet obtain accurate (valid) and consistent (reliable) results. Thus, we selected those screening tools that were reviewed favorably in these two categories, and discuss their potential for being used to evaluate cognitive function in older research participants. Specifically, we review the Modified Mini-Mental State (3MS), Mini-Mental State Examination (MMSE), Cognitive Abilities Screening Instrument (CASI), Short Test of Mental Status (STMS), Short and Sweet Screening Instrument (SASSI), and Addenbrooke’s Cognitive Examination-Revised (ACE-R). These screening tools showed the most promise in a recent review with respect to brief assessments in physicians’ offices and screening for clinical trials, including studies conducted in the community [5].

The 3MS, a modified version of the MMSE, has an administration time of about 10-15 minutes [5] and covers the six key cognitive domains of verbal fluency, reasoning/judgment, expressive language, visual construction, immediate and delayed free verbal recall, and cued verbal recall. This makes the 3MS and the CASI the most comprehensive of the tools reviewed (i.e., covering all six cognitive domains). The 3MS is modified for increased specificity, with a score range of 0-100 as opposed to the MMSE’s 30-point range. While among the most complete in terms of its scope, the 3MS was reported to exhibit wide variation in scores without an accompanying clinical change [5,6].

For example, test-retest scores (over a one-month interval) differed by as much as 20 points [6]. The observed confidence interval was -16 to 16, which suggests the possibility of wide variability in scores among individuals with no expected change in cognitive status. Another study corroborated these findings, stating that “…test-retest correlations and evidence from regression analyses showed that considerable variability existed among the participants” [7]. The 3MS is attractive in its coverage of the six key domains of cognitive function [5], but the variance in scores merits further study before it can be widely recommended for use in a research setting. Despite these concerns related to score variance, the rate of false negatives for cognitive impairment was found to be quite low (1.4% in the Canadian Study of Health and Aging) [8].

Another possible tool to screen for cognitive impairment in older research participants is the MMSE. The MMSE is primarily a written examination, scored on a scale of 0-30. With an administration time of 8-13 minutes and coverage of four of the six cognitive domains assessed by the 3MS [5], the MMSE has an administration time and level of specificity that lie near the middle of the potential screening tools discussed in this paper. Due to its long history as a cognitive screener (introduced in 1975), the MMSE is perhaps the best studied in terms of reliability and validity. For example, a 2008 study evaluated the validity of the MMSE, administered by trained general practitioners, in a public health setting [9]. The study team referred patients who were determined to be cognitively impaired to Alzheimer’s Evaluation Units for diagnosis. With an inter-rater agreement of 0.86 between the MMSE scores of the general practitioners and those of the Alzheimer’s Evaluation Units, the MMSE as administered by the general practitioners was determined to be sufficiently accurate to detect patients with cognitive impairment. This is especially relevant, as the documented implementation of the MMSE by non-specialized personnel provides direct evidence of its potential usefulness for screening for cognitive impairment in older research participants.

The MMSE has also been found to be a useful tool for investigating mild cognitive impairment (MCI). For example, MMSE scores falling between 23 and 26 (the range thought to represent MCI) indicated MCI both in terms of the prediction of future dementia and under more detailed definitions/methods (Mayo Clinic-defined amnestic, nonamnestic, multiple, and revised MCI) [10]. There are, however, some common critiques of the MMSE. First, the test has been shown to exhibit floor and ceiling effects in some populations (poorly and highly educated, respectively) [11], due to its narrow score range of 30 points [12]. The MMSE has also been under enforced copyright since 2001, which makes it more expensive to implement than some other screening methods.

The CASI is the only other screening test, in addition to the 3MS, that assesses all six cognitive areas discussed; however, it also has the longest administration time, approximately 15-20 minutes [5]. The CASI was described as a possible compromise between tools with broader coverage and tools used more as brief assessments [5], which may be related to its larger score range of 0-100. In its original publication, the CASI was cited as having considerable cross-cultural applicability, which may appeal to investigators who anticipate a more culturally diverse study population and wish to minimize potential biases [13]. Also discussed in the Cullen et al. review [5], however, was an example of low specificity in some random samples [14]. While there is currently not a significant amount of literature devoted solely to its evaluation relative to the other screening tools discussed [5,13-16], investigators anticipating the need for cross-cultural applicability may wish to consider utilizing the CASI. It is worth noting that the CASI incorporates elements of the MMSE and the 3MS; thus, the convergent validity between these measures can be quite high. For example, Graves et al. [17] reported a correlation coefficient of .92 between the CASI and MMSE in a sample of 57 probable Alzheimer’s disease cases.

The STMS has the smallest time requirement (approximately 5 minutes) of the screening tools evaluated here [5]. Decreasing the time required to screen for cognitive function can shorten the overall study visit. Unfortunately, there is limited supporting literature available on the STMS [18-20]. An earlier study examined the STMS’s correlations with standardized psychometric testing [19], and a 2003 study by Tang-Wai et al. [20] compared the STMS with the MMSE in the detection of mild cognitive impairment. Despite the STMS being developed to outperform the MMSE, Tang-Wai et al. [20] reported only modest evidence of increased sensitivity. The STMS also covers only four of the cognitive domains evaluated (reasoning/judgment, visual construction, and immediate and delayed free verbal recall). Any advantage of the STMS over the MMSE is likely to be observed in the verbal recall domain, whose items have been described as more complex and nuanced than those found in the MMSE [21]. As such, the STMS should appeal mainly to investigators conducting studies for which time is the primary concern.

The SASSI, with an average administration time of about 10-15 minutes, has a similar time requirement to that of the previously discussed 3MS. In the Cullen et al. [5] review, the SASSI was evaluated as a potential tool for screening cognitive function in the community. It covered all of the key cognitive domains aside from reasoning/judgment and cued verbal recall, and was listed as one of the most promising candidates for brief assessments of cognitive function. However, there is a dearth of literature on the SASSI [22]. While preliminary evaluations do indicate that the SASSI shows promise [22] (compared to a full cognitive battery that took over 30 minutes to administer, it was 4% more sensitive and only 1% less specific), the lack of additional supporting literature makes it difficult to recommend it for use in any given study over another potential screening tool that has far more supporting literature.
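The sensitivity/specificity trade-off cited for the SASSI can be made concrete with a short sketch. The counts below are hypothetical, chosen only to illustrate how these two statistics are computed when a brief screener is compared against a longer reference battery; they are not the values from the cited study.

```python
# Illustrative sketch (hypothetical counts): computing sensitivity and
# specificity for a brief screener judged against a reference battery.

def sensitivity(true_pos, false_neg):
    """Proportion of truly impaired participants the screener flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of unimpaired participants the screener clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical results for 200 screened participants.
tp, fn = 46, 4     # impaired per reference battery: flagged / missed
tn, fp = 135, 15   # unimpaired: correctly cleared / falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.92
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.90
```

A screener that is a few points more sensitive at the cost of a point of specificity, as reported for the SASSI, shifts cases from the "missed" cell to the "falsely flagged" cell; whether that trade is worthwhile depends on the cost of each error in a given study.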

The last potential method for screening cognitive function in older research participants reviewed here is the ACE-R [5]. With an administration time of about 16 minutes, the ACE-R was listed as covering the cognitive domains of verbal fluency, expressive language, visual construction, and immediate and delayed free verbal recall. Although the ACE-R was listed as a promising test [23-25] (Larner & Mitchell [24] reported a sensitivity of 95.7% and a specificity of 87.5%), it has not yet been validated in community samples. It was described by Cullen et al. [5] as a tool oriented more towards differential diagnoses for clinicians in secondary and tertiary practice settings. While screening for cognitive function in a research setting does share requirements with brief assessments in physicians’ offices or large-scale community evaluations (minimizing time requirements, the need for accurate and generalizable results, etc.), it is difficult to recommend a specialized tool such as the ACE-R for a broad assessment of cognitive function. As with the SASSI, the ACE-R may be an effective tool to screen for cognitive function on a smaller scale, but it would not be recommended over other possible screening methods that have a much larger body of supporting literature.

Given the currently available measures to screen cognitive function in older research participants, and based on the available evidence, we believe the MMSE is a useful, efficient, and balanced screening measure of cognitive function for most studies. It is important to note that other tools such as the 3MS and CASI also show promise, but the MMSE currently provides the investigator with the largest body of supporting literature. Further confidence in the selection of these three screeners, in contrast to the other measures reviewed, comes from their having undergone additional psychometric scrutiny relating to item bias. The CASI, 3MS, and MMSE have been previously examined with item response theory methodology, with the aim of improving validity. Bias is a serious problem in psychometric tests. Item bias, or differential item functioning (DIF), is said to be present when examinees from different groups have differing probabilities of success on an item after controlling for overall ability, or total score [26]. That is, group membership may influence the use of, and familiarity with, certain phrases, thus affecting the interpretation of and response to items and, ultimately, test scores. Common areas investigated for item bias or DIF include education, age, and race. If an item is free of bias, responses to that item will be related only to the level of the underlying trait (e.g., cognitive ability) that the item is trying to measure. The presence of large numbers of items with DIF is a severe threat to the construct validity of tests and the conclusions based on test scores derived from such items [27].
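The core idea behind DIF detection described above can be illustrated with a minimal toy sketch. Rather than a full item-response-theory or logistic-regression analysis (which published DIF studies use), this example simply compares an item's pass rates across two hypothetical groups after matching examinees on total score; the data and group labels are synthetic.

```python
# A minimal sketch (synthetic data) of the logic behind DIF screening:
# within strata of equal total score, compare an item's pass rate across
# two groups. If matched examinees still differ, the item shows DIF.

from collections import defaultdict

def dif_by_stratum(records):
    """records: (group, total_score, item_correct) tuples.
    Returns {total_score: pass-rate difference, group A minus group B}."""
    strata = defaultdict(lambda: {"A": [0, 0], "B": [0, 0]})  # [correct, n]
    for group, score, correct in records:
        cell = strata[score][group]
        cell[0] += int(correct)
        cell[1] += 1
    diffs = {}
    for score, cells in strata.items():
        (ca, na), (cb, nb) = cells["A"], cells["B"]
        if na and nb:  # only strata where both groups are observed
            diffs[score] = ca / na - cb / nb
    return diffs

# Synthetic example: at every matched total score, group A passes the
# item more often than group B -- a signature of uniform DIF.
data = [("A", 20, 1), ("A", 20, 1), ("B", 20, 1), ("B", 20, 0),
        ("A", 25, 1), ("A", 25, 1), ("B", 25, 1), ("B", 25, 0)]
print(dif_by_stratum(data))  # {20: 0.5, 25: 0.5}
```

An unbiased item would show near-zero differences across strata; consistent non-zero differences after matching on total score are what DIF analyses quantify and test.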

Ramirez et al. [28] reported that most MMSE items have been found to perform differently for individuals of different educational levels and/or for members of different racial/ethnic groups. However, the authors acknowledge that the magnitude of item bias has rarely been examined. This point should not be understated, as the impact of individual item bias on the scale’s total score may often be trivial: some items favor one group while others favor the other, and the effects cancel out in the total score. Jones & Gallo [29] also found evidence of item bias on the MMSE: significant sex- and education-group DIF was detected. Those with low education were more likely to err on the first serial subtraction and on the spell-“world”-backwards, repeat-phrase, writing, name-the-season, and copy-design tasks. However, Jones and Gallo also examined the magnitude of this DIF and found it to be small and not a threat to the validity of total scores. More generally, Crane et al. [30] reported that the MMSE, as well as the 3MS and CASI, presented with poor precision for high scores, which is more relevant for change-score assessment than for screening for cognitive impairment. Even so, a relatively low score on the MMSE (i.e., <15) indicates that some impairment is most likely present [31].

An examination of item bias in the CASI revealed that gender and race had little impact on CASI scores, and that items were biased to some degree by age, but more so by education level [27]. A separate investigation into the CASI and DIF also demonstrated that the CASI was not biased by gender, and that only a few items were related to age [32]. As in Crane et al. [27], DIF related to education was the most problematic. However, further analysis showed that the impact of education-related DIF was minimal; only 1.5% of subjects in sample 1 and 0.2% of subjects in sample 2 presented with salient DIF related to education.

The 3MS has also been shown to present with DIF related to education. Mukherjee et al. [33] examined DIF in the 3MS and reported that some of the differences in total scores across education groups were due to DIF. However, estimated rates of cognitive decline were essentially unchanged whether education-related DIF was accounted for or ignored. Similarly, Wiest et al. [34] reported that 40 of the 46 3MS items showed education DIF. Despite the widespread presence of DIF in this study, the authors did not examine its magnitude, i.e., the impact of item-level bias on the 3MS total score.


While no screening method is without its limitations, the considerable amount of empirical support for the MMSE allows the investigator to be aware of potential drawbacks pertaining to their particular study and account for them in the appropriate manner. As noted above, the MMSE can be administered in 8-13 minutes and covers four of the six key cognitive domains identified (expressive language, visual construction, immediate free verbal recall, and delayed free verbal recall). Overall, the MMSE has a substantial body of supporting literature and relatively minor weaknesses, which suggests it would be an appropriate choice for most studies.

Investigators, however, should always remain cognizant of newly presented findings and adjust their screening methods accordingly. For example, item bias in the MMSE seems most pronounced when examining education status. More specifically, persons with low education can potentially be incorrectly identified as at risk for dementia [35]. Potential solutions for obtaining a more accurate MMSE assessment in participants with low education include the use of different cut-scores, adjusting scores for education, or the use of different domains of cognition within the MMSE. We recommend the latter; that is, for subjects who present with low education and score at or below the standard total-score cut-point, examiners should further examine the memory subdomain of the MMSE [36]. The memory domain of the MMSE has been shown to be less biased by education than the total score [35].
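The decision logic suggested above can be sketched as follows. All thresholds here (the education cut-off in years, the total-score cut-point, and the memory-subdomain cut) are illustrative assumptions for the sketch, not validated values; a study would need to set them from its own population norms.

```python
# Hedged sketch of education-aware MMSE screening: if a low-education
# participant scores at or below the total-score cut-point, defer to the
# less education-biased memory subdomain before flagging impairment.
# All cut-points below are illustrative assumptions, not validated values.

LOW_EDUCATION_YEARS = 8      # assumed threshold for "low education"
MMSE_TOTAL_CUTOFF = 24       # commonly used cut-point; verify locally
MEMORY_SUBSCORE_CUTOFF = 4   # hypothetical cut for the memory items

def needs_further_assessment(total, memory_subscore, education_years):
    """Return True if the participant should receive further evaluation."""
    if total > MMSE_TOTAL_CUTOFF:
        return False  # clearly above the cut-point: screen passed
    if education_years < LOW_EDUCATION_YEARS:
        # Low education: total score may be biased, so rely on the
        # memory subdomain rather than the total score alone.
        return memory_subscore <= MEMORY_SUBSCORE_CUTOFF
    return True  # at/below cut-point with typical education: follow up

# Low-education participant below the total cut-point but with intact
# memory items would not be flagged under this logic.
print(needs_further_assessment(22, 6, 6))   # False
print(needs_further_assessment(22, 3, 6))   # True
```

The point of the sketch is the branch order: education is consulted only after the total score falls at or below the cut-point, so better-than-cut-point performance is never second-guessed.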

Based on our review of several screening methods, we recommend the MMSE to screen for cognitive impairment in older research participants. Other measures, such as the CASI and 3MS, are indeed promising, but the MMSE can be administered in a time-efficient manner and has the largest body of supporting literature. The extensive study of the MMSE and detailed knowledge of its shortcomings allow the investigator to understand where potential issues may arise and account for them accordingly. We believe that the MMSE provides the “best fit” for screening for cognitive impairment in older research participants in a time-efficient manner, and that its large body of literature allows investigators to benefit from the assessment’s strengths and understand its potential weaknesses.


  1. Simpson C (2010) Decision-making capacity and informed consent to participate in research by cognitively impaired individuals. Appl Nurs Res 23(4): 221-226.
  2. Rashedi V, Rezaei M, Gharib M (2014) Prevalence of cognitive impairment in community-dwelling older adults. Basic Clin Neurosci 5(1): 28-30.
  3. Salthouse TA (2009) When does age-related cognitive decline begin? Neurobiol Aging 30(4): 507-514.
  4. Locke DE, Ivnik RJ, Cha RH, David S Knopman, Eric G Tangalos, et al. (2009) Age, family history, and memory and future risk for cognitive impairment. J Clin Exp Neuropsychol 31(1): 111-116.
  5. Cullen B, O’Neill B, Evans JJ, Coen RF, Lawlor BA (2007) A review of screening tests for cognitive impairment. J Neurol Neurosurg Psychiatry 78(8): 790-799.
  6. Correa JA, Perrault A, Wolfson C (2001) Reliable individual change scores on the 3MS in older persons with dementia: results from the Canadian Study of Health and Aging. Int Psychogeriatr 13(1): 71-78.
  7. Tombaugh TN (2005) Test-retest reliable coefficients and 5-year change scores for the MMSE and 3MS. Arch Clin Neuropsychol 20(4): 485-503.
  8. Canadian study of health and aging: study methods and prevalence of dementia (1994) CMAJ 150(6): 899-913.
  9. Pezzotti P, Scalmana S, Mastromattei A, Di LD (2008) The accuracy of the MMSE in detecting cognitive impairment when administered by general practitioners: a prospective observational study. BMC Fam Pract 9: 29.
  10. Stephan BC, Savva GM, Brayne C, Bond J, McKeith IG, et al. (2010) Optimizing mild cognitive impairment for discriminating dementia risk in the general older population. Am J Geriatr Psychiatry 18(8): 662-673.
  11. Franco-Marina F, Garcia-Gonzalez JJ, Wagner-Echeagaray F, et al. (2010) The Mini-mental State Examination revisited: ceiling and floor effects after score adjustment for educational level in an aging Mexican population. Int Psychogeriatr 22(1): 72-81.
  12. Nieuwenhuis-Mark RE (2010) The death knoll for the MMSE: has it outlived its purpose? J Geriatr Psychiatry Neurol 23(3): 151-157.
  13. Teng EL, Hasegawa K, Homma A, Imai Y, Larson E, et al. (1994) The Cognitive Abilities Screening Instrument (CASI): a practical test for cross-cultural epidemiological studies of dementia. Int Psychogeriatr 6(1): 45-58.
  14. Fujii D, Hishinuma E, Masaki K, Petrovich H, Ross GW, et al. (2003) Dementia screening: can a second administration reduce the number of false positives? Am J Geriatr Psychiatry 11(4): 462-465.
  15. Damasceno A, Delicio AM, Mazo DF, João FD, Zullo, Patricia Scherer, et al. (2005) Validation of the Brazilian version of mini-test CASI-S. Arq Neuropsiquiatr 63(2): 416-421.
  16. de Oliveira GM, Yokomizo JE, Vinholi e Silva Ldos, Saran LF, Bottino CM, et al. (2016) The applicability of the cognitive abilities screening instrument-short (CASI-S) in primary care in Brazil. Int Psychogeriatr 28(1): 93-99.
  17. Graves AB, Larson EB, White LR, Teng EL (1993) Screening for dementia in the community in cross-national studies: Comparison of the performance of a new instrument with the mini-mental state examination. In: Corain B, Iqbal K, et al. (Eds.), Alzheimer’s Disease: Advances in Clinical and Basic Research, Wiley, New York, USA, pp. 113-119.
  18. Kokmen E, Naessens JM, Offord KP (1987) A short test of mental status: description and preliminary results. Mayo Clin Proc 62(4): 281-288.
  19. Kokmen E, Smith GE, Petersen RC, Tangalos E, Ivnik RC (1991) The short test of mental status. Correlations with standardized psychometric testing. Arch Neurol 48(7): 725-728.
  20. Tang-Wai DF, Knopman DS, Geda YE, Edland SD, Smith GE, et al. (2003) Comparison of the short test of mental status and the mini-mental state examination in mild cognitive impairment. Arch Neurol 60(12): 1777-1781.
  21. Mansbach WE, MacDougall EE, Rosenzweig AS (2012) The Brief Cognitive Assessment Tool (BCAT): a new test emphasizing contextual memory, executive functions, attentional capacity, and the prediction of instrumental activities of daily living. J Clin Exp Neuropsychol 34(2): 183-194.
  22. Belle SH, Mendelsohn AB, Seaberg EC, Ratcliff G (2000) A brief cognitive screening battery for dementia in the community. Neuroepidemiology 19(1): 43-50.
  23. Larner AJ (2007) Addenbrooke’s Cognitive Examination-Revised (ACE-R) in day-to-day clinical practice. Age Ageing 36(6): 685-686.
  24. Larner AJ, Mitchell AJ (2014) A meta-analysis of the accuracy of the Addenbrooke’s Cognitive Examination (ACE) and the Addenbrooke’s Cognitive Examination-Revised (ACE-R) in the detection of dementia. Int Psychogeriatr 26(4): 555-563.
  25. Mioshi E, Dawson K, Mitchell J, Arnold R, Hodges JR (2006) The Addenbrooke’s Cognitive Examination Revised (ACE-R): a brief cognitive test battery for dementia screening. Int J Geriatr Psychiatry 21(11): 1078-1085.
  26. Clauser BE, Mazor KM (1998) Using Statistical Procedures to Identify Differentially Functioning Test Items. Educational Measurement: Issues and Practice 17(1): 31-44.
  27. Crane PK, van BG, Larson EB (2004) Test bias in a cognitive test: differential item functioning in the CASI. Stat Med 23(2): 241-256.
  28. Ramirez M, Teresi JA, Holmes D, Gurland B, Lantigua R (2006) Differential item functioning (DIF) and the Mini-Mental State Examination (MMSE). Overview, sample, and issues of translation. Med Care 44: S95-S106.
  29. Jones RN, Gallo JJ (2002) Education and sex differences in the mini-mental state examination: effects of differential item functioning. J Gerontol B Psychol Sci Soc Sci 57(6): 548-558.
  30. Crane PK, Narasimhalu K, Gibbons LE, Mungas DM, Haneuse S, et al. (2008) Item response theory facilitated cocalibrating cognitive tests and reduced bias in estimated rates of decline. J Clin Epidemiol 61(10): 1018-1027.
  31. Teresi JA (2007) Mini-Mental State Examination (MMSE): scaling the MMSE using item response theory (IRT). J Clin Epidemiol 60(3): 256-259.
  32. Gibbons LE, McCurry S, Rhoads K, Kamal Masaki, Lon White, et al. (2009) Japanese-English language equivalence of the Cognitive Abilities Screening Instrument among Japanese-Americans. Int Psychogeriatr 21(1): 129-137.
  33. Mukherjee S, Gibbons LE, Kristjansson E, Crane PK (2013) Extension of an iterative hybrid ordinal logistic regression/item response theory approach to detect and account for differential item functioning in longitudinal data. Psychol Test Assess Model 55(2): 127-147.
  34. Wiest SM, Chen JJ, McDowell I, Kristjansson E, Crane PK (2005) Score adjustments for differential item functioning in screening for dementia: Case of the CSHA study. Journal of Investigative Medicine 53: S93.
  35. Matallana D, de SC, Cano C, Reyes P, Samper-Ternent R, et al. (2011) The relationship between education level and mini-mental state examination domains among older Mexican Americans. J Geriatr Psychiatry Neurol 24(1): 9-18.
  36. Verghese J, Wang C, Lipton RB, Holtzer R, Xue X (2007) Quantitative gait dysfunction and risk of cognitive decline and dementia. J Neurol Neurosurg Psychiatry 78(9): 929-935.