ISSN: 2574-1241

Impact Factor: 0.548


Research Article | Open Access

Heads-Up-Displays (HUDs) and their Impact on Cognitive Load during Task Performance: A Protocol for Systematic Review

Volume 2 - Issue 4

Adam Bystrzycki*1,2,3, Yesul Kim2, Mark Fitzgerald2,3,4, Lorena Romero5 and Steven Clare3

  • 1Emergency and Trauma Centre, Alfred Hospital, Australia
  • 2National Trauma Research Institute, the Alfred Hospital, Australia
  • 3Central Clinical School, Faculty of Medicine, Monash University, Australia
  • 4Trauma Service, the Alfred Hospital, Australia
  • 5Ian Potter Library, the Alfred Hospital, Australia

Received: January 30, 2018;   Published: February 19, 2018

*Corresponding author: Adam Bystrzycki, National Trauma Research Institute, Level 4, 89 Commercial Rd, Melbourne, Victoria 3004, Australia

DOI: 10.26717/BJSTR.2018.02.000775

Abstract


Background: Heads-up-displays (HUDs) and similar projection technologies are increasingly used in complex environments. Their impact on cognitive load is an important consideration before introduction of such technology into complex healthcare environments.

Methods: We will search the available literature for studies reporting cognitive load during the use of HUDs for task performance. All study designs will be included as long as a comparison group (no-HUD) was included in the design.

Population: Adult humans, who were not novices, performing a complex task. A complex task was defined as any task involving simultaneous task performance and information processing.

Intervention: Heads-up display; Head-mounted display; Projection glasses; Other projection displays including augmented reality and virtual reality displays

Comparator: No display technology; Heads-down display (as is the status quo for automobile controls and avionics)

Outcome: Broadly defined cognitive impacts including fatigue, cognitive tunneling, task errors, and response times. Data will be synthesized and where possible a meta-analysis will be performed using a random-effects model.

Discussion: This systematic review will be informative for future implementation of HUDs and similar display technologies for use in complex environments. The implications are of importance to groups implementing HUDs for decision support and as an adjunct to complex tasks.

Registration: PROSPERO - CRD42017058910

Keywords: Heads Up Displays; Head-Mounted-Displays; Projection Glasses; Complex Task; Decision Support; Cognitive Load; Fatigue; Human Factors; Task Errors

Abbreviations: HUD: Heads-Up Display; HMD: Head-Mounted Display; ORs: Odds Ratios; RRs: Risk Ratios; CIs: Confidence Intervals; MD: Mean Differences; SMD: Standardized Mean Differences


Introduction

Heads-up-displays (HUDs) and similar projection technologies are increasingly used in complex environments such as aeronautics, manufacturing, road traffic and the military. In healthcare, however, the use of such technology remains in its infancy. Before HUDs can be adopted for widespread application for clinician use in complex healthcare environments, their impact on cognitive load, decision making and task performance needs to be understood. The objective of this systematic review is to assess the effect of HUDs and similar projection display technologies on cognitive load during the performance of a complex task. The intervention being studied is the use of a HUD to augment or assist task performance. For the purpose of this study we use the term HUD to include all projection display technologies, including heads-up displays, head-mounted displays, monocular displays, projection glasses and screen-projection technologies.

The authors define complex tasks as those involving greater than two discrete simultaneous or near-simultaneous steps. Furthermore, the review will explore whether particular design features of HUDs or their implementation have greater or lesser impact on cognitive load and task performance. For the purpose of this systematic review, participants are defined as users of HUDs for task augmentation or decision assistance, regardless of domain of use. Comparators are non-intervention groups (i.e. the same task performed without use of the HUD). Outcome measures are both descriptive and measured impacts on cognitive load and task performance. The intrinsic cognitive load is considered to be the same in both intervention and comparator groups, as the task being performed has not changed. As a result, information related to extraneous and germane types of cognitive load requires scrutiny.


Criteria for Considering Studies for this Review:

Types of Studies: We will include randomised and quasi-randomised trials, as well as case-control, crossover, cohort and other study designs, provided the design includes a comparison group performing the task without use of the intervention.

Types of Participants: Adult, human participants who were not novices at the performance of the complex task will be included. Tasks include driving an automobile, piloting an aircraft, military aeronautical and ground deployment, manufacturing, healthcare tasks such as the performance of medical procedures and surgical interventions, and simulations of any of these complex tasks.

Types of Interventions: We plan to investigate the following comparisons of intervention versus control/comparator:

    a. Intervention:

    i. Heads-up display

    ii. Head-mounted display

    iii. Projection glasses

    iv. Other projection displays including augmented reality and virtual reality displays

    b. Comparator:

    i. No display technology

    ii. Heads-down display (as is the status quo for automobile controls and avionics).

Types of Outcome Measures: Outcome measures are broad but will specifically focus on cognitive impacts of HUD use, including: fatigue, cognitive tunneling, objective measures of cognitive load (e.g. task-invoked pupillary response, heart rate-blood pressure product), task errors and response or task-completion times.

Summary of Findings Table

We will present a 'summary of findings' table, including the study design, population, task, intervention and comparator, as well as cognitive outcomes.

Search Methods for Identification of Studies

Literature Search Strategy: We will search the following sources from inception to the present:

    a) MEDLINE

    b) MEDLINE in process and epubs ahead of print

    c) EMBASE


    d) Cochrane CENTRAL

    e) EMCARE

    f) BIOSIS

    g) Transport database

    h) Scopus

    i) Web of Science

    j) ProQuest

    k) Compendex

    l) Inspec

    m) EP Patents

    n) US Patents

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement will be used to guide the conduct of this review. A literature search strategy will be developed to identify relevant studies reporting cognitive impacts of HUDs and other projection displays during task performance in humans, without limitation on publication date or language. Key search terms will include MeSH and free-text terms relevant to HUDs, cognitive load and task performance. The search strategy will first be conducted in MEDLINE (Appendix 1), then adapted to run across the databases mentioned above in order to capture literature from engineering and other non-medical domains. Grey literature resources and the reference lists of existing systematic reviews will also be searched to identify relevant studies. If we detect additional relevant keywords during any of the electronic or other searches, we will modify the electronic search strategies to incorporate these terms and document the changes. We will place no restrictions on the language of publication when searching the electronic databases or reviewing reference lists in identified studies. We will try to identify other potentially eligible trials or ancillary publications by searching the reference lists of retrieved included trials, (systematic) reviews, meta-analyses and health technology assessment reports (Appendix 2).

Data Collection and Analysis

Selection of Studies: Two review authors will independently scan the abstract, title, or both, of every record retrieved, to determine which studies should be assessed further. We will investigate all potentially relevant articles as full text. We will resolve any discrepancies through consensus or recourse to a third review author (MF). We will present a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart of study selection.

Data Extraction and Management: For studies that fulfil inclusion criteria, two review authors will independently extract key participant and intervention characteristics and report data on efficacy outcomes and adverse events using standard data extraction templates, with any disagreements to be resolved by discussion, or if required by a third author.

Dealing with Duplicate and Companion Publications: In the event of duplicate publications, companion documents or multiple reports of a primary study, we will maximize yield of information by collating all available data and use the most complete dataset aggregated across all known publications.

Assessment of Risk of Bias in Included Studies: Two review authors will assess the risk of bias of each included study independently. We will resolve disagreements by consensus, or by consultation with a third author (MF). We will assess risk of bias using the Cochrane Collaboration's tools for assessment of risk of bias [1,2]. We will assess the following criteria in this assessment:

    I. Random sequence generation (selection bias).

    II. Allocation concealment (selection bias).

    III. Blinding (performance bias and detection bias), blinding of participants and personnel assessed separately from blinding of outcome assessment.

    IV. Incomplete outcome data (attrition bias).

    V. Selective reporting (reporting bias).

    VI. Other bias.

We will judge 'Risk of bias' criteria as 'low risk', 'high risk' or 'unclear risk' and evaluate individual bias items. We will present a 'Risk of bias' graph and a 'Risk of bias' summary figure. We will assess the impact of individual bias domains on study results at the endpoint and study levels. For blinding of participants and personnel (performance bias), detection bias (blinding of outcome assessors) and attrition bias (incomplete outcome data) we intend to evaluate risk of bias separately for subjective and objective outcomes [3]. We will consider the implications of missing outcome data from individual participants.

Measures of Treatment Effect: We will express dichotomous data as Odds Ratios (ORs) or Risk Ratios (RRs) with 95% Confidence Intervals (CIs). We will express continuous data as Mean Differences (MD) or Standardized Mean Differences (SMD) with 95% CIs.
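As an illustration of these effect measures, the odds ratio and its 95% CI can be sketched as follows; the 2x2 counts are hypothetical and not drawn from any included study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% CI from a 2x2 table:
    a/b = events/non-events with the intervention,
    c/d = events/non-events in the comparator group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# hypothetical counts: 12/38 task errors with a HUD vs 20/30 without
or_, lo, hi = odds_ratio_ci(12, 38, 20, 30)
```

A CI crossing 1.0, as in this example, would indicate no statistically significant difference in the odds of the outcome between groups.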

Unit of Analysis Issues: We will take into account the level at which randomization occurred, such as cross-over trials, cluster-randomised trials and multiple observations for the same outcome.

Dealing with Missing Data: We will investigate attrition rates, e.g. drop-outs, losses to follow-up and withdrawals, and critically appraise issues of missing data. Where standard deviations for outcomes are not reported, we will impute these values by assuming the standard deviation of the missing outcome to be the average of the standard deviations from those studies where this information was reported. We will investigate the impact of imputation on meta-analyses by means of sensitivity analysis.
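The imputation rule described above can be sketched as follows; the SD values are hypothetical:

```python
def impute_missing_sd(sds):
    """Replace missing SDs (None) with the mean of the reported SDs,
    per the protocol's missing-data strategy."""
    reported = [s for s in sds if s is not None]
    mean_sd = sum(reported) / len(reported)
    return [s if s is not None else mean_sd for s in sds]

# three studies reported an SD, one did not (hypothetical values)
imputed = impute_missing_sd([4.0, None, 5.0, 6.0])  # -> [4.0, 5.0, 5.0, 6.0]
```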

Assessment of Heterogeneity: In the event of substantial clinical, methodological or statistical heterogeneity, we will not report study results as the pooled effect estimate in a meta-analysis. We will identify heterogeneity by visual inspection of the forest plots and by using the Q statistic with a significance level of 0.1, in view of the low power of this test. We will consider heterogeneity to exist if Q > df. We will calculate Tau (τ), being the standard deviation of true effects, and use this to compute prediction intervals. When we find heterogeneity, we will attempt to determine potential reasons for it by examining individual study and subgroup characteristics, and performing subgroup analyses if we find consistency in effects within subgroups. We expect the following characteristics to introduce clinical heterogeneity:

    I. Different implementation of technology.

    II. Differences in populations.

    III. Differences in tasks being performed.
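The Q statistic and the between-study variance from which Tau is derived can be sketched as follows. The DerSimonian-Laird estimator is assumed here (the protocol does not name a specific estimator), and the study effects and variances are hypothetical:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the DerSimonian-Laird estimate of tau^2, the
    between-study variance; tau (its square root) is the SD of true
    effects used for prediction intervals. Heterogeneity is suspected
    when Q exceeds its degrees of freedom (k - 1)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)       # scaling constant
    tau2 = max(0.0, (q - df) / c)                        # truncated at zero
    return q, df, tau2

# hypothetical study effects (e.g. SMDs) and their variances
q, df, tau2 = heterogeneity([0.2, 0.5, 0.8], [0.04, 0.05, 0.06])
```

With these numbers Q > df, so heterogeneity would be suspected and a positive tau-squared carried into the random-effects weighting.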

Assessment of Reporting Biases: If literature inclusion results in 10 or more studies that investigate a particular outcome, we will use funnel plots to assess small study effects. Owing to several possible explanations for funnel plot asymmetry, we will interpret results carefully [4].

Data Synthesis: Unless there is good evidence for homogeneous effects across studies, we will summarise primarily low risk of bias data by means of a random-effects model [5]. We will interpret random-effects meta-analyses with due consideration of the whole distribution of effects, ideally by presenting a prediction interval [6]. A prediction interval specifies a predicted range for the true treatment effect in an individual study [7]. In addition, we will perform statistical analyses according to the statistical guidelines contained in the latest version of the Meta-Analysis Concepts and Applications [8].
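A minimal sketch of the random-effects pooled estimate and an approximate 95% prediction interval, following the Riley/Higgins approach of pooled effect plus or minus t * sqrt(tau^2 + SE^2) on k - 2 degrees of freedom; all numbers are hypothetical:

```python
import math

def random_effects_summary(effects, variances, tau2, t_crit):
    """Random-effects pooled effect and an approximate 95% prediction
    interval for the true effect in a new study. tau2 comes from a
    heterogeneity estimate; t_crit is the 97.5% t-value on k-2 df."""
    w = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1 / sum(w))                        # SE of pooled effect
    half = t_crit * math.sqrt(tau2 + se ** 2)
    return pooled, (pooled - half, pooled + half)

# hypothetical effects/variances; tau2 = 0.0407 from a heterogeneity
# estimate; t_crit = 12.71 is t_{0.975} on k-2 = 1 df for k = 3 studies
pooled, pi = random_effects_summary([0.2, 0.5, 0.8], [0.04, 0.05, 0.06],
                                    0.0407, 12.71)
```

With only three studies the prediction interval is far wider than the confidence interval of the pooled effect, which illustrates why the protocol emphasises interpreting the whole distribution of effects rather than the point estimate alone.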

Subgroup Analysis and Investigation of Heterogeneity: We will carry out the following subgroup analyses and plan to investigate interaction:

    a) Grouping by different technology (HUD, HMD, Projection display)

    b) Grouping by task

    c) Grouping by population

Sensitivity Analysis: We will perform sensitivity analyses in order to explore the influence of the following factors (when applicable) on effect sizes:

    a) Restricting the analysis by taking into account risk of bias, as specified in the section 'Assessment of risk of bias in included studies'.

    b) Restricting the analysis to very long or large studies to establish the extent to which they dominate the results.

    c) Restricting the analysis to studies using the following filters: diagnostic criteria, imputation, language of publication, source of funding (industry versus other), and country. We will also test the robustness of the results by repeating the analysis using different measures of effect size (RR, OR etc.).


Declarations

    a) Ethics Approval and Consent to Participate: Studies included in this systematic review report declarations of ethics approval or waivers as applicable, as stated by their primary authors. Ethics approval was not sought for this systematic review, as it reports and synthesizes previously published data.

    b) Consent for Publication: Not Applicable

    c) Availability of Data and Materials: Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

    d) Competing Interests: The authors declare that they have no competing interests.

    e) Funding: This Systematic Review Protocol and subsequent Review were supported by Sabbatical leave funding from Alfred Health, provided to the Corresponding Author. The funding body had no role in the design of the study, nor in the collection, analysis or interpretation of data.

    f) Authors' Contributions: AB conceived the study question, formulated the systematic review protocol and wrote the manuscript. MF supervised the project and reviewed the manuscript during all phases of the project. YK reviewed the manuscript and contributed to the systematic review protocol. LR reviewed the manuscript and was instrumental in formulating the literature search strategies. SC reviewed the manuscript.


References

  1. Higgins JPT, Green S (2011) Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1. The Cochrane Collaboration.
  2. Higgins JPT (2011) The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 343: d5928.
  3. Hróbjartsson A (2013) Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors. CMAJ 185(4): E201-E211.
  4. Sterne JAC (2011) Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 343: d4002.
  5. Wood SN (2007) Fast stable direct fitting and smoothness selection for generalized additive models. Journal of the Royal Statistical Society 70 (3): 495-518.
  6. Higgins JPT, Thompson SG, Spiegelhalter DJ (2009) A re-evaluation of random-effects meta-analysis. J R Stat Soc Ser A 172(1): 137-159.
  7. Riley RD, Higgins JPT, Deeks JJ (2011) Interpretation of random effects meta-analyses. BMJ 342: d549.
  8. Borenstein M (2017) Meta-analysis concepts and applications. Wiley online library.