ISSN: 2574-1241

Impact Factor: 0.548


Review Article | Open Access

Measuring the Strength of the Evidence

Volume 6 - Issue 4

David Trafimow1* and Michiel de Boer2

  • Author Information
    • 1Department of Psychology, New Mexico State University, Las Cruces, NM, USA
    • 2Department of Health Sciences, Vrije Universiteit Amsterdam, Netherlands

    *Corresponding author: David Trafimow, Department of Psychology, New Mexico State University, Las Cruces, NM 88003-8001, USA

Received: June 02, 2018;   Published: July 11, 2018

DOI: 10.26717/BJSTR.2018.06.001384


Abstract

Many proponents of p-values assert that they measure the strength of the evidence with respect to a hypothesis. Many proponents of Bayes Factors assert that they measure the relative strength of the evidence with respect to competing hypotheses. From a philosophical perspective, both assertions are problematic because the strength of the evidence depends on auxiliary assumptions, whose worth is not quantifiable by p-values or Bayes Factors. In addition, from a measurement perspective, p-values and Bayes Factors fail to fulfill a basic measurement criterion for validity. For both classes of reasons, p-values and Bayes Factors do not validly measure the strength of the evidence.

Keywords: p-value; Bayes Factor; Strength of the Evidence; Auxiliary Assumption; Reliability; Validity

Sections: Abstract | Introduction | Philosophical Considerations | Basic Criteria for Valid Measurement | Attenuation of Validity Due to Unreliability | Increased Statistical Regression Due to Unreliability | The Open Science Collaboration Reproducibility Project and the Reliability of p | Conclusion | References