Statistical tests are used in science to evaluate research hypotheses (theories, models). The Bayes Factor (BF) is a method that weighs the evidence and shows which of two competing hypotheses is better supported by the data. Adopting the BF in statistical inference, we can determine whether the data provide stronger support for the null hypothesis or for the alternative hypothesis, or whether the evidence is inconclusive and more data need to be collected before a decisive conclusion can be drawn. Such symmetry of interpretation is an advantage of the Bayes Factor over classical null hypothesis significance testing (NHST). Using NHST, a researcher draws conclusions only indirectly, by rejecting or failing to reject the null hypothesis. The discrepancy between these binary decisions and the researcher's actual questions often leads to misinterpretation of significance test results, e.g., concluding that a non-significant p-value is evidence for the absence of differences between groups or for the independence of variables. In this work we describe the main differences between the Bayesian and the frequentist approaches to probability and statistical inference. We demonstrate how to test hypotheses with the BF in practice and provide concrete examples of how it modifies conclusions about empirical findings based on the NHST procedure and the interpretation of p-values. We discuss the advantages of the BF, particularly the possibility of quantifying evidence in favor of the null hypothesis. Additionally, we provide guidelines on how to carry out Bayesian analyses in the free statistical program JASP 0.8.
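To make the contrast between a p-value and a Bayes Factor concrete, the following is a minimal Python sketch, not taken from the paper, that compares the two for a two-group comparison. It uses the widely known BIC approximation to the Bayes Factor, BF01 ≈ exp((BIC1 − BIC0) / 2), rather than the default JZS prior used by JASP, so the numbers will differ from JASP output; the function name and the simulated data are illustrative assumptions.

    import numpy as np
    from scipy import stats

    def bic_bayes_factor_ttest(x, y):
        """Approximate BF01 (evidence for H0 over H1) via the BIC
        approximation BF01 ~= exp((BIC1 - BIC0) / 2).
        H0: one common mean for both groups; H1: separate group means."""
        data = np.concatenate([x, y])
        n = data.size

        # H0: a single grand mean (one free mean parameter)
        rss0 = np.sum((data - data.mean()) ** 2)
        bic0 = n * np.log(rss0 / n) + 1 * np.log(n)

        # H1: a separate mean per group (two free mean parameters)
        rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)
        bic1 = n * np.log(rss1 / n) + 2 * np.log(n)

        return np.exp((bic1 - bic0) / 2.0)

    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, 30)   # group 1
    y = rng.normal(0.2, 1.0, 30)   # group 2, small true difference
    t, p = stats.ttest_ind(x, y)
    bf01 = bic_bayes_factor_ttest(x, y)
    print(f"p = {p:.3f}, BF01 = {bf01:.2f}")
    print("BF01 > 1 favors H0; BF01 < 1 favors H1; values near 1 are inconclusive.")

Unlike the p-value, which can only reject or fail to reject H0, BF01 here can be read symmetrically: a large value is positive evidence for the null hypothesis, a small value is evidence against it, and a value close to 1 signals that more data are needed.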