Enter the input parameters and press 'Calculate'



Explanation

This Shiny app accompanies Maier & Lakens (2021), Justify Your Alpha: A Primer on Two Practical Approaches. For a full explanation of how to use this software, read the paper or the vignettes.

Power functions

The trickiest part of using this Shiny app is entering the correct power function. You can provide an analytic power function, either programmed yourself or from an existing package loaded on the server. Make sure the alpha value is not fixed but specified as x, and that the function returns a single value: the power of the test. Finally, if you use an existing power function, the Shiny app needs to know which package the function comes from, so the call must be preceded by the package name and '::', e.g. 'pwr::' or 'TOSTER::'. Some examples that work are provided below.

pwr::pwr.anova.test(n = 100, k = 2, f = 0.171875, sig.level = x)$power

TOSTER::powerTOSTtwo(alpha=x, N=200, low_eqbound_d=-0.4, high_eqbound_d=0.4)
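
If you program the power function yourself, the sketch below shows one way to do it (the name my_power and its defaults are our own illustration, not part of the app). It computes the power of a two-sided two-sample t-test with n = 64 per group and d = 0.5, leaves alpha free as x, and returns a single value:

# Illustrative self-written power function for a two-sided two-sample t-test;
# alpha is left free as x, and the function returns a single value (the power).
my_power <- function(x, n = 64, d = 0.5) {
  df <- 2 * n - 2                       # degrees of freedom
  ncp <- d * sqrt(n / 2)                # noncentrality parameter under H1
  crit <- qt(1 - x / 2, df)             # two-sided critical t-value at alpha = x
  pt(crit, df, ncp, lower.tail = FALSE) + pt(-crit, df, ncp)
}
my_power(x = 0.05)                      # approximately 0.80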

For a more challenging power function, we can use the Superpower package by Daniel Lakens and Aaron Caldwell. The power calculation in the ANOVA_exact function is based on a simulation, which takes a while to perform, and the optimization routine in this Shiny app needs to perform the power calculation multiple times. The result therefore takes several minutes to calculate: press 'Calculate' and check the results 5 to 10 minutes later. Furthermore, the ANOVA_exact function returns power as a percentage (e.g., 80), not a proportion (0.8), so we have to divide the power value by 100 for the Shiny app to return the correct results. Nevertheless, it works if you are very patient.

Superpower::ANOVA_exact(Superpower::ANOVA_design(design = '2b', n = 64, mu = c(0, 0.5), sd = 1, plot = FALSE), alpha_level = x, verbose = FALSE)$main_results$power/100

Enter the input parameters and press 'Calculate'



Explanation

Cohen (1988) considered a Type 1 error rate of 5% and a Type 2 error rate of 20% to be balanced. The reason was that, instead of weighing both types of errors equally, he felt 'Type I errors are of the order of four times as serious as Type II errors.' This situation is illustrated in the default settings of the app: if the cost of a Type 1 error is 4 times as large as the cost of a Type 2 error, and we collect 64 participants in each condition of a two-sided t-test, the balanced alpha is 0.05 and the power is 0.80.
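
As a minimal sketch of this balancing logic (assuming the pwr package, d = 0.5, and 64 participants per group; an illustration of the idea, not necessarily the app's exact implementation), we can search for the alpha at which the Type 2 error rate is four times the Type 1 error rate:

# Illustrative balancing: with Type 1 errors 4 times as costly as Type 2 errors,
# find the alpha at which beta = 4 * alpha for a two-sample t-test (n = 64, d = 0.5).
balance_gap <- function(x) {
  beta <- 1 - pwr::pwr.t.test(n = 64, d = 0.5, sig.level = x)$power
  beta - 4 * x                          # zero when the weighted error rates balance
}
uniroot(balance_gap, interval = c(0.001, 0.2))$root   # approximately 0.05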

If we design 2000 studies like this, the number of Type 1 and Type 2 errors we make depends on how often the null hypothesis is true and how often the alternative hypothesis is true. Let's assume both are equally likely for now. This means that in 1000 studies the null hypothesis is true, and we will make 1000 × 0.05 = 50 Type 1 errors. In 1000 studies the alternative hypothesis is true, and we will make 100% - 80% = 20% Type 2 errors, so in 200 studies we will not observe a significant result even though there is a true effect. Combining Type 1 and Type 2 errors, in the long run we should expect 250 of our 2000 studies to yield an error.
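
The same expected error counts can be computed directly:

# Expected errors across 2000 studies when H0 and H1 are equally likely
studies <- 2000
prior_h1 <- 0.5                          # probability that the alternative is true
alpha <- 0.05
power <- 0.80
type1 <- studies * (1 - prior_h1) * alpha      # 1000 * 0.05 = 50
type2 <- studies * prior_h1 * (1 - power)      # 1000 * 0.20 = 200
type1 + type2                                  # 250 expected errors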

Power functions

The trickiest part of using this Shiny app is entering the correct power function. You can provide an analytic power function, either programmed yourself or from an existing package loaded on the server. Make sure the alpha value is not fixed but specified as x, and that the sample size is not fixed but specified as 'sample_n'. In addition, make sure that the function returns a single value: the power of the test. Finally, if you use an existing power function, the Shiny app needs to know which package the function comes from, so the call must be preceded by the package name and '::', e.g. 'pwr::', 'TOSTER::', or 'ANOVApower::'. Some examples that work are provided below.

TOSTER::powerTOSTtwo(alpha=x, N=sample_n, low_eqbound_d=-0.4, high_eqbound_d=0.4)

pwr::pwr.anova.test(n = sample_n, k = 2, f = 0.171875, sig.level = x)$power
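
You can again program the function yourself; the sketch below (my_power and its default d = 0.5 are our own illustration) leaves both the alpha level (x) and the per-group sample size (sample_n) free:

# Illustrative self-written power function for a two-sided two-sample t-test;
# both alpha (x) and the per-group sample size (sample_n) are left free.
my_power <- function(x, sample_n, d = 0.5) {
  df <- 2 * sample_n - 2                # degrees of freedom
  ncp <- d * sqrt(sample_n / 2)         # noncentrality parameter under H1
  crit <- qt(1 - x / 2, df)             # two-sided critical t-value at alpha = x
  pt(crit, df, ncp, lower.tail = FALSE) + pt(-crit, df, ncp)
}
my_power(x = 0.05, sample_n = 64)       # approximately 0.80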

Input parameters


Explanation

The idea behind this recommendation is discussed most extensively by Leamer (1978). He writes: 'The rule of thumb quite popular now, that is, setting the significance level arbitrarily to .05, is shown to be deficient in the sense that from every reasonable viewpoint the significance level should be a decreasing function of sample size.' This was already recognized by Jeffreys (1939), who discusses ways to set the alpha level in Neyman-Pearson statistics: 'We should therefore get the best result, with any distribution of alpha, by some form that makes the ratio of the critical value to the standard error increase with n. It appears then that whatever the distribution may be, the use of a fixed P limit cannot be the one that will make the smallest number of mistakes.'

The goal is to prevent Lindley's paradox (https://en.wikipedia.org/wiki/Lindley%27s_paradox). This is explained in more detail in week 1 of Daniel's MOOC (https://www.coursera.org/learn/statistical-inferences).

To prevent Lindley's paradox, one would need to lower the alpha level as a function of the statistical power. Good (1992) notes: 'we have empirical evidence that sensible P values are related to weights of evidence and, therefore, that P values are not entirely without merit. The real objection to P values is not that they usually are utter nonsense, but rather that they can be highly misleading, especially if the value of N is not also taken into account and is large.' We therefore justify the alpha level as a function of the sample size by relating it to Bayes factors. A Bayes factor compares the likelihood of the data under the alternative hypothesis to the likelihood under the null hypothesis, so setting the alpha level to always correspond to a Bayes factor of at least 1 avoids the Lindley paradox. However, in Bayesian statistics a Bayes factor of 1 or larger is only regarded as weak evidence, and we might wish to, for example, achieve at least moderate evidence whenever the p-value is significant. The desired level of evidence can be adjusted with the slider.
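
As a minimal sketch of this idea (assuming the BayesFactor package and a two-sample t-test with equal group sizes; an illustration of the logic, not necessarily the app's exact implementation), we can search for the alpha level whose critical t-value corresponds to a target Bayes factor:

# Illustrative sketch: find the alpha at which the two-sided critical t-value
# of a two-sample t-test corresponds to a target Bayes factor.
alpha_for_bf <- function(n, bf_target = 1) {
  bf_gap <- function(t) {
    BayesFactor::ttest.tstat(t = t, n1 = n, n2 = n, simple = TRUE) - bf_target
  }
  t_crit <- uniroot(bf_gap, interval = c(0.1, 10))$root
  2 * pt(t_crit, df = 2 * n - 2, lower.tail = FALSE)  # two-sided alpha
}
alpha_for_bf(n = 100)                   # alpha at which BF = 1
alpha_for_bf(n = 100, bf_target = 3)    # stricter alpha for moderate evidence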


Justify Your Alpha: A Practical Guide

For an explanation of why researchers should justify their alpha levels, see:

Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., … Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2, 168–171. https://doi.org/10.1038/s41562-018-0311-x

You can download the preprint of this article at PsyArXiv.

For a short introduction to why you should lower your alpha level as a function of the sample size, see my blog post. For a short introduction on why and how to balance or minimize error rates, see my other blog post.

Get the code on GitHub

The best way to cite this app and the explanations of how to justify alpha levels in practice is through the preprint:

Maier & Lakens (2021). Justify Your Alpha: A Primer on Two Practical Approaches.