This Shiny app accompanies the paper 'Sample Size Justification' by Daniël Lakens. You can download the pre-print of this article at PsyArXiv, and any sections in this online form that are unclear are explained in the paper. You can help to improve this app by providing feedback or suggesting additions by filling out this feedback form. Note that this app will not store the information you enter if you close or refresh your browser, so you might want to write down your answers in a local text file first. For a completed example, see here.
The main goal of this app and the accompanying paper is to guide you through an evaluation of the informational value of a planned study. After filling out this form you can download a report of your sample size justification.
More Info on the Informational Value of Studies.
The informational value of a study depends on the inferential goal, which could be testing a hypothesis, obtaining an accurate estimate, or seeing what you can learn from all the data you have the resources to collect.
- It is possible that your resource constraints allow you to perform a study that has:
- a desired statistical power, or
- a desired accuracy of the estimated effect,
and that your resource constraints are not the primary reason to collect a specific sample size (resource constraints are always a secondary reason, because without them one would, for example, choose a very low alpha level and design a study with extremely high statistical power). In these cases, you would:
- perform an a-priori power analysis for the smallest effect size of interest or, if that cannot be specified, for the expected effect size (see the first sketch after this list), or
- determine the sample size from an accuracy in parameter estimation perspective, based on the desired accuracy and the expected effect size (see the second sketch after this list).
- It is also possible that the calculations based on power and accuracy yield a sample size that is larger than you have the resources to collect. In these situations, you can:
- not draw any inferences, and collect the data so they can be included in a future meta-analysis,
- justify the sample size because a decision needs to be made even if data are scarce, and design the study based on a compromise power analysis that balances the relative probability of Type I and Type II errors according to a cost-benefit analysis (see the compromise power analysis sketch after this list), or
- if you still want to perform a hypothesis test, perform a sensitivity power analysis and justify the sample size based on the information it will provide about the expected effect size or other effect sizes of interest, such as effects previously observed in the literature. If you plan to perform a hypothesis test, examine whether the minimal statistically detectable effect is small enough to warrant a hypothesis test, and evaluate whether the Type I and Type II error rates make it possible to draw useful conclusions from the p-value (see the sensitivity analysis sketch after this list).
- If you want to estimate an effect size, interpret the width of the confidence interval around the estimate, and specify what an estimate with this accuracy is useful for (see the last sketch after this list).
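The following sketches are minimal illustrations of these calculations for a two-sided independent-samples t-test, using Python with statsmodels and scipy; every number in them (effect sizes, alpha levels, sample sizes, standard deviations) is an assumed placeholder that you would replace with a value you can justify. First, an a-priori power analysis for a smallest effect size of interest:

```python
# A-priori power analysis for a two-sided independent-samples t-test.
# The effect size, alpha, and power below are assumed placeholder values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.4,   # smallest effect size of interest (Cohen's d), assumed
    alpha=0.05,        # Type I error rate
    power=0.90,        # desired statistical power
    ratio=1.0,         # equal group sizes
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")
```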
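Next, a minimal accuracy in parameter estimation sketch: it searches for the smallest sample size per group at which the expected 95% confidence interval around a difference between two independent means is narrower than a desired half-width, assuming a known standard deviation (full AIPE methods also add an assurance that the width is achieved with high probability):

```python
# Accuracy in parameter estimation: find the smallest n per group for which
# the expected 95% CI around a mean difference is narrow enough.
# The standard deviation and target half-width are assumed placeholder values.
import math
from scipy import stats

sd = 1.0                   # assumed population standard deviation
target_half_width = 0.25   # desired maximum half-width of the 95% CI

n = 2
while True:
    df = 2 * n - 2
    t_crit = stats.t.ppf(0.975, df)
    half_width = t_crit * sd * math.sqrt(2 / n)  # expected CI half-width
    if half_width <= target_half_width:
        break
    n += 1
print(f"Sample size per group for a CI half-width of {target_half_width}: {n}")
```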
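A sketch of a compromise power analysis, under the assumption that the achievable sample size, the effect size, and the desired ratio of Type II to Type I error rates (here beta/alpha = 4) are fixed in advance; the alpha level is then solved for numerically:

```python
# Compromise power analysis: with n per group fixed by resource constraints,
# choose alpha so that beta / alpha equals a desired error-cost ratio.
# Sample size, effect size, and the ratio are assumed placeholder values.
from scipy.optimize import brentq
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = 50       # maximum sample size you can collect (assumed)
effect_size = 0.5      # assumed effect size (Cohen's d)
error_ratio = 4.0      # desired beta / alpha ratio (cost-benefit judgment)

def imbalance(alpha):
    beta = 1 - analysis.power(effect_size=effect_size, nobs1=n_per_group,
                              alpha=alpha, ratio=1.0, alternative="two-sided")
    return beta - error_ratio * alpha

alpha = brentq(imbalance, 1e-6, 0.5)
beta = 1 - analysis.power(effect_size=effect_size, nobs1=n_per_group,
                          alpha=alpha, ratio=1.0, alternative="two-sided")
print(f"Compromise alpha = {alpha:.4f}, beta = {beta:.4f}, power = {1 - beta:.4f}")
```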
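A sketch of a sensitivity power analysis together with the minimal statistically detectable effect, again for assumed placeholder values of the sample size and alpha level:

```python
# Sensitivity power analysis and minimal statistically detectable effect
# for a two-sided independent-samples t-test; n and alpha are assumed values.
import math
from scipy import stats
from statsmodels.stats.power import TTestIndPower

n_per_group = 50   # sample size fixed by resource constraints (assumed)
alpha = 0.05       # Type I error rate

# Sensitivity: which effect size can be detected with 90% power at this n and alpha?
detectable_d = TTestIndPower().solve_power(nobs1=n_per_group, alpha=alpha,
                                           power=0.90, ratio=1.0,
                                           alternative="two-sided")

# Minimal statistically detectable effect: the critical d, i.e. the smallest
# observed effect size that reaches p < alpha with this sample size.
df = 2 * n_per_group - 2
t_crit = stats.t.ppf(1 - alpha / 2, df)
critical_d = t_crit * math.sqrt(2 / n_per_group)

print(f"Effect detectable with 90% power: d = {detectable_d:.3f}")
print(f"Minimal statistically detectable effect: d = {critical_d:.3f}")
```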
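Finally, a sketch of the expected width of the confidence interval around an estimated mean difference for a planned sample size, which you can use to judge whether the estimate will be accurate enough to be useful; the sample size and standard deviation are assumed values:

```python
# Expected width of the 95% CI around a difference between two independent
# means for a planned sample size; sd and n are assumed placeholder values.
import math
from scipy import stats

n_per_group = 50   # planned sample size per group (assumed)
sd = 1.0           # assumed population standard deviation

df = 2 * n_per_group - 2
t_crit = stats.t.ppf(0.975, df)
half_width = t_crit * sd * math.sqrt(2 / n_per_group)
print(f"Expected 95% CI: estimate +/- {half_width:.2f} (width {2 * half_width:.2f})")
```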