Statistical power is defined as the probability of obtaining a statistically significant result when the null-hypothesis is false. It is the complement of the probability of a type-II error (i.e., obtaining a non-significant result and failing to reject a false null-hypothesis). For example, to examine whether a coin is fair, we flip the coin 400 times. We get 210 heads and 190 tails. A binomial, two-sided test returns a p-value of .34, which is not statistically significant at the conventional criterion value of .05 to reject a null-hypothesis. Thus, we cannot reject the hypothesis that the coin is fair and would produce 50% heads and 50% tails if the experiment were continued indefinitely.
binom.test(210, 400, p = .5, alternative = "two.sided")
A non-significant result is typically described as inconclusive. We can neither reject nor accept the null hypothesis. Inconclusive results like this create problems for researchers because we do not seem to know more about the research question than we did before we conducted the study.
Before: Is the coin fair? I don’t know. Let’s do a study.
After: Is the coin fair? I don’t know. Let’s collect more data.
The problem with collecting more data until a null hypothesis is rejected is fairly obvious. At some point, we will either reject any null hypothesis or run out of resources to continue the study. When we reject the null hypothesis, however, the multiple testing invalidates our significance test, and we might even reject a true null hypothesis. In practice, inconclusive results often just remain unpublished, which leads to publication bias. If only significant results are published, we do not know which significant results rejected a true null hypothesis and which rejected a false one (Sterling, 1959).
What we need is a method that makes it possible to draw conclusions from statistically non-significant results. Some people have proposed Bayesian Hypothesis Testing as a way to provide evidence for a true null hypothesis. However, this method confuses evidence against a false alternative hypothesis (the effect size is large) with evidence for the null hypothesis (the effect size is zero; Schimmack, 2020).
Another flawed approach is to compute post-hoc power with the effect size estimate of the study that produced a non-significant result. In the current example, a power analysis suggests that the study had only a 15% chance of obtaining a significant result if the coin is biased to produce 52.5% (210 / 400) heads over 47.5% (190 / 400) tails.
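The 15% estimate can be reproduced from the exact binomial distribution. The sketch below (binom_power is a hypothetical helper written in base R for this example) enumerates all possible outcomes of 400 flips, flags those that the two-sided test would declare significant, and sums their probabilities under the assumed true rate of 52.5% heads.
# Exact power of the two-sided binomial test for an assumed true rate of heads
binom_power = function(p_true, n = 400, p0 = .5, alpha = .05) {
  x = 0:n
  # p-value of the exact test for every possible number of heads
  pvals = sapply(x, function(k) binom.test(k, n, p = p0)$p.value)
  # probability of observing a significant outcome when the true rate is p_true
  sum(dbinom(x[pvals < alpha], n, p_true))
}
binom_power(.525)   # approximately .15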
Another way to estimate power is to conduct a simulation study.
nsim = 100000                     # number of simulated experiments
res = numeric(nsim)               # vector to store the p-values
x = rbinom(nsim, 400, .525)       # simulate 400 flips with a 52.5% rate of heads
for (i in 1:nsim) res[i] = binom.test(x[i], 400, p = .5)$p.value
table(res < .05)                  # TRUE = significant results
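The proportion of simulated p-values below .05 is the simulation-based power estimate; with the assumed rate of 52.5% heads it should again come out near 15%.
mean(res < .05)   # approximately .15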
What is the problem with post-hoc power analyses that use the results of a study to estimate the population effect size? After all, aren’t the data more informative about the population effect size than any guesses about the population effect size without data? Is there some deep philosophical problem (an ontological error) that is overlooked in the computation of post-hoc power (Pek et al., 2024)? No. There is nothing wrong with using the results of a study to estimate an effect size and to use this estimate as the most plausible value for the population effect size. The problem is that point estimates of effect sizes are imprecise estimates of the population effect size, and that power analysis should take the uncertainty of the effect size estimate into account.
Let’s see what happens when we do this. The binomial test in R conveniently provides us with the 95% confidence interval around the point estimate of 52.5% (210 / 400), which ranges from 47.5% to 57.5% and translates into 190/400 to 230/400 heads. We see again that the observed point estimate of 210/400 heads is not statistically significant because the confidence interval includes the value predicted by the null hypothesis, 200/400 heads.
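For reference, this interval can be read directly from the output of the binomial test used above.
binom.test(210, 400, p = .5)$conf.int   # approximately .475 to .575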
The boundaries of the confidence interval allow us to compute two more power analyses: one for the lower bound and one for the upper bound of the confidence interval. The results give us a confidence interval for the true power. That is, we can be 95% confident that the true power of the study falls in this interval. This follows directly from the 95% confidence in the effect size estimate because power is a direct function of the effect size.
The respective power values are 15% and 83%. This finding shows the real problem of post-hoc power calculations based on a single study: the range of plausible power values is very large. This finding is not specific to the present example or a specific sample size. Larger samples in original studies increase the point estimate of power, but they do not decrease the range of power estimates.
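These two values can be reproduced with the binom_power sketch from above by plugging in the bounds of the confidence interval as the assumed population proportion.
binom_power(.475)   # approximately .15 (lower bound)
binom_power(.575)   # approximately .83 (upper bound)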
Notable exceptions are cases in which power is very high. Let’s change the example and test a biased coin that produced 300 heads. The point estimate of power with a proportion of 75% (300 / 400) heads is 100%. Now we can compute the confidence interval around the point estimate of 300 heads and get a range from 280 heads to 315 heads. When we compute post-hoc power with these values, we still get 100% power. The reason is simple. The observed effect (the bias of the coin) is so extreme that even a population effect size at the lower bound of the confidence interval would give 100% power to reject the null hypothesis that this is a fair coin that produces an equal number of heads and tails in the long run and that the 300 to 100 ratio was just a statistical fluke.
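The same sketch illustrates why the width of the confidence interval hardly matters in this case (again assuming 400 trials and the binom_power helper defined above).
binom.test(300, 400, p = .5)$conf.int   # roughly .70 to .79
binom_power(.70)    # essentially 1; even the lower bound implies 100% power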
In sum, the main problem with post-hoc power calculations is that the 95% confidence interval around the power estimate, which is implied by the 95% confidence interval for the effect size, is often so wide that it provides little meaningful information about the true power of a study. There are no other valid criticisms of post-hoc power because post-hoc power is not fundamentally different from any other power calculation. All power calculations make assumptions about the population effect size, which is typically unknown. Therefore, all power calculations are hypothetical, but power calculations based on researchers’ beliefs before a study are more hypothetical than those based on actual data. For example, if researchers assumed their study had 95% power based on an overly optimistic guess about the population effect size, but the post-hoc power analysis suggests that power ranges from 15% to 83%, the data refute the researchers’ a priori power calculation because the effect size assumed in the a priori power analysis falls outside the 95% confidence interval of the actual study.
Averaging Post-Hoc Power
It is even more absurd to suggest that we should not compute power based on observed data when multiple prior studies are available to estimate power for a new study. The previous discussion made clear that estimates of the true power of a study rely on good estimates of the population effect size. Anybody familiar with effect size meta-analysis knows that combining the results of multiple small samples increases the precision of the effect size estimate. Assuming that all studies are identical, the results can be pooled, and the sampling error decreases as a function of the total sample size (Schimmack, 2012). Let’s assume that 10 people flipped the same coin 400 times each and we simply pool the results to obtain a sample of 4,000 trials. The result happens to be again a 52.5% bias towards heads (2,100 / 4,000 heads).
Due to the large sample size, the confidence interval around this estimate shrinks to 51% to 54% (52.5 +/- 1.5 percentage points). A power analysis for a single study with 400 trials now produces estimates of 6% and 33% power, making clear that a non-significant result is to be expected because a sample of 400 trials is insufficient to detect a bias in favor of heads of 1 to 4 percentage points.
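These numbers can be checked with the same tools, assuming the pooled result of 2,100 heads in 4,000 flips and the binom_power helper defined above.
binom.test(2100, 4000, p = .5)$conf.int   # approximately .51 to .54
binom_power(.51)    # approximately .06 for a single study with 400 trials
binom_power(.54)    # approximately .33 for a single study with 400 trials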
The insight that confidence intervals around effect size estimates shrink when more data become available is hardly newsworthy to anybody who took an introductory statistics course. However, it is worth repeating here because there are so many false claims about post-hoc power in the literature. Because power calculations depend on assumed effect sizes, the confidence interval of post-hoc power estimates also narrows as more data become available.
Conclusion
The key fallacy in post-hoc power calculations is to confuse point estimates of power with the true power of a study. This is a fallacy because point estimates of power rely on the point estimate of an effect size, and point estimates are subject to sampling error. The proper way to evaluate power based on effect size estimates from actual data is to compute confidence intervals of power based on the confidence interval of the effect size estimate. The confidence intervals of post-hoc power estimates can be wide and uninformative, especially in a single study. However, they can also be meaningful, especially when they are based on precise effect size estimates from large samples or a meta-analysis with a large total sample size. Whether the information is useful or not needs to be evaluated on a case-by-case basis. Blanket statements that post-hoc power calculations are flawed or always uninformative are false and misleading.