Sample size helps determine the probability of obtaining statistical significance at a given threshold (a P-value less than or equal to the rationally set alpha level). Because sample size appears in the denominator of the standard-error term used to test for statistical significance, a small sample requires a larger effect than a large sample does to reach the same level of significance (Wilkerson & Olson, 1997). For example, a sample of 663 women and 650 men was found to be statistically significant at p < .05. With a larger sample of women and men, the same effect would produce an even stronger level of statistical significance. However, the strength of statistical significance does not automatically reflect the strength of statistical meaningfulness. Statistical meaningfulness should indicate the value and relevance of the outcome to the research question.
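The role of sample size in the denominator can be illustrated with a minimal sketch. The code below uses a two-sample z-test under simplifying assumptions (equal, known standard deviations; normal approximation); the 2-point mean difference and SD of 15 are hypothetical values chosen for illustration, while the group sizes 663 and 650 come from the example above:

```python
import math

def two_sample_z(mean_diff, sd, n1, n2):
    """Two-sample z statistic and two-sided p-value for a mean difference,
    assuming equal known standard deviations (a simplification)."""
    se = sd * math.sqrt(1 / n1 + 1 / n2)  # sample sizes sit in the denominator via the SE
    z = mean_diff / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided normal p-value
    return z, p

# The same hypothetical effect (2-point difference, SD = 15) at two sample sizes:
_, p_small = two_sample_z(2.0, 15.0, 30, 30)    # hypothetical small study
_, p_large = two_sample_z(2.0, 15.0, 663, 650)  # group sizes from the example above
print(f"small n: p = {p_small:.3f}; large n: p = {p_large:.3f}")
```

With identical effect size and variability, only the larger sample crosses the p < .05 threshold, which is exactly the denominator effect described above.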
If statistical significance is found in a study with no indication of the magnitude of the effect, confidence intervals (CIs) can provide a more direct analysis of the size and direction of the effect (Ranganathan, Pramesh & Buyse, 2015). CIs give a range of values within which we can be confident the true effect lies; however, they do not quantify the probability that the effect is real or the strength of the evidence (Ranganathan, Pramesh & Buyse, 2015). For instance, in determining whether differences exist between men and women on cultural competency scores, statistical significance can be assessed from the P-value, which tells us whether the null hypothesis can be rejected or not.
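The complementary information a CI provides can be sketched as follows. This assumes the same hypothetical numbers as before (a 2-point difference in competency scores, SD = 15) and uses the normal approximation; a CI for the difference that excludes zero corresponds to rejecting the null at the matching alpha level, but the interval's width also conveys how precisely the effect is estimated:

```python
import math

def diff_ci(mean_diff, sd, n1, n2, z_crit=1.96):
    """95% CI for a difference in means (normal approximation, equal SDs assumed)."""
    se = sd * math.sqrt(1 / n1 + 1 / n2)
    return mean_diff - z_crit * se, mean_diff + z_crit * se

# Hypothetical 2-point gap in cultural competency scores, n = 663 women / 650 men
lo, hi = diff_ci(2.0, 15.0, 663, 650)
excludes_zero = not (lo <= 0 <= hi)  # a 95% CI excluding 0 mirrors p < .05
print(f"95% CI: ({lo:.2f}, {hi:.2f}); excludes 0: {excludes_zero}")
```

Here the interval shows not just that the difference is significant but that its plausible size runs from well under one point to several points, which is the kind of trend information a bare P-value cannot convey.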
Ranganathan, P., Pramesh, C., & Buyse, M. (2015). Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals. Perspectives in Clinical Research, 6(2).
Wilkerson, M., & Olson, M. R. (1997). Misconceptions about sample size, statistical significance, and treatment effect. The Journal of Psychology, 131(6), 627-631.