Bus 308 Week 5 Lecture 1a: Different Views and Expected Outcomes


This lecture explores alternative approaches to interpreting data analysis outcomes, focusing on confidence intervals, the distinction between statistical and practical significance, and effect size measures. It emphasizes the limitations of traditional hypothesis testing and introduces confidence intervals as a means of understanding the range of plausible population parameters based on sample data. It also discusses how sample size influences the interpretation of significance and highlights the importance of effect size in practical decision-making.

Confidence intervals (CIs) are ranges derived from sample data that are likely to contain the true population parameter at a specified confidence level, such as 95%. An interval can be reported either as a range or as a point estimate (a mean or proportion) with a margin of error. For example, an opinion poll reporting 48% support with a margin of error of 3% indicates the true support likely falls between 45% and 51%. For a single mean or proportion, interpretation is straightforward: if the hypothesized value (such as a population mean of 1.00) falls within the interval, we do not reject the null hypothesis; if it falls outside, we reject it.
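As a sketch of how a single-mean interval is built and interpreted, the Python below computes the sample mean plus or minus a critical value times the standard error. The data and the hypothesized mean of 1.00 are illustrative, and the normal z value of 1.96 is a large-sample approximation; a t critical value would normally be used for a sample this small.

```python
import statistics

def mean_confidence_interval(sample, z=1.96):
    """Approximate 95% CI for a mean: sample mean +/- z * standard error.
    Using z = 1.96 assumes a large sample; small samples call for a t value."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - z * se, m + z * se

# Illustrative data: is the hypothesized population mean of 1.00 plausible?
sample = [0.95, 1.02, 0.98, 1.05, 0.97, 1.01, 0.99, 1.03]
low, high = mean_confidence_interval(sample)
inside = low <= 1.00 <= high  # True -> do not reject the null hypothesis
```

Here the interval straddles 1.00, so the hypothesized mean remains plausible and the null hypothesis is not rejected.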

Constructing a confidence interval for the difference between two means helps determine whether an observed difference is statistically significant. If the interval for the difference includes zero, the means could be equal, and we do not reject the null hypothesis of no difference; if zero falls outside the interval, the difference is significant. Comparing the separate intervals for the two means offers a rougher check: little or no overlap suggests the difference is likely significant, but this overlap rule is only a heuristic, and the interval for the difference itself is the more reliable test.
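A minimal sketch of the difference-of-means interval, using invented independent samples and the same large-sample z approximation; the decision rule is simply whether zero lies inside the interval.

```python
import statistics

def diff_confidence_interval(a, b, z=1.96):
    """Approximate 95% CI for mean(a) - mean(b) with independent samples
    (unpooled standard error; z = 1.96 stands in for the exact critical value)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return diff - z * se, diff + z * se

# Hypothetical scores for two independent groups.
group_a = [52, 55, 49, 60, 58, 53, 57, 51]
group_b = [45, 48, 44, 50, 47, 46, 49, 43]
d_low, d_high = diff_confidence_interval(group_a, group_b)
significant = not (d_low <= 0 <= d_high)  # zero outside -> reject "no difference"
```

For these samples the entire interval sits above zero, so the null hypothesis of equal means would be rejected.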

A critical issue is that large samples can yield statistically significant results even for trivial differences: with enough data, a test will detect minute differences that are meaningless in real-world contexts. To assess practical importance, we turn to effect size, which quantifies the magnitude of a difference or relationship and indicates whether a statistically significant finding is practically meaningful. Effect size values are conventionally interpreted as small, moderate, or large, corresponding to the strength of the effect.
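To see how sample size alone can manufacture significance, this hypothetical z-test sketch holds the difference and spread fixed and varies only n: the identical 0.1-unit difference is non-significant at n = 100 but significant at n = 100,000.

```python
from statistics import NormalDist

def z_test_p(diff, sd, n):
    """Two-sided p-value for testing a mean difference `diff` against zero,
    given standard deviation `sd` and sample size `n` (simple z test)."""
    z = diff / (sd / n ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# The same trivial 0.1-unit difference (sd = 10); only the sample size changes.
p_small_n = z_test_p(0.1, 10, 100)       # well above 0.05: not significant
p_large_n = z_test_p(0.1, 10, 100_000)   # below 0.05: "significant", yet trivial
```

Nothing about the underlying difference changed between the two calls; only the standard error shrank as n grew, which is exactly why significance alone cannot certify practical importance.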

For example, a significant difference in salaries between males and females with a small effect size indicates the difference may be statistically significant but not practically important, especially if the monetary difference is negligible in decision-making. Conversely, a large effect size indicates a substantial impact, warranting practical consideration. Effect sizes can be calculated using various measures depending on the statistical test performed, and their interpretation aids in understanding whether the observed differences merit consideration beyond statistical significance.
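One common effect-size measure for a two-group mean difference is Cohen's d (difference of means over the pooled standard deviation), with values around 0.2, 0.5, and 0.8 conventionally read as small, medium, and large (Cohen, 1988). The salary figures below are invented to mirror the lecture's point: a mean gap of roughly $200 against thousands of dollars of spread yields a negligible d, however significant a large-sample test might call it.

```python
import statistics

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * statistics.variance(a) +
                  (n2 - 1) * statistics.variance(b)) / (n1 + n2 - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Hypothetical salaries: a mean gap of about $200 against a spread of thousands.
males = [50_000, 62_000, 48_000, 71_000, 55_000, 67_000]
females = [49_800, 61_500, 48_200, 70_500, 55_200, 66_600]
d = cohens_d(males, females)
practically_trivial = abs(d) < 0.2  # below the conventional "small" threshold
```

Other tests pair with other measures (for example, r-family statistics for correlation), but the interpretive step is the same: translate the test result into a magnitude before acting on it.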

In summary, confidence intervals offer a useful way to interpret the uncertainty around sample estimates, helping decision-makers account for variability. They assist in evaluating the consistency and practical significance of results. Recognizing the influence of sample size prevents misinterpretation of statistical significance as a direct measure of real-world importance. Incorporating effect size measures ensures that the significance observed aligns with meaningful, actionable differences, promoting more informed and responsible decision-making in research and applied settings.
