Project 3 Instructions (Based on Larson & Farber, Sections 5.2–5.3)
Visit the specified website and click the "Download to Spreadsheet" link. Set the date range to exactly one year back from the dates provided in your instructor's Project 3 announcement. Assume the stock's closing prices form a normally distributed dataset. Do not manually count data points; use the methods outlined in sections 5.2–5.3. Complete this assignment within a single Excel file, showing your work where possible. Use the data to perform the following analyses:
- If a person bought 1 share of Google stock within the last year, what is the probability that the stock on that day closed at less than the mean for that year? Hint: Use the Empirical Rule; do not calculate the mean.
- If a person bought 1 share of Google stock within the last year, what is the probability that the stock on that day closed at more than $500? Hint: Use Excel to find the mean and standard deviation, then find the z-score.
- If a person bought 1 share of Google stock within the last year, what is the probability that the stock on that day closed within $45 of the mean for that year? Hint: Find two z-scores and use the Standard Normal Table.
- Suppose a person claimed to have bought Google stock within the last year at a closing price of $700 per share. Would such a price be considered unusual? Explain using the Empirical Rule, without finding the maximum or minimum values.
- At what closing prices would Google stock be considered statistically unusual? Determine low and high values using the Empirical Rule.
- Calculate the first quartile (Q1), median (Q2), and third quartile (Q3) for the dataset using Excel.
- Assess whether the normality assumption is valid based on the data. Construct a histogram and provide your reasoning.
Paper for the Above Instructions
The analysis of stock price data through statistical techniques provides valuable insights into market behavior and informs investment decisions. This paper explores various statistical methods applied to Google’s stock prices over a one-year period, assuming the data follow a normal distribution as per the assumptions outlined in Larson & Farber (sections 5.2–5.3). Using Excel, the dataset was obtained by downloading the historical closing prices within the specified date range, enabling the calculation of key statistics such as mean, standard deviation, quartiles, and the construction of a histogram to assess normality.
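Although the assignment itself is to be completed in Excel, the same descriptive statistics can be reproduced outside of it as a cross-check. Below is a minimal Python sketch; the file name GOOG.csv and the Close column are assumptions about the layout of the exported spreadsheet, not details given in the assignment.

```python
import pandas as pd

# Load the year of downloaded closing prices; "GOOG.csv" and the "Close"
# column are assumptions about the exported spreadsheet's layout.
prices = pd.read_csv("GOOG.csv")["Close"]

mean_price = prices.mean()       # Excel equivalent: =AVERAGE(range)
std_price = prices.std(ddof=1)   # Excel equivalent: =STDEV.S(range)

print(f"mean = {mean_price:.2f}, standard deviation = {std_price:.2f}")
```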
First, the probability that a randomly purchased share of Google stock during the last year closed at less than the mean follows from the symmetry of the normal curve: in a normal distribution, half of the data lies below the mean. The probability is therefore approximately 0.5, or 50%, with no direct computation of the mean required, which is why the hint points to the Empirical Rule's bell-shaped model rather than to a calculation.
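This symmetry claim is easy to verify numerically. The sketch below uses placeholder values for the mean and standard deviation (520 and 75), since the result holds for any normal distribution.

```python
from scipy.stats import norm

# For any normal distribution, the cumulative probability at the mean is
# exactly 0.5; the mean (520) and standard deviation (75) are placeholders.
print(norm.cdf(520, loc=520, scale=75))  # prints 0.5
```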
Next, to evaluate the likelihood that a share closed at more than $500, the dataset’s mean and standard deviation were calculated directly with Excel, followed by deriving the z-score for $500. The z-score quantifies how many standard deviations $500 is from the mean. Using the standard normal distribution table or Excel functions (such as NORM.S.DIST), the probability of exceeding $500 was obtained. This approach hinges on the assumption of normality to accurately interpret z-scores and cumulative probabilities, providing a statistical measure of how unusual the $500 closing price is within the dataset.
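As an illustration of this step, the sketch below computes the z-score and the upper-tail probability; the mean and standard deviation shown are placeholders to be replaced with the values Excel produces for the actual dataset.

```python
from scipy.stats import norm

mean_price, std_price = 520.0, 75.0   # placeholders; substitute your dataset's values
threshold = 500.0

z = (threshold - mean_price) / std_price   # Excel: =STANDARDIZE(500, mean, sd)
p_above = 1 - norm.cdf(z)                  # Excel: =1 - NORM.S.DIST(z, TRUE)

print(f"z = {z:.2f}, P(close > $500) = {p_above:.4f}")
```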
The third analysis considers the probability that the stock closes within $45 of the mean. This interval corresponds to a range between two z-scores, one positive and one negative, each obtained by dividing $45 by the standard deviation. Using Excel, the z-scores were computed, and the cumulative probability at the lower z-score was subtracted from the cumulative probability at the upper z-score to find the area between them, which is the likelihood that the closing price falls within this interval. This approach illustrates the use of the standard normal distribution in assessing typical price fluctuations.
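A minimal sketch of the same computation, again with placeholder statistics, is:

```python
from scipy.stats import norm

mean_price, std_price = 520.0, 75.0   # placeholders; substitute your dataset's values
half_width = 45.0

z = half_width / std_price                 # symmetric bounds: -z and +z around the mean
p_within = norm.cdf(z) - norm.cdf(-z)      # area between the two z-scores

print(f"P(close within $45 of the mean) = {p_within:.4f}")
```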
Furthermore, considering a hypothetical claim that the stock was purchased at a closing price of $700, we assess whether such a price is a statistical outlier. Following the convention in Larson & Farber, a data point more than two standard deviations from the mean, outside the middle 95% described by the Empirical Rule, is considered unusual. By calculating two standard deviations above the mean, we can determine whether $700 exceeds this threshold, indicating an unusual event.
In determining the prices that constitute statistically unusual outcomes, we again utilize the Empirical Rule. The two bounds are the mean minus two standard deviations (for unusually low prices) and the mean plus two standard deviations (for unusually high prices). Prices between these bounds define the "usual" range of closing prices given the dataset's characteristics.
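The sketch below, using the same placeholder statistics, computes both bounds and checks the hypothetical $700 claim from the preceding paragraph:

```python
mean_price, std_price = 520.0, 75.0   # placeholders; substitute your dataset's values

# Larson & Farber treat values more than two standard deviations from the
# mean as unusual (outside the middle ~95% under the Empirical Rule).
low_cutoff = mean_price - 2 * std_price
high_cutoff = mean_price + 2 * std_price

claimed_price = 700.0
is_unusual = not (low_cutoff <= claimed_price <= high_cutoff)
print(f"usual range: ${low_cutoff:.2f} to ${high_cutoff:.2f}; $700 unusual? {is_unusual}")
```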
To further characterize the dataset, the first quartile (Q1), median (Q2), and third quartile (Q3) were calculated in Excel. These measures provide insights into the data's spread and central tendency, illustrating the distribution's symmetry or skewness. The median, as the second quartile, indicates the midpoint of the data, while Q1 and Q3 mark the 25th and 75th percentiles, respectively.
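For reference, the quartiles that Excel's QUARTILE.INC function returns can be reproduced with NumPy's percentile function, which uses the same linear interpolation; the file and column names are assumed as before.

```python
import numpy as np
import pandas as pd

prices = pd.read_csv("GOOG.csv")["Close"]   # assumed file and column names

# np.percentile's default linear interpolation matches Excel's QUARTILE.INC.
q1, q2, q3 = np.percentile(prices, [25, 50, 75])
print(f"Q1 = {q1:.2f}, median = {q2:.2f}, Q3 = {q3:.2f}")
```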
Finally, the validity of the normality assumption was assessed by constructing a histogram of the stock prices. If the histogram exhibits a bell-shaped symmetric curve, the normality assumption is supported. However, deviations such as skewness or heavy tails suggest that the data may not strictly follow a normal distribution, prompting cautious interpretation of the statistical analyses.
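A histogram equivalent to Excel's can be sketched with matplotlib, under the same assumptions about the data file:

```python
import matplotlib.pyplot as plt
import pandas as pd

prices = pd.read_csv("GOOG.csv")["Close"]   # assumed file and column names

# A roughly symmetric, bell-shaped histogram supports the normality
# assumption; visible skew or heavy tails argue against it.
plt.hist(prices, bins=20, edgecolor="black")
plt.xlabel("Closing price ($)")
plt.ylabel("Frequency")
plt.title("Daily closing prices over one year")
plt.show()
```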
In conclusion, this analysis demonstrates the application of statistical principles, whether through the Empirical Rule, z-scores, quartiles, or visualizations, to understand stock price behavior and assess the probabilities of price movements. While the assumption of normality simplifies calculations, verifying this assumption with graphical methods such as histograms remains critical for rigorous statistical inference.
References
- Larson, R., & Farber, B. (2014). Elementary Statistics: Picturing the World (6th ed.). Pearson.
- Statsmodels Developers. (2020). Normal Distribution and Z-scores. Python StatsModels Documentation.
- Microsoft. (2023). Using Excel functions for normal distribution and descriptive statistics. Microsoft Support.
- Anderson, T. W., & Darling, D. A. (1952). Asymptotic theory of certain goodness of fit criteria based on stochastic processes. The Annals of Mathematical Statistics, 23(2), 193–212.
- Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3/4), 591–611.
- Salkind, N. J. (2010). Statistics for People Who (Think They) Hate Statistics. Sage Publications.
- Mooney, C. Z., & Duval, R. D. (1993). Bootstrapping: A Nonparametric Approach to Statistical Inference. Sage Publications.
- Weiss, N. (2005). Introductory Statistics. Pearson.
- Box, G. E. P., & Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society: Series B (Methodological), 26(2), 211–243.
- Sheskin, D. J. (2011). Handbook of Parametric and Nonparametric Statistical Procedures. Chapman and Hall/CRC.