BUSA 2185: BUSINESS RESEARCH Short Exercise 3: SPSS Reliability and Validity Measurement

Provide definitions and explanations of key concepts including reliability, validity, and construct validity, along with methods to measure them and their acceptable ranges. Additionally, perform exploratory factor analysis (EFA) on pre-test and post-test variables, presenting the rotated component matrices as specified.

Introduction

In business research, ensuring the quality and accuracy of measurements is crucial for deriving valid and reliable conclusions. Reliability and validity are foundational concepts that underpin the integrity of collected data, hence their importance in empirical investigations. This paper discusses these concepts, their measurement techniques, and the application of exploratory factor analysis (EFA) to study variables, exemplified through SPSS analyses.

Reliability in Business Research

Definition of Reliability and Its Importance

Reliability refers to the consistency and stability of a measurement instrument over time and across different conditions. It indicates the extent to which an instrument yields the same results upon repeated applications, assuming the construct being measured remains unchanged (Cronbach, 1951). Reliability is essential because it ensures that data collected are dependable and that observed variations are attributable to actual differences rather than measurement errors, thus strengthening the validity of research findings (Nunnally & Bernstein, 1994).

Measuring Reliability

The most common method for assessing reliability is internal consistency, often measured using Cronbach's alpha coefficient (Cronbach, 1951). Test-retest reliability, which involves administering the same instrument at two different points in time and correlating the results, is also utilized. Additionally, split-half reliability assesses consistency by dividing the items into two halves and correlating their scores (Guttman, 1945). However, Cronbach's alpha remains the most frequently reported measure, especially in survey research.
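
To make the computation concrete, the following Python sketch applies the standard formula for Cronbach's alpha, α = (k / (k − 1)) × (1 − Σ item variances / variance of the total score), to a small set of made-up item responses; the item names and values are purely illustrative and would be replaced by the actual survey data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from six participants on a three-item, 5-point Likert scale.
responses = pd.DataFrame({
    "Item1": [4, 5, 3, 4, 2, 5],
    "Item2": [4, 4, 3, 5, 2, 5],
    "Item3": [3, 5, 3, 4, 1, 4],
})
print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```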

Range for Reliability

  • Excellent: ≥ 0.90
  • Acceptable: 0.80 – 0.89
  • Questionable: 0.70 – 0.79
  • Poor: < 0.70

Validity in Business Research

Definitions of Convergent and Discriminant Validity

Convergent validity refers to the degree to which two measures of constructs that theoretically should be related are in fact related, indicating that they capture the same or similar constructs (Campbell & Fiske, 1959). Discriminant validity, by contrast, assesses whether constructs that should be unrelated are indeed empirically distinct, confirming that the measures reflect separate traits (Fornell & Larcker, 1981). Both are necessary to ensure that measurement instruments accurately capture the constructs of interest, leading to valid inferences (Hair et al., 2010).

Measuring Validity

Validity is often assessed through correlation analysis, factor analysis, and examination of the average variance extracted (AVE). Convergent validity is supported when AVE exceeds 0.50 (Fornell & Larcker, 1981), and discriminant validity is established when the square root of each construct's AVE exceeds its correlations with the other constructs. Content validity is typically evaluated qualitatively through expert review, ensuring the measure comprehensively covers the construct.

Range for Good Convergent Validity

  • AVE > 0.50: considered indicative of good convergent validity (Fornell & Larcker, 1981)
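
To illustrate how these thresholds are applied in practice, the brief Python sketch below computes AVE as the mean of the squared standardized loadings for each construct and checks the Fornell-Larcker criterion against an assumed inter-construct correlation; all loading values and the correlation used here are hypothetical.

```python
import numpy as np

def average_variance_extracted(loadings):
    """AVE: the mean of the squared standardized loadings of a construct's items."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings ** 2))

# Hypothetical standardized loadings for two constructs, A and B.
ave_a = average_variance_extracted([0.78, 0.81, 0.74])
ave_b = average_variance_extracted([0.70, 0.76, 0.83])
corr_ab = 0.42  # assumed correlation between constructs A and B

print(f"AVE(A) = {ave_a:.2f}, AVE(B) = {ave_b:.2f}")
print("Convergent validity (AVE > 0.50):", ave_a > 0.50 and ave_b > 0.50)
# Fornell-Larcker criterion: sqrt(AVE) of each construct should exceed the inter-construct correlation.
print("Discriminant validity:", np.sqrt(ave_a) > corr_ab and np.sqrt(ave_b) > corr_ab)
```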

Construct Validity and Exploratory Factor Analysis (EFA)

Construct validity concerns whether a measurement instrument actually captures the theoretical construct it is intended to measure. EFA is a statistical technique used to explore the underlying structure of a set of variables, helping to assess whether the data are consistent with the expected construct structure. In SPSS, conducting EFA involves selecting the relevant variables, choosing an extraction method such as Principal Component Analysis (PCA), and applying a rotation method such as Varimax to clarify the factor loadings.

Example Procedure: Pre-test Variables

Following the provided example, the analysis is run by selecting Analyze > Dimension Reduction > Factor. For the pre-test, the 11 variables (PreA1-PreA2, PreEA1-PreEA3, PreUS1-PreUS3, PreFU1-PreFU3) are entered, extraction is set to a fixed number of four factors, and Varimax rotation (with Kaiser normalization) is applied to enhance interpretability. The rotated component matrix presents the loading of each variable on the four factors, indicating the underlying constructs.
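
For readers who wish to approximate the same steps outside SPSS, the following Python sketch illustrates principal component extraction with a fixed four-factor solution followed by Varimax rotation. The data generated below are random placeholders with no real structure, so the printed loadings will not match the actual rotated component matrix; in an actual analysis the pre-test responses would be loaded in place of the simulated values.

```python
import numpy as np
import pandas as pd

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (standard SVD-based algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        new_criterion = s.sum()
        if new_criterion < criterion * (1 + tol):  # stop once the criterion no longer improves
            break
        criterion = new_criterion
    return loadings @ rotation

def pca_loadings(data: pd.DataFrame, n_factors: int) -> pd.DataFrame:
    """Principal-component loadings: eigenvectors of the correlation matrix scaled by sqrt(eigenvalues)."""
    corr = np.corrcoef(data.values, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    order = np.argsort(eigenvalues)[::-1][:n_factors]
    loadings = eigenvectors[:, order] * np.sqrt(eigenvalues[order])
    return pd.DataFrame(loadings, index=data.columns,
                        columns=[f"Factor{i + 1}" for i in range(n_factors)])

# Placeholder data: in practice, the real pre-test responses would be loaded here instead.
variables = ["PreA1", "PreA2", "PreEA1", "PreEA2", "PreEA3",
             "PreUS1", "PreUS2", "PreUS3", "PreFU1", "PreFU2", "PreFU3"]
rng = np.random.default_rng(42)
pretest = pd.DataFrame(rng.normal(size=(120, len(variables))), columns=variables)

unrotated = pca_loadings(pretest, n_factors=4)
rotated = pd.DataFrame(varimax(unrotated.values),
                       index=unrotated.index, columns=unrotated.columns)
print(rotated.round(3))  # analogous to the rotated component matrix reported by SPSS
```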

Results Interpretation

The coefficients in the rotated component matrix reveal the strength of each variable's association with the respective factors. Variables that load highly (above 0.7) on a single factor indicate clear associations with that construct. Similar procedures are performed for post-test variables, enabling comparison and validation of the measurement model. Presenting these matrices allows researchers to assess whether the data support the hypothesized factor structure, confirming construct validity.
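
As a small illustration of this interpretation step, the snippet below scans a rotated loading matrix and reports, for each variable, the factor on which it loads above the 0.7 cutoff; the variable names and loading values shown are made up for demonstration.

```python
import pandas as pd

# Hypothetical rotated loadings for three variables on two factors.
rotated = pd.DataFrame(
    {"Factor1": [0.82, 0.79, 0.12], "Factor2": [0.08, 0.15, 0.88]},
    index=["PreA1", "PreA2", "PreUS1"],
)

threshold = 0.7
for variable, row in rotated.iterrows():
    high = row[row.abs() > threshold]          # loadings above the interpretive cutoff
    if len(high) == 1:
        print(f"{variable}: loads cleanly on {high.index[0]} ({high.iloc[0]:.2f})")
    else:
        print(f"{variable}: no single clear loading above {threshold}")
```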

Conclusion

Ensuring measurement reliability and validity is fundamental in business research for producing accurate, consistent, and meaningful results. Methods such as Cronbach's alpha and AVE facilitate these assessments. EFA, particularly with SPSS, offers insights into the underlying construct structure, supporting construct validity. Properly conducted, these analyses reinforce the robustness of research findings, guiding meaningful decisions in business contexts.

References

  • Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.
  • Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
  • Guttman, L. (1945). A basis for analyzing test-retest reliability. Psychometrika, 10(4), 255–282.
  • Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate Data Analysis (7th ed.). Pearson.
  • Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill.
  • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105.
  • Furr, R. M. (2011). Scale construction and psychometrics. In R. Furr & P. Bacharach (Eds.), Psychology Research Methods. Sage.
  • Preacher, K. J. (2011). Latent variable modeling. Quantitative Methods in Psychology, 7(2), 283–304.
  • Tabachnick, B. G., & Fidell, L. S. (2013). Using Multivariate Statistics (6th ed.). Pearson.
  • Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31–36.