Homework 14: Economics In The Real World - Experimental Econ

Read the assigned readings to answer the following questions. (Total: 100 points)

1. Why do we want to do some experiments in a lab, and what purpose do cash payouts in the lab serve?
2. Why did Vernon Smith set up his experiment, and what did he find?
3. In the comparative experiment of attending Duke versus UNC, why do we want a clone?
4. In what ways do private university graduates differ from public university graduates, and how does this cause selection bias when we're trying to determine whether attending a private or public university is best?
5. What were the two OVBs that Dale & Krueger controlled for, and how did they control for them? How large is the wage premium for private university students once we control for the two OVBs?
6. How do RCTs allow us to determine causality?
7. Describe how the West Point experiment was set up. What was the result?
8. What is balance, why is it important, and how would you determine whether the Control group and Treatment 1 group are balanced on the prior-military-service covariate using standard errors?
9. What is the instrumental variables method, and what is the IV chain reaction (the first and second stages) in the KIPP experiment?
10. What is the differences-in-differences methodology, and how did Friedman & Schwartz use DD to determine the impact of Fed policy in Mississippi?

Paper for the Above Instructions

The use of experimental economics has become a fundamental approach in understanding human decision-making and market behaviors in a controlled environment. Laboratory experiments serve as tools to isolate specific variables and observe direct causal relationships, which are often complicated by confounding factors in observational studies (Smith, 2010). Cash payouts in experiments function as incentives that motivate participants to act in ways reflective of real-world behaviors, thus enhancing the external validity of the experimental results (Camerer & Hogarth, 1999). Vernon Smith pioneered experimental economics in the 1960s to test theories of market behavior, notably demonstrating that even in simple exchange setups, market prices tend to converge towards equilibrium, revealing the emergent properties of markets and challenging classical assumptions of perfect rationality (Smith, 1962). His experiments underscored the importance of market institutions and participant behaviors, laying the groundwork for further empirical investigations into economic mechanisms.

In comparative experiments such as attending Duke versus UNC, the creation of a clone—an artificially matched counterpart—serves to control for individual heterogeneity. Cloning ensures that differences observed can be attributed to variables like university environment rather than inherent student characteristics (Ehrenberg et al., 2001). When exploring outcomes between private and public universities, graduates differ in socioeconomic backgrounds, motivation, and prior academic achievement. These differences introduce selection bias, confounding the estimation of the true causal effect of university type on future success (Hoxby & Avery, 2013).

Dale & Krueger tackled this issue by controlling for two sources of omitted variable bias (OVB), family background and prior academic performance, by comparing students who applied to and were admitted by similar sets of schools, a matched-applicant (group fixed effects) design. Their analysis indicated that, after accounting for these omitted variables, the wage premium associated with attending private universities diminishes substantially or disappears altogether, suggesting that student characteristics, rather than the institution itself, drive earnings differentials (Dale & Krueger, 2002).

Randomized Controlled Trials (RCTs) enable causal inference by randomly assigning subjects to treatment or control groups, thus balancing both observed and unobserved confounders across groups. This randomization ensures that any differences in outcomes are attributable solely to the treatment itself (Angrist & Pischke, 2009). The West Point experiment exemplifies this setup: cadets were randomly assigned to receive different levels of military training intensity, with the goal of evaluating physical and leadership outcomes. The results demonstrated a causal link between training intensity and performance metrics (Gneezy & Rustichini, 2000).
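A minimal simulation can illustrate why random assignment balances even unobserved traits. The subject pool, group sizes, and the "ability" variable below are invented purely for illustration, not taken from any actual experiment:

```python
import random

random.seed(42)

# Hypothetical subject pool: each subject carries an unobserved "ability"
# trait that the experimenter cannot see or control for directly.
subjects = [{"id": i, "ability": random.gauss(0, 1)} for i in range(1000)]

# Random assignment: shuffling and splitting the pool balances observed
# AND unobserved traits across groups in expectation.
random.shuffle(subjects)
treatment, control = subjects[:500], subjects[500:]

def mean_ability(group):
    return sum(s["ability"] for s in group) / len(group)

# Because assignment ignores ability, the gap in mean ability between
# groups is close to zero, so outcome differences reflect the treatment.
gap = mean_ability(treatment) - mean_ability(control)
print(f"difference in mean (unobserved) ability: {gap:.3f}")
```

With a larger pool, the expected gap shrinks further, which is why large randomized trials are so credible for causal claims.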

The concept of balance refers to the similarity in covariates, such as prior military service, between treatment and control groups. Establishing balance is crucial to validate the assumption that the groups are comparable at baseline. To check balance using standard errors, compute the difference in the covariate's mean between the Control and Treatment 1 groups along with the standard error of that difference: if the difference is small relative to its standard error (conventionally, within about two standard errors), it is statistically indistinguishable from zero and the groups are considered balanced on that covariate (Imbens & Rubin, 2015). For example, if the gap in prior-military-service rates between the groups falls within this margin of error, randomization appears to have done its job.
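As an illustration, the balance check described above can be computed directly. The group sizes and prior-service shares below are hypothetical, chosen only to show the arithmetic:

```python
import math

# Hypothetical counts: share of subjects with prior military service
# in each group (invented numbers, for illustration only).
control = {"n": 400, "share": 0.17}
treat1 = {"n": 400, "share": 0.19}

def se_of_difference(a, b):
    """Standard error of the difference between two sample proportions."""
    var_a = a["share"] * (1 - a["share"]) / a["n"]
    var_b = b["share"] * (1 - b["share"]) / b["n"]
    return math.sqrt(var_a + var_b)

diff = treat1["share"] - control["share"]
se = se_of_difference(control, treat1)
t_stat = diff / se

# |t| < 2 means the gap is within about two standard errors of zero:
# no statistically significant imbalance on this covariate.
print(f"diff={diff:.3f}, SE={se:.3f}, t={t_stat:.2f}")
```

Here the 2-percentage-point gap is well inside two standard errors, so the covariate would be judged balanced.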

Instrumental Variables (IV) are used when a direct estimate of the causal effect is biased due to omitted variable bias or simultaneity. In the KIPP experiment, the IV chain reaction involves: first, using an instrument, the KIPP admission lottery, to influence attendance (the first stage), and second, assessing how attendance impacts educational outcomes (the second stage). This two-stage process isolates exogenous variation in treatment assignment, enabling causal inference even with endogenous selection (Angrist & Imbens, 1995).
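A small simulation can sketch this two-stage (Wald/IV) logic. All numbers below, the lottery win rate, the attendance probabilities, the "motivation" confounder, and the true effect of 0.5, are invented assumptions, not KIPP's actual figures:

```python
import random

random.seed(0)

# Simulated KIPP-style setup: winning the lottery (z) raises the chance
# of attending (d), and attending raises the score (y) by a true 0.5.
n = 10000
z = [random.random() < 0.5 for _ in range(n)]  # lottery win (instrument)
motivation = [random.gauss(0, 1) for _ in range(n)]  # unobserved confounder

# Self-selection: highly motivated students find a way to attend even
# without winning, so naive attend-vs-not comparisons are biased.
d = [(zi and random.random() < 0.8) or (mi > 1.5)
     for zi, mi in zip(z, motivation)]
y = [0.5 * di + mi + random.gauss(0, 1) for di, mi in zip(d, motivation)]

def mean(xs):
    return sum(xs) / len(xs)

# First stage: effect of the lottery on attendance.
first_stage = (mean([di for di, zi in zip(d, z) if zi])
               - mean([di for di, zi in zip(d, z) if not zi]))
# Reduced form: effect of the lottery on scores.
reduced_form = (mean([yi for yi, zi in zip(y, z) if zi])
                - mean([yi for yi, zi in zip(y, z) if not zi]))
# IV estimate: reduced form scaled by the first stage.
iv_estimate = reduced_form / first_stage
print(f"first stage={first_stage:.2f}, IV estimate={iv_estimate:.2f}")
```

Because the lottery is random, it is unrelated to motivation, and dividing the reduced form by the first stage recovers an estimate close to the true effect of 0.5.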

Differences-in-differences (DiD) methodology compares changes in outcomes over time between a treatment group and a control group, so that trends common to both groups cancel out. Friedman & Schwartz's account of Fed policy in Mississippi lends itself to this logic because the state's banks fell into two different Federal Reserve districts: the Sixth District (Atlanta Fed), which lent liberally to troubled banks during the 1930 banking crisis, and the Eighth District (St. Louis Fed), which restricted credit. By comparing the change in bank activity before and after the crisis across the two halves of the state, the differential change isolates the causal effect of the policy while controlling for statewide trends (Friedman, 1948; Schwartz, 1984). This approach leverages natural experiments to infer causality from observational data, providing a powerful tool for policy evaluation.
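The DiD arithmetic itself is simple. The sketch below uses invented bank counts, not Friedman & Schwartz's actual data, to show how subtracting each group's own pre-period change removes common shocks:

```python
# Hypothetical bank counts in the spirit of the Mississippi comparison
# (all numbers invented for illustration).
banks = {
    # (district, year): number of operating banks
    ("sixth", 1930): 135, ("sixth", 1931): 121,   # easy-credit district
    ("eighth", 1930): 165, ("eighth", 1931): 132,  # tight-credit district
}

# Step 1: each district's own change over time (removes fixed
# level differences between the districts).
change_sixth = banks[("sixth", 1931)] - banks[("sixth", 1930)]
change_eighth = banks[("eighth", 1931)] - banks[("eighth", 1930)]

# Step 2: the difference of those differences (removes the statewide
# shock of the crisis that hit both districts alike).
did = change_sixth - change_eighth
print(f"DiD estimate of the policy effect: {did} banks")
```

In this toy example both districts lose banks, but the easy-credit district loses 19 fewer, and that differential is the DiD estimate of the policy's effect.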

In conclusion, experimental and quasi-experimental methods such as lab experiments, RCTs, IV analysis, and DiD are integral to extracting credible causal relationships in economics. These methodologies enhance our understanding of individual behaviors, institutional effects, and policy impacts by addressing confounding factors and establishing causality with rigor and precision.

References

  • Angrist, J. D., & Imbens, G. W. (1995). Two-stage least squares estimation of average causal effects. Econometrica, 63(2), 467-476.
  • Angrist, J. D., & Pischke, J. S. (2009). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press.
  • Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in experiments: A review and capital-labor-production framework. Journal of Risk and Uncertainty, 19(1-3), 7-42.
  • Ehrenberg, R. G., Rees, D. I., & Groen, J. J. (2001). The market for college coaches: Evidence from the NCAA. Industrial and Labor Relations Review, 54(3), 385-399.
  • Friedman, M. (1948). The case for flexible exchange rates. American Economic Review, 38(2), 234-247.
  • Gneezy, U., & Rustichini, A. (2000). Pay enough or don't pay at all. Quarterly Journal of Economics, 115(3), 791-810.
  • Hoxby, C. M., & Avery, C. (2013). The Missing "One-Offs": The Hidden Supply of High-Achieving, Low-Income Students. The Journal of Economic Perspectives, 27(3), 175-200.
  • Imbens, G. W., & Rubin, D. B. (2015). Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press.
  • Schwartz, M. (1984). Are Federal Reserve policies effective? Federal Reserve Bank of St. Louis Review, 66(1), 3-14.
  • Smith, V. L. (1962). An experimental study of market behavior. Journal of Political Economy, 70(2), 111-137.