Prof Chow Ying Foon, PhD, Room 1215 Cheng Yu Tung Building

Assignment Instructions:

This assignment aims to develop an appreciation for the role of mathematics and statistics in finance, illustrating that while finance professors may be intellectually formidable, they are not necessarily wealthy. Students are required to answer multiple questions covering Taylor series expansions, optimization in multivariate settings, matrix decompositions and eigen-analysis, probability distributions, moments, and statistical inference. All calculations may be performed in programming environments such as EViews, MATLAB, R, Python, or SAS; the code should be well commented, with results and graphs embedded in a formal document. The assignment is worth a total of 300 points, and completion involves detailed derivations, computations, and interpretations.

The questions include: expanding functions with Taylor series; maximizing a quadratic form; performing Cholesky decomposition and solving linear systems; computing eigenvalues/eigenvectors; analyzing probability distributions with plotting and moments calculations; deriving density functions, probabilities, and quantiles; demonstrating properties of Gaussian and log-normal variables; assessing tail behaviors; calculating Value-at-Risk; and showing relationships between moments and covariance. Emphasis is placed on clarity, accuracy, and thorough justification of each step, aligning with academic standards for mathematical finance coursework.

Sample Paper for the Above Instructions

1. Taylor Series Expansion of Functions at Zero

a. Expansion of functions

We are asked to expand two functions, f(λ) = e^λ and g(λ) = ln(1 + λ), around λ = 0 using Taylor series. Starting with the derivatives:

  • For f(λ) = e^λ:
    • f(0) = 1
    • f'(λ) = e^λ; at λ=0, f'(0) = 1
    • f''(λ) = e^λ; at λ=0, f''(0) = 1
  • For g(λ) = ln(1 + λ):
    • g(0) = 0
    • g'(λ) = 1 / (1 + λ), so g'(0) = 1
    • g''(λ) = -1 / (1 + λ)^2, so g''(0) = -1

The Taylor expansions around λ=0 up to second order:

  • f(λ) ≈ 1 + λ + (λ^2)/2
  • g(λ) ≈ 0 + λ - (λ^2)/2

b. Function evaluations and Taylor approximations at specific points

Exact values of the functions at λ = 1, 0.1, 0.01:

  • f(1) = e^1 ≈ 2.71828
  • f(0.1) = e^0.1 ≈ 1.10517
  • f(0.01) = e^0.01 ≈ 1.01005
  • g(1) = ln(2) ≈ 0.693147
  • g(0.1) = ln(1.1) ≈ 0.09531
  • g(0.01) = ln(1.01) ≈ 0.00995

Compute Taylor series approximations (up to second order):

  • f(λ):
    • At λ=1: ≈ 1 + 1 + 0.5 = 2.5 (Error ≈ 0.21828)
    • At λ=0.1: ≈ 1 + 0.1 + 0.005 = 1.105 (Error ≈ 0.00017)
    • At λ=0.01: ≈ 1 + 0.01 + 0.00005 = 1.01005 (Error ≈ 0.0000002)
  • g(λ):
    • At λ=1: ≈ 0 + 1 - 0.5 = 0.5 (Error ≈ 0.193147)
    • At λ=0.1: ≈ 0 + 0.1 - 0.005 = 0.095 (Error ≈ 0.00031)
    • At λ=0.01: ≈ 0 + 0.01 - 0.00005 = 0.00995 (Error negligible)

Note the improvements in approximations when including second-order terms, especially at small λ. The errors decrease notably for smaller λ, demonstrating the local accuracy of Taylor expansions.
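
The calculations above can be reproduced with a short script; the following Python sketch (using only the standard-library math module) computes the second-order approximations and their errors.

    import math

    def taylor_exp(lam):
        # Second-order Taylor expansion of e^lambda around 0
        return 1 + lam + lam**2 / 2

    def taylor_log1p(lam):
        # Second-order Taylor expansion of ln(1 + lambda) around 0
        return lam - lam**2 / 2

    for lam in (1.0, 0.1, 0.01):
        exact_f, approx_f = math.exp(lam), taylor_exp(lam)
        exact_g, approx_g = math.log1p(lam), taylor_log1p(lam)
        print(f"lambda={lam}: e^l exact={exact_f:.6f} approx={approx_f:.6f} "
              f"err={exact_f - approx_f:.7f}")
        print(f"lambda={lam}: ln(1+l) exact={exact_g:.6f} approx={approx_g:.6f} "
              f"err={exact_g - approx_g:.7f}")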

2. Optimization of a Quadratic Function Based on a Given Vector

Given a parameter vector (λ₁, λ₂, λ₃), find (μ₁, μ₂, μ₃) that maximize the function:

H(μ₁, μ₂, μ₃) = 0.5 μᵀY, where Y depends on (μ₁, μ₂, μ₃), so that H is a quadratic form in μ.

Note: To maximize such a quadratic, compute the gradient ∇H and set it to zero to obtain the first-order conditions, solve them, and confirm the maximum by verifying that the Hessian matrix is negative definite (the second-derivative test).
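
As a hedged illustration (the question's actual data are not reproduced here), suppose the objective takes the form H(μ) = λᵀμ − 0.5 μᵀQμ with Q symmetric positive definite; then ∇H = λ − Qμ = 0 gives μ* = Q⁻¹λ, and the Hessian −Q is negative definite. The numbers in the sketch below are placeholders, not the assignment's data.

    import numpy as np

    # Placeholder data: Q must be symmetric positive definite for a unique maximum
    Q = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.5, 0.3],
                  [0.0, 0.3, 1.0]])
    lam = np.array([1.0, 0.5, -0.2])

    # First-order condition: gradient = lam - Q @ mu = 0  =>  mu* = Q^{-1} lam
    mu_star = np.linalg.solve(Q, lam)

    # Second-order condition: Hessian = -Q must be negative definite,
    # i.e. all eigenvalues of Q are positive
    hessian_ok = np.all(np.linalg.eigvalsh(Q) > 0)

    print("maximizer mu*:", mu_star)
    print("Hessian negative definite:", hessian_ok)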

3. Cholesky Decomposition and Linear System Solution

a. Cholesky decomposition of matrix M

The given matrix M is symmetric and positive definite (assumed for this step). The Cholesky decomposition finds a lower triangular matrix L such that M = L Lᵀ.

Step-by-step, the entries of L are computed using the formulas:

  • L_{j,j} = sqrt(M_{j,j} - sum_{k=1}^{j-1} L_{j,k}^2)
  • L_{i,j} = (M_{i,j} - sum_{k=1}^{j-1} L_{i,k} * L_{j,k}) / L_{j,j} for i > j

and L_{i,j} = 0 for j > i.

Applying these formulas to M yields L explicitly; the system M x = b is then solved by forward substitution (L y = b) followed by back substitution (Lᵀ x = y), giving x = M^{-1} b.
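
The matrix M from the question is not reproduced here, so the sketch below uses a placeholder symmetric positive definite matrix; it computes L from the formulas above and solves M x = b by the two triangular systems.

    import numpy as np

    def cholesky(M):
        # Lower-triangular L with M = L @ L.T, computed column by column
        n = M.shape[0]
        L = np.zeros_like(M, dtype=float)
        for j in range(n):
            L[j, j] = np.sqrt(M[j, j] - np.dot(L[j, :j], L[j, :j]))
            for i in range(j + 1, n):
                L[i, j] = (M[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
        return L

    # Placeholder data (not the assignment's M and b)
    M = np.array([[4.0, 2.0, 0.0],
                  [2.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])

    L = cholesky(M)
    y = np.linalg.solve(L, b)      # L y = b (forward-substitution step)
    x = np.linalg.solve(L.T, y)    # L^T x = y (back-substitution step)

    print("L =\n", L)
    print("x =", x, " check M @ x =", M @ x)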

b. Eigenvalues and eigenvectors of M

Eigenvalues are obtained by solving det(M - λI) = 0; eigenvectors are found by solving (M - λI)v = 0 and normalizing to unit length. In practice this is done algebraically for small matrices or numerically, for example via the QR algorithm or power iteration.

The matrix is positive definite if and only if all its eigenvalues are positive, and because M is symmetric its eigenvectors are orthogonal.
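
Continuing with the same placeholder matrix, the eigenvalues and orthonormal eigenvectors of a symmetric matrix can be obtained with numpy.linalg.eigh, and positive definiteness checked from the sign of the eigenvalues.

    import numpy as np

    # Placeholder matrix, not the assignment's M
    M = np.array([[4.0, 2.0, 0.0],
                  [2.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    eigenvalues, eigenvectors = np.linalg.eigh(M)  # eigh exploits symmetry
    print("eigenvalues:", eigenvalues)
    print("eigenvectors (columns):\n", eigenvectors)
    print("positive definite:", np.all(eigenvalues > 0))
    # Orthogonality check: V^T V should be the identity matrix
    print("V^T V =\n", eigenvectors.T @ eigenvectors)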

4. Probability Distributions and Moments of Random Variables

a. Distribution plots

For each random variable, plotting the probability mass or density function helps visualize differences. For example, discrete states with probabilities give bar charts, while continuous densities are shown as curves. Similarities emerge when distributions overlap significantly, such as between certain states, while differences reflect varying tail behaviors and skewness.

b. Computing moments without built-in functions

The mean: μ = Σ x_i p_i; variance: σ^2 = Σ (x_i - μ)^2 p_i; skewness: γ_1 = E[(X - μ)^3] / σ^3; excess kurtosis: γ_2 = E[(X - μ)^4] / σ^4 - 3.

Applying these formulas with the given probabilities yields insights into the asymmetry and tail behavior of each state's distribution. For example, the presence of heavy tails or skewness indicates potential risk factors.
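
A minimal sketch, assuming a hypothetical discrete distribution (the states and probabilities below are illustrative, not the assignment's data), computes all four moments directly from the definitions above without built-in statistics functions.

    # Hypothetical states and probabilities (illustrative only)
    x = [-0.10, 0.00, 0.05, 0.20]
    p = [0.10, 0.40, 0.35, 0.15]

    mean = sum(xi * pi for xi, pi in zip(x, p))
    var = sum((xi - mean) ** 2 * pi for xi, pi in zip(x, p))
    sd = var ** 0.5
    skew = sum((xi - mean) ** 3 * pi for xi, pi in zip(x, p)) / sd ** 3
    excess_kurt = sum((xi - mean) ** 4 * pi for xi, pi in zip(x, p)) / sd ** 4 - 3

    print(f"mean={mean:.4f} variance={var:.6f} "
          f"skewness={skew:.4f} excess kurtosis={excess_kurt:.4f}")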

5. Density Function, Mean, Median, and Probability Computations

a. Validity of the density function

The density function φ(λ; μ) = μ λ^2 (1-λ)^3 is non-negative on [0,1] for μ > 0 and is a valid density only if it integrates to 1 over [0,1]. Since ∫0^1 λ^2 (1-λ)^3 dλ = B(3,4) = 2!·3!/6! = 1/60, the normalizing constant is μ = 60, and the distribution is Beta(3, 4).

b. Mean and median

The mean is E[λ] = ∫0^1 λ φ(λ; μ) dλ, which for the Beta(3, 4) case equals 3/7 ≈ 0.4286; the median m solves ∫0^m φ(λ; μ) dλ = 0.5.

c. Probabilities

Pr(0.25 ≤ λ ≤ 0.75) = ∫0.25^0.75 φ(λ; μ) dλ.
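
Because the density above is the Beta(3, 4) density once μ = 60, the mean, median, and interval probability can be checked numerically; the sketch below uses scipy.stats.beta.

    from scipy.stats import beta

    dist = beta(3, 4)  # phi(lambda; 60) = 60 * lambda^2 * (1 - lambda)^3 on [0, 1]

    print("mean   :", dist.mean())      # analytically 3/7 ≈ 0.4286
    print("median :", dist.ppf(0.5))    # solves F(m) = 0.5
    print("Pr(0.25 <= lambda <= 0.75):", dist.cdf(0.75) - dist.cdf(0.25))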

6. Similar distribution properties for a different density

The analysis involves solving for μ, calculating the moments, finding the median, and maximizing the variance via derivative calculus or numerical methods.

7. Normal Distribution Quantiles and Probabilities

  • Using standard normal tables or software: for a normal variable X with mean μ and standard deviation σ, Pr(X ≥ 0.10) = 1 - Φ((0.10 - μ) / σ); with μ = 0 and σ = 0.10, for example, this equals 1 - Φ(1).
  • Quantiles are obtained via the inverse CDF: q_p = μ + σ Φ^{-1}(p) (checked numerically in the sketch below).
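
A short check with scipy.stats.norm, using the illustrative parameters μ = 0 and σ = 0.10 (swap in the values from the question).

    from scipy.stats import norm

    mu, sigma = 0.0, 0.10   # illustrative parameters, not the assignment's
    print("Pr(X >= 0.10):", 1 - norm.cdf(0.10, loc=mu, scale=sigma))  # = 1 - Phi(1)
    for prob in (0.01, 0.05, 0.95, 0.99):
        # q_p = mu + sigma * Phi^{-1}(p)
        print(f"{prob:.0%} quantile:", norm.ppf(prob, loc=mu, scale=sigma))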

8. Properties of a Density with Bounded Integrals

Standard density conditions: the function is non-negative and its total integral over ℝ is 1. If the integrals of |x| times the density over (−∞, 0) and (0, ∞) are unbounded, the variable has no finite expectation (the Cauchy distribution is the classic example), illustrating the importance of tail behavior.

9. Expectations with Normal Variables and Function Derivatives

Using properties of Gaussian variables, expectations involving derivatives or transformations can be derived by employing integration by parts, Stein's lemma, and standard identities.
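
One such identity is Stein's lemma: for X ~ N(μ, σ²) and a differentiable g with suitable integrability, E[g(X)(X − μ)] = σ² E[g′(X)]. The sketch below checks it by Monte Carlo for the test function g(x) = x³ (the parameters are illustrative).

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.5, 2.0                     # illustrative parameters
    x = rng.normal(mu, sigma, size=1_000_000)

    g = lambda t: t ** 3                     # test function
    g_prime = lambda t: 3 * t ** 2

    lhs = np.mean(g(x) * (x - mu))           # E[g(X)(X - mu)]
    rhs = sigma ** 2 * np.mean(g_prime(x))   # sigma^2 * E[g'(X)]
    print(f"lhs={lhs:.4f}  rhs={rhs:.4f}")   # should agree up to Monte Carlo error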

10. Products and Sums of Independent Normal Variables

Expectations and variances of products follow from moments: for independent normals, E(XY) = E(X)E(Y) and Var(XY) = Var(X)Var(Y) + Var(X)E(Y)^2 + Var(Y)E(X)^2; the product itself is not normally distributed, but its distribution can be characterized or approximated.
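
The sketch below compares these exact moment formulas with simulated values for two independent normals (the parameters are illustrative).

    import numpy as np

    rng = np.random.default_rng(1)
    mx, sx, my, sy = 1.0, 0.5, -0.3, 0.8     # illustrative parameters
    x = rng.normal(mx, sx, size=1_000_000)
    y = rng.normal(my, sy, size=1_000_000)

    # Exact formulas for independent X and Y
    var_exact = sx**2 * sy**2 + sx**2 * my**2 + sy**2 * mx**2
    print("E(XY)  : exact", mx * my, " simulated", np.mean(x * y))
    print("Var(XY): exact", var_exact, " simulated", np.var(x * y))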

11. Kurtosis in Mixture Distributions

Calculations involve moments of components weighted by mixture proportions; kurtosis can be arbitrarily large, demonstrating the heavy-tail behavior achievable with mixtures.
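
A minimal sketch for a two-component zero-mean normal mixture: with weight p on N(0, σ1²) and 1 − p on N(0, σ2²), the even moments are m2 = p σ1² + (1 − p) σ2² and m4 = 3[p σ1⁴ + (1 − p) σ2⁴], so the kurtosis m4/m2² exceeds 3 whenever σ1² ≠ σ2² and grows without bound as the variance ratio increases (the weights and variances below are illustrative).

    def mixture_kurtosis(p, s1, s2):
        # Kurtosis of the zero-mean mixture p*N(0, s1^2) + (1-p)*N(0, s2^2)
        m2 = p * s1**2 + (1 - p) * s2**2
        m4 = 3 * (p * s1**4 + (1 - p) * s2**4)
        return m4 / m2**2

    for s2 in (1, 2, 5, 10, 50):
        print(f"sigma2={s2:>3}: kurtosis={mixture_kurtosis(0.95, 1.0, s2):.2f}")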

12. Conditional Distributions of Normal Variables

The conditional distribution of a Gaussian, and the properties of truncated or conditioned normal variables, are expressed via standard formulas; probability densities conditioned on inequalities involve ratios of densities and normal CDFs.
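
A minimal sketch for one standard case: if X ~ N(μ, σ²) and α = (a − μ)/σ, then E[X | X > a] = μ + σ φ(α)/(1 − Φ(α)), which the code checks against a Monte Carlo estimate (the parameters are illustrative).

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    mu, sigma, a = 0.0, 1.0, 0.5          # illustrative parameters

    alpha = (a - mu) / sigma
    analytic = mu + sigma * norm.pdf(alpha) / (1 - norm.cdf(alpha))

    x = rng.normal(mu, sigma, size=2_000_000)
    simulated = x[x > a].mean()           # conditional mean from the samples

    print("E[X | X > a]: analytic", analytic, " simulated", simulated)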

13. Joint Distributions and Dependence

Marginal, joint, and conditional distributions are derived; tail heaviness and dependence measures such as covariance and correlation are computed and interpreted.

14. Variance and Covariance Transformations

Relations like Var(aX + bY) = a²Var(X) + b²Var(Y) + 2abCov(X,Y), and covariance identities, are shown algebraically; independence simplifies covariance to zero.

15. Covariance and Independence

Covariances are bounded by the Cauchy-Schwarz inequality, Cov(X, Y)^2 ≤ Var(X) Var(Y); independence implies zero covariance between the variables (and, when the moments exist, between their squares), but zero covariance does not imply independence.

16. Probability of Real Roots in Quadratic System

Given normally distributed coefficients, the probability that a quadratic equation has real roots is the probability that its discriminant is non-negative; its limit as the variance grows to infinity is derived via asymptotic approximations.

17. Log-normal and Bivariate Normality

Properties of log-normal variables are connected to their normal counterparts; correlation bounds arise from the joint normality assumptions.

18. Independence, Correlation, and Gaussianity

Constructions illustrate that zero correlation does not imply independence: explicit counterexamples use variables that are marginally Gaussian but not jointly Gaussian, and dependence measures are computed to confirm the dependence.

19. Derivatives and Distributions of Normal and Log-normal Variables

Derivatives with respect to parameters yield density functions; transformations of normal variables to log-normal are derived; joint and marginal properties are analyzed, showing constraints on correlation.

20. Products and Sums of Random Variables

Moments, expectations, and variances computed for sums and products of independent variables, applying properties of normal and log-normal variables, with emphasis on the shape and tail behavior.

21. Kurtosis in Mixtures and Tail Behavior

Mixture models demonstrate that kurtosis can be arbitrarily large; formulas relating mixture weights and component kurtoses are developed to illustrate the heavy-tailed behavior mixtures can produce.

22. Conditional Distributions and Tail Probabilities

Conditional distributions derived for Gaussian variables; explicit expressions for PDFs and probabilities are provided, illustrating tail behavior and the influence of conditioning.

23. Dependence Measures and Tail Behavior

Measuring dependence through covariance, correlation, and tail risk calculations reveals nuances in joint behaviors of variables, with implications for risk management.

24. Variance and Covariance Algebra

Algebraic identities for variances and covariances involving linear combinations are proven, demonstrating how these measures are affected by independence and parameter choices.

25. Distribution Approximation and Simulation

Simulating sequences of Bernoulli trials to approximate the mean of a geometric distribution is explained, emphasizing practical stochastic simulation techniques (see the sketch below).
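
A minimal sketch, assuming an illustrative success probability p: each run flips Bernoulli(p) trials until the first success, and the average number of trials approximates the geometric mean 1/p.

    import numpy as np

    rng = np.random.default_rng(3)
    p, n_runs = 0.2, 100_000        # illustrative success probability and run count

    def trials_until_success(rng, p):
        # Count Bernoulli(p) trials up to and including the first success
        count = 1
        while rng.random() >= p:
            count += 1
        return count

    draws = [trials_until_success(rng, p) for _ in range(n_runs)]
    print("simulated mean:", np.mean(draws), " theoretical 1/p:", 1 / p)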

26. Comparing Distribution Tails and Risk Measures

Distribution tail behaviors are compared via their CDFs and PDFs; implications for Value-at-Risk calculations are discussed, emphasizing the importance of tail properties in risk management.
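
As an illustration of how tail behavior drives VaR, the sketch below compares the 99% Value-at-Risk of a normal and a Student-t return distribution calibrated to the same standard deviation (the return parameters are illustrative, not the assignment's data).

    import numpy as np
    from scipy.stats import norm, t

    mu, sigma, nu, alpha = 0.0, 0.02, 4, 0.99   # illustrative daily-return parameters

    # VaR at level alpha = loss exceeded with probability 1 - alpha
    var_normal = -(mu + sigma * norm.ppf(1 - alpha))
    # Scale the t so its standard deviation also equals sigma: sd = scale*sqrt(nu/(nu-2))
    scale = sigma / np.sqrt(nu / (nu - 2))
    var_t = -(mu + scale * t.ppf(1 - alpha, df=nu))

    print(f"99% VaR normal        : {var_normal:.4f}")
    print(f"99% VaR Student-t(df={nu}): {var_t:.4f}  (heavier tail -> larger VaR)")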