Question 1 – Face recognition
Perform a face identification task. For training, you are given 4 facial images of a person DG and 4 facial images of a person KS. For testing, you are given a facial image X that is known to be of either DG or KS. The task is to determine the likelihood ratio P(X|DG) / P(X|KS), decide on that basis which of the two persons is depicted by X, and state the level of confidence in the decision. Use PCA on the 8 training images to compute a 7-dimensional subspace, project the training vectors, build a Gaussian model for each person from the projected vectors, compute the PDF of the test projection under each model, compute the likelihood ratio and log10-likelihood ratio, and decide between DG and KS assuming equal priors and equal costs. Follow steps a)–h):
a) Convert each 70x50 training image to a 3500x1 double vector; print first 5 components of each transposed vector.
b) Compute the 3500x1 mean vector and 3500x3500 covariance matrix of the 8 training vectors, compute the first 7 principal axes; print first 5 components of mean and top-left 5x5 of covariance.
c) Project training vectors onto the 7 principal axes; print all components of each transposed projected vector.
d) From the 4 DG projected vectors compute 7x1 DG mean and 7x7 DG covariance and diagonalise the covariance; same for KS; print all components of the two transposed mean vectors and diagonals.
e) Convert the test image to 3500x1 double vector; print first 5 components of transposed vector.
f) Project the test vector onto the principal axes; print all components of the transposed projected vector.
g) For each model compute the multivariate Gaussian PDF that the test projection was produced by that model and calculate the likelihood ratio and log10-likelihood ratio; print PDFs and ratios.
h) Assuming equal priors and equal costs, state whether evidence supports DG or KS and the confidence level, using the likelihood ratio to justify the conclusion.
Paper For Above Instructions
Overview
This paper explains a complete, reproducible approach to solving the face-recognition task using principal component analysis (PCA), Gaussian modelling in the reduced subspace, and likelihood-ratio decision-making. The method follows the provided stepwise assignment: image vectorisation, PCA to a 7-dimensional subspace, projection of training and test images, per-person Gaussian model estimation (mean and diagonal covariance), computation of multivariate normal PDFs for the test projection, calculation of likelihood ratio and log10-likelihood ratio, and a decision under equal priors and costs. Implementation notes use MATLAB idioms (reshape, double, mean, cov, eigs, mvnpdf), but the mathematical steps and decision rules are general.
Data preparation and vectorisation (Steps a & e)
Each grayscale face image is 70×50 pixels. Convert an image I (70×50 uint8) to a 3500×1 double vector x by stacking columns and converting type: x = double(reshape(I,70*50,1)). For the eight training images (dg1…dg4, ks1…ks4) form the matrix Xtrain = [x1 x2 … x8] of size 3500×8. For the test image X, compute xtest similarly. As requested, print the first five components of each transposed vector: disp(x(1:5)') in MATLAB.
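A minimal MATLAB sketch of steps a) and e), assuming the images are read with imread; the file names (dg1.png … ks4.png, x.png) are illustrative placeholders, not names given in the assignment:

% Build the 3500x8 training matrix and the 3500x1 test vector.
files = {'dg1.png','dg2.png','dg3.png','dg4.png', ...
         'ks1.png','ks2.png','ks3.png','ks4.png'};   % hypothetical file names
Xtrain = zeros(3500, 8);
for i = 1:8
    I = imread(files{i});                        % 70x50 uint8 grayscale image
    Xtrain(:,i) = double(reshape(I, 3500, 1));   % stack columns, convert to double
    disp(Xtrain(1:5,i)');                        % first five components, transposed
end
xtest = double(reshape(imread('x.png'), 3500, 1));   % test image treated identically
disp(xtest(1:5)');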
PCA: mean, covariance, and principal axes (Step b)
Compute the mean vector mu = mean(Xtrain,2) (3500×1). Subtract mu from each column to obtain the zero-mean data Y = Xtrain - repmat(mu,1,8). The full 3500×3500 covariance matrix is then C = cov(Y'). With only eight samples it would be more efficient to compute the 8×8 covariance in sample space and map its eigenvectors back; however, the assignment explicitly requests the 3500×3500 covariance and its top 7 eigenvectors. Use MATLAB's eigs with two outputs, [U,D] = eigs(C,7,'largestabs'), so that the columns of U are the 7 principal axes (with a single output, eigs returns eigenvalues only). Print mu(1:5)' and the top-left 5×5 block of C as requested.
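The corresponding MATLAB sketch for step b), continuing from the Xtrain built above:

% Mean, full covariance, and the first 7 principal axes.
mu = mean(Xtrain, 2);                 % 3500x1 mean vector
Y  = Xtrain - repmat(mu, 1, 8);       % zero-mean data
C  = cov(Y');                         % 3500x3500 covariance matrix
[U, D] = eigs(C, 7, 'largestabs');    % columns of U are the 7 principal axes
disp(mu(1:5)');                       % reporting: first five mean components
disp(C(1:5,1:5));                     % reporting: top-left 5x5 covariance block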
Projection into 7-D subspace (Steps c & f)
Project each mean-normalised training vector into the PCA subspace: Ztrain = U'*Y, producing a 7×8 matrix whose columns are the 7-D representations of the training images. Print all components of each transposed projected vector (i.e., display Ztrain(:,i)'). For the test vector, ztest = U'*(xtest - mu); print ztest' (seven values).
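A short sketch of steps c) and f), continuing from U, Y, xtest and mu above:

% Project training and test vectors into the 7-D PCA subspace.
Ztrain = U' * Y;                      % 7x8: one column per training image
for i = 1:8
    disp(Ztrain(:,i)');               % all seven components of each projection
end
ztest = U' * (xtest - mu);            % 7x1 projection of the test image
disp(ztest');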
Per-person Gaussian models (Step d)
Split Ztrain into the two persons: Z_DG contains the four columns corresponding to DG; Z_KS contains the four KS columns. For each person compute the sample mean in 7-D: mu_DG = mean(Z_DG,2) and mu_KS = mean(Z_KS,2). Compute 7×7 covariance matrices: S_DG = cov(Z_DG') and S_KS = cov(Z_KS'). The assignment requests diagonal covariance matrices for the models; diagonalise by keeping only the diagonal entries diag_DG = diag(S_DG) and diag_KS = diag(S_KS). The per-person model is then N(mu_person, Diag(diag_person)). Print mu_DG', mu_KS' and the diagonal vectors fully.
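A sketch of step d); the assumption that columns 1–4 of Ztrain belong to DG and columns 5–8 to KS follows from the ordering used when Xtrain was assembled above:

% Per-person means and diagonalised covariances in the 7-D subspace.
Z_DG = Ztrain(:, 1:4);   Z_KS = Ztrain(:, 5:8);
mu_DG = mean(Z_DG, 2);   mu_KS = mean(Z_KS, 2);    % 7x1 model means
S_DG  = cov(Z_DG');      S_KS  = cov(Z_KS');       % 7x7 sample covariances
diag_DG = diag(S_DG);    diag_KS = diag(S_KS);     % keep diagonal entries only
disp(mu_DG');  disp(mu_KS');
disp(diag_DG');  disp(diag_KS');

With only four samples per person, the full 7×7 sample covariances are rank-deficient (rank at most 3), which is a further practical reason to use the diagonal form.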
Compute multivariate Gaussian PDFs and likelihood ratio (Step g)
Assuming independent Gaussian components in the PCA basis with variances given by the diagonal vectors, compute the PDF values p_DG = N(ztest; mu_DG, Diag(diag_DG)) and p_KS = N(ztest; mu_KS, Diag(diag_KS)). In MATLAB use mvnpdf(ztest', mu_DG', cov_DG), where cov_DG = diag(diag_DG) is the full diagonal covariance matrix and the point and mean are passed as row vectors. Compute the likelihood ratio LR = p_DG / p_KS and logLR10 = log10(LR). Print p_DG, p_KS, LR and logLR10. If any variance component is numerically zero, regularise by adding a small epsilon (e.g., 1e-6) to the diagonal to avoid a singular covariance.
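A sketch of step g), with the epsilon regularisation applied up front:

% PDFs under each model, likelihood ratio, and log10-likelihood ratio.
epsilon = 1e-6;                            % small regulariser for near-zero variances
cov_DG = diag(diag_DG + epsilon);          % full diagonal 7x7 covariance matrices
cov_KS = diag(diag_KS + epsilon);
p_DG = mvnpdf(ztest', mu_DG', cov_DG);     % point and mean passed as row vectors
p_KS = mvnpdf(ztest', mu_KS', cov_KS);
LR = p_DG / p_KS;
logLR10 = log10(LR);
fprintf('p_DG = %g, p_KS = %g\n', p_DG, p_KS);
fprintf('LR = %g, log10(LR) = %g\n', LR, logLR10);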
Decision rule and confidence (Step h)
Under equal priors and equal costs, choose hypothesis H = DG if LR > 1, else choose KS. The magnitude of LR quantifies confidence: an LR between 1 and 3 indicates weak support, 3–10 moderate support, and >10 strong support for DG; conversely, an LR below 1 supports KS, with the reciprocal 1/LR read on the same scale. Equivalently, the sign of log10(LR) gives the decision and its magnitude the strength of the evidence.
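A short decision sketch for step h); the verbal confidence bands follow the scale described above:

% Decision under equal priors and equal costs.
if LR > 1, person = 'DG'; strength = LR;
else,      person = 'KS'; strength = 1/LR;
end
if strength > 10,    level = 'strong';
elseif strength > 3, level = 'moderate';
else,                level = 'weak';
end
fprintf('Evidence supports %s (%s support, log10 LR = %.3f)\n', person, level, logLR10);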
Implementation checklist and reproducibility
- Scripts to create Xtrain and xtest: use reshape and double; print first five components explicitly.
- Compute and print mu(1:5)' and C(1:5,1:5) to satisfy reporting requirements.
- Compute U via [U,D] = eigs(C,7,'largestabs') and project via U'*(X - mu).
- Form per-person statistics and ensure covariance diagonals are used; display full vectors as required.
- Compute PDFs with mvnpdf and present p_DG, p_KS, LR and log10(LR). Add a small regulariser to the variances as needed.
- Make the final decision with the LR and state confidence in plain English.
Notes on interpretation and pitfalls
PCA reduces dimensionality and concentrates variance into principal axes, improving model estimation with few samples (Jolliffe, 2002). However, modelling the reduced features as independent Gaussian (diagonal covariance) is an approximation; if off-diagonal terms are significant, the diagonal model will underestimate correlation and can affect PDFs and LR values (Bishop, 2006). Regularisation of very small variances prevents numerical instability (Hastie et al., 2009). For robustness consider cross-validation or bootstrapping if more labeled data are available (Duda et al., 2001).
Summary
The prescribed workflow implements PCA-based face recognition with Gaussian per-person models and likelihood-ratio decision-making. The MATLAB implementation uses reshape, double, mean, cov, eigs, and mvnpdf to produce the required printed vectors, matrices, projected coordinates, PDFs, likelihood ratio and log10-likelihood ratio. The decision rule under equal priors and costs selects the person with the higher model likelihood for the test image; the LR magnitude indicates decision confidence.
References
- Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer. (PCA, Gaussian models)
- Jolliffe, I. T. (2002). Principal Component Analysis. Springer Series in Statistics.
- Duda, R. O., Hart, P. E., & Stork, D. G. (2001). Pattern Classification. Wiley.
- Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning. Springer. (regularisation, model selection)
- Turk, M., & Pentland, A. (1991). Eigenfaces for Recognition. Journal of Cognitive Neuroscience, 3(1), 71–86. (PCA for faces)
- Belhumeur, P. N., Hespanha, J. P., & Kriegman, D. J. (1997). Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. IEEE Transactions on PAMI.
- Phillips, P. J., et al. (2000). The FERET Evaluation Methodology for Face-Recognition Algorithms. IEEE Transactions on PAMI.
- MathWorks. (2024). MATLAB Documentation: mvnpdf, eigs, reshape, mean. MathWorks Inc. (MATLAB function references)
- Wright, J., Yang, A. Y., Ganesh, A., Sastry, S. S., & Ma, Y. (2009). Robust Face Recognition via Sparse Representation. IEEE Transactions on PAMI.
- Kumar, A., Belhumeur, P., & Nayar, S. (2011). FaceTracer: A Search Engine for Large Collections of Images with Faces. International Journal of Computer Vision.