PhD Qualifying Exam

Our qualifying exam is changing effective with the August 2016 exam to encourage students to take it after their first year. We are excited about this change, which will prompt students to engage in research earlier in their graduate careers.

The qualifying exam now has two options. Students will pick one of the two and write one exam.
1. Option A: based on Statistics 709 and 710.
2. Option B: based on Statistics 609/610 and 849/850.

Both exams (Option A and Option B) will have four questions, each with multiple parts, and candidates will be expected to answer all four. Both exams will be four hours in length.

General Qualifying Exam Guidelines

The student must pass the PhD Qualifying Examination within six semesters from the first fall semester of registration as a graduate student in the Department. The examination may be attempted a maximum of two times.

Master’s degree students who successfully complete the Department’s MS Degree Requirements within four semesters and are then admitted to the PhD program must pass the PhD Qualifying Examination within four semesters after entering the PhD program.

The examination is a written exam and is based on a syllabus made available by the PhD Qualifying Examination Committee. Students choose whether they will do Option A (based on the material of 709 and 710), or Option B (based on the material of 609, 610, 849, and 850).

The Qualifying Examination is generally given during the last week of August. Occasionally it may also be offered just before the start of the Spring semester, although in recent years there has not been enough student interest to hold an exam at that time.

Passing or failing this examination will not affect the student’s candidacy for the Master’s degree.

PhD Qualifying Exam Syllabus

Option A (709/710)

Probability and Distribution Theory

Set operations, sigma-fields, measures, probability measures, distribution functions, measurable functions, random variables and vectors, induced measures, abstract integration theory, *monotone and dominated convergence theorems, product measures and *Fubini's theorem, differentiation under the integral sign, expectations, moments, inequalities, absolute continuity of measures, independence of classes of events, independence of random vectors, *Radon-Nikodym theorem, probability density functions, change of variables, generating functions, characteristic functions, *uniqueness theorem, quadratic forms and their distributions, *Cochran's theorem, conditional expectations, conditional distributions, properties of conditional expectations, Markov chains.
*proofs not required

Asymptotic Theory

Modes of convergence (almost sure, in probability, in rth mean, in distribution, weak convergence) and their relationships, stochastic orders (Op(1) and op(1)), *Borel-Cantelli lemma, *Helly-Bray theorem, *Lévy-Cramér theorem, *Skorohod theorem, Slutsky's theorem, Scheffé's theorem, convergence of moments, convergence in distribution of a sequence of multivariate random vectors in terms of their linear combinations, continuous mapping theorem, delta method, weak laws of large numbers, strong law of large numbers, central limit theorems (Lindeberg, Liapunov, Lindeberg-Feller).
*proofs not required
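
For students reviewing, the delta method listed above can be checked with a short simulation, a minimal sketch assuming NumPy is available (the sample sizes and g(x) = x^2 here are chosen only for illustration):

```python
import numpy as np

# If sqrt(n)*(Xbar - mu) -> N(0, sigma^2), then for smooth g,
# sqrt(n)*(g(Xbar) - g(mu)) -> N(0, g'(mu)^2 * sigma^2).
rng = np.random.default_rng(0)
n, reps = 2000, 5000
mu, sigma2 = 1.0, 1.0        # Exponential(1): mean 1, variance 1
g = np.square                # g(x) = x^2, so g'(mu) = 2*mu

xbar = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
stat = np.sqrt(n) * (g(xbar) - g(mu))

empirical_var = stat.var()
delta_var = (2 * mu) ** 2 * sigma2   # g'(mu)^2 * sigma^2 = 4

print(empirical_var, delta_var)      # the two variances should be close
```

The empirical variance of the transformed, scaled statistic matches the variance the delta method predicts.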

Models and Inference Criteria

Statistical models, sufficient statistics, factorization theorem, minimal sufficiency, exponential family, natural parameter space properties under sampling from exponential family, moments of exponential family, completeness, completeness of exponential family, Basu's theorem, inference problems, elements of decision theory, loss and risk functions, admissibility, Bayes and minimax criteria, prior and posterior distributions, large sample criteria, weak and strong consistency, asymptotic biases and variances, asymptotic inference.

Point Estimation

Unbiased estimators, UMVUE, Rao-Blackwell and Lehmann-Scheffé theorems, Fisher information, Cramér-Rao lower bounds, least squares in general linear models, geometric interpretation of least squares, estimable functions, Gauss-Markov theorem, U- and V-statistics, method of moments, Bayes and generalized Bayes estimators, minimax estimators, shrinkage estimators, likelihood functions, maximum likelihood estimation (MLE), MLE in exponential family, consistency and asymptotic normality of MLE and asymptotic normality of the solution to the score equation, asymptotically efficient estimators, asymptotic relative efficiency, empirical distributions, empirical likelihoods, density estimation, semi-parametric methods, consistency and asymptotic normality of sample quantiles, L- and M-estimators.
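
As a concrete review exercise, maximum likelihood and a Fisher-information-based interval can be worked through numerically for an Exponential(rate = lam) sample, a minimal sketch assuming NumPy (the parameter values are chosen only for illustration):

```python
import numpy as np

# For Exponential(rate = lam), the MLE is lam_hat = 1/xbar, and
# sqrt(n)*(lam_hat - lam) -> N(0, lam^2) since I(lam) = 1/lam^2.
rng = np.random.default_rng(1)
lam, n = 2.0, 100_000
x = rng.exponential(scale=1 / lam, size=n)

lam_hat = 1 / x.mean()                            # MLE
se = lam_hat / np.sqrt(n)                         # plug-in standard error
ci = (lam_hat - 1.96 * se, lam_hat + 1.96 * se)   # approximate 95% Wald interval

print(lam_hat, ci)
```

With n this large, the estimate sits very close to the true rate, illustrating consistency and the asymptotic normal approximation behind the Wald interval.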

Hypothesis Testing and Confidence Sets

Tests, randomized tests, power function and size of a test, Neyman-Pearson lemma, uniformly most powerful tests, monotone likelihood ratios, unbiased tests, uniformly most powerful unbiased tests for exponential family, similar tests and Neyman structure, likelihood ratio test and other large-sample equivalents to the LR test (Wald's test, Rao's score test, Pearson's goodness-of-fit chi-square test for multinomial parameters), limiting distribution of tests, general linear hypothesis and likelihood ratio tests in linear models, ANOVA table and distribution theory, confidence sets, pivotal quantities, optimal confidence sets (uniformly most accurate confidence sets, confidence intervals of minimum length or expected length), large sample confidence sets using MLE and likelihood ratio statistics, relation between confidence sets and tests of hypothesis, confidence sets and simultaneous confidence intervals with applications in one-way and balanced two-way layouts and simple regression analysis.
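
The chi-square limiting distribution of the likelihood ratio statistic can also be verified by simulation, a minimal sketch assuming NumPy (the exponential-rate testing problem and sample sizes are chosen only for illustration; 3.841 is the chi-square(1) upper 5% point):

```python
import numpy as np

rng = np.random.default_rng(3)
lam0, n, reps = 1.0, 200, 4000
crit = 3.841  # chi-square(1) upper 5% critical value

x = rng.exponential(scale=1 / lam0, size=(reps, n))
S = x.sum(axis=1)
lam_hat = n / S  # MLE under the unrestricted alternative

# -2 log likelihood ratio for H0: lam = lam0 against a free rate.
lr = 2 * (n * np.log(lam_hat) - lam_hat * S - (n * np.log(lam0) - lam0 * S))
rate = (lr > crit).mean()
print(rate)  # rejection rate under H0, near the nominal 0.05
```

The Monte Carlo rejection rate under the null is close to the nominal 5% level, as Wilks' theorem predicts.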

Main References

Chung, K.L. (1974). A Course in Probability Theory, 2nd edition. Academic Press, New York.
Lehmann, E.L. (1986). Testing Statistical Hypotheses, 2nd edition. Springer-Verlag, New York.
Searle, S.R. (1971). Linear Models. Wiley, New York.
Shao, J. (2003). Mathematical Statistics, 2nd edition. Springer, New York.

Option B (609/610/849/850)

Probability Theory

Probability and conditional probability, correlation and independence, random variables, distributions, transformations, expectations, moment generating functions, useful distributions (binomial, Poisson, negative binomial, normal, gamma, chi-square, t- and F-distributions), exponential and location-scale families, multivariate normal and linear and quadratic forms, convergence (almost surely, in probability, in distribution), law of large numbers, central limit theorem, convergence of transformations, Slutsky theorem and delta-method.

Statistical Inference

Sample, population, statistics, sampling distribution, sufficiency, minimal sufficiency, completeness, maximum likelihood, method of moments, estimating equations, least squares, weighted least squares, Bayes estimators, unbiasedness, UMVUE, information inequality, likelihood ratio tests, evaluation of tests and Neyman-Pearson lemma, uniformly most powerful tests, unbiased tests, the duality between tests and confidence sets, pivotal quantities, consistency, asymptotic normality and efficiency, robustness, asymptotic tests based on likelihoods and chi-square tests.

Linear and Generalized Linear Models

Linear regression, least squares fit, Gauss-Markov theorem, distributions of quadratic forms, standard model assumptions, computational issues, testing simple and compound hypotheses, prediction, diagnostic tools and model selection (residuals, leverage and influence, Cp, R-square and adjusted R-square, stepwise methods, all possible regressions, leaps and bounds, AIC and BIC), transformations, Box-Cox transformations, multicollinearity, ridge regression, generalized linear models (estimation and testing theory, prediction and model selection, residuals and diagnostics).
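
Least squares via the normal equations and its ridge-regularized counterpart, both listed above, can be compared in a few lines, a minimal sketch assuming NumPy (the design, coefficients, and penalty here are hypothetical, chosen for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept + 3 predictors
beta = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta + rng.normal(scale=0.5, size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)  # OLS: solve (X'X) b = X'y
lam = 10.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p + 1), X.T @ y)  # ridge

print(beta_ols.round(2))
print(beta_ridge.round(2))  # shrunk toward zero relative to OLS
```

The OLS fit recovers the generating coefficients, while the ridge penalty shrinks the coefficient vector, the trade-off that makes ridge useful under multicollinearity.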

Experimental Design and Applications

Model formulation, ANOVA table, hypothesis testing, diagnostic tools, transformations, multiple comparisons, contrasts, completely randomized designs, block designs, designs with multiple blocking factors, factorial designs, designs with multiple random effects, subsampling, split plot and strip plot designs, general linear models for designed experiments, parameterization of factors, estimability, cell means model, unbalanced designs and missing data, random and mixed effects models, model representations in matrix form, model fitting, testing, and diagnostics, ML and REML.
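
The one-way ANOVA decomposition behind a completely randomized design can be computed directly from its sums of squares, a minimal sketch assuming NumPy (the three groups and their means are hypothetical, chosen for the example):

```python
import numpy as np

rng = np.random.default_rng(4)
# Three treatment groups of 30; the third has a shifted mean.
groups = [rng.normal(loc=m, scale=1.0, size=30) for m in (0.0, 0.0, 1.0)]

k = len(groups)
n = sum(len(g) for g in groups)
grand = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))  # F(k-1, n-k) under H0
print(f_stat)  # a large F suggests unequal group means
```

With a genuine mean shift in one group, the F statistic lands well above the roughly 3.1 critical value for F(2, 87) at the 5% level.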

Main References

Grimmett, G.R. and Stirzaker, D.R. (2001). Probability and Random Processes, 3rd edition. Oxford University Press.
Bickel, P.J. and Doksum, K.A. (2001). Mathematical Statistics, Vol. I, 2nd edition. Prentice Hall.
Casella, G. and Berger, R.L. (2002). Statistical Inference, 2nd edition. Brooks/Cole Cengage Learning.
McCullagh, P. and Nelder, J.A. (1989). Generalized Linear Models, 2nd edition. Chapman and Hall.
Milliken, G.A. and Johnson, D.E. (2009). Analysis of Messy Data, Volume I: Designed Experiments, 2nd edition. CRC Press.
Seber, G.A.F. and Lee, A.J. (2003). Linear Regression Analysis, 2nd edition. Wiley.