### Probability and Mathematical Statistics

Set operations, σ-fields, measure, Lebesgue measure on R^{k}, probability measures, distribution functions on R^{k}, measurable functions, simple functions, random variables and vectors, induced measures, independence of r.v.’s, concepts of abstract integration theory, basic integration theorems, *monotone convergence theorem, *dominated convergence theorem, product measures and Fubini’s theorem, differentiation under the integral sign, expectation of r.v., moments, inequalities (Chebyshev, Jensen, moment), existence of moments vs. tail behavior of the c.d.f., moments of multivariate distributions, multiplication law of expectation for independent r.v.’s, absolute continuity and singularity of measures, *Radon-Nikodym theorem, probability density functions, conditional probability and conditional expectation, conditional distributions and moments, properties of conditional expectation, independence of classes of events, Markov chains. *Chung: Ch. 1,2,3, Sec. 9.1, 9.2; Shao: Sec. 1.1-1.4.*

Distribution of functions of r.v.’s, change of variables, derived p.d.f.’s, Jacobian, linear transformation and behavior of multivariate moments, generating functions (moment, probability), characteristic functions, properties, computation, *inversion formulas and uniqueness theorem, moments and c.f.’s, cumulants, expansions of c.f.’s, *moment theorem, relation between multivariate c.f.’s and marginal c.f.’s, independence, standard univariate distributions, their properties and interrelations (binomial, hypergeometric, negative binomial, Poisson, normal, exponential, gamma, beta, Cauchy, logistic, log-normal, Weibull, central and noncentral *t*, χ^{2}, *F*, multivariate normal, multivariate *t*, multinomial), quadratic forms, distribution of quadratic forms in normal r.v.’s, conditions for a χ^{2} distribution, independence of quadratic forms, Cochran’s theorem, partitioning a total sum of squares. *Chung: Ch. 6; Shao: Sec. 1.3; Searle: Ch. 2.*
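
The partition of a total sum of squares listed above can be verified numerically. The sketch below, with made-up data, checks the algebraic identity sum_i (y_i - mu0)^2 = sum_i (y_i - ybar)^2 + n(ybar - mu0)^2 that underlies ANOVA decompositions:

```python
# Illustrative check of the sum-of-squares partition
#   sum_i (y_i - mu0)^2 = sum_i (y_i - ybar)^2 + n*(ybar - mu0)^2;
# the data and hypothesized mean mu0 are arbitrary illustrative values.
import math

y = [2.0, 3.5, 1.0, 4.5, 3.0]
mu0 = 2.0                      # hypothesized mean
n = len(y)
ybar = sum(y) / n

total = sum((yi - mu0) ** 2 for yi in y)
within = sum((yi - ybar) ** 2 for yi in y)
between = n * (ybar - mu0) ** 2

assert math.isclose(total, within + between)
print(total, within, between)
```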

Modes of convergence (almost sure, in probability, in rth mean, in distribution, weak convergence) and their relationships, *Helly-Bray theorems, *Lévy-Cramér theorem, Slutsky theorems, Scheffé’s theorem, convergence of moments, Cramér-Wold theorem (convergence in distribution of sequence of multivariate random vectors in terms of linear combinations), continuous mapping theorem and delta method, weak laws of large numbers (Chebyshev, Khintchine, proof by c.f.’s and by truncation method), Borel-Cantelli lemma, *Kolmogorov’s inequality, *Kolmogorov’s strong law of large numbers, classical central limit theorems (Lindeberg-Lévy, Liapunov, *Lindeberg-Feller, multivariate CLT in i.i.d. case). *Chung: Ch. 4,5, Sec. 7.1, 7.2.*
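
A Monte Carlo sketch of the Lindeberg-Lévy CLT named above: standardized means of i.i.d. Uniform(0,1) draws (mean 1/2, variance 1/12) should be approximately N(0,1). The sample size, replication count, and seed are illustrative choices:

```python
# Illustrative CLT simulation: the standardized sample mean of i.i.d.
# Uniform(0,1) variables is approximately standard normal for large n.
import math
import random
import statistics

random.seed(0)                       # fixed seed for reproducibility
n, reps = 200, 2000
mu, sigma = 0.5, math.sqrt(1 / 12)   # Uniform(0,1) mean and s.d.

z = []
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    z.append((xbar - mu) / (sigma / math.sqrt(n)))

print(statistics.mean(z), statistics.pstdev(z))
assert abs(statistics.mean(z)) < 0.1        # empirical mean near 0
assert abs(statistics.pstdev(z) - 1) < 0.1  # empirical s.d. near 1
```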

*statements only; proofs not required

Statistical models, general inference problem, sufficient statistics, factorization theorem, minimal sufficiency, exponential family, natural parameter space, properties under sampling from exponential family, moments of exponential family, completeness, completeness of exponential family, Basu’s theorem. *Lehmann: Ch. 1, Sec. 2.6, 2.7; Shao: Sec. 2.1, 2.2.*
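
The defining property of sufficiency can be checked exhaustively in a small discrete model. The sketch below verifies that for i.i.d. Bernoulli(p) sampling, the conditional law of the sample given T = ΣX_i is uniform over the C(n, t) arrangements and hence free of p; the sample size and p values are illustrative:

```python
# Illustrative check that T = sum(X_i) is sufficient for p in i.i.d.
# Bernoulli(p) sampling: P(X = x | T = t) = 1/C(n, t), free of p.
from fractions import Fraction
from itertools import product
from math import comb

n = 4
for p in (Fraction(1, 3), Fraction(3, 4)):         # two different p values
    for t in range(n + 1):
        seqs = [x for x in product((0, 1), repeat=n) if sum(x) == t]
        pt = comb(n, t) * p**t * (1 - p)**(n - t)  # P(T = t)
        for x in seqs:
            px = p**t * (1 - p)**(n - t)           # P(X = x), sum(x) = t
            assert px / pt == Fraction(1, comb(n, t))  # free of p
print("conditional law given T is uniform, independent of p")
```

Exact rational arithmetic (`Fraction`) makes the "free of p" claim an exact identity rather than a floating-point approximation.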

Elements of decision theory, loss and risk functions, admissibility, Bayes and minimax criteria, prior and posterior distributions, Bayes theorem, point estimation, criteria, unbiasedness, UMVUE, Rao-Blackwell and Lehmann-Scheffé theorems, Fisher information, Cramér-Rao lower bounds for variance, Bayes and minimax estimation, Bayes risk, admissibility, interrelations among Bayes, minimax, and admissible estimators. *Lehmann & Casella: Ch. 2,4,5.*
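
The Cramér-Rao bound listed above can be checked in closed form for the Bernoulli model, where the sample proportion is unbiased and attains the bound. A small sketch with illustrative parameter values:

```python
# Illustrative check that the sample mean attains the Cramér-Rao lower bound
# in the Bernoulli(p) model: the Fisher information per observation is
# I(p) = 1/(p(1-p)), so the bound for unbiased estimators of p is p(1-p)/n,
# which equals Var(X-bar) exactly.
import math

def crlb(p, n):
    info = 1 / (p * (1 - p))   # Fisher information for one Bernoulli trial
    return 1 / (n * info)      # Cramér-Rao bound for n i.i.d. trials

def var_mean(p, n):
    return p * (1 - p) / n     # exact variance of the sample proportion

for p in (0.1, 0.5, 0.9):
    for n in (5, 50):
        assert math.isclose(crlb(p, n), var_mean(p, n))
print("sample proportion attains the CRLB in the Bernoulli model")
```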

Large sample estimation criteria, weak and strong consistency, asymptotic efficiency, CAN and BAN estimators, asymptotic relative efficiency, maximum likelihood estimation (MLE), likelihood function, likelihood equation, MLE in exponential family, invariance of MLE, consistency and asymptotic normality of MLE and asymptotic normality of score vector (i.i.d. case). Other methods of estimation (least squares, method of moments), empirical distributions, consistency and asymptotic normality of sample quantiles. *Lehmann & Casella: Ch. 6; Shao: Sec. 2.5, 3.3, 3.5, 4.4, 4.5, 5.1.1, 5.3.1.*
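
The likelihood equation and the invariance property of the MLE can both be illustrated with exponential data. In the sketch below (data values are made up), Newton's method on the score equation recovers the closed-form MLE λ̂ = n/Σx_i, and invariance then yields the MLE of the survival probability P(X > t) = e^{-λt}:

```python
# Illustrative MLE computation for i.i.d. Exponential(rate lam) data:
# Newton's method on the likelihood equation l'(lam) = n/lam - sum(x) = 0
# recovers lam_hat = n/sum(x); by invariance, the MLE of the survival
# probability exp(-lam*t) is exp(-lam_hat*t).
import math

x = [0.5, 1.2, 0.3, 2.1, 0.9]         # fixed illustrative sample
n, s = len(x), sum(x)

lam = 1.0                             # starting value for Newton's method
for _ in range(50):
    score = n / lam - s               # l'(lam)
    hess = -n / lam**2                # l''(lam)
    lam -= score / hess               # Newton update

assert math.isclose(lam, n / s)       # matches the closed-form MLE
t = 1.0
surv_mle = math.exp(-lam * t)         # MLE of exp(-lam*t) by invariance
print(lam, surv_mle)
```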

Hypothesis testing; randomized tests, power function and size of a test, Neyman-Pearson lemma, uniformly most powerful (UMP) tests, monotone likelihood ratio family and UMP tests, unbiased tests, uniformly most powerful unbiased (UMPU) tests for one-parameter exponential family, similar tests and Neyman structure, UMPU tests in multiparameter exponential family, likelihood ratio (LR) test, and other “large-sample” equivalents to LR test (Wald’s test, Rao’s score test, Pearson’s goodness of fit χ^{2} test for multinomial parameters), limiting distribution of LR. *Lehmann: Sec. 3.1-3.3; 4.1-4.6, 5.1-5.3; Shao: Sec. 6.1-6.4.*
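
The "large-sample equivalents" of the LR test can be compared directly in a binomial example. The sketch below (sample size and observed count are illustrative) computes the Wald, Rao score, and LR statistics for H0: p = 0.5; all three are asymptotically χ² with 1 degree of freedom and nearly agree:

```python
# Illustrative comparison of the Wald, Rao score, and likelihood ratio
# statistics for H0: p = 0.5 in a Binomial(n, p) model.
import math

n, x, p0 = 100, 60, 0.5
phat = x / n

wald = (phat - p0) ** 2 / (phat * (1 - phat) / n)   # variance at phat
score = (phat - p0) ** 2 / (p0 * (1 - p0) / n)      # variance at p0
lr = 2 * (x * math.log(phat / p0) + (n - x) * math.log((1 - phat) / (1 - p0)))

print(wald, score, lr)   # three nearby values of the same asymptotic test
assert abs(wald - lr) < 0.5 and abs(score - lr) < 0.5
```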

Confidence sets, pivotal quantities, “optimal” confidence sets (uniformly most accurate (UMA) one-sided confidence bounds, confidence intervals of minimum expected length), large sample confidence sets using MLE and LR statistics, relation between confidence sets and tests of hypothesis. *Lehmann: Sec. 3.5, 5.6-5.9; Shao: Sec. 7.1-7.3.*
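
The duality between confidence sets and tests can be checked numerically with a pivotal quantity. In the sketch below (data made up, σ assumed known), the z pivot yields a 95% interval, and μ0 lies in the interval exactly when the level-0.05 two-sided z test fails to reject H0: μ = μ0:

```python
# Illustrative test/confidence-set duality: for N(mu, sigma^2) data with
# sigma known, the pivot (xbar - mu)/(sigma/sqrt(n)) gives the interval
# xbar +/- 1.96*sigma/sqrt(n), and mu0 is in the interval iff |z| <= 1.96.
import math

x = [1.2, 0.8, 1.9, 1.4, 0.7, 1.5]     # fixed illustrative sample
sigma = 1.0                            # assumed known
n = len(x)
xbar = sum(x) / n
half = 1.96 * sigma / math.sqrt(n)
ci = (xbar - half, xbar + half)

for mu0 in (0.0, 1.0, 2.0):
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    in_ci = ci[0] <= mu0 <= ci[1]
    assert in_ci == (abs(z) <= 1.96)   # accept H0  <=>  mu0 in the interval
print(ci)
```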

General linear model and least squares, normal equations, projections, geometric interpretation of least squares, generalized inverses of matrices, estimable functions, Gauss-Markov theorem, generalized least squares estimators, decomposition of sum of squares and related distribution theory under normal linear model, general linear hypothesis and likelihood ratio tests, corresponding ANOVA table and distribution theory, canonical form of the linear hypothesis, sufficiency and completeness in the normal linear model, confidence sets and simultaneous confidence intervals, applications in one-way and balanced two-way layout, simple regression analysis. *Searle: Ch. 1,3,5; Shao: Sec. 3.3, 7.5, 6.2.3, 6.3.2.*
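
The normal equations and their geometric interpretation can be made concrete in simple regression. In the sketch below (data made up), solving X'Xb = X'y by hand yields residuals orthogonal to each column of the design matrix, i.e. they sum to zero and are uncorrelated with x:

```python
# Illustrative least squares fit by solving the normal equations X'X b = X'y
# for simple regression y = b0 + b1*x; the residuals are then orthogonal
# to the columns of X = [1, x], the geometric content of the equations.
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)

# entries of X'X and X'y for the design matrix X = [1, x]
sx, sxx = sum(xs), sum(v * v for v in xs)
sy, sxy = sum(ys), sum(u * v for u, v in zip(xs, ys))

det = n * sxx - sx * sx        # determinant of the 2x2 matrix X'X
b1 = (n * sxy - sx * sy) / det
b0 = (sy - b1 * sx) / n

resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
assert math.isclose(sum(resid), 0.0, abs_tol=1e-12)                      # e ⟂ 1
assert math.isclose(sum(x * e for x, e in zip(xs, resid)), 0.0, abs_tol=1e-12)  # e ⟂ x
print(b0, b1)
```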

### Main References

Chung, K.L., A Course in Probability Theory

Lehmann, E.L., Testing Statistical Hypotheses, 2nd edition

Lehmann, E.L. and Casella, G., Theory of Point Estimation

Searle, S.R., Linear Models

Shao, J., Mathematical Statistics, 2nd edition

### Additional References

Bickel, P.J. and Doksum, K.A., Mathematical Statistics, Vol. 1, 2nd edition

Billingsley, P., Probability and Measure

Durrett, R., Probability: Theory and Examples

Ferguson, T.S., Mathematical Statistics: A Decision Theoretic Approach

Rao, C.R., Linear Statistical Inference and Its Applications

Rohatgi, V.K., An Introduction to Probability Theory and Mathematical Statistics

Scheffé, H., The Analysis of Variance

Revised July 2005; reformatted August 2011.

### Statistical Methods and Applications

Linear regression: estimation and testing theory for least squares fit; distributions of quadratic forms; the Gauss-Markov assumptions; diagnostic tools; prediction and model selection. *Seber and Lee: Sec. 2.4, 3.1-3.10, 4.1-4.7, 5.1-5.3, 9.1-9.3, 10.2, 10.4.1, 10.5-10.7, 11.3, 12.1-12.2, 12.3.1-12.3.3, 12.4; Rencher: Ch. 5, 7, 8, 9; Rao and Toutenburg: Sec. 3.1-3.9.*
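
As an example of the diagnostic tools listed above, leverages (the diagonal of the hat matrix) have a closed form in simple regression, h_i = 1/n + (x_i - x̄)²/Sxx, and sum to the number of regression parameters. The sketch below, with made-up x values, flags a high-leverage point using the common 2p/n yardstick:

```python
# Illustrative leverage computation for simple regression:
# h_i = 1/n + (x_i - xbar)^2 / Sxx; the leverages sum to p = 2
# (the trace of the hat matrix), and points with h_i > 2p/n are
# conventionally flagged as high leverage.
import math

xs = [1.0, 2.0, 3.0, 4.0, 10.0]   # last point is far from the others
n = len(xs)
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)

h = [1 / n + (x - xbar) ** 2 / sxx for x in xs]
assert math.isclose(sum(h), 2.0)              # trace of hat matrix = p
flagged = [x for x, hi in zip(xs, h) if hi > 2 * 2 / n]
print(h, flagged)
```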

Generalized linear models: estimation and testing theory; prediction and model selection; residuals and diagnostics; categorical data analysis. *McCullagh and Nelder: Sec. 2.1-2.4, 4.1.1-4.1.2, 4.3.1-4.3.2, 4.4, 6.2.1-6.2.2; Rao and Toutenburg: Sec. 10.1, 10.3, 10.5; Rencher: Sec. 17.2, 17.4-17.5.*
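
GLM estimation by Fisher scoring (IRLS) can be illustrated in the simplest case, an intercept-only logistic regression, where the fitted mean must equal the sample proportion so the intercept converges to logit(ȳ). The binary data below are made up:

```python
# Illustrative Fisher scoring (IRLS) for an intercept-only logistic
# regression on binary data; the intercept converges to logit(ybar).
import math

y = [1, 0, 1, 1, 0, 1, 0, 1]
n, ybar = len(y), sum(y) / len(y)

eta = 0.0                                  # intercept (linear predictor)
for _ in range(25):
    mu = 1 / (1 + math.exp(-eta))          # logistic (inverse link) mean
    score = sum(y) - n * mu                # d loglik / d eta
    info = n * mu * (1 - mu)               # Fisher information
    eta += score / info                    # scoring update

assert math.isclose(eta, math.log(ybar / (1 - ybar)))  # logit(ybar)
print(eta)
```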

Classical experimental design: randomized designs; block designs; designs with multiple blocking factors; factorial designs; fractional factorial designs; nested designs and designs with nested factors; linear contrasts; data transformations. *Mead: Ch. 2-3, 6-9, 12-15; Milliken and Johnson: Ch. 1-8, 24-25, 30; Box, Hunter and Hunter: Ch. 5-8.*
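
Linear contrasts in a 2×2 factorial can be computed by hand. In the sketch below (one made-up response per treatment combination), the main effects and interaction are ±1 contrasts of the cell responses, divided by 2 in the usual engineering convention:

```python
# Illustrative 2x2 factorial analysis via linear contrasts: main effects
# and the interaction are +/-1 contrasts of the four cell responses.
a_levels = [-1, 1, -1, 1]      # factor A setting per run
b_levels = [-1, -1, 1, 1]      # factor B setting per run
y = [20.0, 30.0, 24.0, 42.0]   # one illustrative response per run

def contrast(coefs, y):
    return sum(c * v for c, v in zip(coefs, y)) / 2

effect_a = contrast(a_levels, y)
effect_b = contrast(b_levels, y)
effect_ab = contrast([a * b for a, b in zip(a_levels, b_levels)], y)
print(effect_a, effect_b, effect_ab)   # 14.0 8.0 4.0
```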

General linear models for designed experiments: definition and parameterization of factors; estimability; cell means model; unbalanced designs and missing data. *Mead: Ch. 4, 10-11; Milliken and Johnson: Ch. 9-17.*

Random and mixed effects models: model representations in matrix form; model fitting, testing, and diagnostics; ML and REML. *Milliken and Johnson: Ch. 18-23.*
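
Variance-component estimation can be sketched with the ANOVA (method of moments) estimators in a balanced one-way random effects model y_ij = μ + a_i + e_ij: since E[MSA] = σ²_e + n·σ²_a, one sets σ̂²_e = MSE and σ̂²_a = (MSA − MSE)/n. The grouped data below are made up; ML and REML would replace these moment equations with (restricted) likelihood maximization:

```python
# Illustrative ANOVA (method of moments) variance-component estimates for a
# balanced one-way random effects model with a groups of n observations each.
data = {                        # illustrative groups (random levels)
    "g1": [10.0, 12.0, 11.0],
    "g2": [15.0, 14.0, 16.0],
    "g3": [9.0, 8.0, 10.0],
}
a, n = len(data), 3             # number of groups, observations per group
grand = sum(sum(v) for v in data.values()) / (a * n)
means = {g: sum(v) / n for g, v in data.items()}

ssa = n * sum((m - grand) ** 2 for m in means.values())      # between groups
sse = sum((x - means[g]) ** 2 for g, v in data.items() for x in v)  # within
msa, mse = ssa / (a - 1), sse / (a * (n - 1))

sigma2_e = mse                          # error variance estimate
sigma2_a = max((msa - mse) / n, 0.0)    # truncate negative estimates at 0
print(sigma2_e, sigma2_a)
```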

### Main References

Box, G. E. P., Hunter, J. S., and Hunter, W. G. (2005). Statistics for Experimenters, 2nd ed. Wiley.

Mead, R. (1988). The Design of Experiments. Cambridge University Press.

McCullagh, P. and Nelder, J.A. (1999). Generalized Linear Models, 2nd ed. Chapman and Hall/CRC.

Milliken, G. A. and Johnson, D. E. (1993). Analysis of Messy Data. Volume I: Designed Experiments. Chapman and Hall/CRC.

Rao, C. R. and Toutenburg, H. (1995). Linear Models: Least Squares and Alternatives. Springer.

Rencher, A. C. (2000). Linear Models in Statistics. Wiley.

Seber, G. A. F. and Lee, A. J. (2003). Linear Regression Analysis, 2nd ed. Wiley.

### Additional References

Agresti, A. (2002). Categorical Data Analysis, 2nd ed. Wiley.

Christensen, R. (2002). Plane Answers to Complex Questions, 3rd ed. Springer.

Dean, A. and Voss, D. (1999). Design and Analysis of Experiments. Springer.

Faraway, J. J. (2004). Linear Models with R. Chapman and Hall/CRC.

Hocking, R. R. (1996). Methods and Applications of Linear Models. Wiley.

Littell, R. C., Milliken, G. A., Stroup, W. W. and Wolfinger, R. D. (1996). SAS System for Mixed Models. SAS Institute.

McCulloch, C. E. and Searle, S. R. (2001). Generalized, Linear, and Mixed Models. Wiley.

Pinheiro, J. C. and Bates, D. M. (2002). Mixed-Effects Models in S and S-PLUS. Springer.

Simonoff, J. S. (2003). Analyzing Categorical Data. Springer.

Wu, C. F. J. and Hamada, M. (2000). Experiments: Planning, Analysis, and Parameter Design Optimization. Wiley.

Yandell, B. S. (1997). Practical Data Analysis for Designed Experiments. Chapman and Hall.

Revised May 2006; reformatted August 2011.