**Statistics 992: Topics in high dimensional statistical inference**

Michael Newton (newton@stat.wisc.edu)

Spring 2006, MW 2:30 – 3:45, MSC 5295

**Overview:** A traditional model in statistics entails observations sampled from a fixed, possibly complicated, population. Inference about parameters describing this population is based on approximations in which the number of parameters does not increase with the sample size. Often, however, the context prescribes a different model, in which parameter dimension is tied to sample size. By reviewing early and contemporary literature, we will study a range of topics related to parameter-rich statistics. In Part I, we will review the classical view, some difficulties that arise, connections between frequentist and Bayesian perspectives, and empirical Bayes methodology. In Part II, we will study Bayesian methodology, considering both parametric and nonparametric hierarchical models, and we will review computational approaches to model fitting. Part III concerns recent advances, including new techniques for high-dimensional testing and estimation.
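A small numerical sketch may help motivate the Part I readings. The simulation below (illustrative only, not part of the course materials; the true mean vector and simulation settings are assumptions chosen for the demo) compares the usual estimator of a multivariate normal mean with the James-Stein shrinkage estimator, the phenomenon studied in the Stein (1955) and James-Stein (1961) papers listed under Part I:

```python
import numpy as np

# Illustrative sketch: for X ~ N(theta, I_p) with p >= 3, the James-Stein
# estimator (1 - (p-2)/||X||^2) X has strictly smaller total squared-error
# risk than the usual estimator X itself.
rng = np.random.default_rng(0)

p = 10                 # dimension of the mean vector (needs p >= 3)
theta = np.ones(p)     # true mean vector (an assumption for the demo)
n_rep = 20000          # Monte Carlo replications

# One row per replication: X ~ N(theta, I_p)
X = rng.normal(loc=theta, scale=1.0, size=(n_rep, p))

# Usual estimator: estimate theta by X itself; its risk is exactly p.
mle_risk = np.mean(np.sum((X - theta) ** 2, axis=1))

# James-Stein estimator: shrink each observation vector toward the origin.
shrink = 1.0 - (p - 2) / np.sum(X ** 2, axis=1, keepdims=True)
js = shrink * X
js_risk = np.mean(np.sum((js - theta) ** 2, axis=1))

print(f"risk of X itself:     {mle_risk:.2f}")
print(f"risk of James-Stein:  {js_risk:.2f}")
```

Running this shows the Monte Carlo risk of the usual estimator near its theoretical value of p, with the James-Stein risk noticeably smaller, the "inadmissibility" surprise that opens Part I.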

**Part I: On the origins of empirical Bayes**

Cramér, H. (1946). *Mathematical methods of statistics*. Princeton, N.J.: Princeton University Press.

J. Neyman and E.L. Scott (1948). Consistent estimates based on partially consistent observations. *Econometrica* (16), 1-32.

A. Wald (1949). Note on the consistency of the maximum likelihood estimate. *Annals of Mathematical Statistics* (20), 595-601.

J. Wolfowitz (1949). On Wald's proof of the consistency of the maximum likelihood estimator. *Annals of Mathematical Statistics* (20), 601-602.

C. Stein (1955). Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. *Proceedings of the Third Berkeley Symposium*, Vol. 1, Berkeley: University of California Press, 197-206.

J. Kiefer and J. Wolfowitz (1956). Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. *Annals of Mathematical Statistics* (27), 887-906.

W. James and C. Stein (1961). Estimation with quadratic loss. *Proceedings of the Fourth Berkeley Symposium*, Vol. 1, Berkeley: University of California Press, 361-379.

B. Efron and C.N. Morris (1971). Limiting the risk of Bayes and empirical Bayes estimators -- Part I: The Bayes case. *J. Amer. Statist. Assoc.* (66), 807-815.

B. Efron and C.N. Morris (1972a). Limiting the risk of Bayes and empirical Bayes estimators -- Part II: The empirical Bayes case. *J. Amer. Statist. Assoc.* (67), 130-139.

B. Efron and C.N. Morris (1972b). Empirical Bayes on vector observations: An extension of Stein's method. *Biometrika* (59), 335-347.

B. Efron and C.N. Morris (1973a). Stein's estimation rule and its competitors -- an empirical Bayes approach. *J. Amer. Statist. Assoc.* (68), 117-130.

B. Efron and C.N. Morris (1973b). Combining possibly related estimation problems (with discussion). *J. Roy. Statist. Soc. Series B* (35), 379-421.

B. Efron (1986). Why isn't everyone a Bayesian? (with discussion). *American Statistician* (40), 1-11.

B.P. Carlin and T.A. Louis (2000). Empirical Bayes: Past, present and future. *J. Amer. Statist. Assoc.* (95), 1286-1289.