Course Code:             MATH 3465

Course Title:                Statistical Inference

Course Type:               Core

Level:                           3

Semester:                     2

No. of Credits:               3

Pre-requisite(s):           MATH 2275 (Statistics I) and MATH 2270 (Multivariable Calculus)

 

Course Rationale

 

The course is aimed at students whose future careers will involve heavy use of statistical methods, and it covers the mathematical statistical inference required at the undergraduate level. It will benefit future statisticians, economists, geneticists, and mathematicians with an interest in statistics. The course has two main goals:

  • To give students a firm and comprehensive knowledge of basic Frequentist and Bayesian Inference.
  • To introduce the main ideas of non-parametric inference based on the empirical distribution function.

The course provides the mathematical statistical foundation for more advanced courses in the statistics programmes of the Department of Mathematics and Statistics. Exposure to this course develops students' ability to become good problem solvers, creating the opportunity for our graduates to become critical and creative thinkers, one of the key attributes of our graduates.

 

Course Description

 

This is a second course in statistical theory and may be thought of as a direct continuation of the introductory second-year course Statistics I. It exposes students to both classical and Bayesian inference, which they would not have encountered in Statistics I. While Statistics I gives a relatively broad, non-theoretical approach to statistics, this course completes the undergraduate statistical theory so that students can understand the underlying concepts in a more precise mathematical setting.

 

The course consists of three fairly distinct modules: frequentist inference, Bayesian inference, and non-parametric methods. We continue the discussion of classical inference begun in MATH 2275; likelihood techniques are applied to a wide range of models. There is a fairly detailed discussion of unbiasedness and sufficiency, and UMP and likelihood ratio tests are discussed. For Bayesian inference, we introduce the ideas of subjective probability, prior and posterior distributions, and the basics of Bayesian estimation and testing. The short section on non-parametric methods introduces the empirical distribution function and tests based on it, together with a brief introduction to inference on censored data and to the bootstrap.

 

All lectures, assignments, handouts, and review materials are available to all students online through myeLearning. Blended learning techniques will be employed: lectures will be supplemented with laboratory work and group discussions.

 

Assessment is designed to encourage students to work continuously with the course materials. Active learning will be achieved through weekly assignments and problem sheets, allowing continuous feedback and guidance on problem-solving techniques in tutorials and lectures. Assessment will be based on the weekly assignments and in-course tests, followed by a final examination covering the whole course.

 

Learning Outcomes

 

Upon successful completion of the course, students will be able to:

  • Derive maximum likelihood estimates of parameters from the main distributions and in linear models and state the main properties of maximum likelihood estimators.
  • Calculate the Fisher Information in an observation and determine whether an estimator is UMVUE (the Cramér-Rao bound is restated for reference after this list).
  • Determine whether a statistic is sufficient.
  • Explain the importance of sufficient statistics.
  • Derive best tests of a simple null versus a simple alternative.
  • Define a UMP test and show that a given test is UMP.
  • Derive the likelihood ratio tests of parameters in various linear models.
  • Explain the concepts of prior and posterior distributions.
  • Calculate the posterior distribution of a parameter.
  • Find the Bayes estimate of a parameter.
  • Test for equality of two distributions.
  • Calculate the Kaplan-Meier Estimate of a survival function.
  • Find bootstrap estimates of parameters in simple situations.
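
For reference (a standard statement rather than additional syllabus content): under the usual regularity conditions, for an unbiased estimator T of a parameter θ based on n i.i.d. observations with density f(x; θ), the Cramér-Rao bound referred to above reads

    \operatorname{Var}_\theta(T) \;\ge\; \frac{1}{n\, I(\theta)},
    \qquad
    I(\theta) = \mathrm{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\, \log f(X;\theta)\right)^{2}\right].

An unbiased estimator whose variance attains this bound is necessarily UMVUE, although a UMVUE need not attain the bound.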

 

Content

 

  • Method of moments estimators. Maximum likelihood estimators of parameters in various one-parameter and multi-parameter distributions. MLEs of parameters in linear models. Desirable properties of MLEs (a minimal R sketch of numerical maximum likelihood follows this list).
  • Unbiased estimators. Fisher Information and the Cramér-Rao Inequality, including proofs.
  • Sufficiency and joint sufficiency. The Fisher Factorization Criterion. The Rao-Blackwell theorem with proof.
  • The Neyman-Pearson theory of hypothesis testing. The Neyman-Pearson theorem with proof. Uniformly most powerful tests. Tests with monotone likelihood ratio.
  • Likelihood Ratio Tests. Derivation of the usual F-tests in linear models as likelihood ratio tests.
  • Subjective probability. Prior and posterior distributions of parameters. Conjugate priors. Bayes estimators. Bayes tests (a conjugate Beta-Binomial sketch follows this list).
  • The Empirical Distribution Function.
  • Kolmogorov-Smirnov Tests (an R sketch of the EDF, the KS test, and the bootstrap follows this list).
  • Censored data and the Kaplan-Meier Estimator.
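
To make the maximum likelihood bullet concrete, here is a minimal R sketch of numerical likelihood maximisation. The Gamma model, the simulated data, and the sample size are illustrative assumptions, not prescribed course material.

    # Illustrative only: ML estimation of a Gamma(shape, rate) model on simulated data.
    set.seed(1)
    x <- rgamma(200, shape = 3, rate = 2)          # stand-in for observed data

    # Negative log-likelihood as a function of the parameter vector (shape, rate).
    negloglik <- function(par) {
      -sum(dgamma(x, shape = par[1], rate = par[2], log = TRUE))
    }

    # Minimise -log L, keeping both parameters positive.
    fit <- optim(c(1, 1), negloglik, method = "L-BFGS-B",
                 lower = c(1e-6, 1e-6), hessian = TRUE)
    fit$par             # MLEs of (shape, rate)
    solve(fit$hessian)  # approximate covariance via the observed information

Inverting the observed information, as in the last line, is one way to illustrate the large-sample properties of MLEs mentioned in the first bullet.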
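
For the Bayesian bullet, a minimal sketch of conjugate updating: with a Beta(a, b) prior on a binomial proportion, the posterior after s successes in n trials is Beta(a + s, b + n - s). The hyperparameters and data below are invented for illustration.

    # Illustrative only: conjugate Beta prior for a binomial proportion.
    a <- 2; b <- 2                           # prior hyperparameters (assumed)
    n <- 40; s <- 12                         # invented data: s successes in n trials
    post_a <- a + s
    post_b <- b + n - s
    post_a / (post_a + post_b)               # Bayes estimate under squared-error loss
    qbeta(c(0.025, 0.975), post_a, post_b)   # 95% equal-tailed credible interval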
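
For the non-parametric bullets, a brief R sketch of the empirical distribution function, a one-sample Kolmogorov-Smirnov test, and a basic bootstrap. Again, the data are simulated and the details are illustrative rather than prescribed.

    # Illustrative only: EDF, one-sample KS test, and a nonparametric bootstrap.
    set.seed(2)
    x <- rexp(100, rate = 1)       # stand-in for observed data

    Fn <- ecdf(x)                  # the empirical distribution function
    Fn(1)                          # proportion of observations <= 1

    ks.test(x, "pexp", rate = 1)   # KS test of H0: the data are Exp(1)

    # Bootstrap standard error of the sample median (B resamples with replacement).
    B <- 2000
    meds <- replicate(B, median(sample(x, replace = TRUE)))
    sd(meds)

    # (For censored data, survfit() in the survival package computes the
    # Kaplan-Meier estimate of the survival function.)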

 

Teaching Methodology

Lectures, tutorials, assignments and problem papers.

Lectures: Two lectures per week.

Labs: One two-hour computer lab per week.

 

Assignments: One assignment (marked) per week.

 

Additional problems will be given during lectures and tutorials but will not be marked. However, students will need to do some of these, as well as the assignments, in order to learn the material properly and to adequately prepare for examinations and quizzes.

 

Assessment

Coursework Mark: 50%, based on three in-term examinations worth 12% each (36% in total) and assignments/lab work weighted at 14%.

Final Examination: 50% (one 2-hour written paper).

 

Course Calendar

 

Week 1: Introduction and course overview. Method of moments estimators. Maximum likelihood estimation. Maximum likelihood estimators of parameters in linear models.
Assignments: Assignment 1 set.

Week 2: Maximum likelihood estimators of parameters in linear models (continued). Desirable properties of MLEs.
Assignments: Assignment 2 set; Assignment 1 returned.

Week 3: Sufficiency and joint sufficiency. The Fisher factorization criterion. The Rao-Blackwell theorem. Exponential families.
Assignments: Assignment 3 set; Assignment 2 returned.

Week 4: Unbiased estimators. Fisher Information and the Cramér-Rao inequality with proof.
Assignments: Assignment 4 set; Assignment 3 returned.

Week 5: The Neyman-Pearson lemma: statement, examples, and proof.
Assignments: Assignment 5 set; Assignment 4 returned.

Week 6: Uniformly most powerful tests. Tests with monotone likelihood ratios. Likelihood ratio tests.
Assignments: Assignment 6 set; Assignment 5 returned.
Tests: Test 1, on material from weeks 1-4.

Week 7: Likelihood ratio tests of parameters in linear models.
Assignments: Assignment 7 set; Assignment 6 returned.

Week 8: Prior and posterior distributions. Conjugate priors. Bayes estimators.
Assignments: Assignment 8 set; Assignment 7 returned.

Week 9: Testing in the Bayesian framework.
Assignments: Assignment 9 set; Assignment 8 returned.
Tests: Test 2, on material from weeks 5-7.

Week 10: The empirical distribution function. One-sample and two-sample Kolmogorov-Smirnov tests.
Assignments: Assignment 10 set; Assignment 9 returned.

Week 11: The bootstrap.
Assignments: Assignment 11 set; Assignment 10 returned.

Week 12: The bootstrap (continued).
Assignments: Assignment 12 set; Assignment 11 returned.
Tests: Test 3, on material from weeks 8-9.

Week 13: Revision.
Assignments: Assignment 12 returned.

 

 

Reference Material

Prescribed Text

Statistical Inference by Casella and Berger, 2nd edition (Duxbury, 2002).

 

 

Other Recommended Texts

Introduction to Mathematical Statistics by Hogg and Craig (Prentice Hall).

Probability and Statistics, 3rd edition, by DeGroot and Schervish (Addison-Wesley, 2002).

 

 

Software

Some of the assigned exercises will require the use of the R, Minitab, and SPSS statistical packages. R is free statistical software that students can download and use on their personal computers. Minitab is free for UWI students, and SPSS can be accessed in the computer labs.