Statistical Inference Theory — Casella-Berger | UMich STATS 511 | MIT 18.650

This repository contains my lecture notes and solved exercises from Casella & Berger, Statistical Inference (Chapters 6–9), along with supporting materials from MIT 18.650 and UMich STATS 511. The focus is on point estimation, hypothesis testing, and interval estimation — core pillars of statistical inference that underpin modern Machine Learning, AI, and Robotics.

Key Topics Covered

  1. Principles of Data Reduction: Sufficiency, Minimal Sufficiency, Ancillary Statistics, Complete Statistics, and the Likelihood Principle.
  2. Point Estimation: Methods of Moments, Maximum Likelihood, Bayes Estimators, EM Algorithm, Evaluating Estimators, and Fisher Information.
  3. Hypothesis Testing: Neyman-Pearson Lemma, Likelihood Ratio Tests, and Decision Theory.
  4. Interval Estimation: Confidence Intervals, Bayesian Credible Intervals, and Large Sample Approximations.
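As a small worked sketch tying topics 2 and 4 together (the model, data, and numbers here are illustrative assumptions, not examples from the notes): the MLE for an Exponential(λ) rate is the reciprocal of the sample mean, its Fisher information per observation is 1/λ², and a large-sample (Wald) confidence interval follows directly.

```python
# Hypothetical sketch: MLE and a 95% Wald interval for an Exponential(rate)
# model, connecting Maximum Likelihood, Fisher Information, and Large Sample
# Approximations. All values below are illustrative, not from the notes.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 2.0, size=500)  # simulated data, true rate = 2

# MLE for the rate: lambda_hat = 1 / sample mean
n = len(x)
lam_hat = 1 / x.mean()

# Per-observation Fisher information is 1 / lambda^2, so the asymptotic
# standard error of lambda_hat is lambda_hat / sqrt(n).
se = lam_hat / np.sqrt(n)

# 95% Wald interval: lambda_hat +/- z_{0.975} * se
z = 1.96
ci = (lam_hat - z * se, lam_hat + z * se)
print(f"MLE: {lam_hat:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

With 500 observations the interval should be fairly tight around the true rate, illustrating the √n shrinkage of large-sample intervals.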

Repository Structure

Purpose

This work builds a rigorous foundation in Statistical Inference essential for Statistical Machine Learning, Computational Statistics, and Probabilistic Robotics. These concepts form the backbone of Monte Carlo Methods, Bayesian Filtering, and State Estimation — critical for modern control systems and AI-driven robots.
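As one concrete instance of the Bayesian Filtering / State Estimation connection mentioned above, here is a minimal 1-D Kalman filter sketch. The random-walk model and noise variances are illustrative assumptions chosen for the demo, not a specific system from the coursework.

```python
# Hypothetical sketch: Bayesian filtering for state estimation via a 1-D
# Kalman filter tracking a scalar random-walk state from noisy measurements.
# Model and noise levels (q, r) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
q, r = 0.01, 0.25  # process and measurement noise variances (assumed)

true_x = np.cumsum(rng.normal(0, np.sqrt(q), size=50))  # random-walk state
z = true_x + rng.normal(0, np.sqrt(r), size=50)         # noisy measurements

x_hat, p = 0.0, 1.0  # prior mean and variance
for zk in z:
    p = p + q                    # predict: uncertainty grows with the walk
    k = p / (p + r)              # Kalman gain balances prior vs. measurement
    x_hat = x_hat + k * (zk - x_hat)  # update: correct toward measurement
    p = (1 - k) * p              # posterior variance shrinks after update
```

The posterior variance `p` settles well below the raw measurement variance `r`, which is exactly the payoff of recursive Bayesian updating over trusting any single measurement.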

By mastering these principles, I aim to unify theory and implementation in robotics and autonomous systems, developing probabilistic models and decision algorithms grounded in mathematical clarity.

About Me

I’m focused on building deep mathematical and statistical foundations for Robotics, AI, and State Estimation.

Contact:
📧 sampath@umich.edu
🔗 LinkedIn Profile
🧠 View GitHub Repository