diff --git a/chapter-prelude/prelude.ipynb b/chapter-prelude/prelude.ipynb
index 5ab4dcbc..65c7d827 100644
--- a/chapter-prelude/prelude.ipynb
+++ b/chapter-prelude/prelude.ipynb
@@ -51,7 +51,7 @@
 "\n",
 "- **Part 2: Modelling approximation uncertainty: Set-based versus distributional representations.** Different statistical approaches for modelling uncertainty are discussed in {ref}`Chapter 4 `.\n",
 "\n",
-"- **Part 3: Machine learning methods for representing uncertainty.** This is the main part of the book, which presents several different machine learning methods that allow for representing a learners uncertainty in a prediction. First, approaches that use classical frequentist statistics for quantifying uncertainty are discussed: Chapters {ref}`5`, {ref}`6` and {ref}`7` discuss how to estimate probabilities via scoring, calibration and ensembles. {ref}`Chapter 8` treat maximum likelihood estimation and the fisher information matrix. {ref}`Chapter 9` discusses generative models. Next, Bayesian approaches for uncertainty quantification are discussed: {ref}`Chapter 10` presents gaussian processes. Chapter {ref}`11` and {ref}`12` describe ensembles of deep neural networks and bayesian neural networks. {ref}`Chapter 13` addresses the concept of credal sets and {ref}`chapter 14` reliable classification. Lastly, the concept of set valued prediction is introduced. Chapter {ref}`15` and {ref}`16` discuss conformal prediction for classification and regression respectively. {ref}`Chapter 17` explains set-valued prediction based on utility maximization.\n",
+"- **Part 3: Machine learning methods for representing uncertainty.** This is the main part of the book, which presents several machine learning methods for representing a learner's uncertainty in a prediction. First, approaches that use classical frequentist statistics for quantifying uncertainty are discussed: Chapters {ref}`5`, {ref}`6` and {ref}`7` discuss how to estimate probabilities via scoring, calibration and ensembles. {ref}`Chapter 8` treats maximum likelihood estimation and the Fisher information matrix. {ref}`Chapter 9` discusses generative models. Next, Bayesian approaches for uncertainty quantification are discussed: {ref}`Chapter 10` presents Gaussian processes. Chapters {ref}`11` and {ref}`12` describe ensembles of deep neural networks and Bayesian neural networks. {ref}`Chapter 13` addresses the concept of credal sets and {ref}`Chapter 14` reliable classification. Lastly, the concept of set-valued prediction is introduced. Chapters {ref}`15` and [16](../chapter-conformel_regression/conformel_regression) discuss conformal prediction for classification and regression, respectively. {ref}`Chapter 17` explains set-valued prediction based on utility maximization.\n",
 "\n",
 ""
 ]