
1.5.2 Minimizing the Expected Loss - Pattern Recognition and Machine Learning

Apr 3, 2024 · 23:26

We consider how to make optimal decisions when different types of errors have different costs. We introduce the notion of the loss function, or the loss matrix when working with discrete classes, to capture these different costs. We define the expected loss and discuss how it can be minimized pointwise by assigning each data point to the class for which it incurs the lowest average loss. We conclude by showing the loss matrix for which minimizing the expected loss reduces to the minimum-misclassification-rate rule discussed in the previous section, and we explicitly derive the intuitively clear result that, when we are certain about the class of a data point, the expected loss is minimized by assigning it to that class.
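The pointwise rule described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the video: the two-class loss matrix and its cost values are hypothetical, chosen to mimic the asymmetric-cost setting (e.g. a missed serious diagnosis costing far more than a false alarm).

```python
import numpy as np

# Hypothetical loss matrix (illustrative values, not from the lecture):
# loss[k, j] = cost of deciding class j when the true class is k.
loss = np.array([[0.0,    1.0],   # true class 0: a wrong decision costs 1
                 [1000.0, 0.0]])  # true class 1: missing it costs 1000

def decide(posteriors, loss):
    """Choose the class j that minimizes the expected loss
    sum_k loss[k, j] * p(C_k | x), pointwise for each input."""
    expected = posteriors @ loss   # expected loss of each possible decision
    return int(np.argmin(expected, axis=-1))

posterior = np.array([0.95, 0.05])  # fairly confident the class is 0
print(decide(posterior, loss))      # asymmetric costs -> decide class 1

# With the 0-1 loss matrix (0 on the diagonal, 1 elsewhere), minimizing
# the expected loss picks the class with the largest posterior, i.e. the
# minimum-misclassification-rate rule from the previous section.
zero_one = 1.0 - np.eye(2)
print(decide(posterior, zero_one))  # -> decides class 0
```

Note how the same posterior yields different decisions under the two loss matrices: the large off-diagonal cost makes it rational to flag class 1 even at 5% probability, while the 0-1 loss simply takes the most probable class.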

