Loss Function
- used in Machine Learning
- Also known as Energy Function
- Used to evaluate a model based on some data and the ground truth
- This is used to optimize the model during training
- Mathematically it is defined as a Norm function that measures the distance between the model's prediction and the ground truth.
- Like the different kinds of Norm functions, there are also different Loss functions (here the loss between an individual data point and the prediction; see the code sketch after this list):
- L1-Loss: $|y_i - \hat{y}_i|$
- L2-Loss: $(y_i - \hat{y}_i)^2$
- The loss function can also be computed for the whole model (all data points).
- Normalized over the number of data points, this gives the mean loss:
- L1-Loss (mean absolute error): $\mathcal{L}_{L1} = \frac{1}{N} \sum_{i=1}^{N} |y_i - \hat{y}_i|$
- MSE-Loss (“mean squared error”): $\mathcal{L}_{MSE} = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$
- Binary Cross Entropy Loss: used in Logistic Regression. $\mathcal{L}_{BCE} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]$
- General Cross Entropy Loss: for $C$ different labels/classes. $\mathcal{L}_{CE} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{C} y_{i,c} \log \hat{y}_{i,c}$
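
A minimal sketch of the losses listed above, assuming NumPy arrays for the ground truth `y` and the prediction `y_hat` (the function names are only illustrative):

```python
import numpy as np

# Per data point
def l1_loss(y, y_hat):
    return np.abs(y - y_hat)              # absolute error

def l2_loss(y, y_hat):
    return (y - y_hat) ** 2               # squared error

# Over all N data points (mean losses)
def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))     # mean L1 loss

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)      # mean squared error

def binary_cross_entropy(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def cross_entropy(y, y_hat, eps=1e-12):
    # y and y_hat have shape (N, C): one row of class probabilities per data point
    y_hat = np.clip(y_hat, eps, 1.0)
    return -np.mean(np.sum(y * np.log(y_hat), axis=1))
```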
Example
We want to classify images using 3 labels: “Cat”, “Dog”, “Deer”. The neural
network will output a vector of probabilities for each image. For a given
image the network would for example output $\hat{y} = (0.7, 0.2, 0.1)$. This
is then compared with the ground truth, so something like $y = (1, 0, 0)$
if the image was a cat. We can then compute the loss function to measure how
good/bad the prediction was.
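
A short sketch of that comparison, using the illustrative probabilities from above and the general cross-entropy loss:

```python
import numpy as np

labels = ["Cat", "Dog", "Deer"]
y_hat = np.array([0.7, 0.2, 0.1])  # network output: one probability per label (illustrative values)
y = np.array([1.0, 0.0, 0.0])      # one-hot ground truth: the image is a cat

# Cross entropy for this single image: -sum_c y_c * log(y_hat_c)
loss = -np.sum(y * np.log(y_hat))
print(loss)  # ~0.357, fairly small because most probability mass is on "Cat"
```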