How it works...

In this recipe, we used the classification_report() function from the scikit-learn library to extract a performance report. This function builds a text report showing the main classification metrics: a summary of the precision, recall, and F1 score for each class. Using the terms introduced with the confusion matrix in the previous recipe, these metrics are calculated as follows:

  • The precision is the ratio tp / (tp + fp), where tp is the number of true positives and fp is the number of false positives. Intuitively, precision measures the classifier's ability not to label a negative sample as positive.
  • The recall is the ratio tp / (tp + fn), where tp is the number of true positives and fn is the number of false negatives. Intuitively, recall measures the classifier's ability to find all the positive samples.
  • The F1 score is the harmonic mean of precision and recall, that is, F1 = 2 * (precision * recall) / (precision + recall). Like any F-beta score, it reaches its best value at 1 and its worst at 0 (see the sketch after this list).
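To see these metrics in practice, here is a minimal sketch of calling classification_report(); the y_true and y_pred arrays are hypothetical labels standing in for the recipe's own test labels and predictions, and the manual calculations at the end mirror the formulas above for the positive class:

    from sklearn.metrics import classification_report

    # Hypothetical ground-truth and predicted labels (stand-ins for the
    # recipe's own test labels and model predictions)
    y_true = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [0, 1, 0, 0, 1, 1, 1, 0, 1, 0]

    # Build the text report with precision, recall, and F1 per class
    print(classification_report(y_true, y_pred,
                                target_names=["negative", "positive"]))

    # The same numbers from the formulas above, for the positive class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    precision = tp / (tp + fp)  # 4 / (4 + 1) = 0.80
    recall = tp / (tp + fn)     # 4 / (4 + 1) = 0.80
    f1 = 2 * precision * recall / (precision + recall)  # 0.80

The hand-computed precision, recall, and F1 values match the positive-class row of the printed report, which is a quick way to confirm your reading of the confusion matrix terms.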