ROCR: visualizing classifier performance in R, with only 3 commands
Sing T, Sander O, Beerenwinkel N, Lengauer T. ROCR: visualizing classifier performance in R. Bioinformatics 2005, 21(20):3940-1.
Free full text: http://bioinformatics.oxfordjournals.org/content/21/20/3940.full
ROCR was originally developed at the Max Planck Institute for Informatics.
ROCR (with obvious pronunciation) is an R package for evaluating and visualizing classifier performance. It is…
Performance measures: accuracy, error rate, true positive rate, false positive rate, true negative rate, false negative rate, sensitivity, specificity, recall, positive predictive value, negative predictive value, precision, fallout, miss, phi correlation coefficient, Matthews correlation coefficient, mutual information, chi-square statistic, odds ratio, lift value, precision/recall F measure, ROC convex hull, area under the ROC curve, precision/recall break-even point, calibration error, mean cross-entropy, root mean squared error, SAR measure, expected cost, explicit cost.
Visualization features: ROC curves, precision/recall plots, lift charts, cost curves, custom curves by freely selecting one performance measure for the x axis and one for the y axis, handling of data from cross-validation or bootstrapping, curve averaging (vertically, horizontally, or by threshold), standard error bars, box plots, curves that are color-coded by cutoff, printing threshold values on the curve, tight integration with R's plotting facilities (making it easy to adjust plots or to combine multiple plots), fully customizable, easy to use (only 3 commands).
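As a brief sketch of how the measures above map onto ROCR's API: each measure is selected by a string passed to performance() (for example "auc" for area under the ROC curve, "acc" for accuracy). The snippet below uses ROCR.simple, an example dataset shipped with the package.

```r
library(ROCR)

# ROCR.simple is bundled with the package: a list with
# numeric $predictions (classifier scores) and binary $labels.
data(ROCR.simple)

pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)

# A scalar measure: area under the ROC curve.
auc <- performance(pred, measure = "auc")
cat("AUC:", unlist(auc@y.values), "\n")

# A cutoff-dependent measure: accuracy. Find the cutoff maximizing it.
acc <- performance(pred, measure = "acc")
best <- which.max(unlist(acc@y.values))
cat("Best accuracy:", unlist(acc@y.values)[best],
    "at cutoff", unlist(acc@x.values)[best], "\n")
```

The @x.values and @y.values slots of a performance object hold the cutoffs and the measure values, respectively, one list element per cross-validation run.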
The most straightforward way to install and use ROCR is to install it from CRAN by starting R and using the install.packages function:

install.packages("ROCR")
Alternatively, you can install it from the command line using the tarball like this:
R CMD INSTALL ROCR_*.tar.gz
or from within R …
Using ROCR’s 3 commands to produce a simple ROC plot:
pred <- prediction(predictions, labels)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, col = rainbow(10))
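For a self-contained version of the three commands, the example below substitutes the bundled ROCR.simple data for the generic predictions and labels, and adds two of the plotting options mentioned above (cutoff color-coding and printed threshold values); the chosen cutoff positions are illustrative.

```r
library(ROCR)
data(ROCR.simple)  # example scores and true labels shipped with ROCR

# 1) bundle classifier scores and true class labels
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)

# 2) compute true positive rate vs. false positive rate (an ROC curve)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")

# 3) plot, color-coded by cutoff, printing a few threshold values
plot(perf, colorize = TRUE, print.cutoffs.at = seq(0.1, 0.9, by = 0.2))
abline(a = 0, b = 1, lty = 2)  # diagonal = random classifier
```

Because plot() here dispatches to ROCR's plot method for performance objects, the usual graphics arguments (lwd, main, and so on) can be passed through as well.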