Single-cell transcriptomics is rapidly advancing our understanding of the cellular composition of complex tissues and organisms. A major limitation in most analysis pipelines is the reliance on manual annotation to determine cell identities, a time-consuming and irreproducible process. The exponential growth in the number of cells and samples has prompted the adaptation and development of supervised classification methods for automatic cell identification.
Researchers at Leiden University Medical Center benchmarked 22 classification methods that automatically assign cell identities, including both single-cell-specific and general-purpose classifiers. They evaluated performance on 27 publicly available single-cell RNA sequencing (scRNA-seq) datasets of varying size, technology, species, and complexity. Two experimental setups were used to evaluate each method: predictions within a dataset (intra-dataset) and across datasets (inter-dataset), assessed by accuracy, percentage of unclassified cells, and computation time. They further evaluated each method's sensitivity to the input features and to the number of cells per population, as well as its performance across different annotation levels and datasets. Most classifiers perform well on a variety of datasets, with decreased accuracy on complex datasets with overlapping classes or deep annotations. The general-purpose support vector machine (SVM) classifier shows the best overall performance across the different experiments.
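The two headline evaluation metrics can be sketched in a few lines. This is a minimal illustrative implementation, not the benchmark's own code; the `"Unassigned"` token and all function names are assumptions.

```python
# Sketch of the two headline metrics used in the benchmark: the median
# F1-score across cell populations and the percentage of unclassified cells.
# All names here are illustrative, not taken from the benchmark repository.

def per_population_f1(true_labels, pred_labels, population):
    """One-vs-rest F1-score for a single cell population."""
    tp = sum(1 for t, p in zip(true_labels, pred_labels)
             if t == population and p == population)
    fp = sum(1 for t, p in zip(true_labels, pred_labels)
             if t != population and p == population)
    fn = sum(1 for t, p in zip(true_labels, pred_labels)
             if t == population and p != population)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def median_f1(true_labels, pred_labels):
    """Median of the per-population F1-scores (unassigned cells count against recall)."""
    populations = sorted(set(true_labels))
    scores = sorted(per_population_f1(true_labels, pred_labels, p)
                    for p in populations)
    n, mid = len(scores), len(scores) // 2
    return scores[mid] if n % 2 else (scores[mid - 1] + scores[mid]) / 2

def pct_unlabeled(pred_labels, unassigned_token="Unassigned"):
    """Percentage of cells the classifier declined to label."""
    return 100.0 * sum(p == unassigned_token for p in pred_labels) / len(pred_labels)
```

Using the median rather than the mean makes the summary score robust to a single very small population on which a classifier fails completely.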
Performance comparison of supervised classifiers for cell identification using different scRNA-seq datasets
Heatmap of (a) the median F1-scores and (b) the percentage of unlabeled cells across all cell populations, per classifier (rows) and per dataset (columns). Gray boxes indicate that the corresponding method could not be tested on the corresponding dataset. Classifiers are ordered by the mean of the median F1-scores. An asterisk (*) indicates that the prior-knowledge classifiers SCINA, DigitalCellSorter, Garnett_CV, Garnett_pretrained, and Moana could not be tested on all cell populations of the PBMC datasets. SCINA_DE, Garnett_DE, and DigitalCellSorter_DE are versions of SCINA, Garnett_CV, and DigitalCellSorter in which the marker genes are defined using differential expression on the training data. Different numbers of marker genes (5, 10, 15, and 20) were tested, and the best result is shown here. SCINA, Garnett, and DigitalCellSorter produced their best results on the Zheng sorted dataset using 20, 15, and 5 markers, and on the Zheng 68K dataset using 10, 5, and 5 markers, respectively.
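How the "_DE" variants might derive markers from annotated training data can be sketched as a simple differential-expression ranking: score each gene by the difference between its mean expression inside a population and outside it, then keep the top N (the study tested N = 5, 10, 15, and 20). This is an assumed, simplified procedure; the benchmark's actual selection method may differ, and all names below are illustrative.

```python
# Hypothetical marker selection by a naive differential-expression score.
# expression: dict mapping gene name -> list of per-cell expression values
# labels: per-cell population labels, aligned with the expression lists.

def top_markers(expression, labels, population, n_markers=10):
    """Return the n_markers genes most enriched in the given population."""
    in_idx = [i for i, l in enumerate(labels) if l == population]
    out_idx = [i for i, l in enumerate(labels) if l != population]

    def de_score(gene):
        vals = expression[gene]
        mean_in = sum(vals[i] for i in in_idx) / len(in_idx)
        mean_out = sum(vals[i] for i in out_idx) / len(out_idx)
        return mean_in - mean_out  # crude stand-in for a DE statistic

    ranked = sorted(expression, key=de_score, reverse=True)
    return ranked[:n_markers]
```

The observation that the best N differs per method and per dataset (e.g. 20 vs 5 markers) suggests the marker-list size is a tuning parameter worth sweeping rather than fixing.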
Availability – the code used for the evaluation is available on GitHub (https://github.com/tabdelaal/scRNAseq_Benchmark). Additionally, the researchers provide a Snakemake workflow to facilitate the benchmarking and to support the addition of new methods and new datasets.
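In such a Snakemake workflow, adding a new method typically amounts to writing one rule that maps the shared training/test inputs to a predictions file. The rule below is a hypothetical illustration; the rule, file, and script names are invented and do not come from the repository.

```snakemake
# Hypothetical rule for plugging a new classifier into a benchmarking
# workflow; all paths and names are illustrative, not from the repo.
rule run_my_classifier:
    input:
        train="data/{dataset}/train_expression.csv",
        labels="data/{dataset}/train_labels.csv",
        test="data/{dataset}/test_expression.csv",
    output:
        "results/{dataset}/my_classifier_predictions.csv"
    script:
        "scripts/run_my_classifier.py"
```

The `{dataset}` wildcard lets Snakemake run the same rule across all benchmark datasets, which is what makes extending the comparison to new methods or datasets cheap.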