PopV uses a popular vote across a variety of cell-type transfer tools to classify cell types in a query dataset based on an annotated reference dataset. Using this variety of algorithms, we compute the agreement between their predictions and use this agreement to predict which cell types in the query are, with high likelihood, the same cell types observed in the reference.
Currently implemented algorithms are:
- K-nearest neighbor classification after dataset integration with BBKNN
- K-nearest neighbor classification after dataset integration with SCANORAMA
- K-nearest neighbor classification after dataset integration with scVI
- K-nearest neighbor classification after dataset integration with Harmony
- Random forest classification
- Support vector machine classification
- XGBoost classification
- OnClass cell type classification
- scANVI label transfer
- CellTypist cell type classification
All algorithms are implemented as classes in `popv/algorithms`.
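These classes share a common pattern: train on the annotated reference, then transfer labels to the query. The sketch below is purely illustrative and does not reproduce the actual class or method names in `popv/algorithms`; `ReferenceKNN`, `fit`, and `predict` are hypothetical.

```python
# Illustrative only: a hypothetical classifier wrapper in the spirit of the
# classes in popv/algorithms. Class and method names are not the real popv API.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


class ReferenceKNN:
    """Hypothetical example: k-NN label transfer in a shared embedding space."""

    def __init__(self, n_neighbors: int = 15):
        self.clf = KNeighborsClassifier(n_neighbors=n_neighbors)

    def fit(self, ref_embedding: np.ndarray, ref_labels: np.ndarray) -> "ReferenceKNN":
        # Train on reference cells embedded in a joint latent space.
        self.clf.fit(ref_embedding, ref_labels)
        return self

    def predict(self, query_embedding: np.ndarray) -> np.ndarray:
        # Transfer labels to query cells embedded in the same space.
        return self.clf.predict(query_embedding)
```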
All algorithms that allow for pre-training are pre-trained. By design, this excludes BBKNN, Harmony, and SCANORAMA, as each constructs a new embedding space. To provide pretrained methods for BBKNN and Harmony, we use a nearest-neighbor index in PCA space and position query cells at the average position of their 5 nearest neighbors.
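As an illustration of this step, the sketch below places each query cell at the mean PCA coordinates of its 5 nearest reference neighbors. It assumes the query has already been projected into the reference PCA space and is not the exact popv implementation.

```python
# Illustrative sketch: position query cells at the average PCA coordinates of
# their 5 nearest reference neighbors (not the exact popv implementation).
import numpy as np
from sklearn.neighbors import NearestNeighbors


def embed_query_in_reference_pca(ref_pca: np.ndarray, query_pca: np.ndarray, k: int = 5) -> np.ndarray:
    # Build a nearest-neighbor index over the reference cells in PCA space.
    index = NearestNeighbors(n_neighbors=k).fit(ref_pca)
    # For each query cell, find its k nearest reference cells ...
    _, neighbor_idx = index.kneighbors(query_pca)
    # ... and place the query cell at their average position.
    return ref_pca[neighbor_idx].mean(axis=1)
```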
Pretrained models are stored on HuggingFace.
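To fetch pretrained models locally, one option is `huggingface_hub`; the repository id below is a placeholder, not the actual model repository name.

```python
# Sketch: download pretrained PopV models with huggingface_hub.
# The repo_id below is a placeholder; use the repository listed in the PopV docs.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(repo_id="<org>/<popv-pretrained-model-repo>")
print("Models downloaded to:", model_dir)
```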
PopV has three levels of prediction complexity (see the sketch after this list):
- retrain: Trains all classifiers from scratch. For 50k cells, this takes up to an hour of compute time using a GPU.
- inference: Uses pretrained classifiers to annotate query and reference cells and construct a joint embedding using all integration methods. For 50k cells, this takes up to half an hour of GPU time.
- fast: Uses only methods with pretrained classifiers to annotate only query cells. For 50k cells, this takes 5 minutes without a GPU (without UMAP embedding).
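A hedged sketch of selecting the mode, based on the PopV tutorials: the mode is chosen when the query is processed. Parameter names may differ between PopV versions, and the file paths are placeholders.

```python
# Hedged sketch based on the PopV tutorials: the prediction mode is chosen when
# the query is processed. Parameter names may differ between PopV versions, and
# the file paths below are placeholders.
import scanpy as sc
from popv.preprocessing import Process_Query

query_adata = sc.read_h5ad("query.h5ad")    # your query dataset (placeholder path)
ref_adata = sc.read_h5ad("reference.h5ad")  # annotated reference (placeholder path)

processed = Process_Query(
    query_adata,
    ref_adata,
    ref_labels_key="cell_ontology_class",  # reference annotation column (assumption)
    prediction_mode="fast",                # "retrain", "inference", or "fast"
)
```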
PopV will output a cell-type classification for each of the classifiers used, as well as the majority vote across all classifiers. Additionally, PopV uses the ontology to traverse the full set of ontology descendants for the OnClass prediction (disabled in fast mode). This method will be described in more detail when PopV is published. PopV also outputs a score that counts the number of classifiers agreeing with the PopV prediction. This score can be interpreted as the certainty that the prediction is correct for every single cell in the query data.
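The idea behind the consensus prediction and the agreement score can be illustrated with a small, self-contained example (this is not popv's internal implementation; classifier names are made up):

```python
# Illustrative only: majority vote across classifiers and an agreement score
# counting how many classifiers support the consensus (not popv's internal code).
import pandas as pd

predictions = pd.DataFrame(
    {
        "knn_scvi": ["T cell", "B cell", "NK cell"],
        "svm": ["T cell", "B cell", "T cell"],
        "onclass": ["T cell", "monocyte", "NK cell"],
    }
)

# Majority vote per cell across all classifiers.
consensus = predictions.mode(axis=1)[0]
# Agreement score: number of classifiers that match the consensus label.
agreement = predictions.eq(consensus, axis=0).sum(axis=1)
print(pd.DataFrame({"consensus": consensus, "n_agree": agreement}))
```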
We found that disagreement of a single expert is still highly reliable, while disagreement of more than two classifiers signifies less reliable results. The aim of PopV is not to fully annotate a dataset but to highlight cells that may require further manual annotation. PopV also outputs UMAP embeddings of all integrated latent spaces if `popv.settings.compute_embedding == True` and computes certainties for every classifier used if `popv.settings.return_probabilities == True`.
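Both flags live on the global `popv.settings` object and are set before running the workflow, for example:

```python
import popv

# Compute UMAP embeddings of all integrated latent spaces.
popv.settings.compute_embedding = True
# Return per-classifier certainties alongside the label predictions.
popv.settings.return_probabilities = True
```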
We suggest using a package manager like conda or mamba to install the package. OnClass files for annotation based on Tabula Sapiens are deposited in `popv/resources/ontology`. We use the Cell Ontology throughout our experiments. PopV will automatically look for the ontology in this folder. If you want to provide a user-edited ontology, our tutorials demonstrate how to generate the Natural Language Model used in OnClass for this user-defined ontology.
```bash
conda create -n yourenv python=3.12
conda activate yourenv
pip install popv
```
We provide an example notebook in Google Colab:
This notebook will guide you through annotating a dataset based on the annotated Tabula Sapiens reference and demonstrates how to run annotation on your own query dataset. It requires that all cells are annotated based on a cell ontology. We strongly encourage the use of a common cell ontology; see also Osumi-Sutherland et al. Using a cell ontology is a requirement for running OnClass as a prediction algorithm. Setting the ontology option to False disables this step and allows running PopV without a cell ontology.
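As a rough outline of that workflow (a hedged sketch, not a substitute for the notebook; the `.obs` column names below are assumptions and may differ from the actual output keys):

```python
# Hedged sketch of the annotation step that follows Process_Query (see the
# earlier sketch); the .obs column names below are assumptions - consult the
# tutorial notebook for the exact output keys written by each classifier.
from popv.annotation import annotate_data

annotate_data(processed.adata)

# The consensus label and the number of agreeing classifiers end up in .obs
# (hypothetical key names shown here).
print(processed.adata.obs[["popv_prediction", "popv_prediction_score"]].head())
```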