The workflow currently omits genes detected in fewer than 50 cells, but this is done at the per-capture level. As a result, some genes are removed from certain captures while remaining present in others, which may affect DGE across conditions. Granted, these genes probably don't carry much information, but it would still be safer to filter on the combined dataset so every capture keeps the same gene set.
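For illustration, a minimal sketch of filtering on the combined object instead (assuming the captures are merged into a single `SingleCellExperiment` named `sce`; the object name and threshold here are placeholders, not the package's actual code):

```r
library(SingleCellExperiment)

# Keep genes detected in at least 50 cells across ALL captures,
# so every capture ends up with the same gene set.
keep <- Matrix::rowSums(counts(sce) > 0) >= 50
sce  <- sce[keep, ]
```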
I don't think this is a problem for our pseudobulk workflow, though, since we're running edgeR::filterByExpr anyway, which flags low/no-expression genes across samples for removal before model fitting.
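Roughly what I mean on the pseudobulk side (a sketch, assuming `pb_counts` is a gene-by-sample matrix of summed counts and `group` holds the condition labels; both names are placeholders):

```r
library(edgeR)

# filterByExpr decides which genes have enough counts across samples
# to keep in the model, independent of any per-capture filtering.
y    <- DGEList(counts = pb_counts, group = group)
keep <- filterByExpr(y, group = group)
y    <- y[keep, , keep.lib.sizes = FALSE]
```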
I agree that it shouldn't affect pseudobulk, but I'm not sure that's our de facto workflow yet. I could also see a niche scenario where it informs clustering in some way.
I just implemented it in the package (not pushed yet). Do you see any reason NOT to do it, e.g. information loss?