
GEX-Based Segmentation "Misalignment" #40

Open
dl-mallory opened this issue Dec 23, 2024 · 3 comments
@dl-mallory
Hi!

Thank you so much for the wonderful package. I'm just getting started with the package and with Visium HD analysis itself, picking up tidbits here and there. However, I'm running into some weirdness with the GEX-based segmentation specifically, which could just be due to poor alignment or poorly prepared libraries.

Take, for instance, the following ROI:

[image: test_fig]

The H&E-based nuclei identification looks good, but the GEX-based fluorescence-style identification seems A) misaligned and B) scaled incorrectly. I checked my mpp manually in QuPath, based on the H&E image that was used as input to Space Ranger.

Based on the Space Ranger output summary, I visually checked the alignment between the H&E and the CytAssist image and it looked perfect; the alignment between the total UMI count and the tissue image, at least for the default 8 µm binned Space Ranger output, looked fine as well. I ran Space Ranger both with and without exporting a manual alignment .json from Loupe Browser, and the alignment didn't improve.

After attempting to bin_to_cell, it looks like the labels are still being thrown off by the GEX misalignment:

[image: plot_joint_labels]

Do you perhaps have any idea what the origin of this alignment issue is? Intuitively it feels like a count-matrix/H&E image misalignment, so maybe the issue stems from the grid_image I'm generating from the count matrix? Happy to elaborate, and apologies if the question is poorly worded!

@dl-mallory
Author

Also, to clarify, I'm running

bint.grid_image(samples, 'n_counts_adjusted', sigma=5, mpp=0.2506, save_path=sample_out_path + '/gex.tiff')
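As an aside on why the mpp value matters here: it sets how micron-scale coordinates are rescaled into pixel space, so even a small mpp error produces a position/scale offset that grows with distance from the image origin. A minimal arithmetic sketch (plain Python with made-up values, not bin2cell's internals; the "true" mpp of 0.2500 is purely hypothetical):

```python
# Hypothetical illustration: how a small mpp error translates into a
# pixel offset in the rendered GEX image (all values are made up).
true_mpp = 0.2500      # microns per pixel actually matching the H&E image
used_mpp = 0.2506      # mpp measured manually, e.g. in QuPath

distance_um = 10_000   # a feature 10 mm from the image origin, in microns

# Converting the same micron coordinate to pixels with each mpp:
px_true = distance_um / true_mpp
px_used = distance_um / used_mpp

offset_px = px_true - px_used
print(round(offset_px))  # ~96: a ~0.2% mpp error already shifts this feature by ~96 px
```

This is why a segmentation can look fine near one corner of the image and drift progressively toward the opposite corner when the mpp is slightly off.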

@ktpolanski
Contributor

The H&E segmentation will almost always look perfectly aligned, as it's visualising the segmentation of a morphology image back on the same image it's based on. The GEX segmentation makes use of the same spatial coordinate system on the H&E for visualisation, but the segmentation works on the actual expression values from the file. As such, if you've got some noise in the data, you can get objects called on stray bits of RNA signal where they shouldn't be.

I'd mainly recommend following the tutorial notebook and taking a look at the count totals via sc.pl.spatial(). You can also double-check the actual GEX segmentation, as also shown in the tutorial. To my untrained eye, from the binary presence/absence of the GEX objects alone, it seems like you've got a slight misalignment at the Space Ranger output level. Bin2cell does not mess around with the GEX/H&E alignment; the closest it comes is cropping the morphology image and spatial coordinates to limit the segmentation search space. The lack of impact of this can be visualised by doing a sc.pl.spatial() with the default image and spatial coordinates, as also shown in the tutorial.

Also, that is an oddly particular mpp you've got.

@dl-mallory
Author

dl-mallory commented Dec 24, 2024

Yeah, I figured that, since the deep-learning-based H&E nuclei identification is invariant to any CytAssist/H&E alignment and really only evaluates StarDist itself, this is more along the lines of a Space Ranger misalignment.

I tried playing around with a few alignments in Loupe and it seems like I can find a pretty good one. But there is always the visual 'artifact' of the resolution difference between the H&E and CytAssist images, which makes me think there is an issue. I'm trying another run or two of Space Ranger and playing with the outputs to see if the alignment can be improved. That said, I think the sample quality was pretty low in and of itself.

I'm new to spatial analysis, so I've got some learning to do. I'll play around more with this and see what I can find!
