Speed up RenderTransformMeshMappingWithMasks #199
Conversation
Use FastLinearIntensityMap. For now, only when the input is a ByteProcessor. FastLinearIntensityMap should work the same for other types; if necessary, just add if branches with wrappers for the other ImageProcessor variants...
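A minimal, self-contained sketch of the "add if branches with wrappers" idea. All class and method names here are hypothetical stand-ins, not the actual render or ImageJ API: the point is only to show dispatch on the concrete processor type, with each variant wrapped behind a common pixel accessor so the fast path can serve more than ByteProcessor.

```java
public class DispatchSketch {
    // Common accessor that the fast intensity-mapping path would consume.
    interface PixelSource { double get(int i); }

    // Hypothetical stand-ins for two ImageProcessor variants.
    static class BytePixels { byte[] pixels = { 10, 20, 30 }; }
    static class FloatPixels { float[] pixels = { 1.5f, 2.5f }; }

    /** One if branch per variant, each wrapped to the common interface. */
    static PixelSource wrap(Object processor) {
        if (processor instanceof BytePixels) {
            BytePixels p = (BytePixels) processor;
            return i -> p.pixels[i] & 0xff; // unsigned byte semantics
        } else if (processor instanceof FloatPixels) {
            FloatPixels p = (FloatPixels) processor;
            return i -> p.pixels[i];
        }
        throw new IllegalArgumentException("unsupported processor type");
    }

    public static void main(String[] args) {
        System.out.println(wrap(new BytePixels()).get(1));   // 20.0
        System.out.println(wrap(new FloatPixels()).get(0));  // 1.5
    }
}
```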
... I'm looking at the failing tests ...
Force-pushed from bb5a65a to 881edd0.
For following up on this: I guess in the "real runs" we would use something else... with masks? multiple channels? Which is/are the important cases?
And also for following up: Another possible improvement would be to try to reduce overdraw. Some target pixels are computed multiple times from different overlapping sources, and only the "top-most" source value is used in the end. This means that some transformed interpolated pixel values are computed unnecessarily. This could be reduced by splitting the target into smaller tiles and checking front-to-back whether a tile is completely covered by source triangles. It depends on how much overdraw is actually there. I don't have a good intuition about that...
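The tile idea above can be sketched with the simplest building block: deciding whether one rectangular tile is fully covered by a single (convex) triangle, in which case all four tile corners lie inside it. This is only an illustrative sketch, not code from the PR; a real front-to-back pass would also have to handle coverage by unions of several triangles.

```java
public class TileCoverageSketch {
    // 2D cross product / orientation test for point (cx, cy) against edge (a -> b).
    static double cross(double ax, double ay, double bx, double by, double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // Point-in-triangle: inside iff the point is on the same side of all three edges.
    // Triangle is given as { x0, y0, x1, y1, x2, y2 }.
    static boolean inside(double[] t, double px, double py) {
        double d1 = cross(t[0], t[1], t[2], t[3], px, py);
        double d2 = cross(t[2], t[3], t[4], t[5], px, py);
        double d3 = cross(t[4], t[5], t[0], t[1], px, py);
        boolean hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
        boolean hasPos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(hasNeg && hasPos);
    }

    /** A rectangular tile is fully covered by one convex triangle iff all four corners are inside. */
    static boolean tileCovered(double[] tri, double x0, double y0, double x1, double y1) {
        return inside(tri, x0, y0) && inside(tri, x1, y0)
            && inside(tri, x0, y1) && inside(tri, x1, y1);
    }

    public static void main(String[] args) {
        double[] tri = { 0, 0, 100, 0, 0, 100 };              // right triangle below x + y = 100
        System.out.println(tileCovered(tri, 10, 10, 20, 20)); // true: tile well inside
        System.out.println(tileCovered(tri, 60, 60, 70, 70)); // false: crosses the hypotenuse
    }
}
```

A fully covered tile could be skipped for all sources behind the covering one, which is where the overdraw saving would come from.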
@tpietzsch - I'm pretty sure you understand this from our discussion earlier today, but I figured it would not hurt to post the following example of different tile overlaps in our multi-SEM volumes:
Thanks for tackling this at that level of detail, @tpietzsch! After reviewing your changes, I agree that none of what you detected constitutes a real error. As discussed offline, it's fine to just adjust the PNG used in the test to the current values. I'm also happy to discuss and implement more general/robust tests that don't rely on a hard-coded PNG. I also agree that the edge cases you sketched out above are not ideal. However:
(Note that this is just one new commit on top of #198)
Avoid doing per-pixel triangle containment checks and affine transforms:
- Per target line, pre-compute the start and end of the intersection with the triangle.
- Transform only the start coordinate to source space, then step through the target pixels using the difference vector of the affine transform.
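The second point above can be sketched as follows. This is a hedged illustration, not the PR's actual implementation: for an affine transform, stepping one pixel along a target scanline moves the source coordinate by a constant difference vector (the first matrix column), so only the first pixel of each span needs the full transform.

```java
public class ScanlineAffineSketch {
    // Affine as { m00, m01, m02, m10, m11, m12 }:
    // (sx, sy) = (m00*x + m01*y + m02, m10*x + m11*y + m12)
    static double[] apply(double[] m, double x, double y) {
        return new double[] { m[0]*x + m[1]*y + m[2], m[3]*x + m[4]*y + m[5] };
    }

    /** Transform target pixels [xStart, xEnd] on row y, with one full transform per span. */
    static double[][] transformSpan(double[] m, int y, int xStart, int xEnd) {
        double[][] out = new double[xEnd - xStart + 1][];
        // Full affine transform once, at the start of the span...
        double sx = m[0]*xStart + m[1]*y + m[2];
        double sy = m[3]*xStart + m[4]*y + m[5];
        // ...then step with the constant per-pixel difference vector.
        double dx = m[0], dy = m[3];
        for (int i = 0; i < out.length; ++i) {
            out[i] = new double[] { sx, sy };
            sx += dx;
            sy += dy;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] m = { 1.5, 0.2, 3.0, -0.1, 0.9, 7.0 };
        double[][] span = transformSpan(m, 10, 4, 8);
        // Verify the incremental stepping against the full per-pixel transform.
        for (int x = 4; x <= 8; ++x) {
            double[] expected = apply(m, x, 10);
            double[] got = span[x - 4];
            if (Math.abs(expected[0] - got[0]) > 1e-9 || Math.abs(expected[1] - got[1]) > 1e-9)
                throw new AssertionError("mismatch at x=" + x);
        }
        System.out.println("incremental stepping matches full transform");
    }
}
```

Per span this replaces six multiplications per pixel with two additions, and it also removes the per-pixel containment test because the span bounds already come from the line/triangle intersection.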