
Speed up RenderTransformMeshMappingWithMasks #199

Merged: 15 commits merged into saalfeldlab:newexport on Jan 17, 2025

Conversation

@tpietzsch

(Note that this is just one new commit on top of #198)

Avoid per-pixel triangle containment checks and affine transforms:

- Pre-compute, per target line, the start and end of the intersection with the triangle.
- Transform only the start coordinate to source space, then step through the target pixels using the diff vector of the affine transform (sketched below).
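A minimal sketch of the scanline idea, not the actual implementation in this PR; the coefficient names `m00..m12` and the commented `write()`/`sample()` calls are hypothetical placeholders:

```java
// Sketch only (hypothetical names): per target scanline, transform the start
// coordinate once, then advance through source space by the affine's x-column
// (the "diff vector") instead of applying the full transform per pixel.
// Affine: sx = m00*x + m01*y + m02 ;  sy = m10*x + m11*y + m12
class ScanlineAffineSketch {

    static void mapLine(final double m00, final double m01, final double m02,
                        final double m10, final double m11, final double m12,
                        final int y, final int xStart, final int xEnd) {
        double sx = m00 * xStart + m01 * y + m02; // transform (xStart, y) once
        double sy = m10 * xStart + m11 * y + m12;
        for (int x = xStart; x < xEnd; ++x) {
            // write(x, y, sample(sx, sy));  // placeholder: interpolate source, write target
            sx += m00;                        // step by the affine's unit-x column
            sy += m10;
        }
    }
}
```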

@tpietzsch

... I'm looking at the failing tests ...

tpietzsch force-pushed the newexport-rendermesh branch from bb5a65a to 881edd0 on January 6, 2025, 13:05
@tpietzsch

I have looked at the failing tests: the reason they failed is that pixels on the edge of triangles are sometimes not drawn when the test expects them to be.

In particular, looking at RenderRawTileTest, it draws these triangles (1,2,3,4) in target coordinates:
[drawing: triangles 1–4 in target coordinates]
I hope my drawing makes sense.
Basically, the problem is the following. Looking at the first line drawn in triangle 1: y=0, and the (discretized) intersection with the triangle is computed as 0 <= x < 3, so the pixel at y=0, x=3 is never drawn. But RenderRawTileTest expects it to be. The only solution that makes the test pass is to include the upper bound in the triangle intersection, that is, 0 <= x <= 3. Unfortunately, that also means that pixels on "inner edges" where triangles meet are drawn twice; for example, pixel (1,1) would be drawn by both triangle 1 and triangle 2.

I don't like that at all, but changing the behaviour could trigger bugs, so I'm not sure... I think it would be better to use the lower <= x < upper version to avoid overdrawing inner pixels, and instead shift the outer edges of the border triangles a bit. But it seems difficult to catch all the places where that padding would need to be added, so maybe it's best to live with duplicate pixels for now.
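For illustration, the two bound conventions look roughly like this (hypothetical names; xMin/xMax are the real-valued intersection of the scanline with the triangle, and the actual discretization in the code may differ):

```java
// Sketch of the two bound conventions (hypothetical names).
class ScanlineBoundsSketch {

    static void drawScanline(final double xMin, final double xMax, final int y) {
        final int xStart = (int) Math.ceil(xMin);

        // exclusive upper bound: pixels on a shared inner edge belong to exactly
        // one triangle, but a boundary pixel like (x=3, y=0) is never drawn
        for (int x = xStart; x < (int) Math.ceil(xMax); ++x) {
            // draw (x, y)
        }

        // inclusive upper bound: boundary pixels are always drawn, at the cost of
        // drawing pixels on shared inner edges twice (once per adjacent triangle)
        for (int x = xStart; x <= (int) Math.floor(xMax); ++x) {
            // draw (x, y)
        }
    }
}
```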

Anyway. That fixed the RenderRawTileTest.

What remains are more "off-by-1-when-rounded-to-int" differences caused by a different order of floating-point computations and numerical precision.
For example, in ShortRendererTest, I get 304 pixels that differ.
I can reduce that to 167 by computing an affine transform for every target coordinate instead of moving by diff vectors.
I can further reduce it to 8 by using AffineModel2D.applyInverseInPlace() instead of AffineTransform2D.inverse().apply() to do the affine transform. I assume the remaining 8 are caused by a different triangle "winning" on the twice-drawn edge pixels: the models of the triangles yield numerically slightly different results even though (at infinite precision) they should coincide on the edges.
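As a tiny self-contained illustration (not render code) of the kind of discrepancy involved: stepping by a diff vector accumulates rounding error that a fresh per-coordinate transform does not, and near a .5 boundary that can flip the rounded pixel coordinate by one.

```java
public class FpDriftSketch {
    public static void main(final String[] args) {
        final double step = 0.1;         // hypothetical affine x-column entry
        double incremental = 0.0;
        for (int x = 0; x < 30; ++x) {
            incremental += step;         // diff-vector stepping
        }
        final double direct = step * 30; // transforming the coordinate directly
        // the two results differ in the last bits; near a .5 boundary this can
        // round to different integer pixel coordinates
        System.out.println(incremental + " vs " + direct);
    }
}
```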

I think none of that is actually a "real error". If it's ok with you, I would just change the tests (that is, change the expected pngs)?

@tpietzsch

To follow up on this:

For the PixelMapper.LineMapper interface, I explored various alternative implementations. For the SingleChannelMapper, it didn't make sense to go lower-level than just using the ImageProcessor.set()/get() methods (much more convoluted code for very little benefit).
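For context, a hypothetical single-channel line mapper along these lines; the actual PixelMapper.LineMapper signature in render may differ, this just shows the level of abstraction meant by "ImageProcessor.set()/get()":

```java
import ij.process.ImageProcessor;

// Hypothetical sketch, not the real LineMapper interface: map one target
// scanline using plain ImageProcessor access for source and target.
class SingleChannelLineMapperSketch {

    private final ImageProcessor source;
    private final ImageProcessor target;

    SingleChannelLineMapperSketch(final ImageProcessor source, final ImageProcessor target) {
        this.source = source;
        this.target = target;
    }

    /** Map one target scanline, stepping through source space by (dx, dy) per pixel. */
    void mapLine(final int y, final int xStart, final int xEnd,
                 double sx, double sy, final double dx, final double dy) {
        for (int x = xStart; x < xEnd; ++x) {
            target.set(x, y, (int) Math.round(source.getInterpolatedPixel(sx, sy)));
            sx += dx;
            sy += dy;
        }
    }
}
```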

However, I guess in the "real runs" we would use something else... with masks? multiple channels? Which is/are the important PixelMapper implementations? How can I benchmark?
@trautmane: Is it feasible to modify TobiasTest to use the PixelMapper implementation that is used for the "big exports"?

@tpietzsch

Also following up:

Another possible improvement would be to try to reduce overdraw. Some target pixels are computed multiple times from different overlapping sources, and only the "top-most" source value is used in the end. This means that some transformed interpolated pixel values are computed unnecessarily. This could be reduced by splitting the target into smaller tiles and checking front-to-back whether a tile is completely covered by source triangles. It depends on how much overdraw is actually there. I don't have a good intuition about that...
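A rough sketch of that idea (Triangle, coversCompletely(), and renderInto() are hypothetical placeholders): split the target into small tiles, walk the source triangles front-to-back, and stop working on a tile as soon as one triangle covers it completely.

```java
import java.awt.Rectangle;
import java.util.List;

// Sketch only (hypothetical interfaces/names) of tile-based overdraw reduction.
class TileOverdrawSketch {

    interface Triangle {
        Rectangle targetBounds();
        boolean coversCompletely(Rectangle tile);
        void renderInto(Rectangle tile); // should write only pixels not yet set
    }

    static void renderTile(final Rectangle tile, final List<Triangle> frontToBack) {
        for (final Triangle t : frontToBack) {
            if (!t.targetBounds().intersects(tile))
                continue;
            t.renderInto(tile);
            if (t.coversCompletely(tile))
                return; // everything behind this triangle is occluded in this tile
        }
    }
}
```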

@trautmane (Collaborator)

@tpietzsch - I'm pretty sure you understand this from our discussion earlier today, but I figured it would not hurt to post the following example of different tile overlaps in our multi-SEM volumes:

[image: multisem-tile-overlap-example]

@minnerbe (Collaborator)

Thanks for tackling this in such detail, @tpietzsch!

After reviewing your changes, I agree that none of what you detected constitutes a real error. As discussed offline, it's fine to just adjust the PNG used in the test to the current values. I'm also happy to discuss and implement more general/robust tests that don't rely on a hard-coded PNG.

I also agree that the edge cases that you sketched out above are not ideal. However:

  1. I would rather err on the side of caution and draw pixels twice than risk not drawing a pixel at all.
  2. I have observed this kind of numerical instability before with RenderTransformMeshMapping, but in my experience it only occurs when coordinates exactly hit triangle boundaries, which is virtually never the case in practice, since there is always some kind of transform involved.

@trautmane merged commit e0e7d51 into saalfeldlab:newexport on Jan 17, 2025 (1 check failed)