Refactor intensity matching #200

Open
wants to merge 8 commits into base: newsolver
Conversation

minnerbe
Collaborator

I played around with the intensity match derivation process and introduced two optimizations:

  1. I introduced a binning strategy for the matching process. After matches are found on sub-tiles by rendering, transforming, and partitioning image tiles, I bin the matches into nBins x nBins bins (where I tentatively set nBins = 256). All matches in one bin are collapsed into a single match whose weight equals the sum of the weights of all original matches in that bin. I am aware that this introduces a discretization error, but since the number of bins is sufficiently large, the effect should be small. Reducing the number of bins improves the runtime further, trading off accuracy.
  2. I specialized Point and PointMatch for 1D points. Since there are a lot of static and final methods in the original Point class, there's only so much that one can specialize, but all methods that are called in the intensity matching process make full use of the optimizations.
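The binning idea in item 1 can be sketched as follows. This is a minimal, illustrative Java version, not the PR's actual code: it assumes matches are normalized 1D intensity pairs (p, q) in [0, 1] with a weight, and represents each non-empty bin by its center, which is where the discretization error mentioned above comes from.

```java
import java.util.ArrayList;
import java.util.List;

public class MatchBinning {

	/**
	 * Collapse weighted 1D intensity matches into an nBins x nBins grid.
	 * Each input match is {p, q, weight} with p, q in [0, 1]; each non-empty
	 * bin yields one match {binCenterP, binCenterQ, summedWeight}.
	 */
	public static double[][] binMatches(final double[][] matches, final int nBins) {
		final double[][] weights = new double[nBins][nBins];
		for (final double[] m : matches) {
			// clamp to the last bin so p == 1.0 does not fall out of range
			final int i = Math.min((int) (m[0] * nBins), nBins - 1);
			final int j = Math.min((int) (m[1] * nBins), nBins - 1);
			weights[i][j] += m[2];
		}
		final List<double[]> binned = new ArrayList<>();
		for (int i = 0; i < nBins; i++)
			for (int j = 0; j < nBins; j++)
				if (weights[i][j] > 0)
					binned.add(new double[] { (i + 0.5) / nBins, (j + 0.5) / nBins, weights[i][j] });
		return binned.toArray(new double[0][]);
	}
}
```

Since RANSAC cost scales with the number of matches, replacing many nearby matches with one weighted representative per bin is where the runtime gain comes from.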

I tested the impact of these changes on a randomly chosen neighboring pair of images from some fibsem stack, using the parameters that were specified in the alignment-prep files for that project.

For this tile pair, the matching process (including RANSAC filtering) took 2.36s with a single core on my local machine. Of this, the rendering accounted for 0.95s, while the RANSAC filtering took 1.38s. The optimizations described above resulted in the following runtime improvements for the RANSAC filtering:

base :  1.38s
1.   :  0.98s  (~30% improvement)
1.+2.:  0.18s  (~85% improvement)

My chosen example might not be a representative test case. Maybe @trautmane can help me set up a test case for multi-sem so that we can make sure the changes didn't introduce bugs and see how much we gain in this case. @StephanPreibisch

@minnerbe
Collaborator Author

minnerbe commented Jan 26, 2025

I finally got a chance to test my changes on a real example (a 2x7 two-layer honeycomb setup from the latest multi-sem project). As expected, the results are not that dramatic, but even in this example the time for matching and outlier removal decreased from 2.4s to 1.0s.

In addition, I introduced an IntensityTile class that encapsulates the usual ArrayList<Tile<? extends AffineModel1D<?>>> construction. There was a bit of pain involved since all methods in the original Tile<M> class are marked final (which I find very unfortunate since this renders all object-oriented techniques useless).

Nevertheless, introducing this class reduces the overhead in the concurrent optimizer (since the optimizer has to keep track of fewer tiles) and saves ~30% of the runtime there. My guess is that this effect will be even more pronounced for larger stacks. As a side effect, the code also looks much more readable to me now.
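Because Tile<M>'s methods are final, subclassing is not an option; a wrapper class can still *compose* the per-sub-tile 1D tiles and present them to the optimizer as a single unit. The following is a hypothetical, generic sketch of that composition pattern (IntensityTileSketch and its type parameter are illustrative stand-ins, not the PR's IntensityTile; in practice T would be something like Tile<? extends AffineModel1D<?>>):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Groups the sub-tiles of one image tile behind a single object. */
public class IntensityTileSketch<T> {

	private final List<T> subTiles = new ArrayList<>();

	public void addSubTile(final T subTile) {
		subTiles.add(subTile);
	}

	public List<T> getSubTiles() {
		return Collections.unmodifiableList(subTiles);
	}

	/**
	 * The optimizer iterates over IntensityTiles instead of individual
	 * sub-tiles, which reduces per-tile bookkeeping overhead.
	 */
	public int size() {
		return subTiles.size();
	}
}
```

Composition over inheritance is the standard workaround when a library marks its methods final, at the cost of having to re-expose the operations the optimizer actually needs.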

To check the validity of my optimizations, I compared the results from before the optimizations with those 1) after the introduction of Point1D and 2) after the introduction of IntensityTile.

@minnerbe minnerbe requested a review from trautmane January 26, 2025 22:03