
Add report from matcher with correct inlier ratio computation #374

Open · wants to merge 1 commit into master
Conversation

johnwlambert
Collaborator

@johnwlambert johnwlambert commented Nov 5, 2021

This computes the inlier ratio w.r.t. the GT model correctly, with the number of putative correspondences as the denominator.

  • Adds a MatcherReport class and an evaluate() method on the base matcher class

Solves #306

Adds section:
[Screenshot: the new report section added by this PR]
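For reference, a minimal sketch of the intended shape (field and argument names below are illustrative assumptions, not necessarily what the diff uses):

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass(frozen=True)
class MatcherReport:
    """Per-image-pair report of front-end matching metrics (illustrative fields)."""

    num_matches: int
    inlier_ratio_wrt_gt_model: Optional[float] = None  # (# GT-verified putatives) / (# putatives)


class MatcherBase:
    """Base matcher class; only a hypothetical evaluate() hook is sketched here."""

    def evaluate(self, match_indices: np.ndarray, gt_inlier_mask: Optional[np.ndarray]) -> MatcherReport:
        """Compute matcher metrics w.r.t. the ground-truth model, if one is available.

        Args:
            match_indices: (N, 2) array of putative correspondence indices.
            gt_inlier_mask: (N,) boolean mask of putatives consistent with the GT model, or None.
        """
        num_putatives = match_indices.shape[0]
        if gt_inlier_mask is None or num_putatives == 0:
            return MatcherReport(num_matches=num_putatives)
        return MatcherReport(
            num_matches=num_putatives,
            inlier_ratio_wrt_gt_model=float(gt_inlier_mask.sum()) / num_putatives,
        )
```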

@travisdriver
Collaborator

I'm not sure I really agree with any of these changes. I think spreading the evaluation tasks around to all the different modules just increases complexity. I think more responsibility should be given to the TwoViewEstimationReport (or other derived subclasses).

Moreover, we are now passing around three different reports (two of which are just copies of each other) that could easily be storing the same information.

@@ -122,6 +122,7 @@ def create_computation_graph(
descriptors_graph_list += [delayed_descs]

# Estimate two-view geometry and get indices of verified correspondences.
matcher_reports_dict = {}
Collaborator

Why are we creating and passing around an additional report that just holds the number of matches and the inlier ratio, both of which are already stored in the TwoViewEstimationReport?

Collaborator Author

The main idea behind this is the following direction from @dellaert:
(1) Each blue box in the graph should have its own table.
(2) We need to be totally functional in our programming. (We cannot modify objects)
(3) Each table can only be returned by one blue box.
(4) No two blue boxes can create a single gray box.
(5) The graph must determine the code/data types exactly.

[Screenshot: the referenced computation graph of blue boxes (modules) and gray boxes (data types)]

Collaborator

As we talked about offline, I think all of this can be achieved without creating a redundant data type for each blue box. If we continue on this trajectory, we will be passing around 7-8 different reports, which will be extremely cumbersome.

I think we should continue brainstorming.

@travisdriver
Collaborator

How does this solve #306?

@johnwlambert
Collaborator Author

johnwlambert commented Nov 5, 2021

How does this solve #306?

#306 notes that the "inlier ratio w.r.t. GT model" was far higher than the "inlier ratio w.r.t. est model". We would actually expect the following two fractions:

  • Fraction 1: (# correspondences verified w.r.t. the estimated model) / (# putatives)
  • Fraction 2: (# correspondences verified w.r.t. the GT model) / (# putatives)

to be very close to each other.

It turns out the "inlier ratio w.r.t. GT model" that we were reporting in the Verifier's output was actually

  • Fraction 3: (# verified correspondences that are also verified by the GT model) / (# verified correspondences).

The denominator was different, which explains the discrepancy we saw in #306. This PR computes Fraction 2 and adds it to the report.
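In code, the distinction is purely one of denominators; here is a minimal sketch with assumed variable names (not the actual GTSfM implementation):

```python
import numpy as np


def inlier_ratios(est_inlier_mask: np.ndarray, gt_inlier_mask: np.ndarray) -> dict:
    """Compute the three fractions over N putative correspondences.

    Args:
        est_inlier_mask: (N,) bool, putatives verified by the estimated model.
        gt_inlier_mask: (N,) bool, putatives consistent with the ground-truth model.
    """
    num_putatives = len(est_inlier_mask)
    num_verified = est_inlier_mask.sum()
    return {
        # Fraction 1: inlier ratio w.r.t. the estimated model.
        "inlier_ratio_est_model": est_inlier_mask.sum() / num_putatives,
        # Fraction 2 (added by this PR): inlier ratio w.r.t. the GT model, over all putatives.
        "inlier_ratio_gt_model": gt_inlier_mask.sum() / num_putatives,
        # Fraction 3 (what was previously reported): GT-verified share of the
        # *verified* correspondences only -- a much smaller denominator.
        "gt_ratio_of_verified": (est_inlier_mask & gt_inlier_mask).sum() / num_verified,
    }


# Toy example: 10 putatives, 4 verified by the estimated model, 5 consistent with GT.
est = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
gt = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)
print(inlier_ratios(est, gt))  # Fractions 1 and 2 are close (0.4 vs. 0.5); Fraction 3 is 1.0
```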

@akshay-krishnan akshay-krishnan left a comment
Collaborator

I like having a different section for each blue box, with each box returning an instance of its own metrics that has nothing to do with other boxes.
But I don't think we should have blue boxes defining custom data types for this.
I think treating them all as metrics makes the system simpler, and I don't see any advantages to having additional types. Let me know if there are any.

@johnwlambert
Collaborator Author


@akshay-krishnan @ayushbaid @travisdriver wanted to ping you guys on this again

@akshay-krishnan
Collaborator

Could you pull from master and update the PR? Or would you like one of us to "take it from here"?

Everything in the PR looks good, except for the custom matcher report, which adds more boilerplate. We created the metrics struct precisely to avoid such per-module custom reports and to share operations across metrics. We could either return (num_inliers, inlier_ratio) as a tuple (simpler), or return a GtsfmMetricsGroup and merge the groups from the different matchers later.
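For example, something along these lines; a hypothetical sketch assuming the GtsfmMetric / GtsfmMetricsGroup constructors from gtsfm.evaluation.metrics take a name plus a value or list of metrics (exact signatures may differ):

```python
from gtsfm.evaluation.metrics import GtsfmMetric, GtsfmMetricsGroup


def matcher_metrics(num_putatives: int, num_gt_inliers: int) -> GtsfmMetricsGroup:
    """Package per-pair matcher metrics as a metrics group instead of a custom report."""
    inlier_ratio_gt = num_gt_inliers / num_putatives if num_putatives > 0 else 0.0
    return GtsfmMetricsGroup(
        "matcher_metrics",
        [
            GtsfmMetric("num_matches", num_putatives),
            GtsfmMetric("inlier_ratio_wrt_gt_model", inlier_ratio_gt),
        ],
    )
```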

Development

Successfully merging this pull request may close these issues.

Investigate further inlier ratio w.r.t. GT model and w.r.t estimated model