
Could you share the config file for loftr-lite? #16

Open
youngyyp opened this issue Sep 30, 2022 · 5 comments

Comments

@youngyyp

No description provided.

@Tangshitao
Owner

Reduce all the channels to 1/2 of their original width.
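
For readers looking for a concrete starting point, here is a minimal sketch of what "halve all channels" could look like. The key names and default values are taken from the public zju3dv/LoFTR yacs config and are an assumption here, not an official loftr-lite config from this repo:

```python
# Hypothetical "loftr-lite" override: halve every channel width relative to the
# upstream LoFTR defaults. Key names/defaults follow the public LoFTR yacs
# config and may not match this repo exactly.
from yacs.config import CfgNode as CN

lite = CN()
lite.LOFTR = CN()

lite.LOFTR.RESNETFPN = CN()
lite.LOFTR.RESNETFPN.INITIAL_DIM = 64            # default 128
lite.LOFTR.RESNETFPN.BLOCK_DIMS = [64, 98, 128]  # default [128, 196, 256]

lite.LOFTR.COARSE = CN()
lite.LOFTR.COARSE.D_MODEL = 128                  # default 256
lite.LOFTR.COARSE.NHEAD = 8                      # head dim stays 128 / 8 = 16

lite.LOFTR.FINE = CN()
lite.LOFTR.FINE.D_MODEL = 64                     # default 128
```

You would then merge this on top of the default config (e.g. with `merge_from_other_cfg`) before building the model; whether the released code accepts these values unchanged is untested.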

@jeannotes

jeannotes commented Oct 25, 2022

@Tangshitao
A question: for the loftr-lite and loftr models, what batch sizes did you use with QuadTree-A (ours, K = 8), QuadTree-B (ours, K = 8), and QuadTree-B∗ (ours, K = 16), compared with the variants that do not use quadtree attention? On my side, with QuadTree-B (ours, K = 8), the usable batch size turned out even smaller than before. Is that normal?

One more question: the paper describes QuadTree-B∗ (ours, K = 16) as using a ViT-like transformer block. Could you elaborate on that?

@Tangshitao
Owner

  1. The GPU memory consumption with K = 8 should not be larger than with K = 16.
  2. QuadTree-B∗ is the model released in this repo.

@jeannotes

1. I ran QuadTree-B∗ (ours, K = 16) on a 3090 and the batch size was 8, while the original LoFTR fit a batch size of 10, so the batch size actually went down. Is that normal?

2. Is the only implementation difference between QuadTree-B∗ and QuadTree-B the ViT part? Could you release the code so I can look at the difference?

@Tangshitao
Owner

  1. With K = 16 the memory consumption is indeed somewhat higher; you can try a smaller K.
  2. Only the ViT part differs: QuadTree-B uses the original LoFTR block structure, while B∗ uses a ViT-style block.
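
For readers who want to see what that distinction means structurally, below is a purely illustrative PyTorch sketch (not code from this repo) contrasting a LoFTR-style post-norm message-passing block with a ViT-style pre-norm block. Standard `nn.MultiheadAttention` stands in for quadtree attention, which is kept abstract here:

```python
# Illustrative only: LoFTR-style block vs. ViT-style block.
import torch
import torch.nn as nn


class LoFTRStyleBlock(nn.Module):
    """Post-norm block: the attention message is concatenated with the input
    and merged by an MLP, then added back (roughly the LoFTR / QuadTree-B layout)."""
    def __init__(self, d_model=128, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_model, 2 * d_model), nn.ReLU(),
            nn.Linear(2 * d_model, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, source):
        msg, _ = self.attn(x, source, source)                  # self- or cross-attention
        msg = self.mlp(torch.cat([x, self.norm1(msg)], dim=-1))  # merge [input, message]
        return x + self.norm2(msg)


class ViTStyleBlock(nn.Module):
    """Pre-norm block: residual attention followed by a residual MLP
    (roughly what a ViT-like QuadTree-B* block swaps in)."""
    def __init__(self, d_model=128, nhead=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)   # shared pre-norm for query and key/value, for brevity
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, mlp_ratio * d_model), nn.GELU(),
            nn.Linear(mlp_ratio * d_model, d_model))

    def forward(self, x, source):
        q, kv = self.norm1(x), self.norm1(source)
        x = x + self.attn(q, kv, kv)[0]
        return x + self.mlp(self.norm2(x))


# Quick shape check.
x, src = torch.randn(1, 100, 128), torch.randn(1, 120, 128)
print(LoFTRStyleBlock()(x, src).shape, ViTStyleBlock()(x, src).shape)
```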
