
Demo on an image pair request #18

Open · fb82 opened this issue Nov 10, 2022 · 4 comments

@fb82 commented Nov 10, 2022

Hi,

Could you provide a small demo that just runs the feature matching on a single image pair and outputs the correspondences?

Thanks in advance!

@Tangshitao (Owner)

@fb82 (Author) commented Nov 13, 2022

Thanks!

Here is my simple script based on the info in the link; it computes the matches and saves them to a .mat file. Maybe it can help someone...

import argparse

import cv2
import scipy.io as sio
import torch

from src.config.default import get_cfg_defaults
from src.utils.misc import lower_config
from src.loftr import LoFTR

if __name__ == '__main__':
    # NOTE: run QuadTreeAttention's setup.py (in the other folder) first!
    parser = argparse.ArgumentParser(
        description='QuadTreeAttention demo',
        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--weight', type=str, default="weights/outdoor.ckpt", help="Path to the checkpoint.")
    parser.add_argument('--config_path', type=str, default="configs/loftr/outdoor/loftr_ds_quadtree.py", help="Path to the config.")
    parser.add_argument('--input_1st', type=str, help="1st image.")
    parser.add_argument('--input_2nd', type=str, help="2nd image.")
    parser.add_argument('--output_file', type=str, default='quadtreeattention.mat', help=".mat file to save the matches.")

    opt = parser.parse_args()

    # Initialize the default config and merge in the model config from file
    config = get_cfg_defaults()
    config.merge_from_file(opt.config_path)
    _config = lower_config(config)

    # Matcher: LoFTR with QuadTreeAttention, loaded from the checkpoint
    matcher = LoFTR(config=_config['loftr'])
    state_dict = torch.load(opt.weight, map_location='cpu')['state_dict']
    matcher.load_state_dict(state_dict, strict=True)

    # Load the two input images as grayscale
    img0_raw = cv2.imread(opt.input_1st, cv2.IMREAD_GRAYSCALE)
    img1_raw = cv2.imread(opt.input_2nd, cv2.IMREAD_GRAYSCALE)

    # Optional resize; note that the image dimensions must be divisible by 8
    # img0_raw = cv2.resize(img0_raw, (640, 480))
    # img1_raw = cv2.resize(img1_raw, (640, 480))

    # Normalize to [0, 1] and add batch and channel dimensions
    img0 = torch.from_numpy(img0_raw)[None][None].cuda() / 255.
    img1 = torch.from_numpy(img1_raw)[None][None].cuda() / 255.
    batch = {'image0': img0, 'image1': img1}

    # Inference with LoFTR; the matcher writes its predictions into `batch`
    matcher.eval()
    matcher.to('cuda')
    with torch.no_grad():
        matcher(batch)
        mkpts0 = batch['mkpts0_f'].cpu().numpy()  # matched keypoints in image 0 (x, y)
        mkpts1 = batch['mkpts1_f'].cpu().numpy()  # matched keypoints in image 1 (x, y)
        mconf = batch['mconf'].cpu().numpy()      # per-match confidence scores

    # Save the correspondences and their confidences to a .mat file
    dict_to_save = {'kpt1': mkpts0, 'kpt2': mkpts1, 'conf': mconf}
    sio.savemat(opt.output_file, dict_to_save)
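
For a quick sanity check, here is a minimal sketch of how the saved matches can be loaded back in Python, assuming the default output name quadtreeattention.mat (savemat stores 1-D arrays as 1xN, hence the ravel()):

import scipy.io as sio

# Load the matches produced by the script above
data = sio.loadmat('quadtreeattention.mat')
kpt1 = data['kpt1']          # N x 2 (x, y) keypoints in the first image
kpt2 = data['kpt2']          # N x 2 (x, y) keypoints in the second image
conf = data['conf'].ravel()  # N per-match confidence scores

print('%d matches, mean confidence %.3f' % (len(kpt1), conf.mean()))

If the script above is saved as, say, demo_pair.py (a hypothetical name), it would be run as: python demo_pair.py --input_1st a.jpg --input_2nd b.jpg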

@Tangshitao (Owner) commented

Thank you very much! I can commit this change if you send a pull request.

@fb82 (Author) commented Nov 13, 2022

Just copy the code above into a file inside the FeatureMatching folder (I'm not good with git :| )...
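
For anyone using these correspondences downstream, here is a minimal sketch of geometric verification on the saved matches with OpenCV's RANSAC fundamental-matrix estimator (the 3 px threshold is illustrative, and at least 8 matches are needed):

import cv2
import numpy as np
import scipy.io as sio

# Load the raw matches saved by the demo script
data = sio.loadmat('quadtreeattention.mat')
pts0 = data['kpt1'].astype(np.float64)
pts1 = data['kpt2'].astype(np.float64)

# Estimate a fundamental matrix with RANSAC; the mask marks the inlier matches
F, inlier_mask = cv2.findFundamentalMat(pts0, pts1, cv2.FM_RANSAC, 3.0, 0.99)
inliers = inlier_mask.ravel().astype(bool)
print('%d / %d matches survive RANSAC' % (inliers.sum(), len(pts0)))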
