Hi, thanks for your inspiring work in this paper! However, there are still some issues that are troubling me.
I tried the model "SAFNet_challenge123.pth" with "Challenge123_Dataset\Test" data, but the result seems to have some issues.
In the case of set 016, the overexposed areas did not correctly blend the texture information from the underexposed image, but instead became distorted and blurry.
Did your tests show the same results as mine?
Yes, my model produces the same result as the image you presented. The reason is that the right border of this image contains very large motion displacement together with over-exposed regions. Although SAFNet handles such challenging scenes better than other current multi-exposure HDR models, the SAFNet model in my paper is relatively small (only 1.12M parameters and 0.976 TFLOPs); enhancing its selective alignment fusion ability and refinement ability could achieve better results. My Challenge123 dataset leaves room for this improvement.