Replies: 2 comments 2 replies
---
Yes, you understand this correctly. The quality is not reduced (but beware the 16-bit problem, as well as the `imwrite` defaults). A better starting point could be the tutorial.
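To illustrate the 16-bit concern (a self-contained sketch, not code from this library): if any stage of a pipeline handles images at 8 bits per channel, a 16-bit source loses tonal precision, because every 256 adjacent 16-bit levels collapse into a single 8-bit level. The `imwrite` caveat is related: OpenCV's `cv2.imwrite` applies lossy defaults for some formats (e.g. JPEG quality 95), so passing explicit parameters is safer for archival output.

```python
def to_8bit(v16):
    """Collapse a 16-bit sample (0..65535) to 8 bits (0..255) by dropping
    the low byte -- the kind of precision loss an 8-bit pipeline causes."""
    return v16 >> 8

# 656 distinct 16-bit tonal values...
levels_16 = set(range(0, 65536, 100))

# ...survive as at most 256 distinct 8-bit values: fine gradations merge.
levels_8 = {to_8bit(v) for v in levels_16}

# For saving with OpenCV, prefer explicit parameters over the defaults, e.g.:
#   cv2.imwrite("out.jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 100])
# or use a lossless format such as PNG/TIFF for archival masters.
```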
---
In my case, I am creating the original images with a DSLR camera mounted on a copy stand (pointed straight down at a flat surface), and I move the artwork around under it for each shot. According to the tutorial:
Would you recommend the affine stitcher for my use case? It seems the considerations would be identical to those for a flatbed scanner, where the only possible deviation is in the angle of the artwork, not in its perspective (which ideally is always perpendicular). Does the affine stitcher perform fewer "changes" overall to the image, and would it therefore be a better option for artwork reproduction in a controlled photographing environment?
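For intuition on why an affine model performs fewer "changes" than a full projective model (a self-contained sketch with made-up matrices, not code from this library): an affine warp has 6 free parameters and preserves parallel lines, while a homography has 8 and can introduce perspective convergence, which a copy-stand setup should not need.

```python
def warp(H, x, y):
    """Apply a 3x3 transform (row-major nested lists) to point (x, y)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def cross(u, v):
    """2D cross product: zero iff the two direction vectors are parallel."""
    return u[0] * v[1] - u[1] * v[0]

def warped_dir(H, p0, p1):
    a, b = warp(H, *p0), warp(H, *p1)
    return (b[0] - a[0], b[1] - a[1])

AFFINE = [[1.0, 0.2, 5.0], [0.1, 1.1, -3.0], [0.0, 0.0, 1.0]]      # last row fixed
PROJECTIVE = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.1, 0.0, 1.0]]   # perspective term

# two parallel segments, both with direction (1, 2)
d1a = warped_dir(AFFINE, (0, 0), (1, 2))
d2a = warped_dir(AFFINE, (5, 0), (6, 2))       # still parallel after affine
d1p = warped_dir(PROJECTIVE, (0, 0), (1, 2))
d2p = warped_dir(PROJECTIVE, (5, 0), (6, 2))   # no longer parallel
```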
---
I'm curious to understand the internal workings of this library (and perhaps the upstream OpenCV stitching library) better so that I can know where there is potential for "loss of quality". My motivation is to use this for stitching together multiple photographs of a piece of artwork to create a final image that can be printed very large at 300 dpi (where a single photograph of the artwork would be too small by itself, and flatbed scanning is not possible). I know that the original photographs are high quality, but then I put them into this "black box", and magically they are combined into a single image. It's amazing! But I want to better understand how the "black box" works. Thankfully it's open source, so I can learn! :-)
I am starting in `stitching/cli/stitch.py` and tracing the logic to understand each step. In `stitcher.stitch()`, I see that one of the first steps is to resize the images to a "medium resolution" (`imgs = self.resize_medium_resolution()`). Later it resizes `imgs` again to a "low resolution" (`imgs = self.resize_low_resolution(imgs)`). But after that it resizes them to a "final resolution" (`imgs = self.resize_final_resolution()`), and in that case it uses the original `self.images` (not the low-resolution `imgs`). So it appears that the final panorama is built from the original high-quality images, and therefore no quality should be lost during the conversions to medium and low resolutions. Is that correct? I assume those conversions are useful for other steps, such as calculating where the seams are, but they do not affect final image quality. The CLI argument for final image resolution suggests as much:
I just want to confirm that I'm understanding all of this correctly so far.
It seems that the "verbose" output contains a lot of great information! It's wonderful to see exactly where the seams of the image are calculated to be, so that I can closely examine those areas of the final image to look for problems. I am still wrapping my head around all of the verbose output to better understand things.
Are there any other considerations that I should be aware of that might affect final image quality? Where are the points in the process that have the most impact on finished pixels? Would you have any concerns if you were using this for digital reproduction of art, where "as close to perfect as possible" is the ultimate goal? Any insights, ideas, warnings, or other considerations would be greatly appreciated! Thanks again for this library! :-)