Volume Render Settings
This separate command specifies volume rendering settings.
You need to restart BigTrace for the changes to take effect.
Render width and height specify the size of the OpenGL canvas, i.e. the true "render resolution". The dataset is rendered to an offscreen image with these dimensions, which is then rescaled to the current BVV window size. A smaller resolution makes dataset navigation faster, so typical working settings are around 800x600.
But! If you plan to perform high-quality animation renders, it is advisable to make these dimensions equal to the final render output dimensions.
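As a rough illustration (this is not BigTrace code, and the window dimensions are hypothetical), the sketch below shows why lowering the render resolution speeds up navigation: the per-frame ray-casting workload depends only on the offscreen render size, not on the window size it is rescaled to.

```java
// Minimal sketch: relative ray-casting workload for a given render resolution.
public class RenderResolutionDemo {
    public static void main(String[] args) {
        // hypothetical values for illustration
        int renderWidth = 800, renderHeight = 600;   // "render resolution" setting
        int windowWidth = 1920, windowHeight = 1080; // BVV window size on screen

        long renderedPixels = (long) renderWidth * renderHeight;
        long displayedPixels = (long) windowWidth * windowHeight;

        // the offscreen image is rescaled to the window, so only
        // renderedPixels rays are cast per frame
        System.out.printf("Rays cast per frame: %d (%.0f%% of the window pixels)%n",
                renderedPixels, 100.0 * renderedPixels / displayedPixels);
    }
}
```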
Dither window size and number of dither samples. The dithering technique allows rendering the dataset faster by sampling fewer pixels and interpolating the values of the unsampled ones. Here is how Tobias, the author of BVV, explains it: Especially if multiple sources are rendered at the same time, things can get slooooow (at least on my Macbook GPU…) if rendered at full resolution. Dither window size “4x4” means: draw only one pixel in each 4x4 window. Then if there is time left, draw another pixel in each 4x4 window, then another, until the target time is up. Interpolate the rest. Continue in the next frame, until all 4x4 pixels have been drawn. Number of dither samples: pixels are interpolated from this many nearest neighbors when dithering. This is not very expensive – turn it up to 8.
Dithering is a two-edged sword. Although it is a lot faster to draw only every 16th pixel, iterating this until all 16 are filled is a lot slower than rendering them all in the first place. My explanation is that still in each iteration you touch enough texture data from all over the place to make caches less efficient… (?)
So maybe if you have a decent GPU, you don’t need it…
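The following sketch (a simplified model, not the actual BVV renderer) illustrates the arithmetic behind the dither window: with a 4x4 window and an 800x600 render target, each pass draws only 1/16 of the pixels, and 16 passes are needed before every pixel has actually been ray-cast.

```java
// Minimal sketch: how the dither window splits the work across passes.
public class DitherBudgetDemo {
    public static void main(String[] args) {
        int ditherWindow = 4;                  // the "4x4" setting
        int renderWidth = 800, renderHeight = 600;

        long pixelsPerPass =
                ((long) renderWidth * renderHeight) / (ditherWindow * ditherWindow);
        int passesForFullCoverage = ditherWindow * ditherWindow;

        System.out.println("Pixels ray-cast per pass: " + pixelsPerPass);           // 30000
        System.out.println("Passes until all pixels drawn: " + passesForFullCoverage); // 16
        // "Number of dither samples" = how many nearest already-drawn pixels
        // are used to interpolate each not-yet-drawn pixel in between passes.
    }
}
```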
GPU cache size (in MB) defines how much of the volumetric data can be uploaded to the GPU and ultimately shown on the screen. If your dataset is stored at a single resolution and its size is 6 GB, while you have only 1 GB of GPU memory, some parts of the volume will not be shown. Therefore it helps to convert large data to a multi-resolution format (HDF5, for example), which works around this limitation.
For a timelapse dataset, for example, since only a single time point is displayed at a time, the GPU cache size should be larger than the volume size of a single frame.
Again, from Tobias: It helps to turn this up as much as possible, obviously… Depends on how much memory your GPU has. For example, my GPU has 1 GB, so I can go maybe up to 600 MB, but not more (the OS and other programs need some of that memory too!)
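As a hedged example with made-up frame dimensions, the sketch below estimates the raw size of one time point, which gives a lower bound for a sensible GPU cache size when browsing a timelapse.

```java
// Minimal sketch: estimate the raw size of one time point (hypothetical dimensions,
// 16-bit voxels) to compare against the "GPU cache size" setting.
public class CacheSizeEstimate {
    public static void main(String[] args) {
        long x = 1024, y = 1024, z = 200;  // example single-frame dimensions
        long bytesPerVoxel = 2;            // 16-bit data

        double frameMB = x * y * z * bytesPerVoxel / (1024.0 * 1024.0);
        System.out.printf("One time point: %.0f MB%n", frameMB); // ~400 MB

        // For smooth browsing of a single-resolution timelapse, the GPU cache size
        // should exceed this value, leaving headroom for the OS and other programs.
    }
}
```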
GPU cache tile size defines how GPU memory is split and accessed for rendering. Depending on the specific GPU architecture, the optimal tile size can differ. A good initial default is 32, but increasing it can speed up some renders.
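For a rough sense of scale (assuming a cubic tile of 16-bit voxels, which is an illustrative assumption rather than a guarantee of how the cache is organized), the sketch below computes the memory footprint of a single cache tile for a few tile sizes.

```java
// Minimal sketch: memory footprint of one cache tile for several tile sizes,
// assuming cubic tiles and 16-bit voxels.
public class TileSizeDemo {
    public static void main(String[] args) {
        int[] tileSizes = {16, 32, 64};
        long bytesPerVoxel = 2; // 16-bit data
        for (int t : tileSizes) {
            long bytes = (long) t * t * t * bytesPerVoxel;
            System.out.printf("tile %d^3 -> %d KiB%n", t, bytes / 1024);
        }
        // Larger tiles mean fewer but bigger uploads to the GPU; which is faster
        // depends on the GPU architecture, so 32 is a safe starting point.
    }
}
```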