InstancedMesh support #16
@agviegas thank you for the bug report! It should work if you use "Outlines V1". I think the issue with Outlines V2 is that the code that computes the surface IDs doesn't take into account instancing.
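For context on why instancing trips up a custom pass: three.js injects a per-instance `instanceMatrix` attribute into the vertex shader of an `InstancedMesh` and applies it before the model-view transform, so any pass that re-renders the scene with its own material has to do the same. A minimal sketch of that transform (roughly what three.js generates, not code from this repo):

```glsl
// Simplified instanced vertex shader. position, modelViewMatrix and
// projectionMatrix are the standard three.js-provided inputs; the
// instanceMatrix attribute is what a custom pass must remember to apply.
attribute vec3 position;
attribute mat4 instanceMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

void main() {
  // Per-instance transform first, then the usual model-view-projection.
  vec4 transformed = instanceMatrix * vec4(position, 1.0);
  gl_Position = projectionMatrix * modelViewMatrix * transformed;
}
```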
Awesome, thanks a lot for the reply! 🙂 The main limitation of V1 is parallel faces, right? If I understood correctly, as normal vectors don't carry information about position in the scene, it's hard to distinguish between two faces contained in planes that are parallel but not coplanar. Here are some ideas that I came across; sorry in advance if these are not relevant / if you already went through them. TLDR: have you thought about using planes instead of only normals?

Have you considered using the plane of the face instead of the normal vector? That way you wouldn't only have information about the orientation, but also about the position of the face, and the artifacts of V1 would go away. Additionally, you wouldn't need to precompute face IDs. The equation of a plane looks like this: `ax + by + cz + d = 0`, where `(a, b, c)` is the normal vector of the face, which you can easily get from the normal and a point on the plane (`d = -dot(normal, point)`). The vector `(a, b, c, d)` therefore encodes both the orientation and the position of the face. Finally, to render this "plane pass" as a color in a postprocessing pass, we need to reduce it from 4 components to only 3 (r, g, b); for example, if the normal is unit length, `d` is the signed distance from the plane to the origin, so scaling the normal by `d` keeps both pieces of information. To normalize that new vector, we could divide by the largest distance expected in the scene. Again, thanks a lot for the fantastic work. Cheers!
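A minimal sketch of what such a "plane pass" fragment shader could look like, assuming world-space position and normal varyings supplied by the vertex shader (the varying names are illustrative, not from the repo):

```glsl
varying vec3 vWorldPosition;
varying vec3 vWorldNormal;

void main() {
  vec3 n = normalize(vWorldNormal);
  // For the plane ax + by + cz + d = 0 through this fragment,
  // d = -dot(n, p) is the signed distance from the origin to the plane.
  float d = -dot(n, vWorldPosition);
  // One possible packing of the scalar into a color output; in an
  // 8-bit render target this would still need remapping into [0, 1].
  gl_FragColor = vec4(vec3(d), 1.0);
}
```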
@agviegas I had not heard of this approach before! It sounds intriguing. I'd love to see it in action; it doesn't sound too difficult to implement. I don't know when I'll get a chance to give it a shot, but I'm happy to point you in the right direction.
This would be really exciting if it works as well as it sounds, because it would fix a lot of these artifacts "for free", without having to worry about the additional buffer or manually tweaking geometry in Blender, etc.
@OmarShehata thanks a lot for the pointers! I tried it out and the results improved significantly, but they are not perfect yet (see screenshots below: before - after). I'll keep you updated with further findings!
@OmarShehata I've had the chance to experiment a bit more. I'm now separating the normals and the plane distances into two different render targets and using both to compute the edges (this can probably be optimized; I'll run further tests). I've also rewritten the fragment shader, as the depth buffer was no longer needed, which simplified it quite a lot. Since the plane distance pass acts as a "smarter" depth buffer, I could also remove all the bias and multiplier factors from the fragment shader. This is how it looks now. I think there's still room for improvement. I've started to learn WebGL recently, so please forgive any blunders! 🙂

```glsl
uniform sampler2D sceneColorBuffer;
uniform sampler2D surfaceBuffer;        // world-space normals pass
uniform sampler2D planeDistanceBuffer;  // plane distance pass
uniform vec4 screenSize;                // zw holds 1.0 / resolution
uniform vec3 outlineColor;
uniform int width;                      // outline width in pixels
uniform float tolerance;                // edge detection threshold

varying vec2 vUv;

// Sample a buffer at an offset of (x, y) pixels from the current UV.
vec3 getValue(sampler2D buffer, int x, int y) {
  return texture2D(buffer, vUv + screenSize.zw * vec2(x, y)).rgb;
}

// Remaps dot(normal1, normal2) from [1, -1] to [0, 1]:
// 0 for identical normals, 1 for opposite ones.
float normalDiff(vec3 normal1, vec3 normal2) {
  return ((dot(normal1, normal2) - 1.) * -1.) / 2.;
}
void main() {
  vec4 sceneColor = texture2D(sceneColorBuffer, vUv);

  // Values at the current pixel.
  vec3 normal = getValue(surfaceBuffer, 0, 0);
  vec3 planeDist = getValue(planeDistanceBuffer, 0, 0);

  // Normals of the 8 neighbouring pixels.
  vec3 normalTop = getValue(surfaceBuffer, 0, width);
  vec3 normalBottom = getValue(surfaceBuffer, 0, -width);
  vec3 normalRight = getValue(surfaceBuffer, width, 0);
  vec3 normalLeft = getValue(surfaceBuffer, -width, 0);
  vec3 normalTopRight = getValue(surfaceBuffer, width, width);
  vec3 normalTopLeft = getValue(surfaceBuffer, -width, width);
  vec3 normalBottomRight = getValue(surfaceBuffer, width, -width);
  vec3 normalBottomLeft = getValue(surfaceBuffer, -width, -width);

  // Plane distances of the 8 neighbouring pixels.
  vec3 planeDistTop = getValue(planeDistanceBuffer, 0, width);
  vec3 planeDistBottom = getValue(planeDistanceBuffer, 0, -width);
  vec3 planeDistRight = getValue(planeDistanceBuffer, width, 0);
  vec3 planeDistLeft = getValue(planeDistanceBuffer, -width, 0);
  vec3 planeDistTopRight = getValue(planeDistanceBuffer, width, width);
  vec3 planeDistTopLeft = getValue(planeDistanceBuffer, -width, width);
  vec3 planeDistBottomRight = getValue(planeDistanceBuffer, width, -width);
  vec3 planeDistBottomLeft = getValue(planeDistanceBuffer, -width, -width);

  // Accumulate how different this pixel is from its neighbours, both in
  // orientation (normals) and in position (plane distances).
  float depthDiff = 0.0;
  depthDiff += normalDiff(normal, normalTop);
  depthDiff += normalDiff(normal, normalBottom);
  depthDiff += normalDiff(normal, normalLeft);
  depthDiff += normalDiff(normal, normalRight);
  depthDiff += normalDiff(normal, normalTopRight);
  depthDiff += normalDiff(normal, normalTopLeft);
  depthDiff += normalDiff(normal, normalBottomRight);
  depthDiff += normalDiff(normal, normalBottomLeft);
  depthDiff += step(0.001, abs((planeDist - planeDistTop).x));
  depthDiff += step(0.001, abs((planeDist - planeDistBottom).x));
  depthDiff += step(0.001, abs((planeDist - planeDistLeft).x));
  depthDiff += step(0.001, abs((planeDist - planeDistRight).x));
  depthDiff += step(0.001, abs((planeDist - planeDistTopRight).x));
  depthDiff += step(0.001, abs((planeDist - planeDistTopLeft).x));
  depthDiff += step(0.001, abs((planeDist - planeDistBottomRight).x));
  depthDiff += step(0.001, abs((planeDist - planeDistBottomLeft).x));

  float outline = step(tolerance, depthDiff);

  // The background has no normals (the buffer is cleared to zero),
  // so suppress outlines there.
  float background = 1.0;
  vec3 absNormal = abs(normal);
  background *= step(absNormal.x, 0.);
  background *= step(absNormal.y, 0.);
  background *= step(absNormal.z, 0.);
  background = (background - 1.) * -1.;
  outline *= background;

  vec4 color = vec4(outlineColor, 1.);
  gl_FragColor = mix(sceneColor, color, outline);
}
```

White color, width 1, and tolerance 2 look like this:
Looks amazing!! Would you like to open a PR, or just make your own fork and I can link to it from mine? (I don't want to take credit for your work & idea!) It would be cool to have an article explaining the technique and the insight. I'm also curious, if you were to share it on Twitter/other graphics communities, whether this is a known technique or just something no one has tried on the web before.
Thanks a lot! I can make a PR. Would you rather make this an improvement of V1, or call this V3? 🙂 I couldn't have done it without your previous discoveries. Regarding the article, I'm aware that you already wrote two articles on Medium about V1 and V2. If you'd like to co-publish another one there (as a natural sequel to the other two), I'm up for it. Otherwise, I'm open to any other ideas.
I think it might be easiest at this point to make it a copy of the folder, to keep it clean/easy to read the source code, kind of like the "vertex welder" example: https://github.com/OmarShehata/webgl-outlines/tree/main/vertex-welder. Out of curiosity, do you have a debug rendering of the "plane distance" buffer? I think it would be interesting to compare it to the existing depth buffer that was used. I think you're correct that it is essentially acting as a depth buffer, and maybe one reason it works better is that normal depth buffers don't encode distance linearly. This might make a difference in scenes with faraway geometry like mountains (I'm curious what it'd look like on scenes like this: https://twitter.com/ianmaclarty/status/1499495014082441218).
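For reference on the non-linearity point: with a perspective projection, stored depth is not linear in eye-space distance, so far-away geometry gets squeezed into a tiny depth range. A standard linearization helper (a sketch, assuming a [0, 1] depth value and near/far planes `n` and `f`):

```glsl
// Recovers eye-space distance from a perspective depth buffer sample.
float linearizeDepth(float depth, float n, float f) {
  float zNdc = depth * 2.0 - 1.0;                  // back to NDC [-1, 1]
  return (2.0 * n * f) / (f + n - zNdc * (f - n)); // eye-space distance
}
```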
Got it! I can do a PR like that. I have also prepared it to make it compatible with instanced meshes.
If I understood this correctly, the plane distance guarantees that all the pixels in the same plane have the same color (as it measures the minimum signed distance from the plane to the origin), while the depth buffer only measures the distance of each pixel to the camera, so you can find many tones within the same plane (making it harder to distinguish between them). The current implementation is quite primitive because the plane distance is computed relative to the origin of the scene. Maybe this could be improved by computing the plane distance relative to the camera, making it more scalable when the camera is far away from the origin. Regarding faraway distances, I would also like to see how it behaves. It is likely that when the plane distance goes beyond the value that a pixel can store, the current implementation will stop working, but I'm sure we can find solutions (e.g. maybe big distances are handled by the depth buffer and short distances by the plane distance?).
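A possible sketch of the camera-relative variant mentioned above (not the current implementation; it assumes world-space normal and position are available in the pass):

```glsl
// World-space camera position. three.js injects this automatically into
// ShaderMaterial shaders; it is declared here for a self-contained raw shader.
uniform vec3 cameraPosition;

// Signed distance from the camera to the fragment's plane. Because the
// reference point moves with the camera, values stay small even when
// the scene is far from the world origin.
float planeDistanceToCamera(vec3 worldNormal, vec3 worldPosition) {
  vec3 n = normalize(worldNormal);
  // Plane through worldPosition with normal n: dot(n, p) + d = 0, with
  // d = -dot(n, worldPosition). Evaluating the plane equation at the
  // camera gives its signed distance to the plane.
  return dot(n, cameraPosition - worldPosition);
}
```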
Ah, this is a very subtle but important distinction! I think this helps a lot in removing unwanted lines inside the same surface (similar to the benefit you get from the surface ID color, but solved in a different way). The screenshots look great, can't wait to play with this in a live demo!
This looks amazing!! Brilliant work @agviegas and @OmarShehata!! Do you know when this could be released? 😬 Thanks!!
Thanks @RodrigoHamuy! I never had the time to make a PR, but you can check out this code here (everything is inside the postproduction object). These weeks are crazy, so I don't think I'll have time to do it myself, but if you want to do it, I'll be happy to answer any questions you have :)
Hi @agviegas, thanks for pushing this repo forward to fit engine fragments. I am actually developing a tool on top of yours, and I am missing the outline implementation to achieve the prototype's goal. Last week I came across @OmarShehata's article on better outlines with post-processing. That would really help me.
@agviegas thanks man 🫶
Hey, fantastic work here!
Just as a heads-up, I tried adding a simple InstancedMesh to the scene of your example, and it looks like the image below.