
How to find the landmark of a random vector? #122

Open
yeluoo opened this issue Jan 21, 2024 · 12 comments

yeluoo commented Jan 21, 2024

I can't produce angled (rotated) faces with this code:

    # sample a random identity vector
    random_id_vec = np.random.normal(model.id_mean, np.sqrt(model.id_var))

    # create random expression vector
    exp_vec = np.zeros(52)
    exp_vec[0] = 1

    # generate full head mesh
    mesh_full = model.gen_full(random_id_vec, exp_vec)

    # render
    depth_full, image_full = render_cvcam(trimesh.Trimesh(vertices=mesh_full.vertices,
                                                          faces=mesh_full.faces_v - 1),
                                          Rt=Rt)



yeluoo commented Jan 22, 2024

@zhuhao-nju

@icewired-yy

@yeluoo What's wrong with this code? Is the rendered result blank?


yeluoo commented Jan 23, 2024

There is no problem with the code; what I want is to generate a face at an angle. @icewired-yy


icewired-yy commented Jan 23, 2024

@yeluoo I see.

If you want an angled face in the rendered image, you can modify the Rt matrix. The Rt matrix consists of the rotation matrix in [0:3, 0:3] and the camera translation in [0:3, 3]. You can modify either of these two parts to control the Rt matrix and, with it, the final view angle in the rendering.

If you want to export an angled face model (with the vertex coordinates themselves modified), you can try applying a rotation matrix to the vertices directly, since the model is topologically uniform.

All the advice above is based on my experience. I hope it is helpful to you.
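As a rough, self-contained sketch of building such an Rt matrix (the helper name `make_rt` and the camera position default are illustrative, not part of the FaceScape toolkit):

```python
import numpy as np

def make_rt(yaw_deg, cam_pos=(0.0, 0.0, 10.0)):
    """Build a 3x4 Rt matrix with a rotation about the y axis (yaw)."""
    theta = np.deg2rad(yaw_deg)
    # rotation matrix goes into Rt[0:3, 0:3]
    R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                  [ 0.0,           1.0, 0.0          ],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    # camera translation goes into Rt[0:3, 3]
    t = np.asarray(cam_pos, dtype=float)
    return np.hstack([R, t[:, None]])

Rt = make_rt(30.0)  # view the face rotated 30 degrees about the vertical axis
```

Passing Rt matrices with different yaw angles to the renderer should then produce the rotated views.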


yeluoo commented Jan 24, 2024

I am already using the approach you mentioned. Adjusting the Rt matrix is equivalent to adjusting the viewpoint, but what I want is for the face itself to move left and right; those are two different things. @icewired-yy

@icewired-yy

@yeluoo Sorry for misunderstanding your question.

Do you mean moving the face left or right within the rendered image? Let's say the original face is located at the center of the rendered image.


@yeluoo If that's right, you can try modifying the vertex coordinates of the model before rendering it, just like the official implementation does. For example, in the fit demo:

mesh_tm = trimesh.Trimesh(vertices=mesh.vertices.copy(),
                          faces=fs_fitter.fv_indices_front - 1,
                          process=False)
mesh_tm.vertices[:, :2] = mesh_tm.vertices[:, :2] - np.array([src_img.shape[1] / 2, src_img.shape[0] / 2])
mesh_tm.vertices = mesh_tm.vertices / src_img.shape[0] * 2
mesh_tm.vertices[:, 2] = mesh_tm.vertices[:, 2] - 10

The above code moves the model from image space ([0, resolution]) to OpenGL NDC ([-1, 1]). The same kind of per-vertex translation gives you the left/right movement you want.
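Wrapped as a standalone helper (the function name and defaults are my own; it just restates the snippet above), a sketch of the same mapping:

```python
import numpy as np

def image_space_to_ndc(vertices, img_w, img_h, z_offset=10.0):
    """Map (N, 3) vertices from image space ([0, resolution]) to OpenGL NDC ([-1, 1])."""
    v = np.asarray(vertices, dtype=float).copy()
    v[:, :2] -= np.array([img_w / 2.0, img_h / 2.0])  # center x/y on the image midpoint
    v = v / img_h * 2.0                               # scale into the [-1, 1] range
    v[:, 2] -= z_offset                               # push the model back along -z
    return v

# a vertex at the center of a 768x768 image lands at the NDC origin
v = image_space_to_ndc(np.array([[384.0, 384.0, 0.0]]), 768, 768)
```

After this mapping, adding a constant to `v[:, 0]` shifts the face left or right in NDC, which is the movement being asked about.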


yeluoo commented Jan 25, 2024

Hello, I don't think you understood what I mean. Do you have QQ? You can add me at 1830343214; it's hard to explain this clearly here. @icewired-yy


yeluoo commented Jan 25, 2024

Does the bilinear model have the original 20 expressions? I don't need most of the expanded 52 expressions.

My goal is to use a parametric model to generate landmarks for faces with different fatness, thickness, poses, and expressions. @icewired-yy

@icewired-yy

@yeluoo The FaceScape dataset has already fitted the 20 captured expressions for every participant with its bilinear model; they are stored in the TU (topologically uniform) model part of the FaceScape dataset.

To extract landmarks from a generated face model, or from any FaceScape bilinear model, use the landmark_indices.* files provided under toolkit/predef in the facescape repository. Use those indices to look up the 2D or 3D coordinates of the landmark vertices.

By the way, FaceScape has no pose dimension in its bilinear model. If you want to generate face models with different poses, try FLAME, which has three parameter groups: shape, expression, and pose.

The 52-dim expression vector doesn't mean the model can only generate 52 expressions; you can sample the expression vector from a normal distribution, just as you did with the identity vector.
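A minimal sketch of both ideas together; the array sizes, scale, and index values below are placeholders, not the real FaceScape data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a 52-dim expression vector from a normal distribution,
# analogous to sampling the identity vector (scale 0.1 is an assumption).
exp_vec = rng.normal(loc=0.0, scale=0.1, size=52)

# Stand-ins for mesh_full.vertices and the predefined landmark indices
# (in practice these come from toolkit/predef/landmark_indices.*).
vertices = rng.random((1000, 3))
lm_indices = np.array([12, 305, 780])

# Indexing the vertex array with the landmark indices yields the
# (n_landmarks, 3) coordinates of the landmark vertices.
landmarks_3d = vertices[lm_indices]
```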


yeluoo commented Jan 29, 2024

Thank you for your patient answer. I have another question.

random_color_vec = (np.random.random(100) - 0.5) * 100
mesh = model.gen_face_color(
    id_vec=id_vec,
    exp_vec=exp_vec,
    vc_vec=random_color_vec,
)
depth, face_full = renderer.render_cvcam(
    trimesh.Trimesh(vertices=mesh.vertices,
                    faces=mesh.faces_v - 1,
                    vertex_colors=mesh.vert_colors),
    K=K,
    Rt=Rt,
    rend_size=(768, 768),
)

The background of the rendering result is white. I would like the background to be black, or to have no background at all. What should I modify? @icewired-yy

@icewired-yy

Glad to hear that my suggestions were helpful to you @yeluoo.

The default background of results rendered with pyrender is white. There may be a setting that changes the background, but my solution is to use the depth map as a mask to modify the final result.

The output of rendering is a depth map and a color image. The depth of the background is zero, so we can use:

''' Render scan mesh '''
colorImage, depthImage = scanRenderer(calibratedScanMesh, K, Rt, light)

''' Mask out the face region '''
validRegionMask = depthImage != 0
validRegionMask = validRegionMask[..., np.newaxis]
colorImage = colorImage * validRegionMask

This is part of my own code; you can try this approach. I hope it is helpful to you.
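For reference, the same masking trick on tiny synthetic arrays (the shapes and values below are made up purely for illustration):

```python
import numpy as np

color = np.full((4, 4, 3), 255, dtype=np.uint8)  # all-white "render"
depth = np.zeros((4, 4))
depth[1:3, 1:3] = 5.0                            # the "face" has nonzero depth

mask = (depth != 0)[..., np.newaxis]             # (H, W, 1); broadcasts over RGB
masked = color * mask                            # background pixels become black
```

Multiplying by the boolean mask zeroes every background pixel while leaving the face region untouched, which makes the background black.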
