This repository contains the code for the SGP 2020 paper "DFR: Differentiable Function Rendering for Learning 3D Generation from Images".
Paper URL: https://diglib.eg.org/handle/10.1111/cgf14082
First, make sure that all dependencies are in place. The simplest way to do so is to use Anaconda.
Create an Anaconda environment called dfr and activate it:
conda env create -f dfr.yaml
conda activate dfr
Then, compile the extension modules.
python setup.py build_ext --inplace
We provide four demos, as illustrated in our paper.
The detailed definitions of the networks can be found in dfr/models.py.
We load the pre-trained implicit function from test/checkpoints/gan-chair.pth.tar. Run the demo with
python test1_run_time.py
This script will render the implicit function from different views and save the results as test1_x.png.
The runtime will be printed to the console.
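For context, this demo essentially renders the loaded function from a set of views, times each render, and writes the images to disk. A minimal sketch of such a loop is shown below; the renderer and implicit-function interfaces are illustrative assumptions, not this repository's actual API.

    # Sketch of a render-and-time loop (hypothetical renderer/implicit_fn interfaces).
    import time
    import torch
    from torchvision.utils import save_image

    def run_demo(implicit_fn, renderer, views):
        """Render implicit_fn from each view, save the image, and print timings.

        renderer : callable (implicit_fn, view) -> image tensor (C, H, W)
        views    : list of camera/view parameters, one per output image
        """
        for i, view in enumerate(views):
            start = time.time()
            image = renderer(implicit_fn, view)   # one differentiable rendering pass
            if torch.cuda.is_available():
                torch.cuda.synchronize()          # make GPU timing accurate
            print(f"view {i}: {time.time() - start:.3f}s")
            save_image(image, f"test1_{i}.png")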
Given a reference image and a function defined by a neural network, you can optimize the function to fit the reference image.
We provide an example image with resolution 224x224.
You can try this optimization process with other images: just replace ./input/input_test2.png and run
python test2_differentiable.py
The script will create a GIF, ./test2.gif, and the optimized mesh, ./test2.off, to illustrate the process.
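Conceptually, this demo renders the current shape, compares it with the reference image, and backpropagates the image loss into the network weights. A minimal sketch of such a loop, assuming hypothetical render and implicit_fn interfaces rather than the repository's actual ones, is:

    # Sketch of fitting a neural implicit function to a reference image
    # via a differentiable renderer (hypothetical render/network interfaces).
    import torch
    import torch.nn.functional as F

    def fit_to_image(implicit_fn, render, target, view, steps=500, lr=1e-4):
        """Optimize the network weights so the rendered image matches target.

        implicit_fn : torch.nn.Module defining the implicit function
        render      : differentiable callable (implicit_fn, view) -> image tensor
        target      : reference image tensor, e.g. shape (3, 224, 224) in [0, 1]
        """
        optimizer = torch.optim.Adam(implicit_fn.parameters(), lr=lr)
        for step in range(steps):
            optimizer.zero_grad()
            rendered = render(implicit_fn, view)   # forward: render current shape
            loss = F.mse_loss(rendered, target)    # image-space loss
            loss.backward()                        # gradients flow through the renderer
            optimizer.step()
            if step % 50 == 0:
                print(f"step {step}: loss = {loss.item():.6f}")
        return implicit_fn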
We provide pretrained models for single-image 3D reconstruction and for the image-based 3D GAN.
You can run the single-image 3D reconstruction via
python test3_reconstruct.py
This script will read the images in ./reconstruction/input
and save the reconstructed shapes in ./reconstruction/output.
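Under the hood, single-image reconstruction with an implicit function usually amounts to encoding the image into a latent code, evaluating the conditioned implicit function on a 3D grid, and extracting a mesh with marching cubes. The sketch below illustrates this flow; the encoder and implicit_fn names, their signatures, and the iso-level are assumptions, not the repository's actual code.

    # Sketch of single-image reconstruction: image -> latent code -> grid of
    # implicit values -> triangle mesh (hypothetical encoder/implicit_fn).
    import torch
    from skimage.measure import marching_cubes

    def image_to_mesh(encoder, implicit_fn, image, resolution=64):
        """image : tensor of shape (1, 3, H, W); returns (vertices, faces)."""
        with torch.no_grad():
            latent = encoder(image)                      # image -> shape code
            # Sample the implicit function on a regular 3D grid in [-1, 1]^3.
            coords = torch.linspace(-1.0, 1.0, resolution)
            grid = torch.stack(
                torch.meshgrid(coords, coords, coords, indexing="ij"),
                dim=-1).reshape(-1, 3)
            values = implicit_fn(grid, latent).reshape(
                resolution, resolution, resolution)
        # Extract an iso-surface as a triangle mesh; the level depends on the
        # sign/occupancy convention of the implicit function.
        verts, faces, _, _ = marching_cubes(values.cpu().numpy(), level=0.0)
        return verts, faces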
You can generate random shapes via
python test4_gan.py
This script will randomly sample noise vectors and generate 3D shapes from them. The results are saved in ./gan.
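The sketch below illustrates the noise-to-shape sampling loop this demo describes; generate_shape is a hypothetical stand-in for the repository's generator and mesh-extraction pipeline.

    # Sketch of random shape generation (hypothetical generate_shape interface):
    # sample latent noise vectors and turn each one into a mesh.
    from pathlib import Path
    import torch

    def sample_shapes(generate_shape, n_samples=8, latent_dim=128, out_dir="./gan"):
        """generate_shape : callable mapping a noise vector of shape (1, latent_dim)
        to a mesh object with an .export(path) method (a stand-in for the
        repository's generator and mesh-extraction code)."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        with torch.no_grad():
            for i in range(n_samples):
                z = torch.randn(1, latent_dim)   # random noise vector
                mesh = generate_shape(z)         # decode the noise into a 3D shape
                path = out / f"sample_{i}.off"
                mesh.export(path)
                print(f"saved {path}")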