Thanks to SadTalker for the excellent work.
| official result | my code result |
| --- | --- |
- Data preprocessing
- PoseNet training code
- ExpNet training code
python data_preprocess.py
python save_orig_mel.py
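
save_orig_mel.py presumably extracts mel spectrograms from the training audio. The sketch below only shows the general idea, assuming librosa; the sample rate, number of mel bins, hop length, and output format are placeholders, not the script's actual settings.

```python
# Hypothetical sketch of mel-spectrogram extraction (not the repo's save_orig_mel.py).
# Sample rate, n_mels and hop_length are placeholders.
import os
import numpy as np
import librosa

def save_mel(wav_path, out_dir, sr=16000, n_mels=80, hop_length=200):
    """Load a wav, compute a log-mel spectrogram, and save it as .npy."""
    wav, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=n_mels, hop_length=hop_length)
    log_mel = librosa.power_to_db(mel)
    os.makedirs(out_dir, exist_ok=True)
    out_name = os.path.basename(wav_path).replace(".wav", ".npy")
    np.save(os.path.join(out_dir, out_name), log_mel)

if __name__ == "__main__":
    save_mel("example.wav", "./mels")
```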
After data preprocessing, your data directory structure should be as follows:
python write_train_list.py --data_dir <your image dir>
In the paper, the labels range from 0 to 45. In my data, the character in each video is different, so each video gets its own label. If the same character appears in several of your videos, you may need to rewrite write_train_list.py.
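
If you do need to adapt the labeling, the sketch below shows one way such a train list could be generated, giving each video directory its own integer label. The directory layout and the `path label` line format are assumptions on my part; check write_train_list.py for the format it actually produces.

```python
# Hypothetical sketch of writing a train list with one label per video directory.
# Directory layout and "path label" output format are assumptions.
import os
import argparse

def write_train_list(data_dir, out_path="train_list.txt"):
    """Assign a distinct integer label to each sub-directory (one video / character)."""
    videos = sorted(d for d in os.listdir(data_dir)
                    if os.path.isdir(os.path.join(data_dir, d)))
    with open(out_path, "w") as f:
        for label, video in enumerate(videos):
            # If the same character appears in several videos, map them to one label here.
            f.write(f"{os.path.join(data_dir, video)} {label}\n")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--data_dir", required=True)
    args = parser.parse_args()
    write_train_list(args.data_dir)
```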
Before training, download the pretrained models and put them in ./checkpoints.
python train_posevae.py --save_dir <save root dir> \
                        --save_name <save name> \
                        --train_data_path <your train data txt> \
                        --num_class <your num_class>
Note that num_class must equal the number of distinct labels in the train list. In the paper, num_class is 46.
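
As a quick sanity check before training, you can count the distinct labels in the generated train list and pass that number as --num_class. The snippet assumes the `path label` line format from the sketch above.

```python
# Count distinct labels in the train list (assumes "path label" lines).
def count_classes(train_list_path):
    with open(train_list_path) as f:
        labels = {line.strip().split()[-1] for line in f if line.strip()}
    return len(labels)

print(count_classes("train_list.txt"))  # pass this value as --num_class
```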
python rewrite_safemodel.py
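
rewrite_safemodel.py produces the latest.safetensors checkpoint used for inference below, presumably by writing the newly trained weights back into the SadTalker checkpoint. The snippet is only a rough illustration of that kind of merge with the safetensors library; the file names and the key prefix are placeholders, not what the script actually uses.

```python
# Rough illustration of merging trained weights into a safetensors checkpoint.
# File names and the "pose_vae." key prefix are placeholders, not what
# rewrite_safemodel.py actually uses.
import torch
from safetensors.torch import load_file, save_file

base = load_file("./checkpoints/SadTalker_V0.0.2_256.safetensors")  # original weights (placeholder name)
trained = torch.load("posevae_latest.pth", map_location="cpu")      # newly trained PoseVAE weights

# Overwrite the matching entries in the base checkpoint.
for k, v in trained.items():
    base["pose_vae." + k] = v.contiguous()

save_file(base, "./checkpoints/latest.safetensors")
```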
python inference.py --driven_audio <audio.wav> \
                    --source_image <picture.png> \
                    --checkpoint_dir <latest.safetensors merged from rewrite_safemodel.py> \
                    --enhancer gfpgan \
                    --pose_style <your pose_style>