Rethinking Portrait Matting with Privacy Preserving

Installation | Prepare Datasets | Pretrained Models | Train on P3M-10k | Test | Inference

Installation

Requirements:

  • Python 3.7.7+ with NumPy and scikit-image
  • PyTorch (version >= 1.7.1)
  • Torchvision (version 0.8.2)
  1. Clone this repository

    git clone https://github.com/ViTAE-Transformer/P3M-Net.git;

  2. Go into the repository

    cd P3M-Net;

  3. Create conda environment and activate

    conda create -n p3m python=3.7.7

    conda activate p3m;

  4. Install dependencies; install PyTorch and torchvision separately if needed

    pip install -r requirements.txt

    conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch

Our code has been tested with Python 3.7.7, PyTorch 1.7.1, Torchvision 0.8.2, and CUDA 10.2 on Ubuntu 18.04.
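
A quick sanity check to confirm the environment matches the tested versions above (the CUDA check assumes a GPU machine; version strings may carry a CUDA suffix):

    # run inside the activated p3m environment; prints torch/torchvision versions and CUDA availability
    python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"
    # expected with the tested setup above: 1.7.1 0.8.2 True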

Prepare Datasets

| Dataset | Dataset Link (Google Drive) | Dataset Link (Baidu Wangpan 百度网盘) | Dataset Release Agreement |
| :----: | :----: | :----: | :----: |
| P3M-10k | Link | Link (pw: cied) | Agreement (MIT License) |
  1. Download the P3M-10k dataset from the links above and unzip it to the folder P3M_DATASET_ROOT_PATH, then set up the configurations in the file core/config.py. Please make sure that you have checked out and agreed to the agreements. A quick layout check is sketched at the end of this section.

After dataset preparation, the structure of the complete datasets should be like the following.

P3M-10k
├── train
    ├── blurred_image
    ├── mask (alpha mattes)
    ├── fg_blurred
    ├── bg
    ├── facemask
├── validation
    ├── P3M-500-P
        ├── blurred_image
        ├── mask
        ├── trimap
        ├── facemask
    ├── P3M-500-NP
        ├── original_image
        ├── mask
        ├── trimap
  2. If you want to test on the RWP test set, please download the original images and alpha mattes at this link.

After preparation, the structure of the RWP test set should be like the following.

RealWorldPortrait-636
├── image
├── alpha
├── ...
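
A quick way to verify the layout matches the trees above (the /data/... paths below are only placeholders; substitute your own P3M_DATASET_ROOT_PATH and RWP_TEST_SET_ROOT_PATH):

    ls /data/P3M-10k/train           # expect: bg  blurred_image  facemask  fg_blurred  mask
    ls /data/P3M-10k/validation      # expect: P3M-500-NP  P3M-500-P
    ls /data/RealWorldPortrait-636   # expect: alpha  image  ...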

Pretrained Models

Here we provide the model P3M-Net(ViTAE-S) that is trained on P3M-10k for testing.

| Model | Google Drive | Baidu Wangpan (百度网盘) |
| :----: | :----: | :----: |
| P3M-Net(ViTAE-S) | Link | Link (pw: hxxy) |

Here we provide the pretrained models of all backbones for training.

| Model | Google Drive | Baidu Wangpan (百度网盘) |
| :----: | :----: | :----: |
| pretrained models | Link | Link (pw: gxn9) |

Train on P3M-10k

  1. Download the P3M-10k dataset to the root P3M_DATASET_ROOT_PATH (set up in core/config.py);

  2. Download the pretrained models of all backbones from the previous section, and set up the output folder REPOSITORY_ROOT_PATH in core/config.py. The folder structure should be like the following:

[REPOSITORY_ROOT_PATH]
├── logs
├── models
    ├── pretrained
        ├── r34mp_pretrained_imagenet.pth.tar
        ├── swin_pretrained_epoch_299.pth
        ├── vitae_pretrained_ckpt.pth.tar
    ├── trained
  3. Set up parameters in scripts/train.sh, e.g. the config file cfg and the run name nickname (a sketch of these settings follows the commands below), then run the file:

    chmod +x scripts/train.sh

    ./scripts/train.sh
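
Only cfg and nickname are named above; the excerpt below is a hedged sketch of the kind of assignments to edit in scripts/train.sh, with placeholder values (the actual variable names and defaults are those in the shipped script):

    # illustrative excerpt of scripts/train.sh -- placeholder values, not the shipped defaults
    cfg=your_config_name          # config file for the run ("cfg" in the step above)
    nickname=p3m_vitae_s_run1     # name for this training run, used to label its outputs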

Test

Set up parameters in scripts/test.sh, e.g. the config file cfg and the run name nickname, then run the file:

    chmod +x scripts/test.sh

    ./scripts/test.sh

Test using provided model

Test on P3M-10k

  1. Download the provided model trained on P3M-10k as shown in the previous section and unzip it to the folder models/pretrained/;

  2. Download the P3M-10k dataset to the root P3M_DATASET_ROOT_PATH (set up in core/config.py);

  3. Set up parameters in scripts/test_dataset.sh: choose dataset=P3M10K, and valset=P3M_500_NP or valset=P3M_500_P depending on which validation set you want to use (a sketch of these settings follows this list), then run the file:

    chmod +x scripts/test_dataset.sh

    ./scripts/test_dataset.sh

  4. The predicted alpha mattes will be saved in the folder args.test_result_dir. Note that the evaluation results may differ slightly from those reported in the paper due to differences in package versions and in the testing strategy.
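
For reference, the two settings named in step 3 correspond to assignments like the following inside scripts/test_dataset.sh (a sketch; any other variables are left as shipped):

    # illustrative excerpt of scripts/test_dataset.sh
    dataset=P3M10K       # evaluate on the P3M-10k validation sets
    valset=P3M_500_P     # face-blurred validation set; use P3M_500_NP for the original (non-blurred) images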

Test on RWP

  1. Download the provided model trained on P3M-10k as shown in the previous section and unzip it to the folder models/pretrained/;

  2. Download the RWP dataset to the root RWP_TEST_SET_ROOT_PATH (set up in core/config.py); the download link is here;

  3. Set up parameters in scripts/test_dataset.sh: choose dataset=RWP and valset=RWP, then run the file:

    chmod +x scripts/test_dataset.sh

    ./scripts/test_dataset.sh

  4. The predicted alpha mattes will be saved in the folder args.test_result_dir. Note that the evaluation results may differ slightly from those reported in the paper due to differences in package versions and in the testing strategy.

Test on Samples

  1. Download the provided model trained on P3M-10k as shown in the previous section and unzip it to the folder models/pretrained/;

  2. Download images to the root SAMPLES_ROOT_PATH/original (set up in config.py);

  3. Set up parameters in scripts/test_samples.sh, and run the file:

    chmod +x samples/original/*

    chmod +x scripts/test_samples.sh

    ./scripts/test_samples.sh

  4. The predicted alpha mattes will be saved in the folder SAMPLES_RESULT_ALPHA_PATH (set up in config.py), and the color results will be saved in the folder SAMPLES_RESULT_COLOR_PATH (set up in config.py). Note that the results may differ slightly from those reported in the paper due to differences in package versions and in the testing strategy.

Inference Code - How to Test on Your Images

Here we describe the procedure for testing on sample images with our pretrained P3M-Net(ViTAE-S) model; a minimal end-to-end command sequence is sketched after the steps:

  1. Set up the environment following this instruction page;

  2. Set the path REPOSITORY_ROOT_PATH in the file core/config.py;

  3. Download the pretrained P3M-Net(ViTAE-S) model from here (Google Drive | Baidu Wangpan (pw: hxxy)) and unzip it to the folder models/pretrained/;

  4. Save your sample images in the folder samples/original/;

  5. Set up parameters in the file scripts/test_samples.sh and run:

    chmod +x scripts/test_samples.sh

    ./scripts/test_samples.sh

  6. The predicted alpha mattes and transparent color images will be saved in the folders samples/result_alpha/ and samples/result_color/, respectively.
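
Putting steps 4–6 together, a minimal end-to-end run looks like the following (your_portrait.jpg is a placeholder file name; the input and output folders are the defaults listed above):

    cp /path/to/your_portrait.jpg samples/original/
    chmod +x scripts/test_samples.sh
    ./scripts/test_samples.sh
    ls samples/result_alpha/    # predicted alpha mattes
    ls samples/result_color/    # transparent color results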

We show some sample images, the predicted alpha mattes, and their transparent results below, using the pretrained P3M-Net(ViTAE-S) model from the section P3M-Net and Variants with the `RESIZE` test strategy.