This repository is the official implementation of [Deep Geometry Analysis for Uncalibrated Multi-View 3D Human Pose Estimation].
To install requirements:

1. Create a conda virtual environment:

```shell
conda create -n mvhpe python=3.7.11
conda activate mvhpe
```

2. Install PyTorch:

```shell
pip install torch==1.8.1+cu101 torchvision==0.9.1+cu101 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
```

3. Install the remaining requirements:

```shell
pip install -r requirements.txt
```
Download the required data:

- Download our data from Google Drive
- Download our pretrained model from Google Drive

Place the `data` and `checkpoint` directories as shown below:
```
|-- data
|   |-- h36m_sub1.npz
|   |-- ...
|   |-- h36m_sub11.npz
|   `-- score.pkl
|-- checkpoint
|   |-- h36m_cpn_uncalibration.pth
|   |-- h36m_gt_uncalibration.pth
|   `-- h36m_cpn_calibration.pth
```
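As a quick sanity check after downloading, you can list the arrays stored in one of the `.npz` files. The snippet below is only a sketch: the actual key names and shapes inside `h36m_sub1.npz` are not documented here, so `positions_2d` / `positions_3d` are hypothetical placeholders, and the example writes a small stand-in file so it runs without the real download.

```python
import os
import tempfile

import numpy as np

# Build a small stand-in .npz so the snippet runs without the real download.
# NOTE: the key names and shapes below are assumptions, not the repository's
# documented format -- inspect the downloaded files the same way.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "h36m_sub1.npz")
np.savez(
    path,
    positions_2d=np.zeros((10, 4, 17, 2)),  # (frames, views, joints, xy) -- assumed
    positions_3d=np.zeros((10, 17, 3)),     # (frames, joints, xyz) -- assumed
)

# List every array name and its shape.
with np.load(path) as data:
    for key in data.files:
        print(key, data[key].shape)
```

Running the same loop on the downloaded files shows what each archive actually contains.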
To evaluate our model, run:

```shell
python eval_h36m_gt_uncalibration.py --test --in_chans 2 --previous_dir ./checkpoint/h36m_gt_uncalibration.pth --root_path /home/zzj/TMM/MV-3D-HPE/data/ --gpu 0
python eval_h36m_cpn_uncalibration.py --test --out_chans 3 --previous_dir ./checkpoint/h36m_cpn_uncalibration.pth --root_path /home/zzj/TMM/MV-3D-HPE/data/ --gpu 0
python eval_h36m_cpn_calibration.py --test --out_chans 2 --previous_dir ./checkpoint/h36m_cpn_calibration.pth --root_path /home/zzj/TMM/MV-3D-HPE/data/ --gpu 0
```
Our model achieves the following performance on Human3.6M:

| Method | Camera | MPJPE |
| --- | --- | --- |
| Ours (GT, T=27) | Uncalibrated | 5.2 mm |
| Ours (CPN, T=27) | Uncalibrated | 24.5 mm |
| Ours (CPN, T=27) | Calibrated | 23.8 mm |
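MPJPE (mean per-joint position error) is the Euclidean distance between predicted and ground-truth 3D joint positions, averaged over joints and frames, reported here in millimetres. A minimal NumPy sketch of the metric (an illustration, not the repository's evaluation code):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error, in the input units (mm here).

    pred, gt: arrays of shape (frames, joints, 3).
    """
    # Per-joint Euclidean distance, then average over all joints and frames.
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example: every joint offset by 5 mm along x.
gt = np.zeros((2, 17, 3))
pred = gt.copy()
pred[..., 0] += 5.0
print(mpjpe(pred, gt))  # -> 5.0
```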
Training code will be released after the paper is accepted.
By downloading and using this code you agree to the terms in the LICENSE. Third-party datasets and software are subject to their respective licenses.