
Parameter setting in deepfake detection #6

Open · wytcsuch opened this issue Jul 20, 2021 · 6 comments

Comments

@wytcsuch commented Jul 20, 2021

Thank you very much for your contribution. In the deepfake detection module of the paper, the parameters lambda_1 through lambda_4 are set as follows, which is inconsistent with the code:
[screenshot: parameter settings from the paper]

    loss1 = 0.05 * l1(low_freq_part, zero).to(device)                # spectrum loss
    loss2 = -0.001 * max_value.to(device)                            # repetitive loss
    loss3 = 0.01 * l1(residual_gray, zero_1).to(device)              # magnitude loss
    loss_c = 20 * l_c(classes, labels.type(torch.cuda.LongTensor))   # cross-entropy loss
    loss5 = 0.1 * l1(y, y_trans).to(device)                          # energy loss

Can you explain that? Thank you.

@vishal3477 (Owner)

Hi,
The values in the code might have changed while we were carrying out various ablation studies to find the optimal parameters. To reproduce the experimental results in the paper, please follow the training details mentioned in the paper. Thank you!

@wytcsuch (Author) commented Jul 21, 2021

@vishal3477
Thank you very much for your reply. I set the parameters according to the paper and trained on my own data. The training losses are as follows.
[screenshot: training losses]
I find that the repetitive loss is negative. It seems that training is not behaving normally. Can you help me?
To clarify the logic of the code, I only changed the parameter names:

    low_freq, low_freq_k_part, max_value, low_freq_orig, fingerprint_res, low_freq_trans, fingerprint_gray = model_FEN(batch)
    outputs, features = model_CLS(fingerprint_res)
    _, preds = torch.max(outputs, dim=1)

    n = 25
    zero = torch.zeros([low_freq.shape[0], 2*n+1, 2*n+1], dtype=torch.float32).to(device)
    zero_1 = torch.zeros(fingerprint_gray.shape, dtype=torch.float32).to(device)

    Magnitude_loss  = opt.lambda_1 * L2(fingerprint_gray, zero_1)   # magnitude loss
    Spectrum_loss   = opt.lambda_2 * L2(low_freq_k_part, zero)      # spectrum loss
    Repetitive_loss = -opt.lambda_3 * max_value                     # repetitive loss
    Energy_loss     = opt.lambda_4 * L2(low_freq, low_freq_trans)   # energy loss
    Cross_loss      = opt.lambda_cros * L_cross(outputs, labels)    # cross-entropy loss

    loss = Spectrum_loss + Repetitive_loss + Magnitude_loss + Cross_loss + Energy_loss

parameter setting:

    parser.add_argument('--lambda_1', default=0.05, type=float)    # 0.01
    parser.add_argument('--lambda_2', default=0.001, type=float)   # 0.05
    parser.add_argument('--lambda_3', default=0.1, type=float)     # 0.001
    parser.add_argument('--lambda_4', default=1.0, type=float)     # 0.1
    parser.add_argument('--lambda_cros', default=1.0, type=float)
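
A minimal sketch of the link between these flags and the opt.lambda_* weights used above (standard argparse boilerplate, assumed rather than copied from the repo):

    import argparse

    parser = argparse.ArgumentParser()
    # ... the add_argument lines above go here ...
    opt = parser.parse_args()   # opt.lambda_1, opt.lambda_cros, etc. then weight the losses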

I'm looking forward to your reply.

@vishal3477 (Owner)

The repetitive loss is negative as defined in the paper.
[screenshot: repetitive loss definition from the paper]
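
For readers without the screenshot: restated from the code earlier in this thread (a sketch of the sign convention only, not the paper's exact equation), the repetitive term enters the loss with a minus sign because it is a quantity to be maximized:

    # max_value is a quantity the method wants to MAXIMIZE; gradient descent
    # minimizes, so it enters the total loss negated.
    Repetitive_loss = -opt.lambda_3 * max_value
    # As training progresses and max_value grows, Repetitive_loss becomes more
    # negative, so a negative printed value is expected rather than a bug.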

@littlejuyan

> (quoting @wytcsuch's comment above in full)

Hi, did you solve this problem? Same here: my losses are very similar to yours, and the classification accuracy doesn't improve at all; it stays at about 50%.

@vishal3477 (Owner)

@littlejuyan Can you please share the losses you are getting? The printed repetitive loss is expected to be negative: as defined in the paper, it is a term we want to maximize, so it enters the total loss with a negative sign.

@peihuan494

I see that others have mentioned that the attribution accuracy seems to be higher, but the classification accuracy in my reproduction is also only about 50%. I don't know what the problem is.
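
One generic sanity check for a binary classifier stuck at chance (a hypothetical snippet, not from this repo; names follow the code above) is to see whether the predictions have collapsed onto a single class:

    # Hypothetical debugging snippet: on balanced binary data, ~50% accuracy
    # often means the classifier predicts only one class.
    with torch.no_grad():
        outputs, _ = model_CLS(fingerprint_res)
        preds = outputs.argmax(dim=1)
        # a single unique class here suggests revisiting loss weights / learning rate
        print(preds.unique(return_counts=True))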
