Hi, thanks for such a great blog post. Love it.
I'm trying to understand the paper's approach more deeply through your code implementation.
As far as I know, the paper suggests taking the gradient of the softmax input (i.e. the pre-softmax score) with respect to the target conv layer.
In your code, I think this corresponds to the output of the softmax layer:
loss = K.sum(model.layers[-1].output)
I was wondering if this should be corrected to
loss = K.sum(model.layers[-1].output.op.inputs[0])
with the gradient then taken via the K.gradients function (see the sketch below).
Please correct me if I've misunderstood the concept or your approach.
Thank you!
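For reference, here is a minimal sketch of the two variants under discussion, assuming the TensorFlow graph-mode backend; VGG16 and the block5_conv3 target layer are illustrative choices, not taken from the original code:

```python
from keras.applications.vgg16 import VGG16
from keras import backend as K

# Illustrative model; any Keras classifier ending in a Dense softmax works the same way.
model = VGG16(weights='imagenet')

# Target conv layer whose activations we differentiate against.
conv_output = model.get_layer('block5_conv3').output

# Variant 1 (as in the blog post's code): sum over the softmax *output*.
loss_softmax = K.sum(model.layers[-1].output)

# Variant 2 (closer to the paper): sum over the softmax *input*, i.e. the
# pre-softmax logits, recovered here by walking one op back through the graph.
# This relies on the TensorFlow backend exposing the .op attribute.
loss_logits = K.sum(model.layers[-1].output.op.inputs[0])

# K.gradients (note the plural) returns a list of gradient tensors.
grads = K.gradients(loss_logits, conv_output)[0]
```

Differentiating the pre-softmax score is generally what the Grad-CAM paper describes: the softmax couples all class scores and its gradient can saturate, which weakens the class-discriminative signal in the resulting map.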