In the minimize training step, the training goal is to minimize the association discrepancy, so the prior_loss term in "loss2 = rec_loss + self.k * prior_loss" tends toward 0, and the overall value of loss2 therefore trends downward as well.
Given that, why is "score2 = -val_loss2" used instead of "score2 = val_loss2" when the early-stopping check is "score2 < self.best_score2 + self.delta"?
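For context, here is a minimal sketch of the early-stopping logic I am referring to. The patience/counter structure below is my paraphrase of the common EarlyStopping recipe and not a verbatim copy of the repository's code; only the "score2 = -val_loss2" line and the comparison against "self.best_score2 + self.delta" are taken from the code in question.

```python
class EarlyStoppingSketch:
    """Minimal sketch of the score2 = -val_loss2 convention under discussion."""

    def __init__(self, patience=3, delta=0.0):
        self.patience = patience
        self.delta = delta
        self.best_score2 = None
        self.counter = 0
        self.early_stop = False

    def __call__(self, val_loss2):
        # Negating the validation loss turns "lower loss" into "higher score".
        score2 = -val_loss2
        if self.best_score2 is None:
            self.best_score2 = score2
        elif score2 < self.best_score2 + self.delta:
            # score2 did not improve, i.e. val_loss2 did not reach a new minimum.
            self.counter += 1
            if self.counter >= self.patience:
                self.early_stop = True
        else:
            # score2 improved, i.e. val_loss2 decreased past the previous best.
            self.best_score2 = score2
            self.counter = 0
```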