Hi, I am confused by the addition of x and context_x, i.e. the sum shown in Fig. 2 of your paper.
As described, x lives in the embedding space of the original features, while context_x, the output of the Retrieval module, is a weighted sum of values and therefore lives in the label embedding space. Why can embeddings from different spaces (feature and label) be added directly?
Hi! I am sorry for the slow reply. Has the issue been resolved?
For TabR, as for any deep learning architecture, it is perfectly valid to sum any two representations as long as their shapes are compatible. Whether that leads to good performance depends on the optimization problem induced by the architecture. In the case of TabR, the optimization process will try to configure TabR's weights so that the two embedding spaces become "compatible", in the sense that the sum of x and context_x results in a useful representation. More generally, whether adding two representations is a reasonable operation depends on the architecture.
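To make this concrete, here is a minimal hypothetical sketch (not the actual TabR code; the module names and dimensions are made up for illustration). Two encoders map conceptually different inputs (features vs. labels) into tensors of the same shape, and their sum is a perfectly ordinary tensor operation; training then shapes both encoders so that the sum is useful:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

batch_size, d = 4, 8

# Hypothetical stand-ins for the two pathways:
# one encoder produces x from raw features, the other produces
# context_x from (retrieved) label information.
feature_encoder = nn.Linear(16, d)  # -> "feature embedding space"
label_encoder = nn.Linear(1, d)     # -> "label embedding space"

raw_features = torch.randn(batch_size, 16)
retrieved_labels = torch.randn(batch_size, 1)

x = feature_encoder(raw_features)            # shape: (batch_size, d)
context_x = label_encoder(retrieved_labels)  # shape: (batch_size, d)

# The shapes match, so the sum is valid; gradients flow into both
# encoders, which is what lets training make the two spaces "compatible".
h = x + context_x
print(tuple(h.shape))
```

The point is simply that nothing in the framework distinguishes a "feature embedding" from a "label embedding": both are tensors of shape `(batch_size, d)`, and the addition is well-defined.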
That said, I should add that "embedding space" is a rather intuitive term, at least in the context of this issue. It can be a helpful way to think about architectures, but neither PyTorch nor the optimization algorithm is aware of "embedding spaces"; they are aware only of the shapes of the representations.