
[PyTorch]: implement support for replicated{1,2,3} pad #28271

Open
wants to merge 3 commits into base: master
Conversation


@11happy 11happy commented Jan 4, 2025

Overview:
This pull request fixes #23322.
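For context, replication padding extends a tensor by repeating its edge values, in contrast to constant padding, which fills with a fixed value. A minimal pure-Python sketch of the 1-D case (the function name is illustrative; no torch dependency):

```python
def replication_pad1d(seq, pad_left, pad_right):
    # Repeat the first/last element, mirroring aten::replication_pad1d
    # semantics on a plain list.
    if not seq:
        raise ValueError("input must be non-empty")
    return [seq[0]] * pad_left + list(seq) + [seq[-1]] * pad_right

print(replication_pad1d([1, 2, 3], 2, 1))  # [1, 1, 1, 2, 3, 3]
```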

Testing:

  • Tested the updated code (screenshot attached: 2025-01-05 00-46-00).

CC:

@11happy 11happy requested a review from a team as a code owner January 4, 2025 19:18
@11happy 11happy requested review from mvafin and PiotrKrzem January 4, 2025 19:18
@github-actions github-actions bot added the category: PyTorch FE OpenVINO PyTorch Frontend label Jan 4, 2025
@sys-openvino-ci sys-openvino-ci added the ExternalPR External contributor label Jan 4, 2025

11happy commented Jan 6, 2025

humble ping!
thank you

auto data = context.get_input(0);                                // input tensor
auto paddings = context.const_input<std::vector<int64_t>>(1);    // PyTorch-style pad list
Output<Node> pad_value = context.mark_node(v0::Constant::create(element::f32, Shape{}, {0}));  // pad value (not used by replicate/edge mode)
return translate_pad_common(context, data, paddings, pad_value, "replicate");
Member:

I am not sure that the logic from translate_pad_common is suitable for the replication operation.
That is because, in the tuple case, the format of paddings is: <pad_begin_axis1>, <pad_end_axis1>, <pad_begin_axis2>, <pad_end_axis2>, ...

Maybe we need to do the following:

  1. broadcast the padding to convert it to the tuple case
  2. create a pad vector of zeros with length equal to the rank of the input
  3. scatter elements from the broadcasted padding into the pad vector at the known indices defined for the 1d, 2d and 3d cases
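For illustration, the steps above could be sketched in pure Python (the function name is hypothetical and this is not OpenVINO code; the broadcast-from-scalar step is omitted for brevity). PyTorch pad lists apply (begin, end) pairs starting from the last dimension and working inward:

```python
def pytorch_pad_to_per_axis(paddings, rank):
    """Convert a PyTorch-style pad list (pairs ordered from the last
    dimension inward) into per-axis pads_begin / pads_end of length rank."""
    assert len(paddings) % 2 == 0 and len(paddings) // 2 <= rank
    pads_begin = [0] * rank  # zero vector with length equal to input rank
    pads_end = [0] * rank
    for i in range(len(paddings) // 2):
        axis = rank - 1 - i               # pairs count back from the last axis
        pads_begin[axis] = paddings[2 * i]      # scatter begin value
        pads_end[axis] = paddings[2 * i + 1]    # scatter end value
    return pads_begin, pads_end

# e.g. a pad list (1, 2, 3, 4) on a 4-D NCHW tensor:
print(pytorch_pad_to_per_axis([1, 2, 3, 4], 4))
# ([0, 0, 3, 1], [0, 0, 4, 2])
```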

Contributor Author:

I currently do not see why this logic may be incorrect. Could you please give an example to explain? That would be very helpful. Thank you.

Member:

Hi @11happy,
I meant that the logic for preparing the pad vector can be simplified, but I understand that may be out of scope for this PR, since it is used by other translators.
Let us see if the tests pass. Please resolve the conflicts and I will trigger CI for your PR.

Thanks.

Contributor Author:

Resolved the conflicts.
Thank you


rkazants commented Jan 7, 2025

build_jenkins

@11happy 11happy requested a review from rkazants January 20, 2025 15:35
Signed-off-by: 11happy <[email protected]>
rkazants (Member):

build_jenkins

Labels
category: PyTorch FE (OpenVINO PyTorch Frontend), ExternalPR (External contributor)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Good First Issue]: Support aten::replication_pad{1,2,3}d for pytorch models
4 participants