
[onert] Use the shape signature in baseloader #14531

Merged
merged 1 commit on Jan 13, 2025

Conversation

Contributor

@ys44kim ys44kim commented Jan 8, 2025

This commit uses the shape signature in the base loader.
If there is an unknown shape in the circle model, the unknown shape is maintained.

ONE-DCO-1.0-Signed-off-by: youngsik kim [email protected]

shape.append(dim);
}
}
}
Contributor Author

Note for reviewer

  • Currently, if there is an unknown shape (shape_signature[k] == -1), we use tensor->shape()[k] == 1
  // Note for tensor->shape_signature()
  // We don't handle shape signature
  //    How we handle:
  //       If shape_signature[k] == -1, we will use tensor->shape()[k] == 1
  //       If app wants to change the input shape, call nnfw_apply_input_tensorinfo() can
  //       be used.  
[      Lower     ] Operand   %0 Info
[      Lower     ]   - Shape     : { 1 1 1440 }
[      Lower     ] Operand   %1 Info
[      Lower     ]   - Shape     : { 1 1 1 1 } // circle { 1 1 1 -1 }
  • If input_tensor is not set, the dimension is defined as 1 and the operation does not run normally.
    Of course, no assert/crash occurs because it is set to 1.
  • And when the graph is executed, the tensor shape alone does not tell whether a dimension is unknown.
  • I think that in order to detect an unknown shape during the prepare step, the '-1' must be maintained,
  • and the shape info must be set before execution if the shape is unknown.
  • Result after this PR is applied:
[      Lower     ] Operand   %0 Info
[      Lower     ]   - Shape     : { 1 1 1440 }
[      Lower     ] Operand   %1 Info
[      Lower     ]   - Shape     : { 1 1 1 -1 }

So, I would like to ask for feedback on how to handle unknown shapes:

  1. Maintain the current status.
  2. Handle the shape signature. (This PR)
  3. Other opinions.

Contributor
Is it guaranteed in circle that the shape signature takes priority over the tensor shape? @seanshpark @jinevening Do you know about that?

Contributor

@hseok-oh hseok-oh Jan 8, 2025

Previous implementation comes from: #2059
TFLite file converter: tensorflow/tensorflow@5591208

Contributor

@glistening Do you remember why we decided to ignore shape_signature on #2059?

Contributor

@glistening glistening Jan 8, 2025

@hseok-oh I read the issues you've referenced above. It seems there was some conflict when running with unknown shapes while specifying a shape via nnfw_set_input_tensorinfo, for example, onert_run with --shape_prepare. Ignoring the signature seems to have been a workaround. I don't know/remember the details. If there is no error with --shape_prepare, we may let the shape signature in and see what happens. :)

(ADD) I confirmed this PR works with the --shape_prepare option.

Contributor

@glistening glistening Jan 8, 2025

To reduce code duplication, it may be better:

  const auto *tensor_shape = tensor->shape_signature() ? tensor->shape_signature() : tensor->shape();
  if (tensor_shape != nullptr)
  {
    for (const auto &dim : *tensor_shape)
    {
      shape.append(dim);
    }
  }

Contributor Author

To reduce code duplication, it may be better:

  const auto *tensor_shape = tensor->shape_signature() ? tensor->shape_signature() : tensor->shape();
  if (tensor_shape != nullptr)
  {
    for (const auto &dim : *tensor_shape)
    {
      shape.append(dim);
    }
  }

I updated with your proposed code.
The readability of the code is much better, and the duplicate code is gone.
Thank you.

Contributor

Is it guaranteed in circle that the shape signature takes priority over the tensor shape? @seanshpark @jinevening Do you know about that?

Sorry for the late response.
The compiler frontend uses the shape signature first, if it exists, to check whether a dimension is dynamic (unknown) or not.

The current related code in luci/import looks like:

    const auto tensor_shape_signature = luci::wrap(tensor->shape_signature());
    const auto tensor_shape = luci::wrap(tensor->shape());
    assert(tensor_shape_signature.size() == 0 ||
           tensor_shape_signature.size() == tensor_shape.size());

    // Shape of GraphInput
    auto input_shape = std::make_unique<loco::TensorShape>();
    const auto &input_dims = tensor_shape; // in NHWC
    input_shape->rank(input_dims.size());
    for (uint32_t r = 0; r < input_dims.size(); ++r)
    {
      if (tensor_shape_signature.size() > 0 && tensor_shape_signature.at(r) == -1)
        input_shape->dim(r).unset();
      else
        input_shape->dim(r).set(input_dims[r]);
    }
    graph_input->shape(std::move(input_shape));
  }

@ys44kim ys44kim requested a review from a team January 8, 2025 05:02
This commit uses the shape signature in the base loader.
If there is an unknown shape in the circle model, the unknown shape is maintained.

ONE-DCO-1.0-Signed-off-by: youngsik kim <[email protected]>
@ys44kim ys44kim force-pushed the runtime/loader/dyn_250106 branch from ec7a51d to f56fd0c Compare January 8, 2025 07:45
Contributor

@hseok-oh hseok-oh left a comment

I agree with @ys44kim 's approach. LGTM

@@ -364,7 +364,8 @@ ir::OperandIndex BaseLoader<LoaderDomain>::loadOperand(const Tensor *tensor, ir:
 {
   ir::Shape shape;
   // Shape
-  const auto *tensor_shape = tensor->shape();
+  const auto *tensor_shape =
+    tensor->shape_signature() ? tensor->shape_signature() : tensor->shape();
Contributor

Is it guaranteed in circle that the shape signature takes priority over the tensor shape? @seanshpark @jinevening Do you know about that?

Originally posted by @ragmani in #14531 (comment)

Contributor

@ragmani I guess so, and I confirmed (again?). Please find some relevant code from TensorFlow Lite.

From v2.3.0 (the signature seems to have been introduced in v2.2.0):

TfLiteStatus Subgraph::ResizeInputTensorStrict(int tensor_index,
                                               const std::vector<int>& dims) {
  TF_LITE_ENSURE(&context_,
                 tensor_index < context_.tensors_size && tensor_index >= 0);
  TfLiteTensor* tensor = &context_.tensors[tensor_index];

  // Ensure that only unknown dimensions can be resized.
  TF_LITE_ENSURE_EQ(&context_, tensor->dims->size, dims.size());
  for (size_t idx = 0; idx < dims.size(); idx++) {
    // `dims_signature` is not defined when no unknown dimensions are present.
    int dim_signature;
    if (tensor->dims_signature && tensor->dims_signature->size) {
      dim_signature = tensor->dims_signature->data[idx];
    } else {
      dim_signature = tensor->dims->data[idx];
    }

    if (dim_signature != -1 && dim_signature != dims[idx]) {
      ReportError(
          "Attempting to resize dimension %d of tensor %d with value %d to %d. "
          "ResizeInputTensorStrict only allows mutating unknown dimensions "
          "identified by -1.",
          idx, tensor_index, dim_signature, dims[idx]);
      return kTfLiteError;
    }
  }

  return ResizeInputTensor(tensor_index, dims);
}

Contributor

@ragmani ragmani Jan 8, 2025

I am not referring to TensorFlow Lite. However, I found some relevant code and realized that TensorFlow Lite does not always prioritize dims_signature.

ResizeInputTensorStrict is an experimental interface.

core/subgraph.cc

  // Change the dimensionality of a given tensor. Note, this is only acceptable
  // for tensor indices that are inputs.
  // Returns status of failure or success.
  TfLiteStatus ResizeInputTensor(int tensor_index,
                                 const std::vector<int>& dims);

  // WARNING: Experimental interface, subject to change
  // Change the dimensionality of a given tensor. This is only acceptable for
  // tensor indices that are inputs or variables. Only unknown dimensions can be
  // resized with this function. Unknown dimensions are indicated as `-1` in the
  // `dims_signature` attribute of a `TfLiteTensor`. Returns status of failure
  // or success.
  TfLiteStatus ResizeInputTensorStrict(int tensor_index,
                                       const std::vector<int>& dims);

ResizeInputTensor does not consider dims_signature at all.

TfLiteStatus Subgraph::ResizeInputTensor(int tensor_index,
                                         const std::vector<int>& dims) {
  const bool delegates_applied = !delegates_applied_.empty();
  const bool graph_is_immutable = state_ == kStateInvokableAndImmutable;
  if (graph_is_immutable && !delegates_applied) {
    ReportError("ResizeInputTensor is disallowed when graph is immutable.");
    return kTfLiteError;
  }

  TF_LITE_ENSURE(&context_,
                 tensor_index < context_.tensors_size && tensor_index >= 0);
  TfLiteTensor* tensor = &context_.tensors[tensor_index];

  // Short-circuit the state change if the dimensions don't change, avoiding
  // unnecessary (re)allocations.
  //
  // Note that it's required to check `tensor->data.raw != nullptr`. Otherwise
  // the subgraph won't allocate memory for a dynamic tensor when its size
  // is equal to the original tensor size.
  if (tensor->data.raw != nullptr &&
      EqualArrayAndTfLiteIntArray(tensor->dims, dims.size(), dims.data())) {
    return kTfLiteOk;
  }

  if (graph_is_immutable) {
    // Undo delegation if it resulted in the graph being immutable.
    TF_LITE_ENSURE_STATUS(UndoAllDelegates());
  }
  state_ = kStateUninvokable;
  return ResizeTensorImpl(tensor, ConvertVectorToTfLiteIntArray(dims));
}

Anyway, what I want to know is whether circle always prioritizes shape_signature if it exists. TensorFlow Lite is in a mostly inactive state. @seanshpark @jinevening Could you let us know?

Contributor

@ys44kim

According to #14531 (comment), the compiler frontend uses the shape signature first only for unknown dimensions. Could you add defensive code to check each dimension?

Contributor

@glistening glistening left a comment

LGTM

Contributor

@zetwhite zetwhite left a comment

LGTM 👍

@ys44kim
Contributor Author

ys44kim commented Jan 13, 2025

@ragmani
PTAL if you have time.

@ys44kim
Contributor Author

ys44kim commented Jan 13, 2025

@hseok-oh
This PR is required for the genai-playground test.
Please merge it if you have time.

@seanshpark
Contributor

Please go ahead without my approval.

@hseok-oh hseok-oh merged commit 8543108 into Samsung:master Jan 13, 2025
9 checks passed
@ys44kim ys44kim deleted the runtime/loader/dyn_250106 branch January 13, 2025 07:34