doc: conv: add note for the dilation computation formula
shu1chen authored and vpirogov committed Jan 31, 2025
1 parent 9ec09c7 commit 3558ed9
Showing 2 changed files with 18 additions and 1 deletion.
6 changes: 6 additions & 0 deletions doc/primitives/convolution.md
@@ -100,6 +100,12 @@ Here:
- \f$OW = \left\lfloor{\frac{IW - DKW + PW_L + PW_R}{SW}}
\right\rfloor + 1,\f$ where \f$DKW = 1 + (KW - 1) \cdot (DW + 1)\f$.
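
As a quick worked example (with illustrative values chosen here, not taken from the
text): for \f$KW = 3\f$ and \f$DW = 1\f$, the dilated kernel width is
\f$DKW = 1 + (3 - 1) \cdot (1 + 1) = 5\f$; then with \f$IW = 13\f$,
\f$PW_L = PW_R = 0\f$, and \f$SW = 1\f$, the output width is
\f$OW = \left\lfloor{\frac{13 - 5 + 0 + 0}{1}}\right\rfloor + 1 = 9\f$.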

@note In oneDNN, convolution without dilation is defined by setting the dilation
parameters to `0`. This differs from PyTorch and TensorFlow, where the non-dilated
case corresponds to a dilation value of `1`. As a result, the PyTorch and
TensorFlow dilation parameters need to be adjusted by subtracting `1` (for example,
\f$DH_{onednn} = DH_{torch} - 1\f$ and \f$DW_{onednn} = DW_{torch} - 1\f$).

#### Deconvolution (Transposed Convolution)

Deconvolutions (also called fractionally strided convolutions or transposed
13 changes: 12 additions & 1 deletion examples/primitives/pooling.cpp
@@ -1,5 +1,5 @@
/*******************************************************************************
* Copyright 2020-2022 Intel Corporation
* Copyright 2020-2025 Intel Corporation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -66,6 +66,17 @@ void pooling_example(dnnl::engine::kind engine_kind) {
DH = 1, // height-wise dilation
DW = 1; // width-wise dilation

// oneDNN uses the following formula to calculate destination dimensions:
// dst = (src - ((weights - 1) * (dilation_onednn + 1) + 1)) / stride + 1
//
// PyTorch and TensorFlow use a different formula:
// dst = (src - ((weights - 1) * dilation_torch + 1)) / stride + 1
//
// As a result, the PyTorch and TensorFlow dilation parameters need to be
// adjusted by subtracting 1:
// dilation_onednn = dilation_torch - 1.
//
// Output tensor height and width.
const memory::dim OH = (IH - ((KH - 1) * DH + KH) + PH_L + PH_R) / SH + 1;
const memory::dim OW = (IW - ((KW - 1) * DW + KW) + PW_L + PW_R) / SW + 1;

