Fixing references to docs (openvinotoolkit#1638)
sgolebiewski-intel authored Jan 23, 2024
1 parent 689bac2 commit d62e745
Showing 70 changed files with 204 additions and 204 deletions.
4 changes: 2 additions & 2 deletions notebooks/002-openvino-api/002-openvino-api.ipynb
@@ -117,14 +117,14 @@
"\n",
"After initializing OpenVINO Runtime, first read the model file with `read_model()`, then compile it to the specified device with the `compile_model()` method. \n",
"\n",
"[OpenVINO™ supports several model formats](https://docs.openvino.ai/2023.0/Supported_Model_Formats.html#doxid-supported-model-formats) and enables developers to convert them to its own OpenVINO IR format using a tool dedicated to this task.\n",
"[OpenVINO™ supports several model formats](https://docs.openvino.ai/2023.3/Supported_Model_Formats.html) and enables developers to convert them to its own OpenVINO IR format using a tool dedicated to this task.\n",
"\n",
"### OpenVINO IR Model\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"An OpenVINO IR (Intermediate Representation) model consists of an `.xml` file, containing information about network topology, and a `.bin` file, containing the weights and biases binary data. Models in OpenVINO IR format are obtained by using model conversion API. The `read_model()` function expects the `.bin` weights file to have the same filename and be located in the same directory as the `.xml` file: `model_weights_file == Path(model_xml).with_suffix(\".bin\")`. If this is the case, specifying the weights file is optional. If the weights file has a different filename, it can be specified using the `weights` parameter in `read_model()`.\n",
"\n",
"The OpenVINO [Model Conversion API](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html) tool is used to convert models to OpenVINO IR format. Model conversion API reads the original model and creates an OpenVINO IR model (`.xml` and `.bin` files) so inference can be performed without delays due to format conversion. Optionally, model conversion API can adjust the model to be more suitable for inference, for example, by alternating input shapes, embedding preprocessing and cutting training parts off.\n",
"The OpenVINO [Model Conversion API](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html) tool is used to convert models to OpenVINO IR format. Model conversion API reads the original model and creates an OpenVINO IR model (`.xml` and `.bin` files) so inference can be performed without delays due to format conversion. Optionally, model conversion API can adjust the model to be more suitable for inference, for example, by alternating input shapes, embedding preprocessing and cutting training parts off.\n",
"For information on how to convert your existing TensorFlow, PyTorch or ONNX model to OpenVINO IR format with model conversion API, refer to the [tensorflow-to-openvino](../101-tensorflow-classification-to-openvino/101-tensorflow-classification-to-openvino.ipynb) and [pytorch-onnx-to-openvino](../102-pytorch-onnx-to-openvino/102-pytorch-onnx-to-openvino.ipynb) notebooks. "
]
},
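As a quick, hedged illustration of the `read_model()`/`compile_model()` flow this cell describes (file names are hypothetical; assumes an OpenVINO 2023.x release where `import openvino as ov` is available):

```python
import openvino as ov

core = ov.Core()
# read_model() picks up "model.bin" automatically when it sits next to "model.xml";
# pass `weights` explicitly only if the weights file has a different name.
model = core.read_model("model.xml", weights="custom_weights.bin")
compiled_model = core.compile_model(model, device_name="CPU")
```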
@@ -8,7 +8,7 @@
"source": [
"# Convert a TensorFlow Model to OpenVINO™\n",
"\n",
"This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO [Intermediate Representation](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_IR_and_opsets.html) (OpenVINO IR) format, using [Model Conversion API](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html). After creating the OpenVINO IR, load the model in [OpenVINO Runtime](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) and do inference with a sample image. \n",
"This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO [Intermediate Representation](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_IR_and_opsets.html) (OpenVINO IR) format, using [Model Conversion API](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html). After creating the OpenVINO IR, load the model in [OpenVINO Runtime](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) and do inference with a sample image. \n",
"\n",
"#### Table of contents:\n",
"- [Imports](#Imports)\n",
@@ -187,7 +187,7 @@
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"Use the model conversion Python API to convert the TensorFlow model to OpenVINO IR. The `ov.convert_model` function accept path to saved model directory and returns OpenVINO Model class instance which represents this model. Obtained model is ready to use and to be loaded on a device using `ov.compile_model` or can be saved on a disk using the `ov.save_model` function.\n",
"See the [tutorial](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html) for more information about using model conversion API with TensorFlow models."
"See the [tutorial](https://docs.openvino.ai/2023.3/openvino_docs_OV_Converter_UG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html) for more information about using model conversion API with TensorFlow models."
]
},
{
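A minimal sketch of that TensorFlow conversion, assuming a SavedModel directory (the path is hypothetical):

```python
import openvino as ov

# ov.convert_model accepts the path to a TensorFlow SavedModel directory.
ov_model = ov.convert_model("model/v3-small_224_1.0_float")
ov.save_model(ov_model, "model/v3-small_224_1.0_float.xml")  # serialize to IR
compiled_model = ov.compile_model(ov_model, device_name="CPU")
```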
@@ -418,7 +418,7 @@
"## Timing\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"Measure the time it takes to do inference on thousand images. This gives an indication of performance. For more accurate benchmarking, use the [Benchmark Tool](https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html) in OpenVINO. Note that many optimizations are possible to improve the performance. "
"Measure the time it takes to do inference on thousand images. This gives an indication of performance. For more accurate benchmarking, use the [Benchmark Tool](https://docs.openvino.ai/2023.3/openvino_sample_benchmark_tool.html) in OpenVINO. Note that many optimizations are possible to improve the performance. "
]
},
{
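A rough timing loop of the kind this cell describes might look as follows (a sketch, assuming `compiled_model` and a preprocessed `input_image` already exist):

```python
import time

num_images = 1000
start = time.perf_counter()
for _ in range(num_images):
    compiled_model([input_image])  # synchronous inference, one image at a time
elapsed = time.perf_counter() - start
print(f"{num_images / elapsed:.2f} FPS, {elapsed / num_images * 1000:.2f} ms per image")
```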
@@ -250,7 +250,7 @@
"### Convert ONNX Model to OpenVINO IR Format\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"To convert the ONNX model to OpenVINO IR with `FP16` precision, use model conversion API. The models are saved inside the current directory. For more information on how to convert models, see this [page](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html)."
"To convert the ONNX model to OpenVINO IR with `FP16` precision, use model conversion API. The models are saved inside the current directory. For more information on how to convert models, see this [page](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html)."
]
},
{
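A hedged sketch of that step (the ONNX file name is hypothetical):

```python
import openvino as ov

ov_model = ov.convert_model("segmentation_model.onnx")
# compress_to_fp16=True stores the weights in FP16 inside the IR files.
ov.save_model(ov_model, "segmentation_model.xml", compress_to_fp16=True)
```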
@@ -700,7 +700,7 @@
"## Performance Comparison\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"Measure the time it takes to do inference on twenty images. This gives an indication of performance. For more accurate benchmarking, use the [Benchmark Tool](https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html). Keep in mind that many optimizations are possible to improve the performance. "
"Measure the time it takes to do inference on twenty images. This gives an indication of performance. For more accurate benchmarking, use the [Benchmark Tool](https://docs.openvino.ai/2023.3/openvino_sample_benchmark_tool.html). Keep in mind that many optimizations are possible to improve the performance. "
]
},
{
@@ -863,8 +863,8 @@
"* [Pytorch ONNX Documentation](https://pytorch.org/docs/stable/onnx.html)\n",
"* [PIP install openvino-dev](https://pypi.org/project/openvino-dev/)\n",
"* [OpenVINO ONNX support](https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_ONNX_Support.html)\n",
"* [Model Conversion API documentation](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html)\n",
"* [Converting Pytorch model](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch.html)\n"
"* [Model Conversion API documentation](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html)\n",
"* [Converting Pytorch model](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch.html)\n"
]
}
],
@@ -302,7 +302,7 @@
"## Convert PyTorch Model to OpenVINO Intermediate Representation\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"Starting from the 2023.0 release OpenVINO supports direct PyTorch models conversion to OpenVINO Intermediate Representation (IR) format. OpenVINO model conversion API should be used for these purposes. More details regarding PyTorch model conversion can be found in OpenVINO [documentation](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch.html)\n",
"Starting from the 2023.0 release OpenVINO supports direct PyTorch models conversion to OpenVINO Intermediate Representation (IR) format. OpenVINO model conversion API should be used for these purposes. More details regarding PyTorch model conversion can be found in OpenVINO [documentation](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch.html)\n",
"\n",
"\n",
"The `convert_model` function accepts the PyTorch model object and returns the `openvino.Model` instance ready to load on a device using `core.compile_model` or save on disk for next usage using `ov.save_model`. Optionally, we can provide additional parameters, such as:\n",
@@ -311,7 +311,7 @@
"* `example_input` - input data sample which can be used for model tracing.\n",
"* `input_shape` - the shape of input tensor for conversion\n",
"\n",
"and any other advanced options supported by model conversion Python API. More details can be found on this [page](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)"
"and any other advanced options supported by model conversion Python API. More details can be found on this [page](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)"
]
},
{
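A hedged sketch of `convert_model` with the options listed in this hunk (the torchvision model is an illustrative choice, not the notebook's own):

```python
import torch
import torchvision
import openvino as ov

pytorch_model = torchvision.models.resnet50(weights="DEFAULT")
pytorch_model.eval()
# example_input gives the converter a concrete tensor to trace the model with.
ov_model = ov.convert_model(pytorch_model, example_input=torch.zeros(1, 3, 224, 224))
ov.save_model(ov_model, "resnet50.xml")
compiled_model = ov.compile_model(ov_model, device_name="AUTO")
```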
@@ -343,7 +343,7 @@
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"Call the OpenVINO Model Conversion API to convert the PaddlePaddle model to OpenVINO IR, with FP32 precision. `ov.convert_model` function accept path to PaddlePaddle model and returns OpenVINO Model class instance which represents this model. Obtained model is ready to use and loading on device using `ov.compile_model` or can be saved on disk using `ov.save_model` function.\n",
"See the [Model Conversion Guide](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html) for more information about the Model Conversion API."
"See the [Model Conversion Guide](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html) for more information about the Model Conversion API."
]
},
{
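A minimal sketch of the PaddlePaddle case (the model path is hypothetical):

```python
import openvino as ov

# ov.convert_model accepts the path to a PaddlePaddle inference model (.pdmodel).
ov_model = ov.convert_model("mobilenet_v3/inference.pdmodel")
ov.save_model(ov_model, "mobilenet_v3.xml")
compiled_model = ov.compile_model(ov_model, device_name="CPU")
```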
@@ -487,7 +487,7 @@
"## Timing and Comparison\n",
"[back to top ⬆️](#Table-of-contents:)\n",
"\n",
"Measure the time it takes to do inference on fifty images and compare the result. The timing information gives an indication of performance. For a fair comparison, we include the time it takes to process the image. For more accurate benchmarking, use the [OpenVINO benchmark tool](https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html). Note that many optimizations are possible to improve the performance."
"Measure the time it takes to do inference on fifty images and compare the result. The timing information gives an indication of performance. For a fair comparison, we include the time it takes to process the image. For more accurate benchmarking, use the [OpenVINO benchmark tool](https://docs.openvino.ai/2023.3/openvino_sample_benchmark_tool.html). Note that many optimizations are possible to improve the performance."
]
},
{
@@ -689,7 +689,7 @@
"\n",
"\n",
"* [PaddleClas](https://github.com/PaddlePaddle/PaddleClas)\n",
"* [OpenVINO PaddlePaddle support](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle.html)"
"* [OpenVINO PaddlePaddle support](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle.html)"
]
}
],
4 changes: 2 additions & 2 deletions notebooks/104-model-tools/104-model-tools.ipynb
@@ -309,9 +309,9 @@
"Conversion command: /home/ea/work/notebooks_convert/notebooks_conv_env/bin/python -- /home/ea/work/notebooks_convert/notebooks_conv_env/bin/mo --framework=onnx --output_dir=/tmp/tmpgpuw8ex1 --model_name=mobilenet-v2-pytorch --input=data '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.624,57.12,57.375]' --reverse_input_channels --output=prob --input_model=model/public/mobilenet-v2-pytorch/mobilenet-v2.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]' --compress_to_fp16=True\n",
"\n",
"[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression by removing argument --compress_to_fp16 or set it to false --compress_to_fp16=False.\n",
"Find more information about compression to FP16 at https://docs.openvino.ai/latest/openvino_docs_MO_DG_FP16_Compression.html\n",
"Find more information about compression to FP16 at https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_FP16_Compression.html\n",
"[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.\n",
"Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html\n",
"Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html\n",
"[ SUCCESS ] Generated IR version 11 model.\n",
"[ SUCCESS ] XML file: /tmp/tmpgpuw8ex1/mobilenet-v2-pytorch.xml\n",
"[ SUCCESS ] BIN file: /tmp/tmpgpuw8ex1/mobilenet-v2-pytorch.bin\n",
@@ -843,7 +843,7 @@
 }
 },
 "source": [
-"Finally, measure the inference performance of OpenVINO `FP32` and `INT8` models. For this purpose, use [Benchmark Tool](https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html) in OpenVINO.\n",
+"Finally, measure the inference performance of OpenVINO `FP32` and `INT8` models. For this purpose, use [Benchmark Tool](https://docs.openvino.ai/2023.3/openvino_sample_benchmark_tool.html) in OpenVINO.\n",
 "\n",
 "> **Note**: The `benchmark_app` tool is able to measure the performance of the OpenVINO Intermediate Representation (OpenVINO IR) models only. For more accurate performance, run `benchmark_app` in a terminal/command prompt after closing other applications. Run `benchmark_app -m model.xml -d CPU` to benchmark async inference on CPU for one minute. Change `CPU` to `GPU` to benchmark on GPU. Run `benchmark_app --help` to see an overview of all command-line options."
 ]
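The notebooks usually launch `benchmark_app` through the shell; one way to invoke it from Python is sketched below (the model path is hypothetical):

```python
import subprocess

# Benchmark async inference on CPU for 60 seconds; swap CPU for GPU to test a GPU.
subprocess.run(
    ["benchmark_app", "-m", "model.xml", "-d", "CPU", "-t", "60"],
    check=True,
)
```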
8 changes: 4 additions & 4 deletions notebooks/106-auto-device/106-auto-device.ipynb
@@ -9,9 +9,9 @@
"source": [
"# Automatic Device Selection with OpenVINO™\n",
"\n",
"The [Auto device](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_AUTO.html) (or AUTO in short) selects the most suitable device for inference by considering the model precision, power efficiency and processing capability of the available [compute devices](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html). The model precision (such as `FP32`, `FP16`, `INT8`, etc.) is the first consideration to filter out the devices that cannot run the network efficiently.\n",
"The [Auto device](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_AUTO.html) (or AUTO in short) selects the most suitable device for inference by considering the model precision, power efficiency and processing capability of the available [compute devices](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html). The model precision (such as `FP32`, `FP16`, `INT8`, etc.) is the first consideration to filter out the devices that cannot run the network efficiently.\n",
"\n",
"Next, if dedicated accelerators are available, these devices are preferred (for example, integrated and discrete [GPU](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u)). [CPU](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html) is used as the default \"fallback device\". Keep in mind that AUTO makes this selection only once, during the loading of a model. \n",
"Next, if dedicated accelerators are available, these devices are preferred (for example, integrated and discrete [GPU](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_GPU.html)). [CPU](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_CPU.html) is used as the default \"fallback device\". Keep in mind that AUTO makes this selection only once, during the loading of a model. \n",
"\n",
"When using accelerator devices such as GPUs, loading models to these devices may take a long time. To address this challenge for applications that require fast first inference response, AUTO starts inference immediately on the CPU and then transparently shifts inference to the GPU, once it is ready. This dramatically reduces the time to execute first inference.\n",
"\n",
@@ -91,7 +91,7 @@
"ResNet 50 is image classification model pre-trained on ImageNet dataset described in paper [\"Deep Residual Learning for Image Recognition\"](https://arxiv.org/abs/1512.03385).\n",
"From OpenVINO 2023.0, we can directly convert a model from the PyTorch format to the OpenVINO IR format using model conversion API. To convert model, we should provide model object instance into `ov.convert_model` function, optionally, we can specify input shape for conversion (by default models from PyTorch converted with dynamic input shapes). `ov.convert_model` returns openvino.runtime.Model object ready to be loaded on a device with `ov.compile_model` or serialized for next usage with `ov.save_model`. \n",
"\n",
"For more information about model conversion API, see this [page](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html)."
"For more information about model conversion API, see this [page](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html)."
]
},
{
@@ -430,7 +430,7 @@
"\n",
"It is an advantage to define **performance hints** when using Automatic Device Selection. By specifying a **THROUGHPUT** or **LATENCY** hint, AUTO optimizes the performance based on the desired metric. The **THROUGHPUT** hint delivers higher frame per second (FPS) performance than the **LATENCY** hint, which delivers lower latency. The performance hints do not require any device-specific settings and they are completely portable between devices – meaning AUTO can configure the performance hint on whichever device is being used.\n",
"\n",
"For more information, refer to the [Performance Hints](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_AUTO.html#performance-hints) section of [Automatic Device Selection](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_AUTO.html) article.\n",
"For more information, refer to the [Performance Hints](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_AUTO.html#performance-hints-for-auto) section of [Automatic Device Selection](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_AUTO.html) article.\n",
"\n",
"### Class and callback definition\n",
"[back to top ⬆️](#Table-of-contents:)\n"
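A hedged sketch of passing those hints (reusing `core` and `model` from the earlier sketch):

```python
# THROUGHPUT maximizes overall FPS; LATENCY minimizes per-request response time.
compiled_tp = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})
compiled_lat = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})
```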