[DOCS] adjustments preparing 2025.0 pass 2 (#28454)
kblaszczak-intel authored Jan 21, 2025
1 parent bad9b10 commit ca93523
Showing 18 changed files with 77 additions and 525 deletions.
412 changes: 0 additions & 412 deletions cspell.json

This file was deleted.

98 changes: 28 additions & 70 deletions docs/articles_en/about-openvino/release-notes-openvino.rst
@@ -16,7 +16,7 @@ OpenVINO Release Notes



2024.6 - 18 December 2024
2025.0 - 05 February 2025
#############################

:doc:`System Requirements <./release-notes-openvino/system-requirements>` | :doc:`Release policy <./release-notes-openvino/release-policy>` | :doc:`Installation Guides <./../get-started/install-openvino>`
@@ -26,10 +26,9 @@ OpenVINO Release Notes
What's new
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

* OpenVINO 2024.6 release includes updates for enhanced stability and improved LLM performance.
* Introduced support for Intel® Arc™ B-Series Graphics (formerly known as Battlemage).
* Implemented optimizations to improve the inference time and LLM performance on NPUs.
* Improved LLM performance with GenAI API optimizations and bug fixes.
* .
* .




@@ -39,26 +38,19 @@ OpenVINO™ Runtime
CPU Device Plugin
-----------------------------------------------------------------------------------------------

* KV cache now uses asymmetric 8-bit unsigned integer (U8) as the default precision, reducing
  memory stress for LLMs and increasing their performance. This option can be controlled by
  model metadata (a configuration sketch follows this list).
* Quality and accuracy have been improved for selected models with several bug fixes.
* .
* .
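As referenced in the KV cache bullet above, a minimal sketch of overriding the default
KV-cache precision through a runtime hint. It assumes the ``openvino`` Python package and a
hypothetical ``model.xml`` IR of an LLM; the exact hint name and accepted values should be
checked against the installed release.

.. code-block:: python

   import openvino as ov
   import openvino.properties.hint as hints

   core = ov.Core()

   # With no explicit hint, the CPU plugin keeps the KV cache in asymmetric U8.
   # Passing the kv_cache_precision hint overrides that default (sketch only).
   compiled = core.compile_model(
       "model.xml",                              # hypothetical LLM IR path
       "CPU",
       {hints.kv_cache_precision: ov.Type.f16},  # e.g. fall back to an FP16 cache
   )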

GPU Device Plugin
-----------------------------------------------------------------------------------------------

* Device memory copy optimizations have been introduced for inference with **Intel® Arc™ B-Series
  Graphics** (formerly known as Battlemage). Because this hardware does not use the L2 cache for
  copying memory between the device and host, a dedicated `copy` operation is used when inputs or
  results are not expected to reside in device memory.
* ChatGLM4 inference on GPU has been optimized.
* .
* .

NPU Device Plugin
-----------------------------------------------------------------------------------------------

* LLM performance and inference time have been improved with memory optimizations.


* .



@@ -98,14 +90,10 @@ Previous 2025 releases
.. ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
.. dropdown:: 2024.5 - 20 November 2024
.. dropdown:: 2024.6 - 18 December 2024
:animate: fade-in-slide-down
:color: secondary

**What's new**

* More GenAI coverage and framework integrations to minimize code changes.




@@ -126,74 +114,44 @@ page.



Discontinued in 2024
Discontinued in 2025
-----------------------------

* Runtime components:

* Intel® Gaussian & Neural Accelerator (Intel® GNA). Consider using the Neural Processing
Unit (NPU) for low-powered systems like Intel® Core™ Ultra or 14th generation and beyond.
* OpenVINO C++/C/Python 1.0 APIs (see
`2023.3 API transition guide <https://docs.openvino.ai/2023.3/openvino_2_0_transition_guide.html>`__
for reference).
* All ONNX Frontend legacy API (known as ONNX_IMPORTER_API).
* ``PerformanceMode.UNDEFINED`` property as part of the OpenVINO Python API.
* OpenVINO property Affinity API is no longer available. It has been replaced with CPU
  binding configurations (``ov::hint::enable_cpu_pinning``); see the sketch after this list.

* Tools:

* Deployment Manager. See :doc:`installation <../get-started/install-openvino>` and
:doc:`deployment <../get-started/install-openvino>` guides for current distribution
options.
* `Accuracy Checker <https://github.com/openvinotoolkit/open_model_zoo/blob/master/tools/accuracy_checker/README.md>`__.
* `Post-Training Optimization Tool <https://docs.openvino.ai/2023.3/pot_introduction.html>`__
(POT). Neural Network Compression Framework (NNCF) should be used instead.
* A `Git patch <https://github.com/openvinotoolkit/nncf/tree/release_v281/third_party_integration/huggingface_transformers>`__
for NNCF integration with `huggingface/transformers <https://github.com/huggingface/transformers>`__.
The recommended approach is to use `huggingface/optimum-intel <https://github.com/huggingface/optimum-intel>`__
for applying NNCF optimization on top of models from Hugging Face.
* Support for Apache MXNet, Caffe, and Kaldi model formats. Conversion to ONNX may be used
as a solution.
* The macOS x86_64 debug bins are no longer provided with the OpenVINO toolkit, starting
with OpenVINO 2024.5.
* Python 3.8 is no longer supported, starting with OpenVINO 2024.5.

* As MXNet does not support Python versions higher than 3.8, according to the
  `MxNet PyPI project <https://pypi.org/project/mxnet/>`__,
  it is no longer supported by OpenVINO either.

* Discrete Keem Bay is no longer supported, starting with OpenVINO 2024.5.
* Support for discrete devices (formerly codenamed Raptor Lake) is no longer available for
NPU.
* Intel® Streaming SIMD Extensions (Intel® SSE) are currently not enabled in the binary
package by default. They are still supported in the source code form.
* The OpenVINO™ Development Tools package (pip install openvino-dev) is no longer available
for OpenVINO releases in 2025.
* Model Optimizer is no longer available. Consider using the
  :doc:`new conversion methods <../openvino-workflow/model-preparation/convert-model-to-ir>`
  instead. For more details, see the
  `model conversion transition guide <https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api.html>`__.
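As noted in the Affinity API item above, thread binding is now requested through the CPU
pinning hint. A minimal sketch, assuming the ``openvino`` Python package and a hypothetical
``model.xml`` path:

.. code-block:: python

   import openvino as ov
   import openvino.properties.hint as hints

   core = ov.Core()

   # Replacement for the removed Affinity API: ask the CPU plugin to pin
   # inference threads to cores via the hint property.
   compiled = core.compile_model(
       "model.xml",                        # hypothetical IR path
       "CPU",
       {hints.enable_cpu_pinning: True},
   )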


Deprecated and to be removed in the future
--------------------------------------------

* Intel® Streaming SIMD Extensions (Intel® SSE) will be supported in source code form, but not
enabled in the binary package by default, starting with OpenVINO 2025.0.
* Ubuntu 20.04 support will be deprecated in future OpenVINO releases due to the end of
standard support.
* The openvino-nightly PyPI module will soon be discontinued. End-users should proceed with the
Simple PyPI nightly repo instead. More information in
`Release Policy <https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/release-policy.html#nightly-releases>`__.
* The OpenVINO™ Development Tools package (pip install openvino-dev) will be removed from
installation options and distribution channels beginning with OpenVINO 2025.0.
* Model Optimizer will be discontinued with OpenVINO 2025.0. Consider using the
:doc:`new conversion methods <../openvino-workflow/model-preparation/convert-model-to-ir>`
instead. For more details, see the
`model conversion transition guide <https://docs.openvino.ai/2024/documentation/legacy-features/transition-legacy-conversion-api.html>`__.
* OpenVINO property Affinity API will be discontinued with OpenVINO 2025.0.
It will be replaced with CPU binding configurations (``ov::hint::enable_cpu_pinning``).



* “auto shape” and “auto batch size” (reshaping a model in runtime) will be removed in the
future. OpenVINO's dynamic shape models are recommended instead.
* MacOS x86 is no longer recommended for use due to the discontinuation of validation.
Full support will be removed later in 2025.
* The ``openvino`` namespace of the OpenVINO Python API has been redesigned, removing the nested
  ``openvino.runtime`` module. The old namespace is now considered deprecated and will be
  discontinued in 2026.0 (an import sketch follows this list).


* “auto shape” and “auto batch size” (reshaping a model in runtime) will be removed in the
future. OpenVINO's dynamic shape models are recommended instead.

* Starting with 2025.0, MacOS x86 is no longer recommended for use due to the discontinuation
  of validation. Full support will be removed later in 2025.
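A minimal illustration of the namespace change mentioned above; the flattened ``openvino``
package (2024.x or later) is assumed to be installed, and the deprecated form is shown only
as a comment:

.. code-block:: python

   # Deprecated namespace, to be discontinued in 2026.0:
   # from openvino.runtime import Core
   # core = Core()

   # Flattened namespace after the redesign:
   import openvino as ov

   core = ov.Core()
   print(core.available_devices)  # e.g. ['CPU', 'GPU']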



@@ -4,8 +4,8 @@ OpenVINO™ GenAI Dependencies
OpenVINO™ GenAI depends on both `OpenVINO <https://github.com/openvinotoolkit/openvino>`__ and
`OpenVINO Tokenizers <https://github.com/openvinotoolkit/openvino_tokenizers>`__. During OpenVINO™
GenAI installation from PyPI, the same versions of OpenVINO and OpenVINO Tokenizers
are used (e.g. ``openvino==2024.6.0`` and ``openvino-tokenizers==2024.6.0.0`` are installed for
``openvino-genai==2024.6.0``).
are used (e.g. ``openvino==2025.0.0`` and ``openvino-tokenizers==2025.0.0.0`` are installed for
``openvino-genai==2025.0.0``).
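To make the version lock concrete, a small hedged check (package names as published on PyPI
are assumed) that flags a mismatched OpenVINO / Tokenizers / GenAI installation before it
surfaces as an ABI error:

.. code-block:: python

   from importlib.metadata import version

   packages = ["openvino", "openvino-tokenizers", "openvino-genai"]
   installed = {name: version(name) for name in packages}
   print(installed)  # e.g. {'openvino': '2025.0.0', 'openvino-tokenizers': '2025.0.0.0', ...}

   # The release prefix (e.g. "2025.0") must match across all three packages.
   prefixes = {".".join(v.split(".")[:2]) for v in installed.values()}
   if len(prefixes) > 1:
       raise RuntimeError(f"Mismatched OpenVINO component versions: {installed}")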

Trying to update any of the dependency packages might result in a version incompatibility
due to different Application Binary Interfaces (ABIs), which will result in errors while running
6 changes: 3 additions & 3 deletions docs/articles_en/get-started/install-openvino.rst
@@ -1,4 +1,4 @@
Install OpenVINO™ 2024.6
Install OpenVINO™ 2025.0
==========================


@@ -23,10 +23,10 @@ Install OpenVINO™ 2024.6
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<iframe id="selector" src="../_static/selector-tool/selector-15432eb.html" style="width: 100%; border: none" title="Download Intel® Distribution of OpenVINO™ Toolkit"></iframe>

OpenVINO 2024.6, described here, is not a Long-Term-Support version!
OpenVINO 2025.0, described here, is not a Long-Term-Support version!
All currently supported versions are:

* 2024.6 (development)
* 2025.0 (development)
* 2023.3 (LTS)


16 changes: 11 additions & 5 deletions docs/articles_en/openvino-workflow-generative.rst
@@ -40,7 +40,7 @@ options:

`Check out the OpenVINO GenAI Quick-start Guide [PDF] <https://docs.openvino.ai/nightly/_static/download/GenAI_Quick_Start_Guide.pdf>`__

.. tab-item:: Hugging Face integration
.. tab-item:: Optimum Intel (Hugging Face integration)

| - Suggested for prototyping and, if the use case is not covered by OpenVINO GenAI, production.
| - Bigger footprint and more dependencies.
@@ -55,10 +55,16 @@ options:
as well as conversion on the fly. For integration with the final product it may offer
lower performance, though.

Note that the base version of OpenVINO may also be used to run generative AI. Although it may
offer a simpler environment, with fewer dependencies, it has significant limitations and a more
demanding implementation process. For reference, see
`the article on generative AI usage of OpenVINO 2024.6 <https://docs.openvino.ai/2024/openvino-workflow-generative/llm-inference-native-ov.html>`__.
.. tab-item:: Base OpenVINO (not recommended)

Note that the base version of OpenVINO may also be used to run generative AI. Although it may
offer a simpler environment, with fewer dependencies, it has significant limitations and a more
demanding implementation process.

To learn more, refer to the article for the 2024.6 OpenVINO version:
`Generative AI with Base OpenVINO <https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide/llm-inference-native-ov.html>`__



The advantages of using OpenVINO for generative model deployment:

@@ -621,7 +621,7 @@ Two types of map entries are possible: descriptor and container.
Descriptor sets the expected structure and possible parameter values of the map.

For possible low-level properties and their description, refer to the header file:
`remote_properties.hpp <https://github.com/openvinotoolkit/openvino/blob/releases/2024/0/src/inference/include/openvino/runtime/intel_gpu/remote_properties.hpp>`__.
`remote_properties.hpp <https://github.com/openvinotoolkit/openvino/blob/releases/2025/0/src/inference/include/openvino/runtime/intel_gpu/remote_properties.hpp>`__.

Examples
###########################################################
@@ -88,7 +88,7 @@ The ``ov::CompiledModel`` class is also extended to support the properties:
* ``ov::CompiledModel::set_property``

For documentation about OpenVINO common device-independent properties, refer to
`properties.hpp (GitHub) <https://github.com/openvinotoolkit/openvino/blob/releases/2024/0/src/inference/include/openvino/runtime/properties.hpp>`__.
`properties.hpp (GitHub) <https://github.com/openvinotoolkit/openvino/blob/releases/2025/0/src/inference/include/openvino/runtime/properties.hpp>`__.
Device-specific configuration keys can be found in the corresponding device folders,
for example, ``openvino/runtime/intel_gpu/properties.hpp``.
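A hedged Python counterpart to the ``ov::CompiledModel`` property calls above, assuming a
hypothetical ``model.xml``; a device-independent hint from ``properties.hpp`` is set at
compile time and read back from the compiled model:

.. code-block:: python

   import openvino as ov
   import openvino.properties.hint as hints

   core = ov.Core()
   compiled = core.compile_model(
       "model.xml",                                                # hypothetical IR path
       "CPU",
       {hints.performance_mode: hints.PerformanceMode.THROUGHPUT},
   )

   # Mirrors ov::CompiledModel::get_property for a device-independent key.
   print(compiled.get_property(hints.performance_mode))

   # Device-specific keys (e.g. those declared in
   # openvino/runtime/intel_gpu/properties.hpp) are queried the same way
   # on a model compiled for that device.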

2 changes: 1 addition & 1 deletion docs/dev/ov_dependencies.txt
@@ -1,6 +1,6 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#This file provides a comprehensive list of all dependencies of OpenVINO 2024.6
#This file provides a comprehensive list of all dependencies of OpenVINO 2025.0
#The file is part of the automation pipeline for posting OpenVINO IR models on the HuggingFace Hub, including OneBOM dependency checks.


2 changes: 1 addition & 1 deletion docs/sphinx_setup/index.rst
@@ -1,5 +1,5 @@
============================
OpenVINO 2024.6
OpenVINO 2025.0
============================

.. meta::
6 changes: 3 additions & 3 deletions samples/cpp/benchmark/sync_benchmark/README.md
@@ -1,15 +1,15 @@
# Sync Benchmark C++ Sample

This sample demonstrates how to estimate the performance of a model using the Synchronous Inference Request API. It makes sense to use synchronous inference only in latency-oriented scenarios. Models with static input shapes are supported. Unlike [demos](https://docs.openvino.ai/2024/omz_demos.html), this sample has no other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.
This sample demonstrates how to estimate the performance of a model using the Synchronous Inference Request API. It makes sense to use synchronous inference only in latency-oriented scenarios. Models with static input shapes are supported. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), this sample has no other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.

For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html)
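For orientation, a minimal synchronous-inference flow in Python under the same assumptions as the sample (static input shape, hypothetical `model.xml`, `float32` input); the C++ sample follows the same request sequence:

```python
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")  # hypothetical IR path
request = compiled.create_infer_request()

# Static shape assumed, as in the sample; feed one dummy input tensor.
data = np.zeros(list(compiled.input(0).shape), dtype=np.float32)

# Synchronous call: blocks until the result is ready, one request at a time.
request.infer({0: data})
output = request.get_output_tensor(0).data
print(output.shape)
```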

## Requirements

| Options | Values |
| -------------------------------| -------------------------------------------------------------------------------------------------------------------------|
| Validated Models | [yolo-v3-tf](https://docs.openvino.ai/2024/omz_models_model_yolo_v3_tf.html), |
| | [face-detection-0200](https://docs.openvino.ai/2024/omz_models_model_face_detection_0200.html) |
| Validated Models | [yolo-v3-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf), |
| | [face-detection-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0200) |
| Model Format | OpenVINO™ toolkit Intermediate Representation |
| | (\*.xml + \*.bin), ONNX (\*.onnx) |
| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
6 changes: 3 additions & 3 deletions samples/cpp/benchmark/throughput_benchmark/README.md
@@ -1,6 +1,6 @@
# Throughput Benchmark C++ Sample

This sample demonstrates how to estimate the performance of a model using the Asynchronous Inference Request API in throughput mode. Unlike [demos](https://docs.openvino.ai/2024/omz_demos.html), this sample has no other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.
This sample demonstrates how to estimate the performance of a model using the Asynchronous Inference Request API in throughput mode. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), this sample has no other configurable command-line arguments. Feel free to modify the sample's source code to try out different options.

The reported results may deviate from what [benchmark_app](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) reports. One example is model input precision for computer vision tasks: benchmark_app sets ``uint8``, while the sample uses the default model precision, which is usually ``float32``.
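A rough Python counterpart of the throughput flow, under the same assumptions (hypothetical `model.xml`, `float32` input as noted above); `AsyncInferQueue` keeps several requests in flight, which is what the throughput hint is designed for:

```python
import numpy as np
import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
compiled = core.compile_model(
    "model.xml",                                                # hypothetical IR path
    "CPU",
    {hints.performance_mode: hints.PerformanceMode.THROUGHPUT},
)

queue = ov.AsyncInferQueue(compiled)   # picks an optimal number of requests
finished = []
queue.set_callback(lambda request, userdata: finished.append(userdata))

data = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
for i in range(32):                    # keep the queue saturated
    queue.start_async({0: data}, userdata=i)
queue.wait_all()
print(f"completed {len(finished)} inferences")
```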

@@ -10,8 +10,8 @@ For more detailed information on how this sample works, check the dedicated [art

| Options | Values |
| ----------------------------| -------------------------------------------------------------------------------------------------------------------------------|
| Validated Models | [yolo-v3-tf](https://docs.openvino.ai/2024/omz_models_model_yolo_v3_tf.html), |
| | [face-detection-](https://docs.openvino.ai/2024/omz_models_model_face_detection_0200.html) |
| Validated Models | [yolo-v3-tf](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolo-v3-tf), |
| | [face-detection-](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0200) |
| Model Format | OpenVINO™ toolkit Intermediate Representation |
| | (\*.xml + \*.bin), ONNX (\*.onnx) |
| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
2 changes: 1 addition & 1 deletion samples/cpp/hello_reshape_ssd/README.md
@@ -9,7 +9,7 @@ For more detailed information on how this sample works, check the dedicated [art

| Options | Values |
| ----------------------------| -----------------------------------------------------------------------------------------------------------------------------------------|
| Validated Models | [person-detection-retail-0013](https://docs.openvino.ai/2024/omz_models_model_person_detection_retail_0013.html) |
| Validated Models | [person-detection-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-retail-0013) |
| Model Format | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) |
| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) |
| Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-reshape-ssd.html) |