
Update checkpointing documentation to mark resume_from_checkpoint as deprecated (#20361) #20477

Merged
24 changes: 23 additions & 1 deletion docs/source-pytorch/common/checkpointing_basic.rst
@@ -20,6 +20,13 @@ PyTorch Lightning checkpoints are fully usable in plain PyTorch.

----

.. important::

    **Important update: deprecated parameter**

    The ``resume_from_checkpoint`` parameter of the ``Trainer`` was deprecated in PyTorch Lightning v1.5 and has since been removed. To resume training from a checkpoint, use the ``ckpt_path`` parameter of the ``fit()`` method instead.
    Please update your code accordingly to avoid compatibility issues.
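
For example, a minimal migration sketch (``LitModel`` stands in for your own ``LightningModule``):

.. code-block:: python

    # Before (removed): Trainer(resume_from_checkpoint="path/to/your/checkpoint.ckpt")
    # After: pass the checkpoint path to fit() instead
    model = LitModel()
    trainer = Trainer()
    trainer.fit(model, ckpt_path="path/to/your/checkpoint.ckpt")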

************************
Contents of a checkpoint
************************
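
Because a Lightning checkpoint is a plain dictionary saved with ``torch.save``, you can inspect its contents directly. A minimal sketch (the exact keys vary with your Lightning version and configuration):

.. code-block:: python

    import torch

    # Load onto the CPU and list the top-level entries,
    # e.g. "epoch", "global_step", "state_dict", "optimizer_states", "lr_schedulers"
    checkpoint = torch.load("path/to/your/checkpoint.ckpt", map_location="cpu")
    print(checkpoint.keys())
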
@@ -197,16 +204,31 @@ You can disable checkpointing by passing:
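
A minimal sketch of disabling checkpointing, assuming the ``enable_checkpointing`` flag of the ``Trainer``:

.. code-block:: python

    # Turn off automatic checkpoint saving entirely
    trainer = Trainer(enable_checkpointing=False)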

----


*********************
Resume training state
*********************

If you don't just want to load the weights, but instead want to restore the full training state, do the following:

.. warning::

    The ``resume_from_checkpoint`` parameter has been deprecated in recent versions of PyTorch Lightning.
    Please use the ``ckpt_path`` argument of the ``fit()`` method instead.

Incorrect (deprecated) usage:

.. code-block:: python

    # The Trainer no longer accepts this argument and will raise an error
    model = LitModel()
    trainer = Trainer(resume_from_checkpoint="path/to/your/checkpoint.ckpt")
    trainer.fit(model)

Correct usage:

.. code-block:: python

    model = LitModel()
    trainer = Trainer()

    # automatically restores model, epoch, step, LR schedulers, etc.
    trainer.fit(model, ckpt_path="path/to/your/checkpoint.ckpt")
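
The same ``ckpt_path`` keyword also selects a checkpoint for evaluation; a minimal sketch, assuming a ``ModelCheckpoint`` callback tracked a monitored metric during training so that ``"best"`` can be resolved:

.. code-block:: python

    # Evaluate using the best checkpoint recorded during fit()
    trainer.test(model, ckpt_path="best")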