Releases: pfnet/pytorch-pfn-extras
v0.4.0
This release includes the following enhancements and bug-fixes:
- Support PyTorch 1.8.1
- Fix several bugs in TabularDataset
- Fix several bugs in the snapshot autoload feature
- LogReport now allows appending results using the json-lines or yaml file formats
- Add Batch-Normalization aware gradient checkpointing
- Add LRScheduler extension (see the sketch after this list)
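Below is a minimal sketch of how the new pieces might fit together. The LRScheduler constructor arguments and the LogReport format keyword shown here are assumptions based on the notes above, not a verbatim API reference.

```python
import torch
import pytorch_pfn_extras as ppe
from pytorch_pfn_extras.training import extensions

model = torch.nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100)

manager = ppe.training.ExtensionsManager(
    model, optimizer, max_epochs=10, iters_per_epoch=100)

# Step the learning-rate scheduler through the new LRScheduler extension
# (constructor arguments assumed).
manager.extend(extensions.LRScheduler(scheduler), trigger=(1, 'iteration'))

# LogReport can now append results in json-lines or yaml form; the
# `format` keyword used here is an assumption.
manager.extend(extensions.LogReport(format='json-lines'))
```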
This release also contains the following backward-incompatible changes:
- Drop PyTorch 1.6 & Python 3.5 support
- Remove the bundled DataLoader; the minimum supported version is now PyTorch 1.7, whose DataLoader supports the same features
We have upstreamed ppe.nn.LazyLinear and ppe.nn.LazyConv[123]d (lazy modules), and they are now available in PyTorch 1.8! Using torch.nn.LazyLinear and torch.nn.LazyConv[123]d instead of the PPE implementations is now recommended. See torch.nn.LazyModuleMixin for the details of the PyTorch lazy implementations.
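For users migrating from the PPE lazy modules, the replacement is a drop-in change; a minimal sketch (layer sizes are arbitrary):

```python
import torch

# torch.nn.Lazy* modules infer their input sizes from the first forward
# pass, mirroring what ppe.nn.LazyLinear / ppe.nn.LazyConv2d provided.
model = torch.nn.Sequential(
    torch.nn.LazyConv2d(out_channels=16, kernel_size=3),  # in_channels inferred
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.LazyLinear(out_features=10),                 # in_features inferred
)

x = torch.randn(8, 3, 32, 32)
y = model(x)  # parameters are materialized on this first call
print(y.shape)  # torch.Size([8, 10])
```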
See the list of merged pull-requests for the details.
v0.3.2
This release includes the following enhancements and bug-fixes:
- Support PyTorch 1.7.0
- Add a custom DistributedDataParallel implementation to handle torch.utils.checkpoint and dynamic computational graphs
- Add a metrics option to the Evaluator extension to run metrics functions on every batch
- Expose ExtensionsManager.models and ExtensionsManager.optimizers so they can be used from extensions (see the sketch after this list)
- Add custom types for Optuna in the config system
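As an illustration of the newly exposed attributes, a user-defined extension can read the registered optimizers directly from the manager it receives. The extension below is hypothetical and written only for this note; 'main' is assumed to be the default name under which a single optimizer is registered.

```python
import pytorch_pfn_extras as ppe


def report_lr(manager):
    # Hypothetical extension: look up the optimizer through the newly
    # exposed ExtensionsManager.optimizers mapping and report its
    # current learning rate.
    optimizer = manager.optimizers['main']
    ppe.reporting.report({'lr': optimizer.param_groups[0]['lr']})


# Registered like any other extension:
# manager.extend(report_lr, trigger=(1, 'epoch'))
```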
See the list of merged pull-requests for the details.
v0.3.1
This release includes the following enhancements and bug-fixes:
- Add pytorch_pfn_extras.cuda APIs, which add interoperability with CuPy
- Add extensions for Jupyter Notebook (PrintReportNotebook and ProgressBarNotebook); see the sketch after this list
- Fix an error when resuming training using IgniteExtensionsManager
- Fix the backward-incompatible change introduced in the v0.3.0 release (removal of the updater attribute of ExtensionsManager)
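Inside Jupyter, the notebook-aware widgets replace the console PrintReport / ProgressBar extensions. This is a sketch only; the constructor arguments are assumed to mirror their console counterparts.

```python
import torch
import pytorch_pfn_extras as ppe
from pytorch_pfn_extras.training import extensions

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
manager = ppe.training.ExtensionsManager(
    model, optimizer, max_epochs=5, iters_per_epoch=50)

manager.extend(extensions.LogReport())
manager.extend(extensions.ProgressBarNotebook())
manager.extend(extensions.PrintReportNotebook(
    ['epoch', 'iteration', 'train/loss']))  # entries list assumed
```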
See the list of merged pull-requests for the details.
Compatibility Notes
Starting in v0.3.0, the updater attribute of the ExtensionsManager, which is a pseudo interface for compatibility with Chainer's extensions, has been deprecated. Extensions using the attribute to access training statistics (e.g., the epoch/iteration number) must be changed to directly use attributes of ExtensionsManager (e.g., ExtensionsManager.epoch). Also, if you are using the updater in a snapshot filename template, you need to update it as well (e.g., from snapshot_iter_{.updater.iteration} to snapshot_iter_{.iteration}). In this release, accessing the updater attribute raises a DeprecationWarning.
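In code, the migration amounts to dropping the .updater indirection, both in extensions and in snapshot filename templates; a short sketch:

```python
from pytorch_pfn_extras.training import extensions


def my_extension(manager):
    # Before (deprecated): manager.updater.epoch / manager.updater.iteration
    # After: read the training statistics directly from the manager.
    print(manager.epoch, manager.iteration)


# Snapshot filename templates drop the ".updater" part as well:
# old: extensions.snapshot(filename='snapshot_iter_{.updater.iteration}')
snapshot = extensions.snapshot(filename='snapshot_iter_{.iteration}')
```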
v0.3.0
This release includes the following enhancements and bug-fixes:
- Add pytorch_pfn_extras.onnx APIs, an extension to torch.onnx
- Add pytorch_pfn_extras.nn.LazyBatchNorm(1,2,3)d
- Add pytorch_pfn_extras.dataloaders.DataLoader, which reuses worker processes
- Add pytorch_pfn_extras.dataset.SharedDataset
- Add pytorch_pfn_extras.dataset.TabularDataset
- Add pytorch_pfn_extras.writing.TensorBoardWriter
- Add step_optimizers parameter to ExtensionsManager.run_iteration() (see the sketch after this list)
- Fix a memory leak in Reporter
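The step_optimizers parameter tells run_iteration() which of the registered optimizers to step for that iteration, which is useful when a model has more than one optimizer. A minimal sketch follows; the automatic zero_grad/step behaviour noted in the comment is assumed, not quoted from the documentation.

```python
import torch
import torch.nn.functional as F
import pytorch_pfn_extras as ppe

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
manager = ppe.training.ExtensionsManager(
    {'main': model}, {'main': optimizer}, max_epochs=1, iters_per_epoch=10)

for _ in range(10):
    x, t = torch.randn(4, 8), torch.randn(4, 1)
    # Only the optimizers named in step_optimizers are stepped when the
    # block exits (and, presumably, zeroed on entry).
    with manager.run_iteration(step_optimizers=['main']):
        loss = F.mse_loss(model(x), t)
        ppe.reporting.report({'train/loss': loss.item()})
        loss.backward()
```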
See the list of merged pull-requests for the details.
Compatibility Notes
This release removes the updater attribute from the ExtensionsManager, which was a pseudo interface for compatibility with Chainer's extensions. Extensions using the attribute to access training statistics (e.g., the epoch/iteration number) must be changed to directly use attributes of ExtensionsManager (e.g., ExtensionsManager.epoch).
v0.2.1
This release includes the following enhancements and bug-fixes:
- Add pytorch_pfn_extras.nn.ExtendedSequential, which extends torch.nn.Sequential to support repeat (see the sketch after this list)
- Support transformations when taking / loading snapshots
- Improved example code (thanks @regonn!)
- Fix IgniteEvaluator reporting wrong metrics
- Fix a circular reference between Manager and ProgressBar
- Fix issues specific to Windows (thanks @take0212 for reporting!)
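The repeat helper builds a deeper network by repeating the whole block; a minimal sketch (the repeat signature, including the mode argument, is assumed to mirror Chainer's Sequential.repeat):

```python
import torch
import pytorch_pfn_extras as ppe

block = ppe.nn.ExtendedSequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
)

# Repeat the block three times; mode='init' (assumed) gives each copy
# freshly initialized parameters.
deep = block.repeat(3, mode='init')

x = torch.randn(2, 64)
y = deep(x)
```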
See the list of merged pull-requests for the details.