Fine-Tuning Scheduler Tutorial Update for Lightning/PyTorch 2.2.0 (#298)
speediedan committed Jul 26, 2024
1 parent 684b63b commit 910c12d
Showing 1 changed file with 6 additions and 9 deletions.
15 changes: 6 additions & 9 deletions lightning_examples/finetuning-scheduler/finetuning-scheduler.py
@@ -554,9 +554,7 @@ def train() -> None:
 # the implicit schedule will limit fine-tuning to just the last 4 parameters of the model, which is only a small fraction
 # of the parameters you'd want to tune for maximum performance. Since the implicit schedule is quite computationally
 # intensive and most useful for exploring model behavior, leaving [max_depth](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.fts.html?highlight=max_depth#finetuning_scheduler.fts.FinetuningScheduler.params.max_depth) 1 allows us to demo implicit mode
-# behavior while keeping the computational cost and runtime of this notebook reasonable. To review how a full implicit
-# mode run compares to the ``nofts_baseline`` and ``fts_explicit`` scenarios, please see the the following
-# [tensorboard experiment summary](https://tensorboard.dev/experiment/n7U8XhrzRbmvVzC4SQSpWw/).
+# behavior while keeping the computational cost and runtime of this notebook reasonable.
 
 
 # %%
@@ -579,16 +577,15 @@ def train() -> None:
 # %% [markdown]
 # ### Reviewing the Training Results
 #
-# See the [tensorboard experiment summaries](https://tensorboard.dev/experiment/n7U8XhrzRbmvVzC4SQSpWw/) to get a sense
-# of the relative computational and performance tradeoffs associated with these [FinetuningScheduler](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.fts.html#finetuning_scheduler.fts.FinetuningScheduler) configurations.
-# The summary compares a full ``fts_implicit`` execution to ``fts_explicit`` and ``nofts_baseline`` scenarios using DDP
+# It's worth considering the relative computational and performance tradeoffs associated with different [FinetuningScheduler](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.fts.html#finetuning_scheduler.fts.FinetuningScheduler) configurations.
+# The example below compares ``fts_implicit`` execution to ``fts_explicit`` and ``nofts_baseline`` scenarios using DDP
 # training with 2 GPUs. The full logs/schedules for all three scenarios are available
 # [here](https://drive.google.com/file/d/1LrUcisRLHeJgh_BDOOD_GUBPp5iHAkoR/view?usp=sharing) and the checkpoints
 # produced in the scenarios [here](https://drive.google.com/file/d/1t7myBgcqcZ9ax_IT9QVk-vFH_l_o5UXB/view?usp=sharing)
 # (caution, ~3.5GB).
 #
-# [![fts_explicit_accuracy](fts_explicit_accuracy.png){height="315px" width="492px"}](https://tensorboard.dev/experiment/n7U8XhrzRbmvVzC4SQSpWw/#scalars&_smoothingWeight=0&runSelectionState=eyJmdHNfZXhwbGljaXQiOnRydWUsIm5vZnRzX2Jhc2VsaW5lIjpmYWxzZSwiZnRzX2ltcGxpY2l0IjpmYWxzZX0%3D)
-# [![nofts_baseline](nofts_baseline_accuracy.png){height="316px" width="505px"}](https://tensorboard.dev/experiment/n7U8XhrzRbmvVzC4SQSpWw/#scalars&_smoothingWeight=0&runSelectionState=eyJmdHNfZXhwbGljaXQiOmZhbHNlLCJub2Z0c19iYXNlbGluZSI6dHJ1ZSwiZnRzX2ltcGxpY2l0IjpmYWxzZX0%3D)
+# ![fts_explicit_accuracy](fts_explicit_accuracy.png){height="315px" width="492px"}
+# ![nofts_baseline](nofts_baseline_accuracy.png){height="316px" width="505px"}
 #
 # Note that given execution context differences, there could be a modest variation in performance from the tensorboard summaries generated by this notebook.
 #
@@ -597,7 +594,7 @@ def train() -> None:
 # greater fine-tuning flexibility for model exploration in research. For example, glancing at DeBERTa-v3's implicit training
 # run, a critical tuning transition point is immediately apparent:
 #
-# [![implicit_training_transition](implicit_training_transition.png){height="272px" width="494px"}](https://tensorboard.dev/experiment/n7U8XhrzRbmvVzC4SQSpWw/#scalars&_smoothingWeight=0&runSelectionState=eyJmdHNfZXhwbGljaXQiOmZhbHNlLCJub2Z0c19iYXNlbGluZSI6ZmFsc2UsImZ0c19pbXBsaWNpdCI6dHJ1ZX0%3D)
+# ![implicit_training_transition](implicit_training_transition.png){height="272px" width="494px"}
 #
 # Our `val_loss` begins a precipitous decline at step 3119 which corresponds to phase 17 in the schedule. Referring to our
 # schedule, in phase 17 we're beginning tuning the attention parameters of our 10th encoder layer (of 11). Interesting!
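
For context when reading this commit on its own: the ``max_depth`` and DDP details referenced in the hunks above belong to the tutorial's FinetuningScheduler setup. Below is a minimal sketch of such a configuration, not code from this diff; the ``model``/``dm`` objects and the exact Trainer arguments are assumptions for illustration.

import lightning.pytorch as pl
from finetuning_scheduler import FinetuningScheduler

# Implicit mode: no explicit ft_schedule is supplied, so FinetuningScheduler derives one;
# max_depth=1 caps how deep that implicit schedule unfreezes, keeping notebook runtime reasonable.
fts_callback = FinetuningScheduler(max_depth=1)

trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,        # the comparison scenarios above used DDP training with 2 GPUs
    strategy="ddp",
    callbacks=[fts_callback],
)
# trainer.fit(model, datamodule=dm)  # placeholders for the tutorial's LightningModule/DataModule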
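
The "phase 17" transition in the final hunk refers to a phase in a fine-tuning schedule. As a rough illustration of the schedule format FinetuningScheduler consumes (YAML keyed by integer phase, each phase listing the parameter names unfrozen when it begins), the sketch below uses placeholder parameter names rather than the tutorial's actual DeBERTa-v3 schedule.

# Write an illustrative two-phase schedule to disk; the parameter names are placeholders.
example_schedule = """
0:
  params:
  - model.classifier.bias
  - model.classifier.weight
1:
  params:
  - model.deberta.encoder.layer.11.attention.self.query_proj.weight
  - model.deberta.encoder.layer.11.attention.self.query_proj.bias
"""
with open("example_ft_schedule.yaml", "w") as f:
    f.write(example_schedule)

# An explicit-mode run would then point the callback at the file, e.g.:
# FinetuningScheduler(ft_schedule="example_ft_schedule.yaml")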
