Introduction to the Finetuning Scheduler

The FinetuningScheduler callback accelerates and enhances foundational model experimentation with flexible finetuning schedules. Training with FinetuningScheduler is simple and confers a host of benefits:

  • dramatically increases finetuning flexibility

  • expedites and facilitates exploration of model tuning dynamics

  • enables marginal performance improvements of finetuned models

Note

If you’re exploring the use of FinetuningScheduler, this is a great place to start! You may also find the notebook-based tutorial useful, and for those using the LightningCLI, there is a CLI-based example at the bottom of this introduction.

Setup

Setup is straightforward: just install from PyPI!

pip install finetuning-scheduler

Additional installation options (from source, etc.) are discussed under “Additional installation options” in the README.

Motivation

Fundamentally, the FinetuningScheduler callback enables multi-phase, scheduled finetuning of foundational models. Gradual unfreezing (i.e. thawing) can help maximize foundational model knowledge retention while allowing (typically upper layers of) the model to optimally adapt to new tasks during transfer learning [1][2][3].

FinetuningScheduler orchestrates the gradual unfreezing of models via a finetuning schedule that is either implicitly generated (the default) or explicitly provided by the user (more computationally efficient). Finetuning phase transitions are driven by FTSEarlyStopping criteria (a multi-phase extension of EarlyStopping), user-specified epoch transitions or a composition of the two (the default mode). A FinetuningScheduler training session completes when the final phase of the schedule has its stopping criteria met. See Early Stopping for more details on that callback’s configuration.

Basic Usage

If no finetuning schedule is user-provided, FinetuningScheduler will generate a default schedule and proceed to finetune according to the generated schedule, using default FTSEarlyStopping and FTSCheckpoint callbacks with monitor=val_loss.

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

trainer = Trainer(callbacks=[FinetuningScheduler()])
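
If your LightningModule logs a metric other than val_loss, you can provide your own FTSEarlyStopping and FTSCheckpoint instances rather than relying on the defaults. A minimal sketch (assuming your LightningModule logs a hypothetical val_acc metric):

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler, FTSCheckpoint, FTSEarlyStopping

# user-provided callback instances take precedence over the defaults
callbacks = [
    FinetuningScheduler(),
    FTSEarlyStopping(monitor="val_acc", mode="max"),  # maximize the monitored metric
    FTSCheckpoint(monitor="val_acc", mode="max", save_top_k=1),
]
trainer = Trainer(callbacks=callbacks)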

The Default Finetuning Schedule

Schedule definition is facilitated via gen_ft_schedule(), which dumps a default finetuning schedule (by default using a naive, 2-parameters-per-level heuristic) that can be adjusted as desired by the user and/or subsequently passed to the callback. Using the default/implicitly generated schedule will often be less computationally efficient than a user-defined finetuning schedule, but it can serve as a good baseline for subsequent explicit schedule refinement and may marginally outperform many explicit schedules.

Specifying a Finetuning Schedule

To specify a finetuning schedule, it’s convenient to first generate the default schedule and then alter the thawed/unfrozen parameter groups associated with each finetuning phase as desired. Finetuning phases are zero-indexed and executed in ascending order.

  1. First, generate the default schedule to Trainer.log_dir. It will be named after your LightningModule subclass with the suffix _ft_schedule.yaml.

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

trainer = Trainer(callbacks=[FinetuningScheduler(gen_ft_sched_only=True)])

  2. Alter the schedule as desired.

Changing the generated schedule for this boring model…

0:
    params:
    - layer.3.bias
    - layer.3.weight
1:
    params:
    - layer.2.bias
    - layer.2.weight
2:
    params:
    - layer.1.bias
    - layer.1.weight
3:
    params:
    - layer.0.bias
    - layer.0.weight

… to have three finetuning phases instead of four:

0:
    params:
    - layer.3.bias
    - layer.3.weight
1:
    params:
    - layer.2.*
    - layer.1.bias
    - layer.1.weight
2:
    params:
    - layer.0.*

  3. Once the finetuning schedule has been altered as desired, pass it to FinetuningScheduler to commence scheduled training:

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

trainer = Trainer(callbacks=[FinetuningScheduler(ft_schedule="/path/to/my/schedule/my_schedule.yaml")])

EarlyStopping and Epoch-Driven Phase Transition Criteria

By default, FTSEarlyStopping and epoch-driven transition criteria are composed. If a max_transition_epoch is specified for a given phase, the next finetuning phase will begin at that epoch unless FTSEarlyStopping criteria are met first. If epoch_transitions_only is True, FTSEarlyStopping will not be used and transitions will be exclusively epoch-driven.
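
For instance, a minimal sketch of an exclusively epoch-driven configuration (assuming every phase of the referenced schedule defines a max_transition_epoch; the schedule path is illustrative):

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

# with epoch_transitions_only=True, FTSEarlyStopping is not used and each
# phase transition is driven solely by the schedule's max_transition_epoch values
trainer = Trainer(
    callbacks=[
        FinetuningScheduler(
            ft_schedule="/path/to/my/schedule/my_schedule.yaml",
            epoch_transitions_only=True,
        )
    ]
)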

Tip

Use of regular expressions can be convenient for specifying more complex schedules. Also, a per-phase base_max_lr can be specified:

0:
  params: # the parameters for each phase definition can be fully specified
  - model.classifier.bias
  - model.classifier.weight
  max_transition_epoch: 3
1:
  params: # or specified via a regex
  - model.albert.pooler.*
2:
  params:
  - model.albert.encoder.*.ffn_output.*
  max_transition_epoch: 9
  lr: 1e-06 # per-phase maximum learning rates can be specified
3:
  params: # both approaches to parameter specification can be used in the same phase
  - model.albert.encoder.*.(ffn\.|attention|full*).*
  - model.albert.encoder.embedding_hidden_mapping_in.bias
  - model.albert.encoder.embedding_hidden_mapping_in.weight
  - model.albert.embeddings.*
For a practical end-to-end example of using FinetuningScheduler in implicit versus explicit modes, see scheduled finetuning for SuperGLUE below or the notebook-based tutorial.

Resuming Scheduled Finetuning Training Sessions

Resumption of scheduled finetuning training is identical to the continuation of other training sessions with the caveat that the provided checkpoint must have been saved by a FinetuningScheduler session. FinetuningScheduler uses FTSCheckpoint (an extension of ModelCheckpoint) to maintain schedule state with special metadata.

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

trainer = Trainer(callbacks=[FinetuningScheduler()])
trainer.fit(model, ckpt_path="some/path/to/my_checkpoint.ckpt")

Training will resume at the depth/level of the provided checkpoint according to the specified schedule. Schedules can be altered between training sessions, but schedule compatibility is left to the user for maximal flexibility. If executing a user-defined schedule, typically the same schedule should be provided for the original and resumed training sessions.

Tip

By default (restore_best is True), FinetuningScheduler will attempt to restore the best available checkpoint before finetuning depth transitions.

trainer = Trainer(callbacks=[FinetuningScheduler()])
trainer.fit(model, ckpt_path="some/path/to/my_kth_best_checkpoint.ckpt")

Note that, similar to the behavior of ModelCheckpoint (specifically this PR), when resuming training with a different FTSCheckpoint dirpath from that of the provided checkpoint, the new training session’s checkpoint state will be re-initialized at the resumption depth, with the provided checkpoint set as the best checkpoint.

Finetuning all the way down!

There are plenty of options for customizing FinetuningScheduler’s behavior, see scheduled finetuning for SuperGLUE below for examples of composing different configurations.
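.
A brief sketch of how a few of the options covered above can be composed (the schedule path is illustrative):

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

fts = FinetuningScheduler(
    ft_schedule="/path/to/my/schedule/my_schedule.yaml",  # explicit schedule
    max_depth=2,  # cap scheduled thawing at phase 2
    restore_best=True,  # restore the best checkpoint before each depth transition
)
trainer = Trainer(callbacks=[fts])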


Example: Scheduled Finetuning For SuperGLUE

A demonstration of the scheduled finetuning callback FinetuningScheduler using the RTE and BoolQ tasks of the SuperGLUE benchmark and the LightningCLI is available under ./fts_examples/.

Since this CLI-based example requires a few additional packages (e.g. transformers, sentencepiece), you should install them using the [examples] extra:

pip install finetuning-scheduler['examples']

There are three different demo schedule configurations composed with shared defaults (./config/fts_defaults.yaml) provided for the default ‘rte’ task. Note that DDP (with auto-selected GPUs) is the default configuration, so adjust the configuration files referenced below as desired for other setups.

Note there will likely be minor variations in training paths and performance as packages (e.g. transformers, datasets, finetuning-scheduler itself, etc.) evolve. The precise package versions and salient environment configuration used in building this tutorial are available in the tensorboard summaries, logs and checkpoints referenced below if you’re interested.

# Generate a baseline without scheduled finetuning enabled:
python fts_superglue.py fit --config config/nofts_baseline.yaml

# Train with the default finetuning schedule:
python fts_superglue.py fit --config config/fts_implicit.yaml

# Train with a non-default finetuning schedule:
python fts_superglue.py fit --config config/fts_explicit.yaml

All three training scenarios use identical configurations with the exception of the provided finetuning schedule. See the tensorboard experiment summaries and table below for a characterization of the relative computational and performance tradeoffs associated with these FinetuningScheduler configurations.

FinetuningScheduler expands the space of possible finetuning schedules, and the composition of more sophisticated schedules can yield marginal finetuning performance gains. That stated, it should be emphasized that the primary utility of FinetuningScheduler is to grant greater finetuning flexibility for model exploration in research. For example, glancing at DeBERTa-v3’s implicit training run, a critical tuning transition point is immediately apparent:

Our val_loss begins a precipitous decline at step 3119, which corresponds to phase 17 in the schedule. Referring to our schedule, in phase 17 we begin tuning the attention parameters of our 10th encoder layer (of 11). Interesting! Though beyond the scope of this documentation, it might be worth investigating these dynamics further, and FinetuningScheduler allows one to do just that quite easily.

In addition to the tensorboard experiment summaries, full logs/schedules for all three scenarios are available as well as the checkpoints produced in the scenarios (caution, ~3.5GB).

Example Scenario       nofts_baseline   fts_implicit   fts_explicit
Finetuning Schedule    None             Default        User-defined
RTE Accuracy           0.81             0.84           0.85

Note that though this example is intended to capture a common usage scenario, substantial variation is expected among use cases and models. In summary, FinetuningScheduler provides increased finetuning flexibility that can be useful in a variety of contexts from exploring model tuning behavior to maximizing performance.

[Figure: FinetuningScheduler explicit-mode loss animation]

Note

The FinetuningScheduler callback is currently in beta.

Footnotes

[1] Howard, J., & Ruder, S. (2018). Fine-tuned Language Models for Text Classification. ArXiv, abs/1801.06146.

[2] Chronopoulou, A., Baziotis, C., & Potamianos, A. (2019). An embarrassingly simple approach for transfer learning from pretrained language models. arXiv preprint arXiv:1902.10547.

[3] Peters, M. E., Ruder, S., & Smith, N. A. (2019). To tune or not to tune? Adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987.

Finetuning Scheduler API

  • fts: Finetuning Scheduler

  • fts_supporters: Finetuning Scheduler Supporters

LR Scheduler Reinitialization

Overview

In some contexts it can be useful to re-wrap your optimizer with new LR scheduler configurations at the beginning of one or more scheduled training phases. Among others, example use cases include:

  • implementing complex LR schedules along with multi-phase early-stopping

  • injecting new parameter group specific rates on a scheduled basis

  • programmatically exploring training behavioral dynamics with heterogeneous schedulers and early-stopping

The FinetuningScheduler callback supports LR scheduler reinitialization (as of version 0.1.4) in both explicit and implicit finetuning schedule modes (see the Finetuning Scheduler intro for more on basic usage modes). As LR scheduler reinitialization is likely to be applied most frequently in the context of explicitly defined finetuning schedules, we’ll cover configuration in that mode first.

Specifying LR Scheduler Configurations For Specific Finetuning Phases

When defining a finetuning schedule (see the intro for basic schedule specification), a new lr scheduler configuration can be applied to the existing optimizer at the beginning of a given phase by specifying the desired configuration in the new_lr_scheduler key. The new_lr_scheduler dictionary format is described in the annotated yaml schedule below and can be explored using the advanced usage example.

When specifying an LR scheduler configuration for a given phase, the new_lr_scheduler dictionary requires at minimum an lr_scheduler_init dictionary containing a class_path key indicating the class of the lr scheduler to be instantiated and wrapped around your optimizer. Currently, all _LRScheduler subclasses are supported, with the exception of ChainedScheduler and SequentialLR (due to the configuration complexity and semantic conflicts that supporting them would introduce).

Any arguments with which you would like to initialize the specified lr scheduler should be provided in the init_args key of the lr_scheduler_init dictionary.

0:
  params:
  - model.classifier.bias
  - model.classifier.weight
1:
  params:
  - model.pooler.dense.bias
  - model.pooler.dense.weight
  - model.deberta.encoder.LayerNorm.bias
  - model.deberta.encoder.LayerNorm.weight
  new_lr_scheduler:
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.StepLR
      init_args:
        step_size: 1
        gamma: 0.7
...

Optionally, one can include arguments to pass to PyTorch Lightning’s lr scheduler configuration (LRSchedulerConfig) in the pl_lrs_cfg dictionary.

0:
  ...
1:
  params:
  - model.pooler.dense.bias
  ...
  new_lr_scheduler:
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.StepLR
      init_args:
        step_size: 1
        ...
    pl_lrs_cfg:
      interval: epoch
      frequency: 1
      name: Explicit_Reinit_LR_Scheduler

If desired, one can also specify new initial learning rates to use for each of the existing parameter groups in the optimizer being wrapped via a list in the init_pg_lrs key.

...
1:
  params:
  ...
  new_lr_scheduler:
    lr_scheduler_init:
      ...
    init_pg_lrs: [2.0e-06, 2.0e-06]

All lr scheduler reinitialization configurations specified in the finetuning schedule will have their configurations sanity-checked prior to training initiation.

Note

It is currently up to the user to ensure the number of parameter groups listed in init_pg_lrs matches the number of optimizer parameter groups created in previous phases. The number of groups depends on several factors, including the no-decay mapping of parameters specified in previous phases, and isn’t yet introspected/simulated in the current FinetuningScheduler version.

Note that specifying LR scheduler reinitialization configurations is only supported for phases >= 1. This is because for finetuning phase 0, the LR scheduler configuration will be the scheduler that you initiate your training session with, usually via the configure_optimizers method of LightningModule.
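
For reference, here is a minimal sketch of such a phase 0 initialization (mirroring the AdamW/LinearLR values used in the examples below; the module name is illustrative):

import pytorch_lightning as pl
from torch.optim import AdamW
from torch.optim.lr_scheduler import LinearLR


class MyFinetuningModule(pl.LightningModule):
    def configure_optimizers(self):
        # only parameters already thawed for phase 0 will require gradients here
        optimizer = AdamW(
            filter(lambda p: p.requires_grad, self.parameters()),
            lr=1.0e-05,
            weight_decay=1.0e-05,
        )
        scheduler = LinearLR(optimizer, start_factor=0.1, total_iters=4)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "epoch", "frequency": 1},
        }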

Tip

If you want your learning rates logged on the same graph for each of the scheduler configurations defined in various phases, ensure that you provide the same name in the lr_scheduler configuration for each of the defined lr schedulers. For instance, in the lr reinitialization example, we provide:

model:
  class_path: fts_examples.fts_superglue.RteBoolqModule
  init_args:
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.LinearLR
      init_args:
        start_factor: 0.1
        total_iters: 4
    pl_lrs_cfg:
      # use the same name for your initial lr scheduler
      # configuration and your ``new_lr_scheduler`` configs
      # if you want LearningRateMonitor to generate a single graph
      name: Explicit_Reinit_LR_Scheduler

As you can observe in the explicit mode lr scheduler reinitialization example below, lr schedulers specified in different finetuning phases can be of differing types.

0:
  params:
  - model.classifier.bias
  - model.classifier.weight
1:
  params:
  - model.pooler.dense.bias
  - model.pooler.dense.weight
  - model.deberta.encoder.LayerNorm.bias
  - model.deberta.encoder.LayerNorm.weight
  new_lr_scheduler:
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.StepLR
      init_args:
        step_size: 1
        gamma: 0.7
    pl_lrs_cfg:
      interval: epoch
      frequency: 1
      name: Explicit_Reinit_LR_Scheduler
    init_pg_lrs: [2.0e-06, 2.0e-06]
2:
  params:
  - model.deberta.encoder.rel_embeddings.weight
  - model.deberta.encoder.layer.{0,11}.(output|attention|intermediate).*
  - model.deberta.embeddings.LayerNorm.bias
  - model.deberta.embeddings.LayerNorm.weight
  new_lr_scheduler:
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.CosineAnnealingWarmRestarts
      init_args:
        T_0: 3
        T_mult: 2
        eta_min: 1.0e-07
    pl_lrs_cfg:
      interval: epoch
      frequency: 1
      name: Explicit_Reinit_LR_Scheduler
    init_pg_lrs: [1.0e-06, 1.0e-06, 2.0e-06, 2.0e-06]

Once a new lr scheduler is re-initialized, it will continue to be used for subsequent phases unless replaced with another lr scheduler configuration defined in a subsequent schedule phase.

LR Scheduler Reinitialization With Generated (Implicit Mode) Finetuning Schedules

One can also specify LR scheduler reinitialization in the context of implicit mode finetuning schedules. Since the finetuning schedule is automatically generated, the same LR scheduler configuration will be applied at each of the phase transitions. In implicit mode, the lr scheduler reconfiguration should be supplied to the reinit_lr_cfg parameter of FinetuningScheduler.

For example, configuring this dictionary via the LightningCLI, one could use:

model:
  class_path: fts_examples.fts_superglue.RteBoolqModule
  init_args:
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.StepLR
      init_args:
        step_size: 1
    pl_lrs_cfg:
      name: Implicit_Reinit_LR_Scheduler
trainer:
  callbacks:
    - class_path: finetuning_scheduler.FinetuningScheduler
      init_args:
        reinit_lr_cfg:
          lr_scheduler_init:
            class_path: torch.optim.lr_scheduler.StepLR
            init_args:
              step_size: 1
              gamma: 0.7
          pl_lrs_cfg:
            interval: epoch
            frequency: 1
            name: Implicit_Reinit_LR_Scheduler

Note that an initial lr scheduler configuration should also still be provided as usual (again, typically via the configure_optimizers method of LightningModule), and the initial lr scheduler can differ in type and configuration from the configuration specified in reinit_lr_cfg and applied at each phase transition. Because the same configuration is applied at each phase transition, the init_pg_lrs list is not supported in an implicit finetuning context.
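
Equivalently, when configuring FinetuningScheduler directly in code rather than via the LightningCLI, the same directive can be passed to the callback’s reinit_lr_cfg parameter as a dictionary (a minimal sketch):

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

# the same reinitialization directive shown in the yaml above
reinit_lr_cfg = {
    "lr_scheduler_init": {
        "class_path": "torch.optim.lr_scheduler.StepLR",
        "init_args": {"step_size": 1, "gamma": 0.7},
    },
    "pl_lrs_cfg": {"interval": "epoch", "frequency": 1, "name": "Implicit_Reinit_LR_Scheduler"},
}
trainer = Trainer(callbacks=[FinetuningScheduler(reinit_lr_cfg=reinit_lr_cfg)])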

Application of LR scheduler reinitialization in both explicit and implicit modes may be best understood via examples, so we’ll proceed to those next.

Advanced Usage Examples: Explicit and Implicit Mode LR Scheduler Reinitialization

Demonstration LR scheduler reinitialization configurations for both explicit and implicit finetuning scheduling contexts are available under ./fts_examples/config/advanced/.

The LR scheduler reinitialization examples use the same code and have the same dependencies as the basic scheduled finetuning for SuperGLUE examples, except that PyTorch >= 1.10 is required for the explicit mode example (only because LinearLR was introduced in 1.10 and is used in the demo).

The two different demo schedule configurations are composed with shared defaults (./config/fts_defaults.yaml).

cd ./finetuning_scheduler/fts_examples/
# Demo LR scheduler reinitialization with an explicitly defined finetuning schedule:
python fts_superglue.py fit --config config/advanced/fts_explicit_reinit_lr.yaml

# Demo LR scheduler reinitialization with an implicitly defined finetuning schedule:
python fts_superglue.py fit --config config/advanced/fts_implicit_reinit_lr.yaml

Notice in the explicitly defined schedule scenario, we are using three distinct lr schedulers for three different training phases:

Phase 0

[Figure: LR log for parameter group 1 (LinearLR initial target lr = 1.0e-05)]

Phase 0 in yellow (passed to our LightningModule via the model definition in our LightningCLI configuration) uses a LinearLR scheduler (defined in ./config/advanced/fts_explicit_reinit_lr.yaml) with the initial lr defined via the shared initial optimizer configuration (defined in ./config/fts_defaults.yaml).

This is the effective phase 0 config (defined in ./config/advanced/fts_explicit_reinit_lr.yaml, applying defaults defined in ./config/fts_defaults.yaml):

model:
  class_path: fts_examples.fts_superglue.RteBoolqModule
  init_args:
    optimizer_init:
      class_path: torch.optim.AdamW
      init_args:
        weight_decay: 1.0e-05
        eps: 1.0e-07
        lr: 1.0e-05
    ...
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.LinearLR
      init_args:
        start_factor: 0.1
        total_iters: 4
    pl_lrs_cfg:
      interval: epoch
      frequency: 1
      name: Explicit_Reinit_LR_Scheduler

Phase 1 in blue uses a StepLR scheduler, including the specified initial lr for the existing parameter groups (2.0e-06).

[Figures: LR logs for parameter groups 1 and 3 respectively; pg1 starts at 2.0e-06, pg3 starts at the default of 1.0e-05]

This is the phase 1 config (defined in our explicit schedule ./config/advanced/explicit_reinit_lr.yaml):

...
1:
  params:
  - model.pooler.dense.bias
  - model.pooler.dense.weight
  - model.deberta.encoder.LayerNorm.bias
  - model.deberta.encoder.LayerNorm.weight
  new_lr_scheduler:
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.StepLR
      init_args:
        step_size: 1
        gamma: 0.7
    pl_lrs_cfg:
      interval: epoch
      frequency: 1
      name: Explicit_Reinit_LR_Scheduler
    init_pg_lrs: [2.0e-06, 2.0e-06]

Phase 2 in green uses a CosineAnnealingWarmRestarts scheduler, with the assigned initial lr for each of the parameter groups (1.0e-06 for pg1 and 2.0e-06 for pg3).

[Figures: LR logs for parameter groups 1 and 3 respectively; pg1 oscillates between 1.0e-06 and 1.0e-07, pg3 oscillates between 2.0e-06 and 1.0e-07]

This is the phase 2 config (like all non-zero phases, defined in our explicit schedule ./config/advanced/explicit_reinit_lr.yaml):

...
2:
  params:
  - model.deberta.encoder.rel_embeddings.weight
  - model.deberta.encoder.layer.{0,11}.(output|attention|intermediate).*
  - model.deberta.embeddings.LayerNorm.bias
  - model.deberta.embeddings.LayerNorm.weight
  new_lr_scheduler:
    lr_scheduler_init:
      class_path: torch.optim.lr_scheduler.CosineAnnealingWarmRestarts
      init_args:
        T_0: 3
        T_mult: 2
        eta_min: 1.0e-07
    pl_lrs_cfg:
      interval: epoch
      frequency: 1
      name: Explicit_Reinit_LR_Scheduler
    init_pg_lrs: [1.0e-06, 1.0e-06, 2.0e-06, 2.0e-06]

In the implicitly defined schedule scenario, the StepLR lr scheduler specified via reinit_lr_cfg (which happens to be the same as the initially defined lr scheduler in this case) is reinitialized at each phase transition and applied to all optimizer parameter groups.

...
- class_path: finetuning_scheduler.FinetuningScheduler
  init_args:
    # note, we're not going to see great performance due
    # to the shallow depth, just demonstrating the lr scheduler
    # reinitialization behavior in implicit mode
    max_depth: 4
    # disable restore_best for lr pattern clarity
    restore_best: false
    reinit_lr_cfg:
      lr_scheduler_init:
        class_path: torch.optim.lr_scheduler.StepLR
        init_args:
          step_size: 1
          gamma: 0.7
      pl_lrs_cfg:
        interval: epoch
        frequency: 1
        name: Implicit_Reinit_LR_Scheduler

[Figures: LR logs for parameter groups 1 and 3 respectively]

Note that we have disabled restore_best in both examples for clarity of lr patterns.

Note

LR reinitialization with FinetuningScheduler is currently in beta.

Contributor Covenant Code of Conduct

Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

Our Standards

Examples of behavior that contributes to creating a positive environment include:

  • Using welcoming and inclusive language

  • Being respectful of differing viewpoints and experiences

  • Gracefully accepting constructive criticism

  • Focusing on what is best for the community

  • Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

  • The use of sexualized language or imagery and unwelcome sexual attention or advances

  • Trolling, insulting/derogatory comments, and personal or political attacks

  • Public or private harassment

  • Publishing others’ private information, such as a physical or electronic address, without explicit permission

  • Other conduct which could reasonably be considered inappropriate in a professional setting

Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at waf2107@columbia.edu. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership.

Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

Contributing

Welcome to the community! Finetuning Scheduler extends the most advanced DL research platform on the planet (PyTorch Lightning) and strives to support the latest best practices and integrations that the amazing PyTorch team and other research organizations roll out!

As Finetuning Scheduler is an extension of PyTorch Lightning, the remainder of the contribution guidelines conform to (and many are drawn from) the PyTorch Lightning contribution documentation.

A giant thank you to the PyTorch Lightning team for their tireless effort building the immensely useful PyTorch Lightning project and their thoughtful feedback on and review of this extension.

Main Core Value: One less thing to remember

Simplify the API as much as possible from the user perspective. Any additions or improvements should minimize the things the user needs to remember.

Design Principles

We encourage all sorts of contributions you’re interested in adding! When coding for Finetuning Scheduler, please follow these principles.

No PyTorch Interference

We don’t want to add any abstractions on top of pure PyTorch. This gives researchers all the control they need without having to learn yet another framework.

Simple Internal Code

It’s useful for users to look at the code and understand very quickly what’s happening. Many users won’t be engineers. Thus we need to value clear, simple code over condensed ninja moves. While that’s super cool, this isn’t the project for that :)

Simple External API

What makes sense to you may not make sense to others. When creating an issue with an API change suggestion, please validate that it makes sense for others. Treat code changes the way you treat a startup: validate that it’s a needed feature, then add if it makes sense for many people.

Backward-compatible API

We all hate updating our deep learning packages because we don’t want to refactor a bunch of stuff. With the Finetuning Scheduler, we make sure every change we make which could break an API is backward compatible with good deprecation warnings.

You shouldn’t be afraid to upgrade the Finetuning Scheduler :)

Gain User Trust

As a researcher, you can’t have any part of your code going wrong. So, make thorough tests to ensure that every implementation of a new trick or subtle change is correct.


Contribution Types

We are always open to contributions of new features or bug fixes.

A lot of good work has already been done on project mechanics (requirements.txt, setup.py, pep8, badges, CI, etc.), so we’re in a good state there thanks to all the early contributors (even pre-beta release)!

Bug Fixes:
  1. If you find a bug please submit a GitHub issue.

    • Make sure the title explains the issue.

    • Describe your setup, what you are trying to do, expected vs. actual behaviour. Please add configs and code samples.

    • Add details on how to reproduce the issue - a minimal test case is always best, colab is also great. Note that the sample code should be minimal and, if needed, use publicly available data.

  2. Try to fix it or recommend a solution. We highly recommend using a test-driven approach:

    • Convert your minimal code example to a unit/integration test with assert on expected results.

    • Start by debugging the issue… You can run just this particular test in your IDE and draft a fix.

    • Verify that your test case fails on the main branch and only passes with the fix applied.

  3. Submit a PR!

Note, even if you do not find the solution, sending a PR with a test covering the issue is a valid contribution, and we can help you or finish it with you :]

New Features:
  1. Submit a GitHub issue - describe the motivation for the feature (adding a use case or an example is helpful).

  2. Determine the feature scope with us.

  3. Submit a PR! We recommend a test-driven approach to adding new features as well:

    • Write a test for the functionality you want to add.

    • Write the functional code until the test passes.

  4. Add/update the relevant tests!

Test cases:

Want to keep Finetuning Scheduler healthy? Love seeing those green tests? So do we! How do we keep it that way? We write tests! We value test contributions even more than new features.


Guidelines

Development scripts

The following convenience commands can be executed from the project root (Unix only):

  • make clean cleans repo from temp/generated files

  • make docs builds documentation under docs/build/html

  • make test runs all project’s tests with coverage

Original code

All added or edited code shall be the contributor’s own original work. If you use some third-party implementation, all such blocks/functions/modules shall be properly referenced and, if possible, also approved by the code’s author. For example - This code is inspired from http://....

Coding Style
  1. Use f-strings for output formatting

  2. You can use pre-commit to make sure your code style is correct.

Documentation

We are using Sphinx with the Napoleon extension. Moreover, we follow the Google docstring style along with Python type annotations.

See the following short example of a function taking one positional parameter and one optional keyword parameter:

from typing import Optional


def my_func(param_a: int, param_b: Optional[float] = None) -> str:
    """Sample function.

    Args:
        param_a: first parameter
        param_b: second parameter

    Returns:
        The sum of both numbers, as a string.

    Example::

        Sample doctest example...
        >>> my_func(1, 2)
        '3'

    Note:
        If you want to add something.
    """
    p = param_b if param_b else 0
    return str(param_a + p)

When updating the docs make sure to build them first locally and visually inspect the html files (in the browser) for formatting errors. In certain cases, a missing blank line or a wrong indent can lead to a broken layout. Run these commands

pip install -r requirements/docs.txt
make clean
cd docs
make html

and open docs/build/html/index.html in your browser.

Notes:

  • You need to have LaTeX installed for rendering math equations. You can for example install TeXLive by doing one of the following:

    • on Ubuntu (Linux) run apt-get install texlive or otherwise follow the instructions on the TeXLive website

    • use the RTD docker image

  • due to the class metaprogramming used by PL, you need to use Python 3.7 or higher

Testing

Local: Testing your work locally will help you speed up the process since it allows you to focus on particular (failing) test cases. To set up a local development environment, install both local and test dependencies:

python -m pip install ".[all]"
python -m pip install pre-commit
pre-commit install

Note: if your computer does not have multiple GPUs or a TPU, these tests are skipped.

GitHub Actions: For convenience, you can also use your own GitHub Actions builds, which will be triggered with each commit. This is useful if you do not test against all required dependency versions locally.

You can then run:

python -m pytest finetuning_scheduler tests fts_examples -v

Pull Request

We welcome any useful contribution! For your convenience here’s a recommended workflow:

  1. Think about what you want to do - fix a bug, repair docs, etc. If you want to implement a new feature or enhance an existing one:

    • Start by opening a GitHub issue to explain the feature and the motivation. In the case of features, ask yourself first - Is this NECESSARY for Finetuning Scheduler? There are some PRs that are just purely about adding engineering complexity which has no place in Finetuning Scheduler.

    • Core contributors will take a look (it might take some time - we are often overloaded with issues!) and discuss it.

    • Once an agreement is reached, start coding.

  2. Start your work locally.

    • Create a branch and prepare your changes.

    • Tip: do not work on your main branch directly, it may become complicated when you need to rebase.

    • Tip: give your PR a good name! It will be useful later when you may work on multiple tasks/PRs.

  3. Test your code!

    • It is always good practice to start coding by creating a test case, verifying it breaks with current behavior, and passes with your new changes.

    • Make sure your new tests cover all different edge cases.

    • Make sure all exceptions raised are tested.

    • Make sure all warnings raised are tested.

  4. If your PR is not ready for reviews, but you want to run it on our CI, open a “Draft PR” to let us know you don’t need feedback yet.

  5. When you feel ready for integrating your work, mark your PR “Ready for review”.

    • Your code should be readable and follow the project’s design principles.

    • Make sure all tests are passing and any new code is covered by tests (coverage!).

    • Make sure you link the GitHub issue to your PR.

    • Make sure any docs for that piece of code are updated, or added.

    • The code should be elegant and simple. No over-engineering or hard-to-read code.

    Do your best but don’t sweat about perfection! We do code-review to find any missed items. If you need help, don’t hesitate to ping the core team on the PR.

  6. Use tags in PR name for the following cases:

    • [blocked by #] if your work is dependent on other PRs.

    • [wip] when you start to re-edit your work; mark it so no one will accidentally merge it in the meantime.

Question & Answer
How can I help/contribute?

All types of contributions are welcome - reporting bugs, fixing documentation, adding test cases, solving issues, and preparing bug fixes. To get started with code contributions, look for issues marked with the label good first issue or choose something close to your domain with the label help wanted. Before coding, make sure that the issue description is clear and comment on the issue so that we can assign it to you (or simply self-assign if you can).

Is there a recommendation for branch names?

We recommend you follow this convention: <type>/<issue-id>_<short-name>, where the types are: bugfix, feature, docs, or tests (but if you are using your own fork, that’s optional).

How to add new tests?

We are using pytest with Finetuning Scheduler.

Here is the process to create a new test:

    1. Find a file in tests/ that matches what you want to test. If none exists, create one.

    2. Use this template to get started!

    3. Use BoringModel and derivatives to test out your code.

# TEST SHOULD BE IN YOUR FILE: tests/..../...py
# TEST CODE TEMPLATE

# [OPTIONAL] pytest decorator
# @pytest.mark.skipif(not torch.cuda.is_available(), reason="test requires GPU machine")
def test_explain_what_is_being_tested(tmpdir):
    """
    Test description about text reason to be
    """

    class ExtendedModel(BoringModel):
        ...

    model = ExtendedModel()

    # BoringModel is a functional model. You might want to set methods to None to test your behaviour
    # Example: model.training_step_end = None

    trainer = Trainer(default_root_dir=tmpdir, ...)  # will save everything within a tmpdir generated for this test
    trainer.fit(model)
    trainer.test()  # [OPTIONAL]

    # assert the behaviour is correct.
    assert ...

Run our/your test with:

python -m pytest tests/..../...py::test_explain_what_is_being_tested -v --capture=no

Finetuning Scheduler Governance

This document describes governance processes we follow in developing the Finetuning Scheduler.

Persons of Interest

BDFL

Role: All final decisions related to Finetuning Scheduler.

  • Dan Dale (speediedan) (Finetuning Scheduler author)

Releases

Release cadence TBD

Project Management and Decision Making

TBD

API Evolution

For API removal, renaming or other forms of backward-incompatible changes, the procedure is:

  1. A deprecation process is initiated at version X, producing warning messages at runtime and in the documentation.

  2. Calls to the deprecated API remain unchanged in their function during the deprecation phase.

  3. Two minor versions in the future, at version X+2, the breaking change takes effect (for example, an API deprecated in 0.2 would be removed in 0.4).

The “X+2” rule is a recommendation and not a strict requirement. Longer deprecation cycles may apply for some cases.

New API and features are declared as:

  • Experimental: Anything labelled as experimental or beta in the documentation is considered unstable and should not be used in production. The community is encouraged to test the feature and report issues directly on GitHub.

  • Stable: Everything not specifically labelled as experimental should be considered stable. Reported issues will be treated with priority.

Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog.

[0.1.4] - 2022-05-24

[0.1.4] - Added
  • LR scheduler reinitialization functionality (#2)

  • advanced usage documentation

  • advanced scheduling examples

  • notebook-based tutorial link

  • enhanced cli-based example hparam logging among other code clarifications

[0.1.4] - Fixed
  • addressed URI length limit for custom badge

  • allow new deberta fast tokenizer conversion warning for transformers >= 4.19

[0.1.4] - Changed
[0.1.4] - Deprecated

[0.1.3] - 2022-05-04

[0.1.3] - Added
[0.1.3] - Changed
  • bumped latest tested PL patch version to 1.6.3

[0.1.3] - Fixed
[0.1.3] - Deprecated

[0.1.2] - 2022-04-27

[0.1.2] - Added
  • added multiple badges (docker, conda, zenodo)

  • added build status matrix to readme

[0.1.2] - Changed
  • bumped latest tested PL patch version to 1.6.2

  • updated citation cff configuration to include all version metadata

  • removed tag-based trigger for azure-pipelines multi-gpu job

[0.1.2] - Fixed
[0.1.2] - Deprecated

[0.1.1] - 2022-04-15

[0.1.1] - Added
  • added conda-forge package

  • added docker release and pypi workflows

  • additional badges for readme, testing enhancements for oldest/newest pl patch versions

[0.1.1] - Changed
  • bumped latest tested PL patch version to 1.6.1, CLI example depends on PL logger fix (#12609)

[0.1.1] - Deprecated
[0.1.1] - Fixed
  • Addressed version prefix issue with readme transformation for pypi

[0.1.0] - 2022-04-07

[0.1.0] - Added
  • None (initial release)

[0.1.0] - Changed
  • None (initial release)

[0.1.0] - Deprecated
  • None (initial release)

[0.1.0] - Fixed
  • None (initial release)

© Copyright 2021-2022, Dan Dale. Revision 04fad4a5.
