Configuration
Utilities for parsing configuration from YAML files and merging it with argparse CLI args (see Getting started for a concrete example).
For this we use OmegaConf, extended with argparse integration.
This way we can tweak models and run exploratory experiments with unusual configurations from the CLI, without polluting our configuration files.
When a configuration shows promise, we can turn it into a configuration file with a detailed description and set it in stone for reproducibility.
The whole process is transparent, as long as you follow the conventions.
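For illustration, here is a minimal sketch of that workflow; the YAML contents, file name, and argument names are made up for the example:

```python
import argparse

from slp.config.config_parser import parse_config

# Dotted dest values make the arguments nest in the final configuration
parser = argparse.ArgumentParser("my-experiment")
parser.add_argument("--hidden", dest="model.hidden", type=int, default=20)
parser.add_argument("--bsz", dest="data.batch_size", type=int, default=32)

# Suppose config.yaml contains:
#   model:
#     hidden: 100

# Precedence: default cli args values < config file values < user provided cli args,
# so model.hidden comes from the file (100) and data.batch_size from the CLI (64)
cfg = parse_config(parser, "config.yaml", args=["--bsz", "64"])
print(cfg.model.hidden)     # 100
print(cfg.data.batch_size)  # 64
```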
generate_example_config(parser, output_file, args=None)
generate_example_config Generate an example YAML configuration file from an argument parser
The generated file contains the default values of all arguments registered in the parser (including those whose default is None), and can be used as a template for new configuration files: tweak the values you care about and pass the result back to your script, e.g. via the --config argument added by make_cli_parser.
Note we use an extended OmegaConf instance to achieve this (see slp.config.omegaconf.OmegaConf)
Parameters:

Name | Type | Description | Default
---|---|---|---
parser | ArgumentParser | The argument parser you want to use | required
output_file | str | Configuration file name or file descriptor to save the example configuration | required
args | Optional[List[str]] | Optional sys.argv style args. Use this only for testing; by default sys.argv[1:] is used | None
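A minimal usage sketch (the file name and argument are illustrative):

```python
import argparse

from slp.config.config_parser import generate_example_config

parser = argparse.ArgumentParser("my-experiment")
parser.add_argument("--hidden", dest="model.hidden", type=int, default=20)

# Writes a YAML file populated with the argparse defaults,
# ready to be edited and passed back via --config
generate_example_config(parser, "example_config.yaml")
```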
Source code in slp/config/config_parser.py
```python
def generate_example_config(
    parser: argparse.ArgumentParser,
    output_file: str,
    args: Optional[List[str]] = None,
) -> None:
    """generate_example_config Generate an example YAML configuration file from an argument parser

    The generated file contains the default values of all arguments registered in the parser
    (including those whose default is None), and can be used as a template for new configuration files.

    Note we use an extended OmegaConf instance to achieve this (see slp.config.omegaconf.OmegaConf)

    Args:
        parser (argparse.ArgumentParser): The argument parser you want to use
        output_file (Union[str, IO]): Configuration file name or file descriptor to save example configuration
        args (Optional[List[str]]): Optional input sys.argv style args. Useful for testing.
            Use this only for testing. By default it uses sys.argv[1:]
    """
    config = parse_config(parser, None, include_none=True)
    OmegaConf.save(config, output_file)
```
make_cli_parser(parser, datamodule_cls)
make_cli_parser Augment an argument parser for slp with the default arguments
Default arguments for training, logging, optimization etc. are added to the input parser. If you use make_cli_parser, the following command line arguments will be included:
```
usage: my_script.py [-h] [--hidden MODEL.INTERMEDIATE_HIDDEN]
[--optimizer {Adam,AdamW,SGD,Adadelta,Adagrad,Adamax,ASGD,RMSprop}]
[--lr OPTIM.LR] [--weight-decay OPTIM.WEIGHT_DECAY]
[--lr-scheduler] [--lr-factor LR_SCHEDULE.FACTOR]
[--lr-patience LR_SCHEDULE.PATIENCE]
[--lr-cooldown LR_SCHEDULE.COOLDOWN]
[--min-lr LR_SCHEDULE.MIN_LR] [--seed SEED] [--config CONFIG]
[--experiment-name TRAINER.EXPERIMENT_NAME]
[--run-id TRAINER.RUN_ID]
[--experiment-group TRAINER.EXPERIMENT_GROUP]
[--experiments-folder TRAINER.EXPERIMENTS_FOLDER]
[--save-top-k TRAINER.SAVE_TOP_K]
[--patience TRAINER.PATIENCE]
[--wandb-project TRAINER.WANDB_PROJECT]
[--tags [TRAINER.TAGS [TRAINER.TAGS ...]]]
[--stochastic_weight_avg] [--gpus TRAINER.GPUS]
[--val-interval TRAINER.CHECK_VAL_EVERY_N_EPOCH]
[--clip-grad-norm TRAINER.GRADIENT_CLIP_VAL]
[--epochs TRAINER.MAX_EPOCHS] [--steps TRAINER.MAX_STEPS]
[--tbtt_steps TRAINER.TRUNCATED_BPTT_STEPS] [--debug]
[--offline] [--early-stop-on TRAINER.EARLY_STOP_ON]
[--early-stop-mode {min,max}] [--num-trials TUNE.NUM_TRIALS]
[--gpus-per-trial TUNE.GPUS_PER_TRIAL]
[--cpus-per-trial TUNE.CPUS_PER_TRIAL]
[--tune-metric TUNE.METRIC] [--tune-mode {max,min}]
[--val-percent DATA.VAL_PERCENT]
[--test-percent DATA.TEST_PERCENT] [--bsz DATA.BATCH_SIZE]
[--bsz-eval DATA.BATCH_SIZE_EVAL]
[--num-workers DATA.NUM_WORKERS] [--no-pin-memory]
[--drop-last] [--no-shuffle-eval]
optional arguments:
-h, --help show this help message and exit
--hidden MODEL.INTERMEDIATE_HIDDEN
Intermediate hidden layers for linear module
--optimizer {Adam,AdamW,SGD,Adadelta,Adagrad,Adamax,ASGD,RMSprop}
Which optimizer to use
--lr OPTIM.LR Learning rate
--weight-decay OPTIM.WEIGHT_DECAY
Weight decay
--lr-scheduler Use learning rate scheduling. Currently only
ReduceLROnPlateau is supported out of the box
--lr-factor LR_SCHEDULE.FACTOR
Multiplicative factor by which LR is reduced. Used if
--lr-scheduler is provided.
--lr-patience LR_SCHEDULE.PATIENCE
Number of epochs with no improvement after which
learning rate will be reduced. Used if --lr-scheduler
is provided.
--lr-cooldown LR_SCHEDULE.COOLDOWN
Number of epochs to wait before resuming normal
operation after lr has been reduced. Used if --lr-
scheduler is provided.
--min-lr LR_SCHEDULE.MIN_LR
Minimum lr for LR scheduling. Used if --lr-scheduler
is provided.
--seed SEED Seed for reproducibility
--config CONFIG Path to YAML configuration file
--experiment-name TRAINER.EXPERIMENT_NAME
Name of the running experiment
--run-id TRAINER.RUN_ID
Unique identifier for the current run. If not provided
it is inferred from datetime.now()
--experiment-group TRAINER.EXPERIMENT_GROUP
Group of current experiment. Useful when evaluating
for different seeds / cross-validation etc.
--experiments-folder TRAINER.EXPERIMENTS_FOLDER
Top-level folder where experiment results &
checkpoints are saved
--save-top-k TRAINER.SAVE_TOP_K
Save checkpoints for top k models
--patience TRAINER.PATIENCE
Number of epochs to wait before early stopping
--wandb-project TRAINER.WANDB_PROJECT
Wandb project under which results are saved
--tags [TRAINER.TAGS [TRAINER.TAGS ...]]
Tags for current run to make results searchable.
--stochastic_weight_avg
Use Stochastic weight averaging.
--gpus TRAINER.GPUS Number of GPUs to use
--val-interval TRAINER.CHECK_VAL_EVERY_N_EPOCH
Run validation every n epochs
--clip-grad-norm TRAINER.GRADIENT_CLIP_VAL
Clip gradients with ||grad(w)|| >= args.clip_grad_norm
--epochs TRAINER.MAX_EPOCHS
Maximum number of training epochs
--steps TRAINER.MAX_STEPS
Maximum number of training steps
--tbtt_steps TRAINER.TRUNCATED_BPTT_STEPS
Truncated Back-propagation-through-time steps.
--debug If true, we run a full run on a small subset of the
input data and overfit 10 training batches
--offline If true, forces offline execution of wandb logger
--early-stop-on TRAINER.EARLY_STOP_ON
Metric for early stopping
--early-stop-mode {min,max}
Minimize or maximize early stopping metric
--num-trials TUNE.NUM_TRIALS
Number of trials to run for hyperparameter tuning
--gpus-per-trial TUNE.GPUS_PER_TRIAL
How many gpus to use for each trial. If gpus_per_trial
< 1 multiple trials are packed in the same gpu
--cpus-per-trial TUNE.CPUS_PER_TRIAL
How many cpus to use for each trial.
--tune-metric TUNE.METRIC
Tune this metric. Needs to be one of the keys of
metrics_map passed into make_trainer_for_ray_tune.
--tune-mode {max,min}
Maximize or minimize metric
--val-percent DATA.VAL_PERCENT
Percent of validation data to be randomly split from
the training set, if no validation set is provided
--test-percent DATA.TEST_PERCENT
Percent of test data to be randomly split from the
training set, if no test set is provided
--bsz DATA.BATCH_SIZE
Training batch size
--bsz-eval DATA.BATCH_SIZE_EVAL
Evaluation batch size
--num-workers DATA.NUM_WORKERS
Number of workers to be used in the DataLoader
--no-pin-memory Don't pin data to GPU memory when transferring
--drop-last Drop last incomplete batch
--no-shuffle-eval Don't shuffle val & test sets
```
Parameters:

Name | Type | Description | Default
---|---|---|---
parser | ArgumentParser | A parent argument parser to be augmented | required
datamodule_cls | LightningDataModule | A data module class that injects arguments through the add_argparse_args method | required

Returns:

Type | Description
---|---
ArgumentParser | The augmented command line parser
Examples:
```python
>>> import argparse
>>> from slp.plbind.dm import PLDataModuleFromDatasets
>>> parser = argparse.ArgumentParser("My cool model")
>>> parser.add_argument("--hidden", dest="model.hidden", type=int)  # Create parser with model arguments and anything else you need
>>> parser = make_cli_parser(parser, PLDataModuleFromDatasets)
>>> args = parser.parse_args(args=["--bsz", "64", "--lr", "0.01"])
>>> args.data.batch_size
64
>>> args.optim.lr
0.01
```
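In a training script, the augmented parser is typically handed to parse_config so that defaults, config file values, and CLI overrides end up in one nested config; a sketch, assuming a config.yaml in the working directory:

```python
import argparse

from slp.config.config_parser import make_cli_parser, parse_config
from slp.plbind.dm import PLDataModuleFromDatasets

parser = argparse.ArgumentParser("My cool model")
parser.add_argument("--hidden", dest="model.hidden", type=int, default=20)
parser = make_cli_parser(parser, PLDataModuleFromDatasets)

# Merge argparse defaults, config.yaml values, and any CLI overrides
config = parse_config(parser, "config.yaml")
print(config.model.hidden)
print(config.data.batch_size)
```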
Source code in slp/config/config_parser.py
```python
def make_cli_parser(
    parser: argparse.ArgumentParser, datamodule_cls: pl.LightningDataModule
) -> argparse.ArgumentParser:
    """make_cli_parser Augment an argument parser for slp with the default arguments

    Default arguments for training, logging, optimization etc. are added to the input parser.
    If you use make_cli_parser, the following command line arguments will be included:
usage: my_script.py [-h] [--hidden MODEL.INTERMEDIATE_HIDDEN]
[--optimizer {Adam,AdamW,SGD,Adadelta,Adagrad,Adamax,ASGD,RMSprop}]
[--lr OPTIM.LR] [--weight-decay OPTIM.WEIGHT_DECAY]
[--lr-scheduler] [--lr-factor LR_SCHEDULE.FACTOR]
[--lr-patience LR_SCHEDULE.PATIENCE]
[--lr-cooldown LR_SCHEDULE.COOLDOWN]
[--min-lr LR_SCHEDULE.MIN_LR] [--seed SEED] [--config CONFIG]
[--experiment-name TRAINER.EXPERIMENT_NAME]
[--run-id TRAINER.RUN_ID]
[--experiment-group TRAINER.EXPERIMENT_GROUP]
[--experiments-folder TRAINER.EXPERIMENTS_FOLDER]
[--save-top-k TRAINER.SAVE_TOP_K]
[--patience TRAINER.PATIENCE]
[--wandb-project TRAINER.WANDB_PROJECT]
[--tags [TRAINER.TAGS [TRAINER.TAGS ...]]]
[--stochastic_weight_avg] [--gpus TRAINER.GPUS]
[--val-interval TRAINER.CHECK_VAL_EVERY_N_EPOCH]
[--clip-grad-norm TRAINER.GRADIENT_CLIP_VAL]
[--epochs TRAINER.MAX_EPOCHS] [--steps TRAINER.MAX_STEPS]
[--tbtt_steps TRAINER.TRUNCATED_BPTT_STEPS] [--debug]
[--offline] [--early-stop-on TRAINER.EARLY_STOP_ON]
[--early-stop-mode {min,max}] [--num-trials TUNE.NUM_TRIALS]
[--gpus-per-trial TUNE.GPUS_PER_TRIAL]
[--cpus-per-trial TUNE.CPUS_PER_TRIAL]
[--tune-metric TUNE.METRIC] [--tune-mode {max,min}]
[--val-percent DATA.VAL_PERCENT]
[--test-percent DATA.TEST_PERCENT] [--bsz DATA.BATCH_SIZE]
[--bsz-eval DATA.BATCH_SIZE_EVAL]
[--num-workers DATA.NUM_WORKERS] [--no-pin-memory]
[--drop-last] [--no-shuffle-eval]
optional arguments:
-h, --help show this help message and exit
--hidden MODEL.INTERMEDIATE_HIDDEN
Intermediate hidden layers for linear module
--optimizer {Adam,AdamW,SGD,Adadelta,Adagrad,Adamax,ASGD,RMSprop}
Which optimizer to use
--lr OPTIM.LR Learning rate
--weight-decay OPTIM.WEIGHT_DECAY
Weight decay
--lr-scheduler Use learning rate scheduling. Currently only
ReduceLROnPlateau is supported out of the box
--lr-factor LR_SCHEDULE.FACTOR
Multiplicative factor by which LR is reduced. Used if
--lr-scheduler is provided.
--lr-patience LR_SCHEDULE.PATIENCE
Number of epochs with no improvement after which
learning rate will be reduced. Used if --lr-scheduler
is provided.
--lr-cooldown LR_SCHEDULE.COOLDOWN
Number of epochs to wait before resuming normal
operation after lr has been reduced. Used if --lr-
scheduler is provided.
--min-lr LR_SCHEDULE.MIN_LR
Minimum lr for LR scheduling. Used if --lr-scheduler
is provided.
--seed SEED Seed for reproducibility
--config CONFIG Path to YAML configuration file
--experiment-name TRAINER.EXPERIMENT_NAME
Name of the running experiment
--run-id TRAINER.RUN_ID
Unique identifier for the current run. If not provided
it is inferred from datetime.now()
--experiment-group TRAINER.EXPERIMENT_GROUP
Group of current experiment. Useful when evaluating
for different seeds / cross-validation etc.
--experiments-folder TRAINER.EXPERIMENTS_FOLDER
Top-level folder where experiment results &
checkpoints are saved
--save-top-k TRAINER.SAVE_TOP_K
Save checkpoints for top k models
--patience TRAINER.PATIENCE
Number of epochs to wait before early stopping
--wandb-project TRAINER.WANDB_PROJECT
Wandb project under which results are saved
--tags [TRAINER.TAGS [TRAINER.TAGS ...]]
Tags for current run to make results searchable.
--stochastic_weight_avg
Use Stochastic weight averaging.
--gpus TRAINER.GPUS Number of GPUs to use
--val-interval TRAINER.CHECK_VAL_EVERY_N_EPOCH
Run validation every n epochs
--clip-grad-norm TRAINER.GRADIENT_CLIP_VAL
Clip gradients with ||grad(w)|| >= args.clip_grad_norm
--epochs TRAINER.MAX_EPOCHS
Maximum number of training epochs
--steps TRAINER.MAX_STEPS
Maximum number of training steps
--tbtt_steps TRAINER.TRUNCATED_BPTT_STEPS
Truncated Back-propagation-through-time steps.
--debug If true, we run a full run on a small subset of the
input data and overfit 10 training batches
--offline If true, forces offline execution of wandb logger
--early-stop-on TRAINER.EARLY_STOP_ON
Metric for early stopping
--early-stop-mode {min,max}
Minimize or maximize early stopping metric
--num-trials TUNE.NUM_TRIALS
Number of trials to run for hyperparameter tuning
--gpus-per-trial TUNE.GPUS_PER_TRIAL
How many gpus to use for each trial. If gpus_per_trial
< 1 multiple trials are packed in the same gpu
--cpus-per-trial TUNE.CPUS_PER_TRIAL
How many cpus to use for each trial.
--tune-metric TUNE.METRIC
Tune this metric. Needs to be one of the keys of
metrics_map passed into make_trainer_for_ray_tune.
--tune-mode {max,min}
Maximize or minimize metric
--val-percent DATA.VAL_PERCENT
Percent of validation data to be randomly split from
the training set, if no validation set is provided
--test-percent DATA.TEST_PERCENT
Percent of test data to be randomly split from the
training set, if no test set is provided
--bsz DATA.BATCH_SIZE
Training batch size
--bsz-eval DATA.BATCH_SIZE_EVAL
Evaluation batch size
--num-workers DATA.NUM_WORKERS
Number of workers to be used in the DataLoader
--no-pin-memory Don't pin data to GPU memory when transferring
--drop-last Drop last incomplete batch
--no-shuffle-eval Don't shuffle val & test sets
Args:
parser (argparse.ArgumentParser): A parent argument to be augmented
datamodule_cls (pytorch_lightning.LightningDataModule): A data module class that injects arguments through the add_argparse_args method
Returns:
argparse.ArgumentParser: The augmented command line parser
Examples:
>>> import argparse
>>> from slp.plbind.dm import PLDataModuleFromDatasets
>>> parser = argparse.ArgumentParser("My cool model")
>>> parser.add_argument("--hidden", dest="model.hidden", type=int) # Create parser with model arguments and anything else you need
>>> parser = make_cli_parser(parser, PLDataModuleFromDatasets)
>>> args = parser.parse_args(args=["--bsz", "64", "--lr", "0.01"])
>>> args.data.batch_size
64
>>> args.optim.lr
0.01
"""
parser = add_optimizer_args(parser)
parser = add_trainer_args(parser)
parser = add_tune_args(parser)
parser = datamodule_cls.add_argparse_args(parser)
return parser
parse_config(parser, config_file, args=None, include_none=False)
parse_config Parse a provided YAML config file and command line args and merge them
During experimentation we ideally want a configuration file with the model and training configuration, but we also want to run quick experiments using command line args. This function allows you to double dip, by overriding values in a YAML config file through user provided command line arguments.
The precedence for merging is as follows:

- default cli args values < config file values < user provided cli args

E.g.:

- if you don't include a value in your configuration, it takes the default value from the argparse arguments
- if you provide a cli arg (e.g. run the script with --bsz 64), it overrides the value in the config file

Note we use an extended OmegaConf instance to achieve this (see slp.config.omegaconf.OmegaConf)
Parameters:

Name | Type | Description | Default
---|---|---|---
parser | ArgumentParser | The argument parser you want to use | required
config_file | Optional[Union[str, IO]] | Configuration file name or file descriptor | required
args | Optional[List[str]] | Optional sys.argv style args. Use this only for testing; by default sys.argv[1:] is used | None
include_none | bool | If True, arguments whose value is None are kept in the merged configuration | False

Returns:

Type | Description
---|---
Union[ListConfig, DictConfig] | The parsed configuration as an OmegaConf DictConfig object
Examples:
```python
>>> import io
>>> import argparse
>>> from slp.config.config_parser import parse_config
>>> mock_config_file = io.StringIO('''
... model:
...   hidden: 100
... ''')
>>> parser = argparse.ArgumentParser("My cool model")
>>> parser.add_argument("--hidden", dest="model.hidden", type=int, default=20)
>>> cfg = parse_config(parser, mock_config_file)
>>> cfg
{'model': {'hidden': 100}}
>>> type(cfg)
<class 'omegaconf.dictconfig.DictConfig'>
>>> _ = mock_config_file.seek(0)  # rewind before re-reading the same stream
>>> cfg = parse_config(parser, mock_config_file, args=["--hidden", "200"])
>>> cfg
{'model': {'hidden': 200}}
>>> mock_config_file = io.StringIO('''
... random_value: hello
... ''')
>>> cfg = parse_config(parser, mock_config_file)
>>> cfg
{'model': {'hidden': 20}, 'random_value': 'hello'}
```
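Once a merged configuration works well, it can be written back to YAML for reproducibility. A sketch using OmegaConf.save (the same call generate_example_config uses); the file name is illustrative:

```python
import argparse

from slp.config.config_parser import parse_config
from slp.config.omegaconf import OmegaConfExtended as OmegaConf

parser = argparse.ArgumentParser("My cool model")
parser.add_argument("--hidden", dest="model.hidden", type=int, default=20)

cfg = parse_config(parser, None, args=["--hidden", "100"])
OmegaConf.save(cfg, "best_run.yaml")  # freeze the exact configuration of this run
```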
Source code in slp/config/config_parser.py
```python
def parse_config(
    parser: argparse.ArgumentParser,
    config_file: Optional[Union[str, IO]],
    args: Optional[List[str]] = None,
    include_none: bool = False,
) -> Union[ListConfig, DictConfig]:
    """parse_config Parse a provided YAML config file and command line args and merge them
    During experimentation we ideally want to have a configuration file with the model and training configuration,
    but also be able to run quick experiments using command line args.
    This function allows you to double dip, by overriding values in a YAML config file through user provided command line arguments.

    The precedence for merging is as follows:

        * default cli args values < config file values < user provided cli args

    E.g.:

        * if you don't include a value in your configuration it will take the default value from the argparse arguments
        * if you provide a cli arg (e.g. run the script with --bsz 64) it will override the value in the config file

    Note we use an extended OmegaConf instance to achieve this (see slp.config.omegaconf.OmegaConf)
Args:
parser (argparse.ArgumentParser): The argument parser you want to use
config_file (Union[str, IO]): Configuration file name or file descriptor
args (Optional[List[str]]): Optional input sys.argv style args. Useful for testing.
Use this only for testing. By default it uses sys.argv[1:]
Returns:
OmegaConf.DictConfig: The parsed configuration as an OmegaConf DictConfig object
Examples:
>>> import io
>>> from slp.config.config_parser import parse_config
>>> mock_config_file = io.StringIO('''
model:
hidden: 100
''')
>>> parser = argparse.ArgumentParser("My cool model")
>>> parser.add_argument("--hidden", dest="model.hidden", type=int, default=20)
>>> cfg = parse_config(parser, mock_config_file)
{'model': {'hidden': 100}}
>>> type(cfg)
<class 'omegaconf.dictconfig.DictConfig'>
>>> cfg = parse_config(parser, mock_config_file, args=["--hidden", "200"])
{'model': {'hidden': 200}}
>>> mock_config_file = io.StringIO('''
random_value: hello
''')
>>> cfg = parse_config(parser, mock_config_file)
{'model': {'hidden': 20}, 'random_value': 'hello'}
"""
# Merge Configurations Precedence: default kwarg values < default argparse values < config file values < user provided CLI args values
if config_file is not None:
dict_config = OmegaConf.from_yaml(config_file) # type: ignore
else:
dict_config = OmegaConf.create({})
user_cli, default_cli = OmegaConf.from_argparse(parser, include_none=include_none)
config = OmegaConf.merge(default_cli, dict_config, user_cli)
logger.info("Running with the following configuration")
logger.info(f"\n{OmegaConf.to_yaml(config)}")
return config
SPECIAL_TOKENS
SPECIAL_TOKENS Special tokens for NLP applications
Default special token values and indices (compatible with BERT); a usage sketch follows the list:
* [PAD]: 0
* [MASK]: 1
* [UNK]: 2
* [BOS]: 3
* [EOS]: 4
* [CLS]: 5
* [SEP]: 6
* [PAUSE]: 7
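A hedged sketch of how this mapping might be consumed, e.g. to reserve the special ids at the start of a vocabulary. The SPECIAL_TOKEN_IDS dict below is an illustrative stand-in mirroring the list above; the actual SPECIAL_TOKENS object in slp may expose a different API:

```python
# Illustrative stand-in for slp's SPECIAL_TOKENS, mirroring the documented token -> index mapping
SPECIAL_TOKEN_IDS = {
    "[PAD]": 0,
    "[MASK]": 1,
    "[UNK]": 2,
    "[BOS]": 3,
    "[EOS]": 4,
    "[CLS]": 5,
    "[SEP]": 6,
    "[PAUSE]": 7,
}

# Reserve the special ids at the start of a vocabulary, then add regular words after them
vocab = dict(SPECIAL_TOKEN_IDS)
for word in ["hello", "world"]:
    vocab.setdefault(word, len(vocab))

pad_idx = vocab["[PAD]"]  # e.g. passed as padding_idx to torch.nn.Embedding
```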
OmegaConfExtended
OmegaConfExtended Extended OmegaConf class, to include argparse style CLI arguments
Unfortunately the original authors are not interested in providing integration with argparse (https://github.com/omry/omegaconf/issues/569), so we have to get by with this extension.
from_argparse(parser, args=None, include_none=False)
staticmethod
from_argparse Static method to convert argparse arguments into OmegaConf DictConfig objects
We parse the command line arguments and separate the user provided values and the default values. This is useful for merging with a config file.
Parameters:

Name | Type | Description | Default
---|---|---|---
parser | ArgumentParser | Parser for argparse arguments | required
args | Optional[List[str]] | Optional sys.argv style args. Use this only for testing; by default sys.argv[1:] is used | None
include_none | bool | If True, arguments whose value is None are kept in the resulting configs | False

Returns:

Type | Description
---|---
Tuple[DictConfig, DictConfig] | (user provided cli args, default cli args) as a tuple of omegaconf.DictConfigs
Examples:
```python
>>> import argparse
>>> from slp.config.omegaconf import OmegaConfExtended
>>> parser = argparse.ArgumentParser("My cool model")
>>> parser.add_argument("--hidden", dest="model.hidden", type=int, default=20)
>>> user_provided_args, default_args = OmegaConfExtended.from_argparse(parser, args=["--hidden", "100"])
>>> user_provided_args
{'model': {'hidden': 100}}
>>> default_args
{}
>>> user_provided_args, default_args = OmegaConfExtended.from_argparse(parser)
>>> user_provided_args
{}
>>> default_args
{'model': {'hidden': 20}}
```
Source code in slp/config/omegaconf.py
```python
@staticmethod
def from_argparse(
    parser: argparse.ArgumentParser,
    args: Optional[List[str]] = None,
    include_none: bool = False,
) -> Tuple[DictConfig, DictConfig]:
    """from_argparse Static method to convert argparse arguments into OmegaConf DictConfig objects
We parse the command line arguments and separate the user provided values and the default values.
This is useful for merging with a config file.
Args:
parser (argparse.ArgumentParser): Parser for argparse arguments
args (Optional[List[str]]): Optional input sys.argv style args. Useful for testing.
Use this only for testing. By default it uses sys.argv[1:]
Returns:
Tuple[omegaconf.DictConfig, omegaconf.DictConfig]: (user provided cli args, default cli args) as a tuple of omegaconf.DictConfigs
Examples:
>>> import argparse
>>> from slp.config.omegaconf import OmegaConfExtended
>>> parser = argparse.ArgumentParser("My cool model")
>>> parser.add_argument("--hidden", dest="model.hidden", type=int, default=20)
>>> user_provided_args, default_args = OmegaConfExtended.from_argparse(parser, args=["--hidden", "100"])
>>> user_provided_args
{'model': {'hidden': 100}}
>>> default_args
{}
>>> user_provided_args, default_args = OmegaConfExtended.from_argparse(parser)
>>> user_provided_args
{}
>>> default_args
{'model': {'hidden': 20}}
"""
dest_to_arg = {v.dest: k for k, v in parser._option_string_actions.items()}
all_args = vars(parser.parse_args(args=args))
provided_args = {}
default_args = {}
for k, v in all_args.items():
if dest_to_arg[k] in sys.argv:
provided_args[k] = v
else:
default_args[k] = v
provided = OmegaConf.create(_nest(provided_args, include_none=include_none))
defaults = OmegaConf.create(_nest(default_args, include_none=include_none))
return provided, defaults
from_yaml(file_)
staticmethod
Alias for OmegaConf.load. OmegaConf.from_yaml was removed at some point; this brings it back.
Parameters:

Name | Type | Description | Default
---|---|---|---
file_ | Union[str, pathlib.Path, IO[Any]] | File to load or file descriptor | required

Returns:

Type | Description
---|---
Union[DictConfig, ListConfig] | The loaded configuration
Source code in slp/config/omegaconf.py
```python
@staticmethod
def from_yaml(
    file_: Union[str, pathlib.Path, IO[Any]]
) -> Union[DictConfig, ListConfig]:
    """Alias for OmegaConf.load

    OmegaConf.from_yaml got removed at some point. Bring it back.

    Args:
        file_ (Union[str, pathlib.Path, IO[Any]]): file to load or file descriptor

    Returns:
        Union[DictConfig, ListConfig]: The loaded configuration
    """
    return OmegaConfExtended.load(file_)
```
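A minimal usage sketch (the file name is illustrative):

```python
from slp.config.omegaconf import OmegaConfExtended as OmegaConf

# Equivalent to OmegaConf.load; returns a DictConfig (mapping) or ListConfig (sequence)
cfg = OmegaConf.from_yaml("config.yaml")
print(OmegaConf.to_yaml(cfg))
```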