Text Prediction - Customization and Hyperparameter Search¶
This advanced tutorial teaches you how to control the hyperparameter
tuning process in TextPredictor by specifying:
- A custom search space of candidate hyperparameter values to consider.
- Which hyperparameter optimization (HPO) method should be used to search through this space.
import numpy as np
import warnings
import autogluon as ag
warnings.filterwarnings('ignore')
np.random.seed(123)
Stanford Sentiment Treebank Data¶
For demonstration, we use the Stanford Sentiment Treebank (SST) dataset.
from autogluon.core import TabularDataset
subsample_size = 1000  # subsample for a faster demo; you may try a larger value
train_data = TabularDataset('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/train.parquet')
test_data = TabularDataset('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/dev.parquet')
train_data = train_data.sample(n=subsample_size, random_state=0)
train_data.head(10)
| sentence | label | |
|---|---|---|
| 43787 | very pleasing at its best moments | 1 |
| 16159 | , american chai is enough to make you put away... | 0 |
| 59015 | too much like an infomercial for ram dass 's l... | 0 |
| 5108 | a stirring visual sequence | 1 |
| 67052 | cool visual backmasking | 1 |
| 35938 | hard ground | 0 |
| 49879 | the striking , quietly vulnerable personality ... | 1 |
| 51591 | pan nalin 's exposition is beautiful and myste... | 1 |
| 56780 | wonderfully loopy | 1 |
| 28518 | most beautiful , evocative | 1 |
Configuring the TextPredictor¶
Pre-configured Hyperparameters¶
We provide a set of pre-configured hyperparameter settings. You can list the
presets registered in ag_text_presets via list_presets.
from autogluon.text import ag_text_presets, list_presets
list_presets()
{'simple_presets': ['default',
'lower_quality_fast_train',
'medium_quality_faster_train',
'best_quality'],
'advanced_presets': ['electra_small_fuse_late',
'electra_base_fuse_late',
'electra_large_fuse_late',
'roberta_base_fuse_late',
'multi_cased_bert_base_fuse_late',
'electra_base_fuse_early',
'electra_base_all_text']}
There are two kinds of presets. The simple_presets are pre-defined
configurations recommended for most users, which let you specify
whether you care more about predictive accuracy ('best_quality') or
more about training/inference speed ('lower_quality_fast_train').
The advanced_presets are pre-configured networks that use different
Transformer backbones, such as ELECTRA, RoBERTa, or Multilingual BERT,
and different feature-fusion strategies. For example,
electra_small_fuse_late means we use the ELECTRA-small model as the
network backbone for text fields and apply the late-fusion strategy
described in “What’s happening inside?”. The default
preset is the same as electra_base_fuse_late. Now let’s train a
model on our data with a specified preset.
from autogluon.text import TextPredictor
predictor = TextPredictor(path='ag_text_sst_electra_small', eval_metric='acc', label='label')
predictor.set_verbosity(0)
predictor.fit(train_data, presets='electra_small_fuse_late', time_limit=60, seed=123)
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_electra_small/task0/training.log
<autogluon.text.text_prediction.predictor.predictor.TextPredictor at 0x7f42d175b9a0>
Below we report both the f1 and acc metrics for our predictions.
Note that if you care most about the F1 score, you should set
eval_metric='f1' when constructing the TextPredictor.
predictor.evaluate(test_data, metrics=['f1', 'acc'])
{'f1': 0.7820965842167256, 'acc': 0.7878440366972477}
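For intuition about what these two numbers measure, accuracy and binary F1 can be computed directly from label/prediction pairs. The sketch below is plain Python, independent of AutoGluon, with illustrative data:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(accuracy(y_true, y_pred))  # 4 of 6 correct -> 0.666...
print(f1_score(y_true, y_pred))  # precision = recall = 3/4 -> 0.75
```

Unlike accuracy, F1 ignores true negatives, which is why the two metrics above can disagree on imbalanced data.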
To view the hyperparameters registered under a given preset, call
ag_text_presets.create(presets_name), e.g.,
import pprint
pprint.pprint(ag_text_presets.create('electra_small_fuse_late'))
{'models': {'MultimodalTextModel': {'backend': 'gluonnlp_v0',
'search_space': {'model.backbone.name': 'google_electra_small',
'model.network.agg_net.agg_type': 'concat',
'model.network.aggregate_categorical': True,
'model.use_avg_nbest': True,
'optimization.batch_size': 128,
'optimization.layerwise_lr_decay': 0.8,
'optimization.lr': Categorical[0.0001],
'optimization.nbest': 3,
'optimization.num_train_epochs': 10,
'optimization.per_device_batch_size': 8,
'optimization.wd': 0.0001,
'preprocessing.categorical.convert_to_text': False,
'preprocessing.numerical.convert_to_text': False}}},
'tune_kwargs': {'num_trials': 1,
'scheduler_options': None,
'search_options': None,
'search_strategy': 'local',
'searcher': 'local_random'}}
Another way to specify a custom TextPredictor configuration is via the
hyperparameters argument.
predictor = TextPredictor(path='ag_text_customize1', eval_metric='acc', label='label')
predictor.fit(train_data, hyperparameters=ag_text_presets.create('electra_small_fuse_late'),
time_limit=30, seed=123)
Problem Type="binary"
Column Types:
- "sentence": text
- "label": categorical
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize1/task0/training.log
Fitting and transforming the train data...
Done! Preprocessor saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize1/task0/preprocessor.pkl
Process dev set...
Done!
Max length for chunking text: 64, Stochastic chunk: Train-False/Test-False, Test #repeat: 1.
#Total Params/Fixed Params=13516290/0
Using gradient accumulation. Global batch size = 128
Local training results will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize1/task0/results_local.jsonl.
[Iter 1/70, Epoch 0] train loss=8.12e-01, gnorm=7.50e+00, lr=1.43e-05, #samples processed=128, #sample per second=337.60. ETA=0.44min
[Iter 2/70, Epoch 0] train loss=9.91e-01, gnorm=7.33e+00, lr=2.86e-05, #samples processed=128, #sample per second=517.55. ETA=0.36min
[Iter 2/70, Epoch 0] Validation f1=7.1478e-01, mcc=1.5253e-01, roc_auc=5.8768e-01, accuracy=5.8500e-01, log_loss=7.1969e-01, Time computing validation-score=0.118s, Total time spent=0.01min. Found improved model=True, Improved top-3 models=True
[Iter 3/70, Epoch 0] train loss=8.28e-01, gnorm=4.83e+00, lr=4.29e-05, #samples processed=128, #sample per second=284.98. ETA=0.40min
[Iter 4/70, Epoch 0] train loss=7.38e-01, gnorm=6.18e+00, lr=5.71e-05, #samples processed=128, #sample per second=583.27. ETA=0.36min
[Iter 4/70, Epoch 0] Validation f1=7.2483e-01, mcc=1.9467e-01, roc_auc=6.8788e-01, accuracy=5.9000e-01, log_loss=6.9877e-01, Time computing validation-score=0.119s, Total time spent=0.03min. Found improved model=True, Improved top-3 models=True
[Iter 5/70, Epoch 0] train loss=7.61e-01, gnorm=4.41e+00, lr=7.14e-05, #samples processed=128, #sample per second=251.78. ETA=0.39min
[Iter 6/70, Epoch 0] train loss=6.67e-01, gnorm=2.78e+00, lr=8.57e-05, #samples processed=128, #sample per second=562.24. ETA=0.36min
[Iter 6/70, Epoch 0] Validation f1=4.6988e-01, mcc=1.8355e-01, roc_auc=6.8758e-01, accuracy=5.6000e-01, log_loss=6.8424e-01, Time computing validation-score=0.119s, Total time spent=0.04min. Found improved model=False, Improved top-3 models=True
[Iter 7/70, Epoch 0] train loss=8.15e-01, gnorm=5.79e+00, lr=1.00e-04, #samples processed=128, #sample per second=318.21. ETA=0.37min
[Iter 8/70, Epoch 1] train loss=6.81e-01, gnorm=3.96e+00, lr=9.84e-05, #samples processed=128, #sample per second=574.12. ETA=0.34min
[Iter 8/70, Epoch 1] Validation f1=7.1711e-01, mcc=1.3551e-01, roc_auc=7.6747e-01, accuracy=5.7000e-01, log_loss=6.8363e-01, Time computing validation-score=0.118s, Total time spent=0.05min. Found improved model=False, Improved top-3 models=True
[Iter 9/70, Epoch 1] train loss=7.53e-01, gnorm=4.29e+00, lr=9.68e-05, #samples processed=128, #sample per second=282.67. ETA=0.35min
[Iter 10/70, Epoch 1] train loss=6.53e-01, gnorm=3.71e+00, lr=9.52e-05, #samples processed=128, #sample per second=598.49. ETA=0.33min
[Iter 10/70, Epoch 1] Validation f1=7.3571e-01, mcc=2.6739e-01, roc_auc=7.7747e-01, accuracy=6.3000e-01, log_loss=6.1811e-01, Time computing validation-score=0.121s, Total time spent=0.06min. Found improved model=True, Improved top-3 models=True
[Iter 11/70, Epoch 1] train loss=6.27e-01, gnorm=1.77e+00, lr=9.37e-05, #samples processed=128, #sample per second=230.95. ETA=0.35min
[Iter 12/70, Epoch 1] train loss=6.27e-01, gnorm=3.18e+00, lr=9.21e-05, #samples processed=128, #sample per second=596.67. ETA=0.33min
[Iter 12/70, Epoch 1] Validation f1=7.5385e-01, mcc=3.5976e-01, roc_auc=8.0626e-01, accuracy=6.8000e-01, log_loss=5.8020e-01, Time computing validation-score=0.118s, Total time spent=0.07min. Found improved model=True, Improved top-3 models=True
[Iter 13/70, Epoch 1] train loss=6.22e-01, gnorm=1.92e+00, lr=9.05e-05, #samples processed=128, #sample per second=230.35. ETA=0.34min
[Iter 14/70, Epoch 1] train loss=6.25e-01, gnorm=3.05e+00, lr=8.89e-05, #samples processed=128, #sample per second=597.07. ETA=0.32min
[Iter 14/70, Epoch 1] Validation f1=7.5277e-01, mcc=3.4118e-01, roc_auc=8.2475e-01, accuracy=6.6500e-01, log_loss=5.6995e-01, Time computing validation-score=0.118s, Total time spent=0.09min. Found improved model=False, Improved top-3 models=True
[Iter 15/70, Epoch 2] train loss=5.97e-01, gnorm=3.06e+00, lr=8.73e-05, #samples processed=128, #sample per second=281.40. ETA=0.33min
[Iter 16/70, Epoch 2] train loss=4.95e-01, gnorm=3.44e+00, lr=8.57e-05, #samples processed=128, #sample per second=586.38. ETA=0.31min
[Iter 16/70, Epoch 2] Validation f1=7.8947e-01, mcc=5.1291e-01, roc_auc=8.3505e-01, accuracy=7.6000e-01, log_loss=5.0573e-01, Time computing validation-score=0.118s, Total time spent=0.10min. Found improved model=True, Improved top-3 models=True
[Iter 17/70, Epoch 2] train loss=5.09e-01, gnorm=2.69e+00, lr=8.41e-05, #samples processed=128, #sample per second=235.76. ETA=0.32min
[Iter 18/70, Epoch 2] train loss=4.50e-01, gnorm=2.05e+00, lr=8.25e-05, #samples processed=128, #sample per second=556.97. ETA=0.30min
[Iter 18/70, Epoch 2] Validation f1=7.5000e-01, mcc=3.3049e-01, roc_auc=8.5253e-01, accuracy=6.6000e-01, log_loss=6.1880e-01, Time computing validation-score=0.118s, Total time spent=0.11min. Found improved model=False, Improved top-3 models=False
[Iter 19/70, Epoch 2] train loss=5.70e-01, gnorm=7.80e+00, lr=8.10e-05, #samples processed=128, #sample per second=376.68. ETA=0.30min
[Iter 20/70, Epoch 2] train loss=4.79e-01, gnorm=6.47e+00, lr=7.94e-05, #samples processed=128, #sample per second=586.26. ETA=0.29min
[Iter 20/70, Epoch 2] Validation f1=8.2353e-01, mcc=5.7790e-01, roc_auc=8.6899e-01, accuracy=7.9000e-01, log_loss=4.6943e-01, Time computing validation-score=0.118s, Total time spent=0.12min. Found improved model=True, Improved top-3 models=True
[Iter 21/70, Epoch 2] train loss=3.62e-01, gnorm=2.34e+00, lr=7.78e-05, #samples processed=128, #sample per second=233.86. ETA=0.29min
[Iter 22/70, Epoch 3] train loss=4.85e-01, gnorm=5.64e+00, lr=7.62e-05, #samples processed=128, #sample per second=568.10. ETA=0.28min
[Iter 22/70, Epoch 3] Validation f1=8.3258e-01, mcc=6.2592e-01, roc_auc=8.8414e-01, accuracy=8.1500e-01, log_loss=4.2885e-01, Time computing validation-score=0.119s, Total time spent=0.13min. Found improved model=True, Improved top-3 models=True
[Iter 23/70, Epoch 3] train loss=3.66e-01, gnorm=7.86e+00, lr=7.46e-05, #samples processed=128, #sample per second=235.35. ETA=0.28min
[Iter 24/70, Epoch 3] train loss=4.42e-01, gnorm=2.98e+00, lr=7.30e-05, #samples processed=128, #sample per second=598.96. ETA=0.27min
[Iter 24/70, Epoch 3] Validation f1=8.0000e-01, mcc=4.9902e-01, roc_auc=8.9091e-01, accuracy=7.4000e-01, log_loss=5.9853e-01, Time computing validation-score=0.119s, Total time spent=0.14min. Found improved model=False, Improved top-3 models=False
[Iter 25/70, Epoch 3] train loss=4.39e-01, gnorm=9.09e+00, lr=7.14e-05, #samples processed=128, #sample per second=375.58. ETA=0.26min
[Iter 26/70, Epoch 3] train loss=4.99e-01, gnorm=8.40e+00, lr=6.98e-05, #samples processed=128, #sample per second=604.05. ETA=0.25min
[Iter 26/70, Epoch 3] Validation f1=8.3817e-01, mcc=6.1207e-01, roc_auc=9.0404e-01, accuracy=8.0500e-01, log_loss=4.6940e-01, Time computing validation-score=0.119s, Total time spent=0.15min. Found improved model=False, Improved top-3 models=True
[Iter 27/70, Epoch 3] train loss=3.76e-01, gnorm=2.99e+00, lr=6.83e-05, #samples processed=128, #sample per second=274.95. ETA=0.25min
[Iter 28/70, Epoch 3] train loss=2.89e-01, gnorm=4.31e+00, lr=6.67e-05, #samples processed=128, #sample per second=572.93. ETA=0.24min
[Iter 28/70, Epoch 3] Validation f1=8.4716e-01, mcc=6.4595e-01, roc_auc=9.1222e-01, accuracy=8.2500e-01, log_loss=3.8855e-01, Time computing validation-score=0.118s, Total time spent=0.17min. Found improved model=True, Improved top-3 models=True
[Iter 29/70, Epoch 4] train loss=4.07e-01, gnorm=3.82e+00, lr=6.51e-05, #samples processed=128, #sample per second=235.77. ETA=0.24min
[Iter 30/70, Epoch 4] train loss=3.36e-01, gnorm=4.39e+00, lr=6.35e-05, #samples processed=128, #sample per second=576.51. ETA=0.23min
[Iter 30/70, Epoch 4] Validation f1=8.5088e-01, mcc=6.5595e-01, roc_auc=9.1717e-01, accuracy=8.3000e-01, log_loss=3.8304e-01, Time computing validation-score=0.118s, Total time spent=0.18min. Found improved model=True, Improved top-3 models=True
[Iter 31/70, Epoch 4] train loss=2.70e-01, gnorm=2.92e+00, lr=6.19e-05, #samples processed=128, #sample per second=226.21. ETA=0.23min
[Iter 32/70, Epoch 4] train loss=3.40e-01, gnorm=4.62e+00, lr=6.03e-05, #samples processed=128, #sample per second=560.67. ETA=0.22min
[Iter 32/70, Epoch 4] Validation f1=8.5586e-01, mcc=6.7625e-01, roc_auc=9.1909e-01, accuracy=8.4000e-01, log_loss=3.6101e-01, Time computing validation-score=0.119s, Total time spent=0.19min. Found improved model=True, Improved top-3 models=True
[Iter 33/70, Epoch 4] train loss=2.81e-01, gnorm=4.18e+00, lr=5.87e-05, #samples processed=128, #sample per second=228.21. ETA=0.22min
[Iter 34/70, Epoch 4] train loss=3.37e-01, gnorm=4.10e+00, lr=5.71e-05, #samples processed=128, #sample per second=592.90. ETA=0.21min
[Iter 34/70, Epoch 4] Validation f1=8.3951e-01, mcc=6.1432e-01, roc_auc=9.2535e-01, accuracy=8.0500e-01, log_loss=4.4675e-01, Time computing validation-score=0.119s, Total time spent=0.20min. Found improved model=False, Improved top-3 models=False
[Iter 35/70, Epoch 4] train loss=2.38e-01, gnorm=4.59e+00, lr=5.56e-05, #samples processed=128, #sample per second=374.26. ETA=0.21min
[Iter 36/70, Epoch 5] train loss=2.89e-01, gnorm=5.77e+00, lr=5.40e-05, #samples processed=128, #sample per second=537.56. ETA=0.20min
[Iter 36/70, Epoch 5] Validation f1=8.3740e-01, mcc=6.0758e-01, roc_auc=9.2768e-01, accuracy=8.0000e-01, log_loss=4.5780e-01, Time computing validation-score=0.122s, Total time spent=0.21min. Found improved model=False, Improved top-3 models=False
[Iter 37/70, Epoch 5] train loss=2.30e-01, gnorm=5.15e+00, lr=5.24e-05, #samples processed=128, #sample per second=370.82. ETA=0.19min
[Iter 38/70, Epoch 5] train loss=2.87e-01, gnorm=3.15e+00, lr=5.08e-05, #samples processed=128, #sample per second=577.55. ETA=0.18min
[Iter 38/70, Epoch 5] Validation f1=8.5586e-01, mcc=6.7625e-01, roc_auc=9.2859e-01, accuracy=8.4000e-01, log_loss=3.6213e-01, Time computing validation-score=0.119s, Total time spent=0.22min. Found improved model=True, Improved top-3 models=True
[Iter 39/70, Epoch 5] train loss=2.54e-01, gnorm=5.52e+00, lr=4.92e-05, #samples processed=128, #sample per second=232.46. ETA=0.18min
[Iter 40/70, Epoch 5] train loss=3.06e-01, gnorm=3.96e+00, lr=4.76e-05, #samples processed=128, #sample per second=582.19. ETA=0.17min
[Iter 40/70, Epoch 5] Validation f1=8.5973e-01, mcc=6.8659e-01, roc_auc=9.2980e-01, accuracy=8.4500e-01, log_loss=3.5579e-01, Time computing validation-score=0.118s, Total time spent=0.24min. Found improved model=True, Improved top-3 models=True
[Iter 41/70, Epoch 5] train loss=2.34e-01, gnorm=3.01e+00, lr=4.60e-05, #samples processed=128, #sample per second=225.50. ETA=0.17min
[Iter 42/70, Epoch 5] train loss=2.02e-01, gnorm=4.99e+00, lr=4.44e-05, #samples processed=128, #sample per second=568.64. ETA=0.16min
[Iter 42/70, Epoch 5] Validation f1=8.6726e-01, mcc=6.9642e-01, roc_auc=9.3010e-01, accuracy=8.5000e-01, log_loss=3.7531e-01, Time computing validation-score=0.119s, Total time spent=0.25min. Found improved model=True, Improved top-3 models=True
[Iter 43/70, Epoch 6] train loss=2.02e-01, gnorm=3.00e+00, lr=4.29e-05, #samples processed=128, #sample per second=233.82. ETA=0.16min
[Iter 44/70, Epoch 6] train loss=1.67e-01, gnorm=3.03e+00, lr=4.13e-05, #samples processed=128, #sample per second=573.44. ETA=0.15min
[Iter 44/70, Epoch 6] Validation f1=8.6580e-01, mcc=6.8771e-01, roc_auc=9.3071e-01, accuracy=8.4500e-01, log_loss=3.9351e-01, Time computing validation-score=0.118s, Total time spent=0.26min. Found improved model=False, Improved top-3 models=True
[Iter 45/70, Epoch 6] train loss=2.47e-01, gnorm=4.18e+00, lr=3.97e-05, #samples processed=128, #sample per second=284.03. ETA=0.15min
[Iter 46/70, Epoch 6] train loss=2.24e-01, gnorm=3.66e+00, lr=3.81e-05, #samples processed=128, #sample per second=571.05. ETA=0.14min
[Iter 46/70, Epoch 6] Validation f1=8.4167e-01, mcc=6.2160e-01, roc_auc=9.3202e-01, accuracy=8.1000e-01, log_loss=4.2485e-01, Time computing validation-score=0.119s, Total time spent=0.27min. Found improved model=False, Improved top-3 models=False
[Iter 47/70, Epoch 6] train loss=1.42e-01, gnorm=4.37e+00, lr=3.65e-05, #samples processed=128, #sample per second=376.00. ETA=0.13min
[Iter 48/70, Epoch 6] train loss=1.73e-01, gnorm=6.99e+00, lr=3.49e-05, #samples processed=128, #sample per second=590.66. ETA=0.13min
[Iter 48/70, Epoch 6] Validation f1=8.5841e-01, mcc=6.7605e-01, roc_auc=9.3444e-01, accuracy=8.4000e-01, log_loss=3.7702e-01, Time computing validation-score=0.118s, Total time spent=0.28min. Found improved model=False, Improved top-3 models=False
[Iter 49/70, Epoch 6] train loss=2.14e-01, gnorm=4.04e+00, lr=3.33e-05, #samples processed=128, #sample per second=368.62. ETA=0.12min
[Iter 50/70, Epoch 7] train loss=1.65e-01, gnorm=4.47e+00, lr=3.17e-05, #samples processed=128, #sample per second=568.72. ETA=0.12min
[Iter 50/70, Epoch 7] Validation f1=8.6239e-01, mcc=6.9772e-01, roc_auc=9.3414e-01, accuracy=8.5000e-01, log_loss=3.5093e-01, Time computing validation-score=0.118s, Total time spent=0.29min. Found improved model=True, Improved top-3 models=True
[Iter 51/70, Epoch 7] train loss=1.32e-01, gnorm=5.40e+00, lr=3.02e-05, #samples processed=128, #sample per second=237.23. ETA=0.11min
[Iter 52/70, Epoch 7] train loss=2.06e-01, gnorm=9.37e+00, lr=2.86e-05, #samples processed=128, #sample per second=569.42. ETA=0.10min
[Iter 52/70, Epoch 7] Validation f1=8.5714e-01, mcc=6.7601e-01, roc_auc=9.3465e-01, accuracy=8.4000e-01, log_loss=3.6297e-01, Time computing validation-score=0.118s, Total time spent=0.30min. Found improved model=False, Improved top-3 models=False
[Iter 53/70, Epoch 7] train loss=1.28e-01, gnorm=4.58e+00, lr=2.70e-05, #samples processed=128, #sample per second=381.68. ETA=0.10min
[Iter 54/70, Epoch 7] train loss=1.45e-01, gnorm=3.99e+00, lr=2.54e-05, #samples processed=128, #sample per second=549.78. ETA=0.09min
[Iter 54/70, Epoch 7] Validation f1=8.6087e-01, mcc=6.7700e-01, roc_auc=9.3505e-01, accuracy=8.4000e-01, log_loss=4.1500e-01, Time computing validation-score=0.121s, Total time spent=0.31min. Found improved model=False, Improved top-3 models=False
[Iter 55/70, Epoch 7] train loss=1.92e-01, gnorm=3.91e+00, lr=2.38e-05, #samples processed=128, #sample per second=372.08. ETA=0.09min
[Iter 56/70, Epoch 7] train loss=1.85e-01, gnorm=6.21e+00, lr=2.22e-05, #samples processed=128, #sample per second=568.39. ETA=0.08min
[Iter 56/70, Epoch 7] Validation f1=8.5000e-01, mcc=6.4268e-01, roc_auc=9.3444e-01, accuracy=8.2000e-01, log_loss=4.6364e-01, Time computing validation-score=0.119s, Total time spent=0.32min. Found improved model=False, Improved top-3 models=False
[Iter 57/70, Epoch 8] train loss=1.59e-01, gnorm=4.48e+00, lr=2.06e-05, #samples processed=128, #sample per second=373.63. ETA=0.07min
[Iter 58/70, Epoch 8] train loss=1.50e-01, gnorm=6.33e+00, lr=1.90e-05, #samples processed=128, #sample per second=596.10. ETA=0.07min
[Iter 58/70, Epoch 8] Validation f1=8.5957e-01, mcc=6.6951e-01, roc_auc=9.3495e-01, accuracy=8.3500e-01, log_loss=4.4035e-01, Time computing validation-score=0.118s, Total time spent=0.33min. Found improved model=False, Improved top-3 models=False
[Iter 59/70, Epoch 8] train loss=1.49e-01, gnorm=3.71e+00, lr=1.75e-05, #samples processed=128, #sample per second=380.92. ETA=0.06min
[Iter 60/70, Epoch 8] train loss=7.61e-02, gnorm=3.91e+00, lr=1.59e-05, #samples processed=128, #sample per second=568.56. ETA=0.06min
[Iter 60/70, Epoch 8] Validation f1=8.5590e-01, mcc=6.6642e-01, roc_auc=9.3434e-01, accuracy=8.3500e-01, log_loss=4.1555e-01, Time computing validation-score=0.118s, Total time spent=0.34min. Found improved model=False, Improved top-3 models=False
[Iter 61/70, Epoch 8] train loss=1.51e-01, gnorm=3.67e+00, lr=1.43e-05, #samples processed=128, #sample per second=369.26. ETA=0.05min
[Iter 62/70, Epoch 8] train loss=1.65e-01, gnorm=3.16e+00, lr=1.27e-05, #samples processed=128, #sample per second=556.19. ETA=0.04min
[Iter 62/70, Epoch 8] Validation f1=8.5965e-01, mcc=6.7638e-01, roc_auc=9.3404e-01, accuracy=8.4000e-01, log_loss=3.9252e-01, Time computing validation-score=0.118s, Total time spent=0.35min. Found improved model=False, Improved top-3 models=False
[Iter 63/70, Epoch 8] train loss=1.30e-01, gnorm=3.62e+00, lr=1.11e-05, #samples processed=128, #sample per second=382.58. ETA=0.04min
[Iter 64/70, Epoch 9] train loss=1.44e-01, gnorm=2.99e+00, lr=9.52e-06, #samples processed=128, #sample per second=590.55. ETA=0.03min
[Iter 64/70, Epoch 9] Validation f1=8.5965e-01, mcc=6.7638e-01, roc_auc=9.3364e-01, accuracy=8.4000e-01, log_loss=3.8500e-01, Time computing validation-score=0.117s, Total time spent=0.36min. Found improved model=False, Improved top-3 models=False
[Iter 65/70, Epoch 9] train loss=1.96e-01, gnorm=4.47e+00, lr=7.94e-06, #samples processed=128, #sample per second=384.73. ETA=0.03min
[Iter 66/70, Epoch 9] train loss=1.25e-01, gnorm=3.28e+00, lr=6.35e-06, #samples processed=128, #sample per second=586.18. ETA=0.02min
[Iter 66/70, Epoch 9] Validation f1=8.5965e-01, mcc=6.7638e-01, roc_auc=9.3343e-01, accuracy=8.4000e-01, log_loss=3.8543e-01, Time computing validation-score=0.123s, Total time spent=0.37min. Found improved model=False, Improved top-3 models=False
[Iter 67/70, Epoch 9] train loss=1.28e-01, gnorm=5.95e+00, lr=4.76e-06, #samples processed=128, #sample per second=357.11. ETA=0.02min
[Iter 68/70, Epoch 9] train loss=1.51e-01, gnorm=4.37e+00, lr=3.17e-06, #samples processed=128, #sample per second=527.41. ETA=0.01min
[Iter 68/70, Epoch 9] Validation f1=8.5965e-01, mcc=6.7638e-01, roc_auc=9.3404e-01, accuracy=8.4000e-01, log_loss=3.9105e-01, Time computing validation-score=0.119s, Total time spent=0.38min. Found improved model=False, Improved top-3 models=False
[Iter 69/70, Epoch 9] train loss=1.09e-01, gnorm=3.69e+00, lr=1.59e-06, #samples processed=128, #sample per second=366.43. ETA=0.01min
[Iter 70/70, Epoch 9] train loss=1.59e-01, gnorm=4.69e+00, lr=0.00e+00, #samples processed=128, #sample per second=579.29. ETA=0.00min
[Iter 70/70, Epoch 9] Validation f1=8.5965e-01, mcc=6.7638e-01, roc_auc=9.3414e-01, accuracy=8.4000e-01, log_loss=3.9298e-01, Time computing validation-score=0.117s, Total time spent=0.39min. Found improved model=False, Improved top-3 models=False
Training completed. Auto-saving to "ag_text_customize1/". For loading the model, you can use predictor = TextPredictor.load("ag_text_customize1/")
<autogluon.text.text_prediction.predictor.predictor.TextPredictor at 0x7f42cc6e9d60>
Custom Hyperparameter Values¶
The pre-registered configurations provide reasonable default hyperparameters. A common workflow is to first train a model with one of the presets and then tune a few hyperparameters to see whether performance can be improved further. In the example below, we set the number of training epochs to 5 and the learning rate to 5E-5.
hyperparameters = ag_text_presets.create('electra_small_fuse_late')
hyperparameters['models']['MultimodalTextModel']['search_space']['optimization.num_train_epochs'] = 5
hyperparameters['models']['MultimodalTextModel']['search_space']['optimization.lr'] = ag.core.space.Categorical(5E-5)
predictor = TextPredictor(path='ag_text_customize2', eval_metric='acc', label='label')
predictor.fit(train_data, hyperparameters=hyperparameters, time_limit=30, seed=123)
Problem Type="binary"
Column Types:
- "sentence": text
- "label": categorical
The GluonNLP V0 backend is used. We will use 8 cpus and 1 gpus to train each trial.
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize2/task0/training.log
Fitting and transforming the train data...
Done! Preprocessor saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize2/task0/preprocessor.pkl
Process dev set...
Done!
Max length for chunking text: 64, Stochastic chunk: Train-False/Test-False, Test #repeat: 1.
#Total Params/Fixed Params=13516290/0
Using gradient accumulation. Global batch size = 128
Local training results will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize2/task0/results_local.jsonl.
[Iter 1/35, Epoch 0] train loss=8.41e-01, gnorm=6.98e+00, lr=1.25e-05, #samples processed=128, #sample per second=355.03. ETA=0.20min
[Iter 2/35, Epoch 0] train loss=8.91e-01, gnorm=7.04e+00, lr=2.50e-05, #samples processed=128, #sample per second=557.25. ETA=0.16min
[Iter 2/35, Epoch 0] Validation f1=7.1480e-01, mcc=1.9360e-01, roc_auc=5.8747e-01, accuracy=6.0500e-01, log_loss=7.0601e-01, Time computing validation-score=0.120s, Total time spent=0.01min. Found improved model=True, Improved top-3 models=True
[Iter 3/35, Epoch 0] train loss=8.24e-01, gnorm=4.26e+00, lr=3.75e-05, #samples processed=128, #sample per second=277.18. ETA=0.19min
[Iter 4/35, Epoch 0] train loss=7.94e-01, gnorm=5.95e+00, lr=5.00e-05, #samples processed=128, #sample per second=576.82. ETA=0.16min
[Iter 4/35, Epoch 0] Validation f1=7.2241e-01, mcc=1.7854e-01, roc_auc=6.8667e-01, accuracy=5.8500e-01, log_loss=7.1489e-01, Time computing validation-score=0.120s, Total time spent=0.02min. Found improved model=False, Improved top-3 models=True
[Iter 5/35, Epoch 0] train loss=7.59e-01, gnorm=3.86e+00, lr=4.84e-05, #samples processed=128, #sample per second=323.20. ETA=0.17min
[Iter 6/35, Epoch 0] train loss=7.23e-01, gnorm=4.00e+00, lr=4.68e-05, #samples processed=128, #sample per second=566.56. ETA=0.15min
[Iter 6/35, Epoch 0] Validation f1=6.6346e-01, mcc=3.0358e-01, roc_auc=6.9131e-01, accuracy=6.5000e-01, log_loss=6.3602e-01, Time computing validation-score=0.118s, Total time spent=0.04min. Found improved model=True, Improved top-3 models=True
[Iter 7/35, Epoch 0] train loss=7.66e-01, gnorm=3.59e+00, lr=4.52e-05, #samples processed=128, #sample per second=250.89. ETA=0.16min
[Iter 8/35, Epoch 1] train loss=7.31e-01, gnorm=5.56e+00, lr=4.35e-05, #samples processed=128, #sample per second=566.87. ETA=0.15min
[Iter 8/35, Epoch 1] Validation f1=6.4646e-01, mcc=3.1585e-01, roc_auc=7.0596e-01, accuracy=6.5000e-01, log_loss=6.3103e-01, Time computing validation-score=0.118s, Total time spent=0.05min. Found improved model=True, Improved top-3 models=True
[Iter 9/35, Epoch 1] train loss=7.32e-01, gnorm=3.79e+00, lr=4.19e-05, #samples processed=128, #sample per second=231.74. ETA=0.15min
[Iter 10/35, Epoch 1] train loss=6.77e-01, gnorm=3.03e+00, lr=4.03e-05, #samples processed=128, #sample per second=581.80. ETA=0.14min
[Iter 10/35, Epoch 1] Validation f1=7.2241e-01, mcc=1.7854e-01, roc_auc=7.4485e-01, accuracy=5.8500e-01, log_loss=6.5272e-01, Time computing validation-score=0.119s, Total time spent=0.06min. Found improved model=False, Improved top-3 models=False
[Iter 11/35, Epoch 1] train loss=6.48e-01, gnorm=3.44e+00, lr=3.87e-05, #samples processed=128, #sample per second=368.32. ETA=0.14min
[Iter 12/35, Epoch 1] train loss=6.05e-01, gnorm=3.46e+00, lr=3.71e-05, #samples processed=128, #sample per second=578.69. ETA=0.13min
[Iter 12/35, Epoch 1] Validation f1=7.2185e-01, mcc=1.7438e-01, roc_auc=7.5465e-01, accuracy=5.8000e-01, log_loss=6.6024e-01, Time computing validation-score=0.119s, Total time spent=0.07min. Found improved model=False, Improved top-3 models=False
[Iter 13/35, Epoch 1] train loss=7.24e-01, gnorm=4.58e+00, lr=3.55e-05, #samples processed=128, #sample per second=353.63. ETA=0.12min
[Iter 14/35, Epoch 1] train loss=6.74e-01, gnorm=4.04e+00, lr=3.39e-05, #samples processed=128, #sample per second=589.16. ETA=0.11min
[Iter 14/35, Epoch 1] Validation f1=7.4194e-01, mcc=3.4987e-01, roc_auc=7.6152e-01, accuracy=6.8000e-01, log_loss=5.9471e-01, Time computing validation-score=0.118s, Total time spent=0.08min. Found improved model=True, Improved top-3 models=True
[Iter 15/35, Epoch 2] train loss=6.10e-01, gnorm=2.29e+00, lr=3.23e-05, #samples processed=128, #sample per second=231.21. ETA=0.11min
[Iter 16/35, Epoch 2] train loss=6.42e-01, gnorm=3.78e+00, lr=3.06e-05, #samples processed=128, #sample per second=587.82. ETA=0.11min
[Iter 16/35, Epoch 2] Validation f1=7.0531e-01, mcc=3.9516e-01, roc_auc=7.7364e-01, accuracy=6.9500e-01, log_loss=5.8792e-01, Time computing validation-score=0.119s, Total time spent=0.09min. Found improved model=True, Improved top-3 models=True
[Iter 17/35, Epoch 2] train loss=6.70e-01, gnorm=4.12e+00, lr=2.90e-05, #samples processed=128, #sample per second=236.14. ETA=0.10min
[Iter 18/35, Epoch 2] train loss=6.17e-01, gnorm=4.08e+00, lr=2.74e-05, #samples processed=128, #sample per second=556.57. ETA=0.10min
[Iter 18/35, Epoch 2] Validation f1=7.4708e-01, mcc=3.4501e-01, roc_auc=7.9747e-01, accuracy=6.7500e-01, log_loss=5.7028e-01, Time computing validation-score=0.120s, Total time spent=0.11min. Found improved model=False, Improved top-3 models=True
[Iter 19/35, Epoch 2] train loss=5.82e-01, gnorm=2.86e+00, lr=2.58e-05, #samples processed=128, #sample per second=280.43. ETA=0.09min
[Iter 20/35, Epoch 2] train loss=6.00e-01, gnorm=2.90e+00, lr=2.42e-05, #samples processed=128, #sample per second=583.62. ETA=0.08min
[Iter 20/35, Epoch 2] Validation f1=7.5735e-01, mcc=3.5610e-01, roc_auc=8.1495e-01, accuracy=6.7000e-01, log_loss=5.7067e-01, Time computing validation-score=0.119s, Total time spent=0.12min. Found improved model=False, Improved top-3 models=False
[Iter 21/35, Epoch 2] train loss=5.94e-01, gnorm=3.72e+00, lr=2.26e-05, #samples processed=128, #sample per second=364.60. ETA=0.08min
[Iter 22/35, Epoch 3] train loss=5.40e-01, gnorm=2.39e+00, lr=2.10e-05, #samples processed=128, #sample per second=568.28. ETA=0.07min
[Iter 22/35, Epoch 3] Validation f1=7.6680e-01, mcc=4.0855e-01, roc_auc=8.2121e-01, accuracy=7.0500e-01, log_loss=5.4653e-01, Time computing validation-score=0.118s, Total time spent=0.13min. Found improved model=True, Improved top-3 models=True
[Iter 23/35, Epoch 3] train loss=5.59e-01, gnorm=3.32e+00, lr=1.94e-05, #samples processed=128, #sample per second=235.64. ETA=0.07min
[Iter 24/35, Epoch 3] train loss=5.90e-01, gnorm=2.55e+00, lr=1.77e-05, #samples processed=128, #sample per second=591.23. ETA=0.06min
[Iter 24/35, Epoch 3] Validation f1=7.7043e-01, mcc=4.1333e-01, roc_auc=8.2980e-01, accuracy=7.0500e-01, log_loss=5.3667e-01, Time computing validation-score=0.119s, Total time spent=0.14min. Found improved model=True, Improved top-3 models=True
[Iter 25/35, Epoch 3] train loss=5.20e-01, gnorm=2.52e+00, lr=1.61e-05, #samples processed=128, #sample per second=240.71. ETA=0.06min
[Iter 26/35, Epoch 3] train loss=5.64e-01, gnorm=2.70e+00, lr=1.45e-05, #samples processed=128, #sample per second=583.40. ETA=0.05min
[Iter 26/35, Epoch 3] Validation f1=7.7778e-01, mcc=4.4077e-01, roc_auc=8.3596e-01, accuracy=7.2000e-01, log_loss=5.2590e-01, Time computing validation-score=0.122s, Total time spent=0.15min. Found improved model=True, Improved top-3 models=True
[Iter 27/35, Epoch 3] train loss=6.12e-01, gnorm=2.92e+00, lr=1.29e-05, #samples processed=128, #sample per second=236.19. ETA=0.05min
[Iter 28/35, Epoch 3] train loss=4.77e-01, gnorm=2.95e+00, lr=1.13e-05, #samples processed=128, #sample per second=573.58. ETA=0.04min
[Iter 28/35, Epoch 3] Validation f1=7.7953e-01, mcc=4.4320e-01, roc_auc=8.4232e-01, accuracy=7.2000e-01, log_loss=5.2194e-01, Time computing validation-score=0.118s, Total time spent=0.17min. Found improved model=True, Improved top-3 models=True
[Iter 29/35, Epoch 4] train loss=5.40e-01, gnorm=3.52e+00, lr=9.68e-06, #samples processed=128, #sample per second=230.21. ETA=0.04min
[Iter 30/35, Epoch 4] train loss=5.37e-01, gnorm=4.75e+00, lr=8.06e-06, #samples processed=128, #sample per second=576.77. ETA=0.03min
[Iter 30/35, Epoch 4] Validation f1=8.0162e-01, mcc=5.1170e-01, roc_auc=8.4505e-01, accuracy=7.5500e-01, log_loss=5.1153e-01, Time computing validation-score=0.118s, Total time spent=0.18min. Found improved model=True, Improved top-3 models=True
[Iter 31/35, Epoch 4] train loss=5.08e-01, gnorm=2.97e+00, lr=6.45e-06, #samples processed=128, #sample per second=235.52. ETA=0.02min
[Iter 32/35, Epoch 4] train loss=4.80e-01, gnorm=2.80e+00, lr=4.84e-06, #samples processed=128, #sample per second=560.44. ETA=0.02min
[Iter 32/35, Epoch 4] Validation f1=8.0000e-01, mcc=5.0963e-01, roc_auc=8.4808e-01, accuracy=7.5500e-01, log_loss=5.0338e-01, Time computing validation-score=0.118s, Total time spent=0.19min. Found improved model=True, Improved top-3 models=True
[Iter 33/35, Epoch 4] train loss=5.00e-01, gnorm=2.88e+00, lr=3.23e-06, #samples processed=128, #sample per second=238.32. ETA=0.01min
[Iter 34/35, Epoch 4] train loss=5.48e-01, gnorm=2.86e+00, lr=1.61e-06, #samples processed=128, #sample per second=577.73. ETA=0.01min
[Iter 34/35, Epoch 4] Validation f1=8.0328e-01, mcc=5.1939e-01, roc_auc=8.4894e-01, accuracy=7.6000e-01, log_loss=5.0012e-01, Time computing validation-score=0.120s, Total time spent=0.20min. Found improved model=True, Improved top-3 models=True
[Iter 35/35, Epoch 4] train loss=5.23e-01, gnorm=2.88e+00, lr=0.00e+00, #samples processed=128, #sample per second=230.19. ETA=0.00min
[Iter 35/35, Epoch 4] Validation f1=8.0328e-01, mcc=5.1939e-01, roc_auc=8.4894e-01, accuracy=7.6000e-01, log_loss=5.0012e-01, Time computing validation-score=0.122s, Total time spent=0.21min. Found improved model=True, Improved top-3 models=True
Training completed. Auto-saving to "ag_text_customize2/". For loading the model, you can use predictor = TextPredictor.load("ag_text_customize2/")
<autogluon.text.text_prediction.predictor.predictor.TextPredictor at 0x7f42b1776a90>
Register Your Own Configuration¶
You can also register your own hyperparameter settings as new presets
in ag_text_presets. Below, the registered electra_small_fuse_late_train5
preset uses ELECTRA-small as its backbone and trains for 5 epochs with a
weight decay of 1e-2.
@ag_text_presets.register()
def electra_small_fuse_late_train5():
    hyperparameters = ag_text_presets.create('electra_small_fuse_late')
    hyperparameters['models']['MultimodalTextModel']['search_space']['optimization.num_train_epochs'] = 5
    hyperparameters['models']['MultimodalTextModel']['search_space']['optimization.wd'] = 1E-2
    return hyperparameters
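Under the hood, ag_text_presets.register() follows the common decorator-registry pattern: the decorated function's name becomes the preset key, and create(name) calls the registered factory to build a fresh configuration. A simplified pure-Python sketch of that pattern (an illustration, not AutoGluon's actual implementation):

```python
class PresetRegistry:
    """Minimal decorator-based registry, similar in spirit to ag_text_presets."""

    def __init__(self):
        self._presets = {}

    def register(self):
        def decorator(fn):
            self._presets[fn.__name__] = fn  # the function name becomes the preset key
            return fn
        return decorator

    def create(self, name):
        return self._presets[name]()  # call the factory to build a fresh config dict

registry = PresetRegistry()

@registry.register()
def my_preset():
    # hypothetical preset returning a flat config dict
    return {'optimization.num_train_epochs': 5, 'optimization.wd': 1e-2}

print(registry.create('my_preset'))
# → {'optimization.num_train_epochs': 5, 'optimization.wd': 0.01}
```

Because create() re-invokes the factory each time, every call yields an independent configuration dict that can be mutated safely, which is why the preset above starts from ag_text_presets.create('electra_small_fuse_late') and edits the copy.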
predictor = TextPredictor(path='ag_text_customize3', eval_metric='acc', label='label')
predictor.fit(train_data, presets='electra_small_fuse_late_train5', time_limit=60, seed=123)
Problem Type="binary"
Column Types:
- "sentence": text
- "label": categorical
The GluonNLP V0 backend is used. We will use 8 cpus and 1 gpus to train each trial.
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize3/task0/training.log
Fitting and transforming the train data...
Done! Preprocessor saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize3/task0/preprocessor.pkl
Process dev set...
Done!
Max length for chunking text: 64, Stochastic chunk: Train-False/Test-False, Test #repeat: 1.
#Total Params/Fixed Params=13516290/0
Using gradient accumulation. Global batch size = 128
Local training results will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize3/task0/results_local.jsonl.
[Iter 1/35, Epoch 0] train loss=8.42e-01, gnorm=9.55e+00, lr=2.50e-05, #samples processed=128, #sample per second=346.05. ETA=0.21min
[Iter 2/35, Epoch 0] train loss=8.79e-01, gnorm=5.45e+00, lr=5.00e-05, #samples processed=128, #sample per second=562.53. ETA=0.16min
[Iter 2/35, Epoch 0] Validation f1=7.1661e-01, mcc=1.3643e-01, roc_auc=6.3030e-01, accuracy=5.6500e-01, log_loss=8.1869e-01, Time computing validation-score=0.117s, Total time spent=0.01min. Found improved model=True, Improved top-3 models=True
[Iter 3/35, Epoch 0] train loss=8.48e-01, gnorm=7.03e+00, lr=7.50e-05, #samples processed=128, #sample per second=289.31. ETA=0.19min
[Iter 4/35, Epoch 0] train loss=7.85e-01, gnorm=5.45e+00, lr=1.00e-04, #samples processed=128, #sample per second=580.64. ETA=0.16min
[Iter 4/35, Epoch 0] Validation f1=6.5741e-01, mcc=2.5574e-01, roc_auc=6.7535e-01, accuracy=6.3000e-01, log_loss=6.4491e-01, Time computing validation-score=0.119s, Total time spent=0.03min. Found improved model=True, Improved top-3 models=True
[Iter 5/35, Epoch 0] train loss=7.43e-01, gnorm=5.34e+00, lr=9.68e-05, #samples processed=128, #sample per second=254.12. ETA=0.18min
[Iter 6/35, Epoch 0] train loss=6.54e-01, gnorm=2.65e+00, lr=9.35e-05, #samples processed=128, #sample per second=558.61. ETA=0.16min
[Iter 6/35, Epoch 0] Validation f1=7.4074e-01, mcc=3.0151e-01, roc_auc=7.1859e-01, accuracy=6.5000e-01, log_loss=6.2666e-01, Time computing validation-score=0.118s, Total time spent=0.04min. Found improved model=True, Improved top-3 models=True
[Iter 7/35, Epoch 0] train loss=7.04e-01, gnorm=2.44e+00, lr=9.03e-05, #samples processed=128, #sample per second=251.88. ETA=0.17min
[Iter 8/35, Epoch 1] train loss=7.08e-01, gnorm=2.43e+00, lr=8.71e-05, #samples processed=128, #sample per second=574.25. ETA=0.15min
[Iter 8/35, Epoch 1] Validation f1=7.3381e-01, mcc=2.6318e-01, roc_auc=7.7333e-01, accuracy=6.3000e-01, log_loss=6.1222e-01, Time computing validation-score=0.117s, Total time spent=0.05min. Found improved model=False, Improved top-3 models=True
[Iter 9/35, Epoch 1] train loss=6.68e-01, gnorm=2.42e+00, lr=8.39e-05, #samples processed=128, #sample per second=280.18. ETA=0.15min
[Iter 10/35, Epoch 1] train loss=6.25e-01, gnorm=3.26e+00, lr=8.06e-05, #samples processed=128, #sample per second=582.72. ETA=0.14min
[Iter 10/35, Epoch 1] Validation f1=7.5385e-01, mcc=3.5976e-01, roc_auc=8.1354e-01, accuracy=6.8000e-01, log_loss=5.8282e-01, Time computing validation-score=0.121s, Total time spent=0.06min. Found improved model=True, Improved top-3 models=True
[Iter 11/35, Epoch 1] train loss=5.99e-01, gnorm=2.17e+00, lr=7.74e-05, #samples processed=128, #sample per second=220.25. ETA=0.15min
[Iter 12/35, Epoch 1] train loss=5.59e-01, gnorm=2.42e+00, lr=7.42e-05, #samples processed=128, #sample per second=503.94. ETA=0.14min
[Iter 12/35, Epoch 1] Validation f1=7.4627e-01, mcc=3.2324e-01, roc_auc=8.2384e-01, accuracy=6.6000e-01, log_loss=5.7129e-01, Time computing validation-score=0.120s, Total time spent=0.07min. Found improved model=False, Improved top-3 models=True
[Iter 13/35, Epoch 1] train loss=5.34e-01, gnorm=2.60e+00, lr=7.10e-05, #samples processed=128, #sample per second=276.47. ETA=0.13min
[Iter 14/35, Epoch 1] train loss=5.79e-01, gnorm=2.64e+00, lr=6.77e-05, #samples processed=128, #sample per second=597.77. ETA=0.12min
[Iter 14/35, Epoch 1] Validation f1=7.8947e-01, mcc=5.1291e-01, roc_auc=8.3040e-01, accuracy=7.6000e-01, log_loss=5.1126e-01, Time computing validation-score=0.121s, Total time spent=0.09min. Found improved model=True, Improved top-3 models=True
[Iter 15/35, Epoch 2] train loss=5.67e-01, gnorm=3.72e+00, lr=6.45e-05, #samples processed=128, #sample per second=226.37. ETA=0.12min
[Iter 16/35, Epoch 2] train loss=4.67e-01, gnorm=2.98e+00, lr=6.13e-05, #samples processed=128, #sample per second=585.02. ETA=0.11min
[Iter 16/35, Epoch 2] Validation f1=7.8400e-01, mcc=4.6057e-01, roc_auc=8.5384e-01, accuracy=7.3000e-01, log_loss=5.1336e-01, Time computing validation-score=0.117s, Total time spent=0.10min. Found improved model=False, Improved top-3 models=True
[Iter 17/35, Epoch 2] train loss=5.04e-01, gnorm=2.60e+00, lr=5.81e-05, #samples processed=128, #sample per second=289.56. ETA=0.11min
[Iter 18/35, Epoch 2] train loss=4.72e-01, gnorm=4.36e+00, lr=5.48e-05, #samples processed=128, #sample per second=558.37. ETA=0.10min
[Iter 18/35, Epoch 2] Validation f1=7.7567e-01, mcc=4.2311e-01, roc_auc=8.6889e-01, accuracy=7.0500e-01, log_loss=5.3510e-01, Time computing validation-score=0.120s, Total time spent=0.11min. Found improved model=False, Improved top-3 models=True
[Iter 19/35, Epoch 2] train loss=4.78e-01, gnorm=5.83e+00, lr=5.16e-05, #samples processed=128, #sample per second=260.93. ETA=0.10min
[Iter 20/35, Epoch 2] train loss=4.74e-01, gnorm=2.84e+00, lr=4.84e-05, #samples processed=128, #sample per second=561.46. ETA=0.09min
[Iter 20/35, Epoch 2] Validation f1=8.2791e-01, mcc=6.2894e-01, roc_auc=8.7727e-01, accuracy=8.1500e-01, log_loss=4.3660e-01, Time computing validation-score=0.117s, Total time spent=0.12min. Found improved model=True, Improved top-3 models=True
[Iter 21/35, Epoch 2] train loss=4.13e-01, gnorm=5.01e+00, lr=4.52e-05, #samples processed=128, #sample per second=233.60. ETA=0.08min
[Iter 22/35, Epoch 3] train loss=5.20e-01, gnorm=7.42e+00, lr=4.19e-05, #samples processed=128, #sample per second=569.36. ETA=0.08min
[Iter 22/35, Epoch 3] Validation f1=8.5068e-01, mcc=6.6636e-01, roc_auc=8.8828e-01, accuracy=8.3500e-01, log_loss=4.1868e-01, Time computing validation-score=0.119s, Total time spent=0.14min. Found improved model=True, Improved top-3 models=True
[Iter 23/35, Epoch 3] train loss=4.97e-01, gnorm=8.24e+00, lr=3.87e-05, #samples processed=128, #sample per second=236.67. ETA=0.07min
[Iter 24/35, Epoch 3] train loss=4.68e-01, gnorm=3.33e+00, lr=3.55e-05, #samples processed=128, #sample per second=579.41. ETA=0.07min
[Iter 24/35, Epoch 3] Validation f1=8.0000e-01, mcc=5.0081e-01, roc_auc=8.9566e-01, accuracy=7.4500e-01, log_loss=4.8857e-01, Time computing validation-score=0.119s, Total time spent=0.15min. Found improved model=False, Improved top-3 models=False
[Iter 25/35, Epoch 3] train loss=3.36e-01, gnorm=4.77e+00, lr=3.23e-05, #samples processed=128, #sample per second=374.20. ETA=0.06min
[Iter 26/35, Epoch 3] train loss=4.96e-01, gnorm=6.20e+00, lr=2.90e-05, #samples processed=128, #sample per second=556.20. ETA=0.05min
[Iter 26/35, Epoch 3] Validation f1=8.0000e-01, mcc=5.0081e-01, roc_auc=9.0081e-01, accuracy=7.4500e-01, log_loss=5.0114e-01, Time computing validation-score=0.121s, Total time spent=0.16min. Found improved model=False, Improved top-3 models=False
[Iter 27/35, Epoch 3] train loss=3.41e-01, gnorm=4.19e+00, lr=2.58e-05, #samples processed=128, #sample per second=356.83. ETA=0.05min
[Iter 28/35, Epoch 3] train loss=3.15e-01, gnorm=3.54e+00, lr=2.26e-05, #samples processed=128, #sample per second=539.03. ETA=0.04min
[Iter 28/35, Epoch 3] Validation f1=8.3682e-01, mcc=6.1015e-01, roc_auc=9.0354e-01, accuracy=8.0500e-01, log_loss=4.3627e-01, Time computing validation-score=0.118s, Total time spent=0.17min. Found improved model=False, Improved top-3 models=True
[Iter 29/35, Epoch 4] train loss=4.15e-01, gnorm=5.04e+00, lr=1.94e-05, #samples processed=128, #sample per second=292.14. ETA=0.04min
[Iter 30/35, Epoch 4] train loss=3.99e-01, gnorm=2.99e+00, lr=1.61e-05, #samples processed=128, #sample per second=545.71. ETA=0.03min
[Iter 30/35, Epoch 4] Validation f1=8.4821e-01, mcc=6.5571e-01, roc_auc=9.0323e-01, accuracy=8.3000e-01, log_loss=3.9331e-01, Time computing validation-score=0.119s, Total time spent=0.18min. Found improved model=False, Improved top-3 models=True
[Iter 31/35, Epoch 4] train loss=3.94e-01, gnorm=3.39e+00, lr=1.29e-05, #samples processed=128, #sample per second=279.85. ETA=0.02min
[Iter 32/35, Epoch 4] train loss=3.30e-01, gnorm=4.69e+00, lr=9.68e-06, #samples processed=128, #sample per second=560.82. ETA=0.02min
[Iter 32/35, Epoch 4] Validation f1=8.5202e-01, mcc=6.6596e-01, roc_auc=9.0404e-01, accuracy=8.3500e-01, log_loss=3.8706e-01, Time computing validation-score=0.118s, Total time spent=0.19min. Found improved model=True, Improved top-3 models=True
[Iter 33/35, Epoch 4] train loss=3.23e-01, gnorm=3.36e+00, lr=6.45e-06, #samples processed=128, #sample per second=228.54. ETA=0.01min
[Iter 34/35, Epoch 4] train loss=3.76e-01, gnorm=5.74e+00, lr=3.23e-06, #samples processed=128, #sample per second=594.64. ETA=0.01min
[Iter 34/35, Epoch 4] Validation f1=8.5202e-01, mcc=6.6596e-01, roc_auc=9.0525e-01, accuracy=8.3500e-01, log_loss=3.8596e-01, Time computing validation-score=0.119s, Total time spent=0.20min. Found improved model=True, Improved top-3 models=True
[Iter 35/35, Epoch 4] train loss=4.81e-01, gnorm=8.72e+00, lr=0.00e+00, #samples processed=128, #sample per second=228.89. ETA=0.00min
[Iter 35/35, Epoch 4] Validation f1=8.5202e-01, mcc=6.6596e-01, roc_auc=9.0525e-01, accuracy=8.3500e-01, log_loss=3.8596e-01, Time computing validation-score=0.118s, Total time spent=0.21min. Found improved model=True, Improved top-3 models=True
Training completed. Auto-saving to "ag_text_customize3/". For loading the model, you can use predictor = TextPredictor.load("ag_text_customize3/")
<autogluon.text.text_prediction.predictor.predictor.TextPredictor at 0x7f42b1654d60>
HPO over a Customized Search Space via Bayesian Optimization¶
To control which hyperparameter values are considered during fit(),
we specify the hyperparameters argument. Rather than fixing each
hyperparameter to a single value, we can specify a space of candidate
values to search over via ag.core.space. We can also choose which HPO
method performs the search via search_strategy; by default, Bayesian
optimization is used as the searcher.
In this example, we search for good values of the following
hyperparameters:

- warmup
- number of hidden units in the final MLP layer that maps aggregated features to the output prediction
- learning rate
- weight decay
def electra_small_basic_demo_hpo():
    hparams = ag_text_presets.create('electra_small_fuse_late')
    search_space = hparams['models']['MultimodalTextModel']['search_space']
    search_space['optimization.per_device_batch_size'] = 8
    search_space['model.network.agg_net.mid_units'] = ag.core.space.Int(32, 128)
    search_space['optimization.warmup_portion'] = ag.core.space.Categorical(0.1, 0.2)
    search_space['optimization.lr'] = ag.core.space.Real(1E-5, 2E-4)
    search_space['optimization.wd'] = ag.core.space.Categorical(1E-4, 1E-3, 1E-2)
    search_space['optimization.num_train_epochs'] = 5
    return hparams
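Conceptually, each HPO trial draws one concrete configuration from this space: an integer from the Int range, one option from each Categorical, and a float from the Real interval (the searcher then decides which draws to try next). A simplified pure-Python sketch of uniform sampling from the space above (an illustration, not AutoGluon's actual sampler):

```python
import random

random.seed(123)

def sample_config():
    # One concrete configuration drawn from the search space defined above.
    return {
        'model.network.agg_net.mid_units': random.randint(32, 128),     # Int(32, 128)
        'optimization.warmup_portion': random.choice([0.1, 0.2]),       # Categorical(0.1, 0.2)
        'optimization.lr': random.uniform(1e-5, 2e-4),                  # Real(1E-5, 2E-4)
        'optimization.wd': random.choice([1e-4, 1e-3, 1e-2]),           # Categorical(...)
    }

# With num_trials=4, four such configurations get trained and compared.
configs = [sample_config() for _ in range(4)]
for cfg in configs:
    print(cfg)
```

A Bayesian-optimization searcher differs from this uniform sketch only in *which* points it draws: it fits a surrogate model to the validation scores of completed trials and proposes configurations expected to do well.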
We can now call fit() with hyperparameter tuning over our custom
search space. Below, num_trials controls the maximum number of
different hyperparameter configurations for which AutoGluon will train
models (4 in this case). To achieve good performance in your
applications, you should use larger values of num_trials, which may
identify superior hyperparameter values but will require longer
runtimes.
predictor_sst_rs = TextPredictor(path='ag_text_sst_random_search', label='label', eval_metric='acc')
predictor_sst_rs.set_verbosity(0)
predictor_sst_rs.fit(train_data,
                     hyperparameters=electra_small_basic_demo_hpo(),
                     time_limit=60 * 2,
                     num_trials=4,
                     seed=123)
The GluonNLP V0 backend is used. We will use 8 cpus and 1 gpus to train each trial.
0%| | 0/4 [00:00<?, ?it/s]
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_random_search/task0/training.log
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_random_search/task1/training.log
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_random_search/task2/training.log
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_random_search/task3/training.log
<autogluon.text.text_prediction.predictor.predictor.TextPredictor at 0x7f42b1198f70>
We can again evaluate our model’s performance on separate test data.
test_score = predictor_sst_rs.evaluate(test_data, metrics=['acc', 'f1'])
print('Best Config = {}'.format(predictor_sst_rs.results['best_config']))
print('Total Time = {}s'.format(predictor_sst_rs.results['total_time']))
print('Accuracy = {:.2f}%'.format(test_score['acc'] * 100))
print('F1 = {:.2f}%'.format(test_score['f1'] * 100))
Best Config = {'model.backbone.name': 'google_electra_small', 'optimization.batch_size': 128, 'optimization.per_device_batch_size': 8, 'optimization.num_train_epochs': 5, 'optimization.lr': 0.00013271988148266467, 'optimization.wd': 0.01, 'optimization.layerwise_lr_decay': 0.8, 'model.use_avg_nbest': True, 'optimization.nbest': 3, 'model.network.agg_net.agg_type': 'concat', 'model.network.aggregate_categorical': True, 'preprocessing.categorical.convert_to_text': False, 'preprocessing.numerical.convert_to_text': False, 'model.network.agg_net.mid_units': 115, 'optimization.warmup_portion': 0.1}
Total Time = 50.66215443611145s
Accuracy = 78.10%
F1 = 78.56%