.. _sec_textprediction_customization:
Text Prediction - Customization and Hyperparameter Search
=========================================================
This advanced tutorial teaches you how to control the hyperparameter
tuning process in ``TextPredictor`` by specifying:
- A custom search space of candidate hyperparameter values to consider.
- Which hyperparameter optimization (HPO) method should be used to
actually search through this space.
.. code:: python
import numpy as np
import warnings
import autogluon as ag
warnings.filterwarnings('ignore')
np.random.seed(123)
Stanford Sentiment Treebank Data
--------------------------------
For demonstration, we use the Stanford Sentiment Treebank
(`SST <https://nlp.stanford.edu/sentiment/>`__) dataset.
.. code:: python
from autogluon.core.utils.loaders.load_pd import load
subsample_size = 1000  # subsample for a faster demo; you may try a larger value
train_data = load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/train.parquet')
test_data = load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/dev.parquet')
train_data = train_data.sample(n=subsample_size, random_state=0)
train_data.head(10)
.. parsed-literal::
    :class: output

                                                    sentence  label
    43787                  very pleasing at its best moments      1
    16159  , american chai is enough to make you put away...      0
    59015  too much like an infomercial for ram dass 's l...      0
    5108                          a stirring visual sequence      1
    67052                            cool visual backmasking      1
    35938                                        hard ground      0
    49879  the striking , quietly vulnerable personality ...      1
    51591  pan nalin 's exposition is beautiful and myste...      1
    56780                                  wonderfully loopy      1
    28518                         most beautiful , evocative      1
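As a quick sanity check, we can verify that the subsample contains both
positive and negative examples before fitting:

.. code:: python

    # Count positive (1) and negative (0) examples in the subsample.
    train_data['label'].value_counts()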
Configuring the TextPredictor
-----------------------------
Pre-configured Hyperparameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We provide a set of pre-configured hyperparameters. You can list the
available preset keys in ``ag_text_presets`` via ``list_presets``.
.. code:: python
from autogluon.text import ag_text_presets, list_presets
list_presets()
.. parsed-literal::
:class: output
{'simple_presets': ['default',
'lower_quality_fast_train',
'medium_quality_faster_train',
'best_quality'],
'advanced_presets': ['electra_small_fuse_late',
'electra_base_fuse_late',
'electra_large_fuse_late',
'roberta_base_fuse_late',
'multi_cased_bert_base_fuse_late',
'electra_base_fuse_early',
'electra_base_all_text']}
There are two kinds of presets. The ``simple_presets`` are pre-defined
configurations recommended for most users, which allow you to specify
whether you care more about predictive accuracy (``'best_quality'``) or
more about training/inference speed (``'lower_quality_fast_train'``).
The ``advanced_presets`` are pre-configured networks that use different
Transformer backbones, such as ELECTRA, RoBERTa, or Multilingual BERT,
and different feature fusion strategies. For example,
``electra_small_fuse_late`` uses the ELECTRA-small model as the network
backbone for text fields together with the late fusion strategy
described in ":ref:`sec_textprediction_architecture`". The ``default``
preset is the same as ``electra_base_fuse_late``. Now let's train a
model on our data with a specified ``presets`` value.
.. code:: python
from autogluon.text import TextPredictor
predictor = TextPredictor(path='ag_text_sst_electra_small', eval_metric='acc', label='label')
predictor.set_verbosity(0)
predictor.fit(train_data, presets='electra_small_fuse_late', time_limit=60, seed=123)
.. parsed-literal::
:class: output
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_electra_small/task0/training.log
.. parsed-literal::
:class: output
Below we report both ``f1`` and ``acc`` metrics for our predictions.
Note that if you really want to obtain the best F1 score, you should set
``eval_metric='f1'`` when constructing the ``TextPredictor`` (see the
sketch after the evaluation output below).
.. code:: python
predictor.evaluate(test_data, metrics=['f1', 'acc'])
.. parsed-literal::
:class: output
{'f1': 0.7720504009163803, 'acc': 0.7717889908256881}
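If F1 is the metric you ultimately care about, the sketch below shows
how to do this (the output path ``ag_text_sst_f1`` is a hypothetical
name; all other settings match the fit above):

.. code:: python

    # A minimal sketch: optimize F1 directly during model selection.
    predictor_f1 = TextPredictor(path='ag_text_sst_f1', eval_metric='f1', label='label')
    predictor_f1.fit(train_data, presets='electra_small_fuse_late', time_limit=60, seed=123)
    predictor_f1.evaluate(test_data, metrics=['f1'])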
To view the pre-registered hyperparameters, you can call
``ag_text_presets.create(presets_name)``, e.g.,
.. code:: python
import pprint
pprint.pprint(ag_text_presets.create('electra_small_fuse_late'))
.. parsed-literal::
:class: output
{'models': {'MultimodalTextModel': {'backend': 'gluonnlp_v0',
'search_space': {'model.backbone.name': 'google_electra_small',
'model.network.agg_net.agg_type': 'concat',
'model.network.aggregate_categorical': True,
'model.use_avg_nbest': True,
'optimization.batch_size': 128,
'optimization.layerwise_lr_decay': 0.8,
'optimization.lr': Categorical[0.0001],
'optimization.nbest': 3,
'optimization.num_train_epochs': 10,
'optimization.per_device_batch_size': 8,
'optimization.wd': 0.0001,
'preprocessing.categorical.convert_to_text': False,
'preprocessing.numerical.convert_to_text': False}}},
'tune_kwargs': {'num_trials': 1,
'scheduler_options': None,
'search_options': None,
'search_strategy': 'local',
'searcher': 'random'}}
Another way to specify a custom TextPredictor configuration is via the
``hyperparameters`` argument.
.. code:: python
predictor = TextPredictor(path='ag_text_customize1', eval_metric='acc', label='label')
predictor.fit(train_data, hyperparameters=ag_text_presets.create('electra_small_fuse_late'),
time_limit=30, seed=123)
.. parsed-literal::
:class: output
Problem Type="binary"
Column Types:
- "sentence": text
- "label": categorical
The GluonNLP V0 backend is used. We will use 8 cpus and 1 gpus to train each trial.
.. parsed-literal::
:class: output
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize1/task0/training.log
.. parsed-literal::
:class: output
Fitting and transforming the train data...
Done! Preprocessor saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize1/task0/preprocessor.pkl
Process dev set...
Done!
Max length for chunking text: 64, Stochastic chunk: Train-False/Test-False, Test #repeat: 1.
#Total Params/Fixed Params=13516290/0
Using gradient accumulation. Global batch size = 128
Local training results will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize1/task0/results_local.jsonl.
[Iter 1/70, Epoch 0] train loss=8.31e-01, gnorm=4.82e+00, lr=1.43e-05, #samples processed=128, #sample per second=325.77. ETA=0.45min
[Iter 2/70, Epoch 0] train loss=8.13e-01, gnorm=4.19e+00, lr=2.86e-05, #samples processed=128, #sample per second=524.16. ETA=0.36min
[Iter 2/70, Epoch 0] valid f1=5.8937e-01, mcc=1.5384e-01, roc_auc=5.8414e-01, accuracy=5.7500e-01, log_loss=7.0208e-01, time spent=0.126s, total time spent=0.01min. Find new best=True, Find new top-3=True
[Iter 3/70, Epoch 0] train loss=7.37e-01, gnorm=3.71e+00, lr=4.29e-05, #samples processed=128, #sample per second=265.50. ETA=0.42min
[Iter 4/70, Epoch 0] train loss=7.37e-01, gnorm=4.50e+00, lr=5.71e-05, #samples processed=128, #sample per second=543.25. ETA=0.37min
[Iter 4/70, Epoch 0] valid f1=7.1972e-01, mcc=1.8196e-01, roc_auc=6.4202e-01, accuracy=5.9500e-01, log_loss=6.8579e-01, time spent=0.126s, total time spent=0.03min. Find new best=True, Find new top-3=True
[Iter 5/70, Epoch 0] train loss=8.01e-01, gnorm=5.41e+00, lr=7.14e-05, #samples processed=128, #sample per second=222.14. ETA=0.42min
[Iter 6/70, Epoch 0] train loss=6.87e-01, gnorm=2.67e+00, lr=8.57e-05, #samples processed=128, #sample per second=536.96. ETA=0.39min
[Iter 6/70, Epoch 0] valid f1=7.0968e-01, mcc=2.6294e-01, roc_auc=6.8717e-01, accuracy=6.4000e-01, log_loss=6.4424e-01, time spent=0.127s, total time spent=0.04min. Find new best=True, Find new top-3=True
[Iter 7/70, Epoch 0] train loss=6.62e-01, gnorm=3.46e+00, lr=1.00e-04, #samples processed=128, #sample per second=237.30. ETA=0.41min
[Iter 8/70, Epoch 1] train loss=7.12e-01, gnorm=4.66e+00, lr=9.84e-05, #samples processed=128, #sample per second=538.87. ETA=0.38min
[Iter 8/70, Epoch 1] valid f1=7.1373e-01, mcc=2.5322e-01, roc_auc=7.2182e-01, accuracy=6.3500e-01, log_loss=6.2712e-01, time spent=0.126s, total time spent=0.05min. Find new best=False, Find new top-3=True
[Iter 9/70, Epoch 1] train loss=6.80e-01, gnorm=2.92e+00, lr=9.68e-05, #samples processed=128, #sample per second=273.16. ETA=0.39min
[Iter 10/70, Epoch 1] train loss=5.93e-01, gnorm=2.05e+00, lr=9.52e-05, #samples processed=128, #sample per second=531.86. ETA=0.37min
[Iter 10/70, Epoch 1] valid f1=7.2574e-01, mcc=3.3715e-01, roc_auc=7.7636e-01, accuracy=6.7500e-01, log_loss=5.8749e-01, time spent=0.126s, total time spent=0.07min. Find new best=True, Find new top-3=True
[Iter 11/70, Epoch 1] train loss=5.74e-01, gnorm=2.41e+00, lr=9.37e-05, #samples processed=128, #sample per second=208.25. ETA=0.38min
[Iter 12/70, Epoch 1] train loss=6.02e-01, gnorm=2.22e+00, lr=9.21e-05, #samples processed=128, #sample per second=523.07. ETA=0.36min
[Iter 12/70, Epoch 1] valid f1=7.2862e-01, mcc=2.6265e-01, roc_auc=8.0475e-01, accuracy=6.3500e-01, log_loss=5.8242e-01, time spent=0.127s, total time spent=0.08min. Find new best=False, Find new top-3=True
[Iter 13/70, Epoch 1] train loss=6.77e-01, gnorm=3.16e+00, lr=9.05e-05, #samples processed=128, #sample per second=254.93. ETA=0.37min
[Iter 14/70, Epoch 1] train loss=5.15e-01, gnorm=2.33e+00, lr=8.89e-05, #samples processed=128, #sample per second=531.24. ETA=0.35min
[Iter 14/70, Epoch 1] valid f1=7.7637e-01, mcc=4.6241e-01, roc_auc=8.1949e-01, accuracy=7.3500e-01, log_loss=5.2983e-01, time spent=0.127s, total time spent=0.09min. Find new best=True, Find new top-3=True
[Iter 15/70, Epoch 2] train loss=5.50e-01, gnorm=3.56e+00, lr=8.73e-05, #samples processed=128, #sample per second=222.94. ETA=0.36min
[Iter 16/70, Epoch 2] train loss=5.61e-01, gnorm=2.51e+00, lr=8.57e-05, #samples processed=128, #sample per second=545.35. ETA=0.34min
[Iter 16/70, Epoch 2] valid f1=7.3381e-01, mcc=2.6318e-01, roc_auc=8.3646e-01, accuracy=6.3000e-01, log_loss=6.1986e-01, time spent=0.127s, total time spent=0.10min. Find new best=False, Find new top-3=False
[Iter 17/70, Epoch 2] train loss=6.14e-01, gnorm=8.26e+00, lr=8.41e-05, #samples processed=128, #sample per second=363.89. ETA=0.33min
[Iter 18/70, Epoch 2] train loss=5.15e-01, gnorm=7.41e+00, lr=8.25e-05, #samples processed=128, #sample per second=525.48. ETA=0.32min
[Iter 18/70, Epoch 2] valid f1=8.0543e-01, mcc=5.6525e-01, roc_auc=8.5444e-01, accuracy=7.8500e-01, log_loss=4.7075e-01, time spent=0.125s, total time spent=0.12min. Find new best=True, Find new top-3=True
[Iter 19/70, Epoch 2] train loss=4.49e-01, gnorm=5.24e+00, lr=8.10e-05, #samples processed=128, #sample per second=222.81. ETA=0.32min
[Iter 20/70, Epoch 2] train loss=5.12e-01, gnorm=6.13e+00, lr=7.94e-05, #samples processed=128, #sample per second=520.73. ETA=0.31min
[Iter 20/70, Epoch 2] valid f1=8.1356e-01, mcc=5.5581e-01, roc_auc=8.6939e-01, accuracy=7.8000e-01, log_loss=4.6242e-01, time spent=0.125s, total time spent=0.13min. Find new best=False, Find new top-3=True
[Iter 21/70, Epoch 2] train loss=3.57e-01, gnorm=2.60e+00, lr=7.78e-05, #samples processed=128, #sample per second=272.59. ETA=0.31min
[Iter 22/70, Epoch 3] train loss=4.02e-01, gnorm=9.30e+00, lr=7.62e-05, #samples processed=128, #sample per second=547.91. ETA=0.30min
[Iter 22/70, Epoch 3] valid f1=7.7470e-01, mcc=4.3081e-01, roc_auc=8.7414e-01, accuracy=7.1500e-01, log_loss=5.8920e-01, time spent=0.125s, total time spent=0.14min. Find new best=False, Find new top-3=False
[Iter 23/70, Epoch 3] train loss=4.02e-01, gnorm=9.74e+00, lr=7.46e-05, #samples processed=128, #sample per second=352.49. ETA=0.29min
[Iter 24/70, Epoch 3] train loss=4.18e-01, gnorm=6.18e+00, lr=7.30e-05, #samples processed=128, #sample per second=508.06. ETA=0.28min
[Iter 24/70, Epoch 3] valid f1=8.3412e-01, mcc=6.5230e-01, roc_auc=8.7838e-01, accuracy=8.2500e-01, log_loss=4.3791e-01, time spent=0.126s, total time spent=0.15min. Find new best=True, Find new top-3=True
[Iter 25/70, Epoch 3] train loss=3.32e-01, gnorm=6.96e+00, lr=7.14e-05, #samples processed=128, #sample per second=211.74. ETA=0.28min
[Iter 26/70, Epoch 3] train loss=3.98e-01, gnorm=1.17e+01, lr=6.98e-05, #samples processed=128, #sample per second=523.75. ETA=0.27min
[Iter 26/70, Epoch 3] valid f1=8.4878e-01, mcc=6.9938e-01, roc_auc=8.9131e-01, accuracy=8.4500e-01, log_loss=4.1922e-01, time spent=0.126s, total time spent=0.17min. Find new best=True, Find new top-3=True
[Iter 27/70, Epoch 3] train loss=4.41e-01, gnorm=1.23e+01, lr=6.83e-05, #samples processed=128, #sample per second=219.52. ETA=0.27min
[Iter 28/70, Epoch 3] train loss=3.01e-01, gnorm=3.04e+00, lr=6.67e-05, #samples processed=128, #sample per second=554.85. ETA=0.26min
[Iter 28/70, Epoch 3] valid f1=8.1818e-01, mcc=5.6011e-01, roc_auc=9.0131e-01, accuracy=7.8000e-01, log_loss=4.8614e-01, time spent=0.126s, total time spent=0.18min. Find new best=False, Find new top-3=False
[Iter 29/70, Epoch 4] train loss=3.65e-01, gnorm=1.12e+01, lr=6.51e-05, #samples processed=128, #sample per second=350.73. ETA=0.26min
[Iter 30/70, Epoch 4] train loss=3.73e-01, gnorm=5.31e+00, lr=6.35e-05, #samples processed=128, #sample per second=544.95. ETA=0.25min
[Iter 30/70, Epoch 4] valid f1=8.2051e-01, mcc=5.7562e-01, roc_auc=9.0596e-01, accuracy=7.9000e-01, log_loss=4.2250e-01, time spent=0.124s, total time spent=0.19min. Find new best=False, Find new top-3=True
[Iter 31/70, Epoch 4] train loss=2.65e-01, gnorm=4.36e+00, lr=6.19e-05, #samples processed=128, #sample per second=264.14. ETA=0.24min
[Iter 32/70, Epoch 4] train loss=2.33e-01, gnorm=8.15e+00, lr=6.03e-05, #samples processed=128, #sample per second=525.54. ETA=0.23min
[Iter 32/70, Epoch 4] valid f1=8.5714e-01, mcc=7.0353e-01, roc_auc=9.0939e-01, accuracy=8.5000e-01, log_loss=3.7999e-01, time spent=0.126s, total time spent=0.20min. Find new best=True, Find new top-3=True
[Iter 33/70, Epoch 4] train loss=4.06e-01, gnorm=8.15e+00, lr=5.87e-05, #samples processed=128, #sample per second=215.20. ETA=0.23min
[Iter 34/70, Epoch 4] train loss=3.08e-01, gnorm=3.41e+00, lr=5.71e-05, #samples processed=128, #sample per second=547.13. ETA=0.22min
[Iter 34/70, Epoch 4] valid f1=8.4071e-01, mcc=6.3533e-01, roc_auc=9.0990e-01, accuracy=8.2000e-01, log_loss=3.9707e-01, time spent=0.127s, total time spent=0.21min. Find new best=False, Find new top-3=False
[Iter 35/70, Epoch 4] train loss=2.80e-01, gnorm=3.41e+00, lr=5.56e-05, #samples processed=128, #sample per second=348.18. ETA=0.22min
[Iter 36/70, Epoch 5] train loss=2.88e-01, gnorm=4.33e+00, lr=5.40e-05, #samples processed=128, #sample per second=545.63. ETA=0.21min
[Iter 36/70, Epoch 5] valid f1=8.4211e-01, mcc=6.3551e-01, roc_auc=9.0889e-01, accuracy=8.2000e-01, log_loss=4.1751e-01, time spent=0.125s, total time spent=0.22min. Find new best=False, Find new top-3=False
[Iter 37/70, Epoch 5] train loss=2.12e-01, gnorm=3.63e+00, lr=5.24e-05, #samples processed=128, #sample per second=356.05. ETA=0.20min
[Iter 38/70, Epoch 5] train loss=2.33e-01, gnorm=3.06e+00, lr=5.08e-05, #samples processed=128, #sample per second=532.12. ETA=0.19min
[Iter 38/70, Epoch 5] valid f1=8.5345e-01, mcc=6.5732e-01, roc_auc=9.0727e-01, accuracy=8.3000e-01, log_loss=4.3992e-01, time spent=0.126s, total time spent=0.23min. Find new best=False, Find new top-3=True
[Iter 39/70, Epoch 5] train loss=2.26e-01, gnorm=3.42e+00, lr=4.92e-05, #samples processed=128, #sample per second=267.93. ETA=0.19min
[Iter 40/70, Epoch 5] train loss=2.17e-01, gnorm=4.65e+00, lr=4.76e-05, #samples processed=128, #sample per second=535.74. ETA=0.18min
[Iter 40/70, Epoch 5] valid f1=8.5845e-01, mcc=6.8722e-01, roc_auc=9.0475e-01, accuracy=8.4500e-01, log_loss=4.1233e-01, time spent=0.127s, total time spent=0.25min. Find new best=False, Find new top-3=True
[Iter 41/70, Epoch 5] train loss=1.90e-01, gnorm=7.97e+00, lr=4.60e-05, #samples processed=128, #sample per second=239.94. ETA=0.18min
[Iter 42/70, Epoch 5] train loss=1.88e-01, gnorm=4.85e+00, lr=4.44e-05, #samples processed=128, #sample per second=541.25. ETA=0.17min
[Iter 42/70, Epoch 5] valid f1=8.6364e-01, mcc=6.9697e-01, roc_auc=9.0707e-01, accuracy=8.5000e-01, log_loss=4.1839e-01, time spent=0.127s, total time spent=0.26min. Find new best=True, Find new top-3=True
[Iter 43/70, Epoch 6] train loss=1.89e-01, gnorm=4.44e+00, lr=4.29e-05, #samples processed=128, #sample per second=212.09. ETA=0.17min
[Iter 44/70, Epoch 6] train loss=2.00e-01, gnorm=4.06e+00, lr=4.13e-05, #samples processed=128, #sample per second=544.37. ETA=0.16min
[Iter 44/70, Epoch 6] valid f1=8.4388e-01, mcc=6.2941e-01, roc_auc=9.1071e-01, accuracy=8.1500e-01, log_loss=4.8892e-01, time spent=0.130s, total time spent=0.27min. Find new best=False, Find new top-3=False
[Iter 45/70, Epoch 6] train loss=1.63e-01, gnorm=6.06e+00, lr=3.97e-05, #samples processed=128, #sample per second=338.05. ETA=0.15min
[Iter 46/70, Epoch 6] train loss=2.00e-01, gnorm=7.47e+00, lr=3.81e-05, #samples processed=128, #sample per second=490.46. ETA=0.15min
[Iter 46/70, Epoch 6] valid f1=8.5837e-01, mcc=6.6817e-01, roc_auc=9.1586e-01, accuracy=8.3500e-01, log_loss=4.6460e-01, time spent=0.128s, total time spent=0.28min. Find new best=False, Find new top-3=False
[Iter 47/70, Epoch 6] train loss=1.40e-01, gnorm=3.26e+00, lr=3.65e-05, #samples processed=128, #sample per second=344.86. ETA=0.14min
[Iter 48/70, Epoch 6] train loss=1.80e-01, gnorm=3.34e+00, lr=3.49e-05, #samples processed=128, #sample per second=519.34. ETA=0.13min
[Iter 48/70, Epoch 6] valid f1=8.5965e-01, mcc=6.7638e-01, roc_auc=9.1788e-01, accuracy=8.4000e-01, log_loss=4.1985e-01, time spent=0.127s, total time spent=0.29min. Find new best=False, Find new top-3=False
[Iter 49/70, Epoch 6] train loss=2.38e-01, gnorm=4.38e+00, lr=3.33e-05, #samples processed=128, #sample per second=348.42. ETA=0.13min
[Iter 50/70, Epoch 7] train loss=1.85e-01, gnorm=5.04e+00, lr=3.17e-05, #samples processed=128, #sample per second=533.62. ETA=0.12min
[Iter 50/70, Epoch 7] valid f1=8.4821e-01, mcc=6.5571e-01, roc_auc=9.1859e-01, accuracy=8.3000e-01, log_loss=4.1371e-01, time spent=0.126s, total time spent=0.30min. Find new best=False, Find new top-3=False
[Iter 51/70, Epoch 7] train loss=2.63e-01, gnorm=4.87e+00, lr=3.02e-05, #samples processed=128, #sample per second=355.94. ETA=0.11min
[Iter 52/70, Epoch 7] train loss=1.83e-01, gnorm=6.88e+00, lr=2.86e-05, #samples processed=128, #sample per second=548.91. ETA=0.11min
[Iter 52/70, Epoch 7] valid f1=8.5202e-01, mcc=6.6596e-01, roc_auc=9.1606e-01, accuracy=8.3500e-01, log_loss=4.1814e-01, time spent=0.126s, total time spent=0.31min. Find new best=False, Find new top-3=False
[Iter 53/70, Epoch 7] train loss=1.09e-01, gnorm=2.50e+00, lr=2.70e-05, #samples processed=128, #sample per second=353.51. ETA=0.10min
[Iter 54/70, Epoch 7] train loss=1.51e-01, gnorm=4.36e+00, lr=2.54e-05, #samples processed=128, #sample per second=503.40. ETA=0.10min
[Iter 54/70, Epoch 7] valid f1=8.5088e-01, mcc=6.5595e-01, roc_auc=9.1596e-01, accuracy=8.3000e-01, log_loss=4.3797e-01, time spent=0.126s, total time spent=0.32min. Find new best=False, Find new top-3=False
[Iter 55/70, Epoch 7] train loss=1.29e-01, gnorm=3.74e+00, lr=2.38e-05, #samples processed=128, #sample per second=355.78. ETA=0.09min
[Iter 56/70, Epoch 7] train loss=1.91e-01, gnorm=3.88e+00, lr=2.22e-05, #samples processed=128, #sample per second=522.70. ETA=0.08min
[Iter 56/70, Epoch 7] valid f1=8.4874e-01, mcc=6.4071e-01, roc_auc=9.1434e-01, accuracy=8.2000e-01, log_loss=5.0057e-01, time spent=0.126s, total time spent=0.33min. Find new best=False, Find new top-3=False
[Iter 57/70, Epoch 8] train loss=1.51e-01, gnorm=5.72e+00, lr=2.06e-05, #samples processed=128, #sample per second=351.38. ETA=0.08min
[Iter 58/70, Epoch 8] train loss=7.43e-02, gnorm=3.77e+00, lr=1.90e-05, #samples processed=128, #sample per second=533.32. ETA=0.07min
[Iter 58/70, Epoch 8] valid f1=8.3817e-01, mcc=6.1207e-01, roc_auc=9.1404e-01, accuracy=8.0500e-01, log_loss=5.2010e-01, time spent=0.127s, total time spent=0.34min. Find new best=False, Find new top-3=False
[Iter 59/70, Epoch 8] train loss=1.23e-01, gnorm=7.34e+00, lr=1.75e-05, #samples processed=128, #sample per second=351.95. ETA=0.06min
[Iter 60/70, Epoch 8] train loss=1.83e-01, gnorm=4.68e+00, lr=1.59e-05, #samples processed=128, #sample per second=550.58. ETA=0.06min
[Iter 60/70, Epoch 8] valid f1=8.4615e-01, mcc=6.3774e-01, roc_auc=9.1384e-01, accuracy=8.2000e-01, log_loss=4.9200e-01, time spent=0.127s, total time spent=0.35min. Find new best=False, Find new top-3=False
[Iter 61/70, Epoch 8] train loss=1.58e-01, gnorm=4.18e+00, lr=1.43e-05, #samples processed=128, #sample per second=346.36. ETA=0.05min
[Iter 62/70, Epoch 8] train loss=1.71e-01, gnorm=4.24e+00, lr=1.27e-05, #samples processed=128, #sample per second=530.44. ETA=0.05min
[Iter 62/70, Epoch 8] valid f1=8.5217e-01, mcc=6.5649e-01, roc_auc=9.1323e-01, accuracy=8.3000e-01, log_loss=4.8211e-01, time spent=0.125s, total time spent=0.36min. Find new best=False, Find new top-3=False
[Iter 63/70, Epoch 8] train loss=7.98e-02, gnorm=2.68e+00, lr=1.11e-05, #samples processed=128, #sample per second=355.03. ETA=0.04min
[Iter 64/70, Epoch 9] train loss=1.17e-01, gnorm=4.09e+00, lr=9.52e-06, #samples processed=128, #sample per second=536.38. ETA=0.03min
[Iter 64/70, Epoch 9] valid f1=8.5217e-01, mcc=6.5649e-01, roc_auc=9.1323e-01, accuracy=8.3000e-01, log_loss=4.7869e-01, time spent=0.126s, total time spent=0.37min. Find new best=False, Find new top-3=False
[Iter 65/70, Epoch 9] train loss=1.71e-01, gnorm=4.15e+00, lr=7.94e-06, #samples processed=128, #sample per second=362.23. ETA=0.03min
[Iter 66/70, Epoch 9] train loss=1.57e-01, gnorm=4.97e+00, lr=6.35e-06, #samples processed=128, #sample per second=529.00. ETA=0.02min
[Iter 66/70, Epoch 9] valid f1=8.5217e-01, mcc=6.5649e-01, roc_auc=9.1374e-01, accuracy=8.3000e-01, log_loss=4.7953e-01, time spent=0.126s, total time spent=0.38min. Find new best=False, Find new top-3=False
[Iter 67/70, Epoch 9] train loss=9.79e-02, gnorm=3.25e+00, lr=4.76e-06, #samples processed=128, #sample per second=351.22. ETA=0.02min
[Iter 68/70, Epoch 9] train loss=1.68e-01, gnorm=6.15e+00, lr=3.17e-06, #samples processed=128, #sample per second=533.19. ETA=0.01min
[Iter 68/70, Epoch 9] valid f1=8.5217e-01, mcc=6.5649e-01, roc_auc=9.1404e-01, accuracy=8.3000e-01, log_loss=4.7707e-01, time spent=0.126s, total time spent=0.39min. Find new best=False, Find new top-3=False
[Iter 69/70, Epoch 9] train loss=1.29e-01, gnorm=5.85e+00, lr=1.59e-06, #samples processed=128, #sample per second=341.20. ETA=0.01min
[Iter 70/70, Epoch 9] train loss=1.30e-01, gnorm=3.74e+00, lr=0.00e+00, #samples processed=128, #sample per second=534.68. ETA=0.00min
[Iter 70/70, Epoch 9] valid f1=8.5217e-01, mcc=6.5649e-01, roc_auc=9.1384e-01, accuracy=8.3000e-01, log_loss=4.7779e-01, time spent=0.126s, total time spent=0.40min. Find new best=False, Find new top-3=False
Training completed. Auto-saving to "ag_text_customize1/". For loading the model, you can use `predictor = TextPredictor.load("ag_text_customize1/")`
.. parsed-literal::
:class: output
Custom Hyperparameter Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The pre-registered configurations provide reasonable default
hyperparameters. A common workflow is to first train a model with one of
the presets and then tune some hyperparameters to see whether the
performance can be further improved. In the example below, we set the
number of training epochs to 5 and the learning rate to 5E-5.
.. code:: python
hyperparameters = ag_text_presets.create('electra_small_fuse_late')
hyperparameters['models']['MultimodalTextModel']['search_space']['optimization.num_train_epochs'] = 5
hyperparameters['models']['MultimodalTextModel']['search_space']['optimization.lr'] = ag.core.space.Categorical(5E-5)
predictor = TextPredictor(path='ag_text_customize2', eval_metric='acc', label='label')
predictor.fit(train_data, hyperparameters=hyperparameters, time_limit=30, seed=123)
.. parsed-literal::
:class: output
Problem Type="binary"
Column Types:
- "sentence": text
- "label": categorical
The GluonNLP V0 backend is used. We will use 8 cpus and 1 gpus to train each trial.
.. parsed-literal::
:class: output
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize2/task0/training.log
.. parsed-literal::
:class: output
Fitting and transforming the train data...
Done! Preprocessor saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize2/task0/preprocessor.pkl
Process dev set...
Done!
Max length for chunking text: 64, Stochastic chunk: Train-False/Test-False, Test #repeat: 1.
#Total Params/Fixed Params=13516290/0
Using gradient accumulation. Global batch size = 128
Local training results will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize2/task0/results_local.jsonl.
[Iter 1/35, Epoch 0] train loss=7.38e-01, gnorm=5.64e+00, lr=1.25e-05, #samples processed=128, #sample per second=325.62. ETA=0.22min
[Iter 2/35, Epoch 0] train loss=7.09e-01, gnorm=3.53e+00, lr=2.50e-05, #samples processed=128, #sample per second=534.28. ETA=0.17min
[Iter 2/35, Epoch 0] valid f1=7.1864e-01, mcc=1.6217e-01, roc_auc=5.9899e-01, accuracy=5.8500e-01, log_loss=7.1998e-01, time spent=0.129s, total time spent=0.01min. Find new best=True, Find new top-3=True
[Iter 3/35, Epoch 0] train loss=7.33e-01, gnorm=4.52e+00, lr=3.75e-05, #samples processed=128, #sample per second=255.76. ETA=0.20min
[Iter 4/35, Epoch 0] train loss=7.25e-01, gnorm=4.32e+00, lr=5.00e-05, #samples processed=128, #sample per second=513.21. ETA=0.18min
[Iter 4/35, Epoch 0] valid f1=6.9388e-01, mcc=2.3067e-01, roc_auc=6.2232e-01, accuracy=6.2500e-01, log_loss=6.6960e-01, time spent=0.127s, total time spent=0.03min. Find new best=True, Find new top-3=True
[Iter 5/35, Epoch 0] train loss=6.76e-01, gnorm=3.11e+00, lr=4.84e-05, #samples processed=128, #sample per second=238.39. ETA=0.19min
[Iter 6/35, Epoch 0] train loss=6.90e-01, gnorm=3.23e+00, lr=4.68e-05, #samples processed=128, #sample per second=567.84. ETA=0.17min
[Iter 6/35, Epoch 0] valid f1=6.2201e-01, mcc=2.1207e-01, roc_auc=6.3576e-01, accuracy=6.0500e-01, log_loss=6.7987e-01, time spent=0.126s, total time spent=0.04min. Find new best=False, Find new top-3=True
[Iter 7/35, Epoch 0] train loss=7.69e-01, gnorm=5.90e+00, lr=4.52e-05, #samples processed=128, #sample per second=305.01. ETA=0.17min
[Iter 8/35, Epoch 1] train loss=6.64e-01, gnorm=3.26e+00, lr=4.35e-05, #samples processed=128, #sample per second=538.04. ETA=0.16min
[Iter 8/35, Epoch 1] valid f1=7.3188e-01, mcc=2.5953e-01, roc_auc=6.7030e-01, accuracy=6.3000e-01, log_loss=6.7947e-01, time spent=0.125s, total time spent=0.05min. Find new best=True, Find new top-3=True
[Iter 9/35, Epoch 1] train loss=6.58e-01, gnorm=3.86e+00, lr=4.19e-05, #samples processed=128, #sample per second=210.07. ETA=0.16min
[Iter 10/35, Epoch 1] train loss=7.45e-01, gnorm=5.21e+00, lr=4.03e-05, #samples processed=128, #sample per second=556.55. ETA=0.15min
[Iter 10/35, Epoch 1] valid f1=7.2464e-01, mcc=2.3278e-01, roc_auc=6.9101e-01, accuracy=6.2000e-01, log_loss=6.6419e-01, time spent=0.126s, total time spent=0.06min. Find new best=False, Find new top-3=True
[Iter 11/35, Epoch 1] train loss=6.23e-01, gnorm=2.82e+00, lr=3.87e-05, #samples processed=128, #sample per second=268.45. ETA=0.15min
[Iter 12/35, Epoch 1] train loss=6.25e-01, gnorm=2.77e+00, lr=3.71e-05, #samples processed=128, #sample per second=545.12. ETA=0.14min
[Iter 12/35, Epoch 1] valid f1=7.1937e-01, mcc=2.7496e-01, roc_auc=7.0162e-01, accuracy=6.4500e-01, log_loss=6.3211e-01, time spent=0.127s, total time spent=0.08min. Find new best=True, Find new top-3=True
[Iter 13/35, Epoch 1] train loss=6.06e-01, gnorm=2.24e+00, lr=3.55e-05, #samples processed=128, #sample per second=211.79. ETA=0.14min
[Iter 14/35, Epoch 1] train loss=6.30e-01, gnorm=3.48e+00, lr=3.39e-05, #samples processed=128, #sample per second=555.36. ETA=0.13min
[Iter 14/35, Epoch 1] valid f1=7.2374e-01, mcc=2.7669e-01, roc_auc=7.1758e-01, accuracy=6.4500e-01, log_loss=6.2688e-01, time spent=0.129s, total time spent=0.09min. Find new best=True, Find new top-3=True
[Iter 15/35, Epoch 2] train loss=5.46e-01, gnorm=2.36e+00, lr=3.23e-05, #samples processed=128, #sample per second=201.16. ETA=0.13min
[Iter 16/35, Epoch 2] train loss=6.22e-01, gnorm=2.51e+00, lr=3.06e-05, #samples processed=128, #sample per second=516.39. ETA=0.12min
[Iter 16/35, Epoch 2] valid f1=7.2000e-01, mcc=2.8511e-01, roc_auc=7.2636e-01, accuracy=6.5000e-01, log_loss=6.1622e-01, time spent=0.127s, total time spent=0.11min. Find new best=True, Find new top-3=True
[Iter 17/35, Epoch 2] train loss=6.12e-01, gnorm=2.20e+00, lr=2.90e-05, #samples processed=128, #sample per second=215.48. ETA=0.12min
[Iter 18/35, Epoch 2] train loss=6.10e-01, gnorm=2.45e+00, lr=2.74e-05, #samples processed=128, #sample per second=528.05. ETA=0.11min
[Iter 18/35, Epoch 2] valid f1=7.3846e-01, mcc=3.1334e-01, roc_auc=7.4465e-01, accuracy=6.6000e-01, log_loss=6.1371e-01, time spent=0.127s, total time spent=0.12min. Find new best=True, Find new top-3=True
[Iter 19/35, Epoch 2] train loss=5.73e-01, gnorm=2.84e+00, lr=2.58e-05, #samples processed=128, #sample per second=204.56. ETA=0.11min
[Iter 20/35, Epoch 2] train loss=5.42e-01, gnorm=2.28e+00, lr=2.42e-05, #samples processed=128, #sample per second=544.39. ETA=0.10min
[Iter 20/35, Epoch 2] valid f1=7.5200e-01, mcc=3.7284e-01, roc_auc=7.5737e-01, accuracy=6.9000e-01, log_loss=5.9792e-01, time spent=0.127s, total time spent=0.14min. Find new best=True, Find new top-3=True
[Iter 21/35, Epoch 2] train loss=6.10e-01, gnorm=2.50e+00, lr=2.26e-05, #samples processed=128, #sample per second=210.37. ETA=0.09min
[Iter 22/35, Epoch 3] train loss=5.68e-01, gnorm=2.51e+00, lr=2.10e-05, #samples processed=128, #sample per second=557.78. ETA=0.08min
[Iter 22/35, Epoch 3] valid f1=7.5502e-01, mcc=3.8310e-01, roc_auc=7.7061e-01, accuracy=6.9500e-01, log_loss=5.8858e-01, time spent=0.128s, total time spent=0.15min. Find new best=True, Find new top-3=True
[Iter 23/35, Epoch 3] train loss=5.42e-01, gnorm=2.45e+00, lr=1.94e-05, #samples processed=128, #sample per second=213.85. ETA=0.08min
[Iter 24/35, Epoch 3] train loss=5.36e-01, gnorm=2.65e+00, lr=1.77e-05, #samples processed=128, #sample per second=533.82. ETA=0.07min
[Iter 24/35, Epoch 3] valid f1=7.6667e-01, mcc=4.3196e-01, roc_auc=7.7737e-01, accuracy=7.2000e-01, log_loss=5.7161e-01, time spent=0.126s, total time spent=0.16min. Find new best=True, Find new top-3=True
[Iter 25/35, Epoch 3] train loss=5.54e-01, gnorm=2.44e+00, lr=1.61e-05, #samples processed=128, #sample per second=220.04. ETA=0.07min
[Iter 26/35, Epoch 3] train loss=5.33e-01, gnorm=2.53e+00, lr=1.45e-05, #samples processed=128, #sample per second=546.63. ETA=0.06min
[Iter 26/35, Epoch 3] valid f1=7.6793e-01, mcc=4.4153e-01, roc_auc=7.8424e-01, accuracy=7.2500e-01, log_loss=5.6036e-01, time spent=0.131s, total time spent=0.18min. Find new best=True, Find new top-3=True
[Iter 27/35, Epoch 3] train loss=5.59e-01, gnorm=3.12e+00, lr=1.29e-05, #samples processed=128, #sample per second=202.68. ETA=0.05min
[Iter 28/35, Epoch 3] train loss=5.20e-01, gnorm=2.68e+00, lr=1.13e-05, #samples processed=128, #sample per second=545.43. ETA=0.05min
[Iter 28/35, Epoch 3] valid f1=7.7824e-01, mcc=4.6312e-01, roc_auc=7.8808e-01, accuracy=7.3500e-01, log_loss=5.5665e-01, time spent=0.127s, total time spent=0.19min. Find new best=True, Find new top-3=True
[Iter 29/35, Epoch 4] train loss=5.12e-01, gnorm=2.26e+00, lr=9.68e-06, #samples processed=128, #sample per second=201.05. ETA=0.04min
[Iter 30/35, Epoch 4] train loss=5.04e-01, gnorm=2.49e+00, lr=8.06e-06, #samples processed=128, #sample per second=544.20. ETA=0.03min
[Iter 30/35, Epoch 4] valid f1=7.7178e-01, mcc=4.4293e-01, roc_auc=7.9283e-01, accuracy=7.2500e-01, log_loss=5.5510e-01, time spent=0.127s, total time spent=0.20min. Find new best=False, Find new top-3=True
[Iter 31/35, Epoch 4] train loss=5.13e-01, gnorm=2.46e+00, lr=6.45e-06, #samples processed=128, #sample per second=260.03. ETA=0.03min
[Iter 32/35, Epoch 4] train loss=5.24e-01, gnorm=2.91e+00, lr=4.84e-06, #samples processed=128, #sample per second=540.03. ETA=0.02min
[Iter 32/35, Epoch 4] valid f1=7.7551e-01, mcc=4.4525e-01, roc_auc=7.9677e-01, accuracy=7.2500e-01, log_loss=5.5575e-01, time spent=0.127s, total time spent=0.22min. Find new best=False, Find new top-3=True
[Iter 33/35, Epoch 4] train loss=4.92e-01, gnorm=2.86e+00, lr=3.23e-06, #samples processed=128, #sample per second=254.93. ETA=0.01min
[Iter 34/35, Epoch 4] train loss=5.26e-01, gnorm=3.08e+00, lr=1.61e-06, #samples processed=128, #sample per second=535.52. ETA=0.01min
[Iter 34/35, Epoch 4] valid f1=7.7733e-01, mcc=4.4679e-01, roc_auc=7.9778e-01, accuracy=7.2500e-01, log_loss=5.5710e-01, time spent=0.127s, total time spent=0.23min. Find new best=False, Find new top-3=True
[Iter 35/35, Epoch 4] train loss=5.12e-01, gnorm=3.91e+00, lr=0.00e+00, #samples processed=128, #sample per second=268.39. ETA=0.00min
[Iter 35/35, Epoch 4] valid f1=7.7733e-01, mcc=4.4679e-01, roc_auc=7.9778e-01, accuracy=7.2500e-01, log_loss=5.5710e-01, time spent=0.127s, total time spent=0.24min. Find new best=False, Find new top-3=True
Training completed. Auto-saving to "ag_text_customize2/". For loading the model, you can use `predictor = TextPredictor.load("ag_text_customize2/")`
.. parsed-literal::
:class: output
Register Your Own Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can also register your custom hyperparameter settings as new presets
in ``ag_text_presets``. Below, the ``electra_small_fuse_late_train5``
preset uses ELECTRA-small as its backbone and trains for 5 epochs with a
weight decay of 1E-2.
.. code:: python
@ag_text_presets.register()
def electra_small_fuse_late_train5():
hyperparameters = ag_text_presets.create('electra_small_fuse_late')
hyperparameters['models']['MultimodalTextModel']['search_space']['optimization.num_train_epochs'] = 5
hyperparameters['models']['MultimodalTextModel']['search_space']['optimization.wd'] = 1E-2
return hyperparameters
predictor = TextPredictor(path='ag_text_customize3', eval_metric='acc', label='label')
predictor.fit(train_data, presets='electra_small_fuse_late_train5', time_limit=60, seed=123)
.. parsed-literal::
:class: output
Problem Type="binary"
Column Types:
- "sentence": text
- "label": categorical
The GluonNLP V0 backend is used. We will use 8 cpus and 1 gpus to train each trial.
.. parsed-literal::
:class: output
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize3/task0/training.log
.. parsed-literal::
:class: output
Fitting and transforming the train data...
Done! Preprocessor saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize3/task0/preprocessor.pkl
Process dev set...
Done!
Max length for chunking text: 64, Stochastic chunk: Train-False/Test-False, Test #repeat: 1.
#Total Params/Fixed Params=13516290/0
Using gradient accumulation. Global batch size = 128
Local training results will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_customize3/task0/results_local.jsonl.
[Iter 1/35, Epoch 0] train loss=7.43e-01, gnorm=4.44e+00, lr=2.50e-05, #samples processed=128, #sample per second=315.79. ETA=0.23min
[Iter 2/35, Epoch 0] train loss=7.51e-01, gnorm=4.15e+00, lr=5.00e-05, #samples processed=128, #sample per second=538.19. ETA=0.18min
[Iter 2/35, Epoch 0] valid f1=6.9014e-01, mcc=6.8735e-02, roc_auc=5.6475e-01, accuracy=5.6000e-01, log_loss=6.9261e-01, time spent=0.127s, total time spent=0.01min. Find new best=True, Find new top-3=True
[Iter 3/35, Epoch 0] train loss=7.77e-01, gnorm=3.66e+00, lr=7.50e-05, #samples processed=128, #sample per second=270.55. ETA=0.20min
[Iter 4/35, Epoch 0] train loss=7.00e-01, gnorm=4.18e+00, lr=1.00e-04, #samples processed=128, #sample per second=534.95. ETA=0.18min
[Iter 4/35, Epoch 0] valid f1=5.8511e-01, mcc=2.4933e-01, roc_auc=6.4071e-01, accuracy=6.1000e-01, log_loss=6.7574e-01, time spent=0.127s, total time spent=0.03min. Find new best=True, Find new top-3=True
[Iter 5/35, Epoch 0] train loss=7.25e-01, gnorm=7.36e+00, lr=9.68e-05, #samples processed=128, #sample per second=231.78. ETA=0.19min
[Iter 6/35, Epoch 0] train loss=6.79e-01, gnorm=4.67e+00, lr=9.35e-05, #samples processed=128, #sample per second=527.42. ETA=0.17min
[Iter 6/35, Epoch 0] valid f1=7.3103e-01, mcc=2.3451e-01, roc_auc=6.6980e-01, accuracy=6.1000e-01, log_loss=8.2085e-01, time spent=0.130s, total time spent=0.04min. Find new best=True, Find new top-3=True
[Iter 7/35, Epoch 0] train loss=7.93e-01, gnorm=7.22e+00, lr=9.03e-05, #samples processed=128, #sample per second=229.63. ETA=0.18min
[Iter 8/35, Epoch 1] train loss=6.56e-01, gnorm=5.59e+00, lr=8.71e-05, #samples processed=128, #sample per second=502.97. ETA=0.17min
[Iter 8/35, Epoch 1] valid f1=7.1595e-01, mcc=2.5392e-01, roc_auc=6.9485e-01, accuracy=6.3500e-01, log_loss=6.7277e-01, time spent=0.131s, total time spent=0.06min. Find new best=True, Find new top-3=True
[Iter 9/35, Epoch 1] train loss=5.95e-01, gnorm=2.42e+00, lr=8.39e-05, #samples processed=128, #sample per second=210.27. ETA=0.17min
[Iter 10/35, Epoch 1] train loss=6.21e-01, gnorm=3.11e+00, lr=8.06e-05, #samples processed=128, #sample per second=520.70. ETA=0.16min
[Iter 10/35, Epoch 1] valid f1=6.7593e-01, mcc=2.9601e-01, roc_auc=7.3495e-01, accuracy=6.5000e-01, log_loss=6.0251e-01, time spent=0.129s, total time spent=0.07min. Find new best=True, Find new top-3=True
[Iter 11/35, Epoch 1] train loss=6.23e-01, gnorm=3.29e+00, lr=7.74e-05, #samples processed=128, #sample per second=207.65. ETA=0.16min
[Iter 12/35, Epoch 1] train loss=6.38e-01, gnorm=2.13e+00, lr=7.42e-05, #samples processed=128, #sample per second=534.21. ETA=0.15min
[Iter 12/35, Epoch 1] valid f1=7.2542e-01, mcc=2.0033e-01, roc_auc=7.9687e-01, accuracy=5.9500e-01, log_loss=6.2050e-01, time spent=0.126s, total time spent=0.08min. Find new best=False, Find new top-3=False
[Iter 13/35, Epoch 1] train loss=5.41e-01, gnorm=1.97e+00, lr=7.10e-05, #samples processed=128, #sample per second=346.21. ETA=0.14min
[Iter 14/35, Epoch 1] train loss=6.22e-01, gnorm=4.36e+00, lr=6.77e-05, #samples processed=128, #sample per second=544.83. ETA=0.13min
[Iter 14/35, Epoch 1] valid f1=7.5362e-01, mcc=3.3980e-01, roc_auc=8.1354e-01, accuracy=6.6000e-01, log_loss=5.7949e-01, time spent=0.126s, total time spent=0.09min. Find new best=True, Find new top-3=True
[Iter 15/35, Epoch 2] train loss=5.82e-01, gnorm=2.01e+00, lr=6.45e-05, #samples processed=128, #sample per second=212.22. ETA=0.13min
[Iter 16/35, Epoch 2] train loss=5.01e-01, gnorm=2.16e+00, lr=6.13e-05, #samples processed=128, #sample per second=539.88. ETA=0.12min
[Iter 16/35, Epoch 2] valid f1=7.8140e-01, mcc=5.2831e-01, roc_auc=8.2646e-01, accuracy=7.6500e-01, log_loss=5.2474e-01, time spent=0.129s, total time spent=0.11min. Find new best=True, Find new top-3=True
[Iter 17/35, Epoch 2] train loss=6.06e-01, gnorm=3.43e+00, lr=5.81e-05, #samples processed=128, #sample per second=206.98. ETA=0.12min
[Iter 18/35, Epoch 2] train loss=4.98e-01, gnorm=4.19e+00, lr=5.48e-05, #samples processed=128, #sample per second=510.08. ETA=0.11min
[Iter 18/35, Epoch 2] valid f1=7.8543e-01, mcc=4.6842e-01, roc_auc=8.3869e-01, accuracy=7.3500e-01, log_loss=5.2525e-01, time spent=0.126s, total time spent=0.12min. Find new best=False, Find new top-3=True
[Iter 19/35, Epoch 2] train loss=4.74e-01, gnorm=3.24e+00, lr=5.16e-05, #samples processed=128, #sample per second=276.97. ETA=0.10min
[Iter 20/35, Epoch 2] train loss=4.72e-01, gnorm=5.40e+00, lr=4.84e-05, #samples processed=128, #sample per second=525.27. ETA=0.10min
[Iter 20/35, Epoch 2] valid f1=7.9508e-01, mcc=4.9802e-01, roc_auc=8.4697e-01, accuracy=7.5000e-01, log_loss=5.1752e-01, time spent=0.127s, total time spent=0.13min. Find new best=False, Find new top-3=True
[Iter 21/35, Epoch 2] train loss=3.85e-01, gnorm=2.74e+00, lr=4.52e-05, #samples processed=128, #sample per second=253.74. ETA=0.09min
[Iter 22/35, Epoch 3] train loss=4.00e-01, gnorm=2.41e+00, lr=4.19e-05, #samples processed=128, #sample per second=542.30. ETA=0.08min
[Iter 22/35, Epoch 3] valid f1=8.2727e-01, mcc=6.1616e-01, roc_auc=8.5768e-01, accuracy=8.1000e-01, log_loss=4.7117e-01, time spent=0.125s, total time spent=0.15min. Find new best=True, Find new top-3=True
[Iter 23/35, Epoch 3] train loss=4.46e-01, gnorm=2.88e+00, lr=3.87e-05, #samples processed=128, #sample per second=225.72. ETA=0.08min
[Iter 24/35, Epoch 3] train loss=4.13e-01, gnorm=3.04e+00, lr=3.55e-05, #samples processed=128, #sample per second=553.27. ETA=0.07min
[Iter 24/35, Epoch 3] valid f1=8.3105e-01, mcc=6.2667e-01, roc_auc=8.6313e-01, accuracy=8.1500e-01, log_loss=4.6168e-01, time spent=0.126s, total time spent=0.16min. Find new best=True, Find new top-3=True
[Iter 25/35, Epoch 3] train loss=4.71e-01, gnorm=5.28e+00, lr=3.23e-05, #samples processed=128, #sample per second=218.73. ETA=0.07min
[Iter 26/35, Epoch 3] train loss=3.73e-01, gnorm=2.99e+00, lr=2.90e-05, #samples processed=128, #sample per second=543.94. ETA=0.06min
[Iter 26/35, Epoch 3] valid f1=8.3117e-01, mcc=6.0547e-01, roc_auc=8.6848e-01, accuracy=8.0500e-01, log_loss=4.7462e-01, time spent=0.128s, total time spent=0.17min. Find new best=False, Find new top-3=True
[Iter 27/35, Epoch 3] train loss=3.06e-01, gnorm=2.78e+00, lr=2.58e-05, #samples processed=128, #sample per second=242.94. ETA=0.05min
[Iter 28/35, Epoch 3] train loss=3.48e-01, gnorm=3.07e+00, lr=2.26e-05, #samples processed=128, #sample per second=544.77. ETA=0.05min
[Iter 28/35, Epoch 3] valid f1=8.3051e-01, mcc=5.9744e-01, roc_auc=8.7020e-01, accuracy=8.0000e-01, log_loss=4.8971e-01, time spent=0.126s, total time spent=0.18min. Find new best=False, Find new top-3=False
[Iter 29/35, Epoch 4] train loss=4.12e-01, gnorm=7.44e+00, lr=1.94e-05, #samples processed=128, #sample per second=352.73. ETA=0.04min
[Iter 30/35, Epoch 4] train loss=5.11e-01, gnorm=5.52e+00, lr=1.61e-05, #samples processed=128, #sample per second=531.14. ETA=0.03min
[Iter 30/35, Epoch 4] valid f1=8.3983e-01, mcc=6.2603e-01, roc_auc=8.7182e-01, accuracy=8.1500e-01, log_loss=4.7299e-01, time spent=0.126s, total time spent=0.20min. Find new best=True, Find new top-3=True
[Iter 31/35, Epoch 4] train loss=3.66e-01, gnorm=3.34e+00, lr=1.29e-05, #samples processed=128, #sample per second=210.37. ETA=0.03min
[Iter 32/35, Epoch 4] train loss=4.19e-01, gnorm=3.78e+00, lr=9.68e-06, #samples processed=128, #sample per second=540.78. ETA=0.02min
[Iter 32/35, Epoch 4] valid f1=8.3333e-01, mcc=6.1508e-01, roc_auc=8.7263e-01, accuracy=8.1000e-01, log_loss=4.6262e-01, time spent=0.127s, total time spent=0.21min. Find new best=False, Find new top-3=True
[Iter 33/35, Epoch 4] train loss=3.98e-01, gnorm=3.59e+00, lr=6.45e-06, #samples processed=128, #sample per second=261.92. ETA=0.01min
[Iter 34/35, Epoch 4] train loss=3.74e-01, gnorm=4.93e+00, lr=3.23e-06, #samples processed=128, #sample per second=542.66. ETA=0.01min
[Iter 34/35, Epoch 4] valid f1=8.3333e-01, mcc=6.1508e-01, roc_auc=8.7303e-01, accuracy=8.1000e-01, log_loss=4.6039e-01, time spent=0.127s, total time spent=0.22min. Find new best=False, Find new top-3=True
[Iter 35/35, Epoch 4] train loss=3.44e-01, gnorm=3.90e+00, lr=0.00e+00, #samples processed=128, #sample per second=256.03. ETA=0.00min
[Iter 35/35, Epoch 4] valid f1=8.3333e-01, mcc=6.1508e-01, roc_auc=8.7303e-01, accuracy=8.1000e-01, log_loss=4.6039e-01, time spent=0.127s, total time spent=0.23min. Find new best=False, Find new top-3=True
Training completed. Auto-saving to "ag_text_customize3/". For loading the model, you can use `predictor = TextPredictor.load("ag_text_customize3/")`
.. parsed-literal::
:class: output
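As the log above notes, the fitted predictor is automatically saved to
its ``path`` and can be reloaded later without retraining:

.. code:: python

    # Reload the auto-saved predictor (directory name as printed in the log above).
    loaded_predictor = TextPredictor.load('ag_text_customize3')
    loaded_predictor.evaluate(test_data, metrics=['acc'])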
HPO over a Customized Search Space with Random Search
------------------------------------------------------
To control which hyperparameter values are considered during ``fit()``,
we specify the ``hyperparameters`` argument. Rather than specifying a
particular fixed value for a hyperparameter, we can specify a space of
values to search over via ``ag.core.space``. We can also specify which
HPO method to use for the search via ``search_strategy``. By default,
random search is used as the searcher.
In this example, we search for good values of the following
hyperparameters:
- the warmup portion of the learning rate schedule
- the number of hidden units in the final MLP layer that maps aggregated
  features to the output prediction
- the learning rate
- the weight decay
.. code:: python
def electra_small_basic_demo_hpo():
hparams = ag_text_presets.create('electra_small_fuse_late')
search_space = hparams['models']['MultimodalTextModel']['search_space']
search_space['optimization.per_device_batch_size'] = 8
search_space['model.network.agg_net.mid_units'] = ag.core.space.Int(32, 128)
search_space['optimization.warmup_portion'] = ag.core.space.Categorical(0.1, 0.2)
search_space['optimization.lr'] = ag.core.space.Real(1E-5, 2E-4)
search_space['optimization.wd'] = ag.core.space.Categorical(1E-4, 1E-3, 1E-2)
search_space['optimization.num_train_epochs'] = 5
return hparams
We can now call ``fit()`` with hyperparameter tuning over our custom
search space. Below, ``num_trials`` controls the maximum number of
different hyperparameter configurations for which AutoGluon will train
models (4 models are trained under different hyperparameter
configurations in this case). To achieve good performance in your
applications, you should use larger values of ``num_trials``, which may
identify superior hyperparameter values but will require longer
runtimes.
.. code:: python
predictor_sst_rs = TextPredictor(path='ag_text_sst_random_search', label='label', eval_metric='acc')
predictor_sst_rs.set_verbosity(0)
predictor_sst_rs.fit(train_data,
hyperparameters=electra_small_basic_demo_hpo(),
time_limit=60 * 2,
num_trials=4,
seed=123)
.. parsed-literal::
:class: output
0%| | 0/4 [00:00, ?it/s]
.. parsed-literal::
:class: output
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_random_search/task0/training.log
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_random_search/task1/training.log
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_random_search/task2/training.log
All Logs will be saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-text-v3/docs/_build/eval/tutorials/text_prediction/ag_text_sst_random_search/task3/training.log
.. parsed-literal::
:class: output
We can again evaluate our model's performance on separate test data.
.. code:: python
test_score = predictor_sst_rs.evaluate(test_data, metrics=['acc', 'f1'])
print('Best Config = {}'.format(predictor_sst_rs.results['best_config']))
print('Total Time = {}s'.format(predictor_sst_rs.results['total_time']))
print('Accuracy = {:.2f}%'.format(test_score['acc'] * 100))
print('F1 = {:.2f}%'.format(test_score['f1'] * 100))
.. parsed-literal::
:class: output
Best Config = {'search_space▁model.network.agg_net.mid_units': 64, 'search_space▁optimization.lr': 0.00019631030903737018, 'search_space▁optimization.warmup_portion▁choice': 1, 'search_space▁optimization.wd▁choice': 0}
Total Time = 56.6493763923645s
Accuracy = 81.31%
F1 = 81.24%
You can also try setting
``hyperparameters['tune_kwargs']['search_strategy']`` to ``'random'``,
``'bayesopt'``, or ``'bayesopt_hyperband'`` as alternative HPO methods,
although these are currently experimental.
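As an illustration, here is a minimal sketch that reuses the search
space defined above and switches the searcher to Bayesian Optimization
(the path and variable names below are hypothetical, and these
strategies are experimental, so results may vary):

.. code:: python

    # Reuse the custom search space and switch the HPO method.
    hyperparameters = electra_small_basic_demo_hpo()
    hyperparameters['tune_kwargs']['search_strategy'] = 'bayesopt'  # or 'random', 'bayesopt_hyperband'

    predictor_sst_bo = TextPredictor(path='ag_text_sst_bayesopt', label='label', eval_metric='acc')
    predictor_sst_bo.fit(train_data,
                         hyperparameters=hyperparameters,
                         time_limit=60 * 2,
                         num_trials=4,
                         seed=123)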