.. _sec_custom_advancedhpo:
Getting started with Advanced HPO Algorithms
============================================
This tutorial provides a complete example of how to use AutoGluon's
state-of-the-art hyperparameter optimization (HPO) algorithms to tune a
simple Multi-Layer Perceptron (MLP), the most basic type of neural
network.
Loading libraries
-----------------
.. code:: python
# Basic utils for folder manipulations etc
import time
import multiprocessing # to count the number of CPUs available
# External tools to load and process data
import numpy as np
import pandas as pd
# MXNet (NeuralNets)
import mxnet as mx
from mxnet import gluon, autograd
from mxnet.gluon import nn
# AutoGluon and HPO tools
import autogluon.core as ag
from autogluon.mxnet.utils import load_and_split_openml_data
Check the version of MXNet; you should be fine with any version >= 1.5.
.. code:: python
mx.__version__
.. parsed-literal::
:class: output
'1.7.0'
You can also check the version of AutoGluon (and the specific commit)
to make sure it matches what you expect.
.. code:: python
import autogluon.core.version
ag.version.__version__
.. parsed-literal::
:class: output
'0.1.1b20210305'
Hyperparameter Optimization of a 2-layer MLP
--------------------------------------------
Setting up the context
~~~~~~~~~~~~~~~~~~~~~~
Here we declare a few "environment variables" that set the context for
what we are doing.
.. code:: python
OPENML_TASK_ID = 6 # describes the problem we will tackle
RATIO_TRAIN_VALID = 0.33 # split of the training data used for validation
RESOURCE_ATTR_NAME = 'epoch' # how do we measure resources (will become clearer further)
REWARD_ATTR_NAME = 'objective' # how do we measure performance (will become clearer further)
NUM_CPUS = multiprocessing.cpu_count()
Preparing the data
~~~~~~~~~~~~~~~~~~
We will use a multi-way classification task from OpenML. Data
preparation includes:

- imputing missing values, using the 'mean' strategy of
  ``sklearn.impute.SimpleImputer``
- splitting the training set into training and validation parts
- standardizing inputs to mean 0, variance 1
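The helper ``load_and_split_openml_data`` used below performs these steps
for us. A minimal sketch of the same preparation with scikit-learn (the
function name ``prepare_data`` and the toy data are illustrative, not
AutoGluon's implementation) could look like:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def prepare_data(X, y, ratio_valid=0.33, seed=0):
    # Impute missing values with the per-column mean
    X = SimpleImputer(strategy='mean').fit_transform(X)
    # Split into training and validation parts
    X_train, X_valid, y_train, y_valid = train_test_split(
        X, y, test_size=ratio_valid, random_state=seed)
    # Standardize to mean 0, variance 1 (statistics fit on training data only)
    scaler = StandardScaler().fit(X_train)
    return scaler.transform(X_train), scaler.transform(X_valid), y_train, y_valid

# Toy data with one missing entry, for illustration only
X = np.array([[1.0, 2.0], [np.nan, 0.0], [3.0, 4.0], [5.0, 6.0]] * 5)
y = np.array([0, 1, 0, 1] * 5)
X_train, X_valid, y_train, y_valid = prepare_data(X, y)
print(X_train.mean(axis=0).round(6))  # columns are centered after scaling
```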
.. code:: python
X_train, X_valid, y_train, y_valid, n_classes = load_and_split_openml_data(
OPENML_TASK_ID, RATIO_TRAIN_VALID, download_from_openml=False)
n_classes
.. parsed-literal::
:class: output
100%|██████████| 704/704 [00:00<00:00, 56064.21KB/s]
100%|██████████| 2521/2521 [00:00<00:00, 41820.94KB/s]
3KB [00:00, 3791.18KB/s]
8KB [00:00, 10058.28KB/s]
15KB [00:00, 16926.17KB/s]
2998KB [00:00, 31276.17KB/s]
881KB [00:00, 5595.18KB/s]
3KB [00:00, 3993.31KB/s]
.. parsed-literal::
:class: output
26
The problem has 26 classes.
Declaring a model and specifying a hyperparameter space with AutoGluon
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We use a two-layer MLP, where we optimize over:

- the number of units in the first layer
- the number of units in the second layer
- the dropout rate after each layer
- the learning rate
- the batch size
- the scale of the weight initializers of the two layers

The ``@ag.args`` decorator allows us to specify the space we will
optimize over; this matches the `ConfigSpace `__ syntax.
The body of the function ``run_mlp_openml`` is pretty simple:

- it reads the hyperparameters given via the decorator
- it defines a 2-layer MLP with dropout
- it declares a trainer with the 'adam' optimizer and the provided
  learning rate
- it trains the network for a number of epochs (most of this is
  boilerplate ``mxnet`` code)
- the ``reporter`` at the end is used to report the training history
  back to the hyperparameter optimization
**Note**: The number of epochs and the hyperparameter space are reduced
to keep this experiment short.
.. code:: python
@ag.args(n_units_1=ag.space.Int(lower=16, upper=128),
n_units_2=ag.space.Int(lower=16, upper=128),
dropout_1=ag.space.Real(lower=0, upper=.75),
dropout_2=ag.space.Real(lower=0, upper=.75),
learning_rate=ag.space.Real(lower=1e-6, upper=1, log=True),
batch_size=ag.space.Int(lower=8, upper=128),
scale_1=ag.space.Real(lower=0.001, upper=10, log=True),
scale_2=ag.space.Real(lower=0.001, upper=10, log=True),
epochs=9)
def run_mlp_openml(args, reporter, **kwargs):
# Time stamp for elapsed_time
ts_start = time.time()
# Unwrap hyperparameters
n_units_1 = args.n_units_1
n_units_2 = args.n_units_2
dropout_1 = args.dropout_1
dropout_2 = args.dropout_2
scale_1 = args.scale_1
scale_2 = args.scale_2
batch_size = args.batch_size
learning_rate = args.learning_rate
ctx = mx.cpu()
net = nn.Sequential()
with net.name_scope():
# Layer 1
net.add(nn.Dense(n_units_1, activation='relu',
weight_initializer=mx.initializer.Uniform(scale=scale_1)))
# Dropout
net.add(gluon.nn.Dropout(dropout_1))
# Layer 2
net.add(nn.Dense(n_units_2, activation='relu',
weight_initializer=mx.initializer.Uniform(scale=scale_2)))
# Dropout
net.add(gluon.nn.Dropout(dropout_2))
# Output
net.add(nn.Dense(n_classes))
net.initialize(ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'adam',
{'learning_rate': learning_rate})
for epoch in range(args.epochs):
ts_epoch = time.time()
train_iter = mx.io.NDArrayIter(
data={'data': X_train},
label={'label': y_train},
batch_size=batch_size,
shuffle=True)
valid_iter = mx.io.NDArrayIter(
data={'data': X_valid},
label={'label': y_valid},
batch_size=batch_size,
shuffle=False)
metric = mx.metric.Accuracy()
loss = gluon.loss.SoftmaxCrossEntropyLoss()
for batch in train_iter:
data = batch.data[0].as_in_context(ctx)
label = batch.label[0].as_in_context(ctx)
with autograd.record():
output = net(data)
L = loss(output, label)
L.backward()
trainer.step(data.shape[0])
metric.update([label], [output])
name, train_acc = metric.get()
metric = mx.metric.Accuracy()
for batch in valid_iter:
data = batch.data[0].as_in_context(ctx)
label = batch.label[0].as_in_context(ctx)
output = net(data)
metric.update([label], [output])
name, val_acc = metric.get()
print('Epoch %d ; Time: %f ; Training: %s=%f ; Validation: %s=%f' % (
epoch + 1, time.time() - ts_start, name, train_acc, name, val_acc))
ts_now = time.time()
eval_time = ts_now - ts_epoch
elapsed_time = ts_now - ts_start
# The resource reported back (as 'epoch') is the number of epochs
# done, starting at 1
reporter(
epoch=epoch + 1,
objective=float(val_acc),
eval_time=eval_time,
time_step=ts_now,
elapsed_time=elapsed_time)
**Note**: The annotation ``epochs=9`` specifies the maximum number of
epochs for training. It becomes available as ``args.epochs``.
Importantly, it is also processed by ``HyperbandScheduler`` below in
order to set its ``max_t`` attribute.
**Recommendation**: Whenever writing training code to be passed as
``train_fn`` to a scheduler, if this training code reports a resource
(or time) attribute, the corresponding maximum resource value should be
included in ``train_fn.args``:
- If the resource attribute (``time_attr`` of scheduler) in
``train_fn`` is ``epoch``, make sure to include ``epochs=XYZ`` in the
annotation. This allows the scheduler to read ``max_t`` from
``train_fn.args.epochs``. This case corresponds to our example here.
- If the resource attribute is something other than ``epoch``, you can
  instead include the annotation ``max_t=XYZ``, which allows the
  scheduler to read ``max_t`` from ``train_fn.args.max_t``.
Annotating the training function by the correct value for ``max_t``
simplifies scheduler creation (since ``max_t`` does not have to be
passed), and avoids inconsistencies between ``train_fn`` and the
scheduler.
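To make this lookup logic concrete, here is a simplified sketch (the
helper ``infer_max_t`` is illustrative, not the actual scheduler code) of
how a scheduler could infer ``max_t`` from the decorated training
function's arguments:

```python
def infer_max_t(train_fn_args, time_attr):
    """Sketch: how a scheduler might read max_t from train_fn.args."""
    # Case 1: resource attribute is 'epoch' -> read epochs=XYZ
    if time_attr == 'epoch' and 'epochs' in train_fn_args:
        return train_fn_args['epochs']
    # Case 2: any other resource attribute -> read max_t=XYZ
    if 'max_t' in train_fn_args:
        return train_fn_args['max_t']
    raise ValueError("max_t must be passed explicitly to the scheduler")

# Our example above: annotation epochs=9, scheduler time_attr='epoch'
assert infer_max_t({'epochs': 9, 'n_units_1': 64}, 'epoch') == 9
# A custom resource attribute: max_t=81 would be read instead
assert infer_max_t({'max_t': 81}, 'batches') == 81
```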
Running the Hyperparameter Optimization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the following schedulers:
- FIFO (``fifo``)
- Hyperband (either the stopping (``hbs``) or promotion (``hbp``)
variant)
And the following searchers:
- Random search (``random``)
- Gaussian process based Bayesian optimization (``bayesopt``)
- SkOpt Bayesian optimization (``skopt``; only with FIFO scheduler)
Note that the method known as (asynchronous) Hyperband uses random
search. Combining Hyperband scheduling with the ``bayesopt`` searcher
uses a novel method called asynchronous BOHB.

Pick the combination you are interested in. The full experiment takes
around 120 seconds (see the ``time_out`` parameter), so running
everything with multiple repetitions can take a fair bit of time. In
real applications, you will want to choose a larger ``time_out`` in
order to obtain good performance.
.. code:: python
SCHEDULER = "hbs"
SEARCHER = "bayesopt"
.. code:: python
def compute_error(df):
return 1.0 - df["objective"]
def compute_runtime(df, start_timestamp):
return df["time_step"] - start_timestamp
def process_training_history(task_dicts, start_timestamp,
runtime_fn=compute_runtime,
error_fn=compute_error):
task_dfs = []
for task_id in task_dicts:
task_df = pd.DataFrame(task_dicts[task_id])
task_df = task_df.assign(task_id=task_id,
runtime=runtime_fn(task_df, start_timestamp),
error=error_fn(task_df),
target_epoch=task_df["epoch"].iloc[-1])
task_dfs.append(task_df)
result = pd.concat(task_dfs, axis="index", ignore_index=True, sort=True)
# re-order by runtime
result = result.sort_values(by="runtime")
# calculate incumbent best -- the cumulative minimum of the error.
result = result.assign(best=result["error"].cummin())
return result
resources = dict(num_cpus=NUM_CPUS, num_gpus=0)
.. code:: python
search_options = {
'num_init_random': 2,
'debug_log': True}
if SCHEDULER == 'fifo':
myscheduler = ag.scheduler.FIFOScheduler(
run_mlp_openml,
resource=resources,
searcher=SEARCHER,
search_options=search_options,
time_out=120,
time_attr=RESOURCE_ATTR_NAME,
reward_attr=REWARD_ATTR_NAME)
else:
# This setup uses rung levels at 1, 3, 9 epochs. We just use a single
# bracket, so this is in fact successive halving (Hyperband would use
# more than 1 bracket).
# Also note that since we do not use the max_t argument of
# HyperbandScheduler, this value is obtained from train_fn.args.epochs.
sch_type = 'stopping' if SCHEDULER == 'hbs' else 'promotion'
myscheduler = ag.scheduler.HyperbandScheduler(
run_mlp_openml,
resource=resources,
searcher=SEARCHER,
search_options=search_options,
time_out=120,
time_attr=RESOURCE_ATTR_NAME,
reward_attr=REWARD_ATTR_NAME,
type=sch_type,
grace_period=1,
reduction_factor=3,
brackets=1)
# run tasks
myscheduler.run()
myscheduler.join_jobs()
results_df = process_training_history(
myscheduler.training_history.copy(),
start_timestamp=myscheduler._start_time)
.. parsed-literal::
:class: output
Epoch 1 ; Time: 0.482545 ; Training: accuracy=0.260079 ; Validation: accuracy=0.531250
Epoch 2 ; Time: 0.909615 ; Training: accuracy=0.496365 ; Validation: accuracy=0.655247
Epoch 3 ; Time: 1.354616 ; Training: accuracy=0.559650 ; Validation: accuracy=0.694686
Epoch 4 ; Time: 1.777528 ; Training: accuracy=0.588896 ; Validation: accuracy=0.711063
Epoch 5 ; Time: 2.193362 ; Training: accuracy=0.609385 ; Validation: accuracy=0.726939
Epoch 6 ; Time: 2.627868 ; Training: accuracy=0.628139 ; Validation: accuracy=0.745321
Epoch 7 ; Time: 3.048275 ; Training: accuracy=0.641193 ; Validation: accuracy=0.750501
Epoch 8 ; Time: 3.470435 ; Training: accuracy=0.653751 ; Validation: accuracy=0.763202
Epoch 9 ; Time: 3.891281 ; Training: accuracy=0.665482 ; Validation: accuracy=0.766043
Epoch 1 ; Time: 0.303029 ; Training: accuracy=0.397831 ; Validation: accuracy=0.682837
Epoch 2 ; Time: 0.547623 ; Training: accuracy=0.543385 ; Validation: accuracy=0.748076
Epoch 3 ; Time: 0.785948 ; Training: accuracy=0.591406 ; Validation: accuracy=0.780194
Epoch 4 ; Time: 1.029117 ; Training: accuracy=0.617652 ; Validation: accuracy=0.794246
Epoch 5 ; Time: 1.270646 ; Training: accuracy=0.632969 ; Validation: accuracy=0.801606
Epoch 6 ; Time: 1.524059 ; Training: accuracy=0.647872 ; Validation: accuracy=0.808966
Epoch 7 ; Time: 1.764688 ; Training: accuracy=0.654993 ; Validation: accuracy=0.815490
Epoch 8 ; Time: 2.010853 ; Training: accuracy=0.671634 ; Validation: accuracy=0.828036
Epoch 9 ; Time: 2.257933 ; Training: accuracy=0.667081 ; Validation: accuracy=0.829709
Epoch 1 ; Time: 0.397650 ; Training: accuracy=0.061781 ; Validation: accuracy=0.186175
Epoch 1 ; Time: 0.419245 ; Training: accuracy=0.292741 ; Validation: accuracy=0.572090
Epoch 1 ; Time: 0.993374 ; Training: accuracy=0.038806 ; Validation: accuracy=0.032660
Epoch 1 ; Time: 0.408552 ; Training: accuracy=0.132360 ; Validation: accuracy=0.145515
Epoch 1 ; Time: 0.307373 ; Training: accuracy=0.040165 ; Validation: accuracy=0.035537
Epoch 1 ; Time: 0.303210 ; Training: accuracy=0.085418 ; Validation: accuracy=0.127352
Epoch 1 ; Time: 0.299465 ; Training: accuracy=0.038405 ; Validation: accuracy=0.017453
Epoch 1 ; Time: 1.086341 ; Training: accuracy=0.413656 ; Validation: accuracy=0.627096
Epoch 2 ; Time: 2.210605 ; Training: accuracy=0.472406 ; Validation: accuracy=0.660966
Epoch 3 ; Time: 3.460688 ; Training: accuracy=0.506961 ; Validation: accuracy=0.688799
Epoch 1 ; Time: 4.005888 ; Training: accuracy=0.450514 ; Validation: accuracy=0.636440
Epoch 2 ; Time: 8.077382 ; Training: accuracy=0.537467 ; Validation: accuracy=0.688425
Epoch 3 ; Time: 12.041726 ; Training: accuracy=0.566230 ; Validation: accuracy=0.684556
Epoch 1 ; Time: 0.304248 ; Training: accuracy=0.254907 ; Validation: accuracy=0.541729
Epoch 1 ; Time: 0.368284 ; Training: accuracy=0.245586 ; Validation: accuracy=0.478363
Epoch 1 ; Time: 0.300972 ; Training: accuracy=0.404934 ; Validation: accuracy=0.708112
Epoch 2 ; Time: 0.537373 ; Training: accuracy=0.570477 ; Validation: accuracy=0.757812
Epoch 3 ; Time: 0.767954 ; Training: accuracy=0.607566 ; Validation: accuracy=0.775266
Epoch 4 ; Time: 0.994074 ; Training: accuracy=0.634539 ; Validation: accuracy=0.793717
Epoch 5 ; Time: 1.227305 ; Training: accuracy=0.651974 ; Validation: accuracy=0.818152
Epoch 6 ; Time: 1.452635 ; Training: accuracy=0.666365 ; Validation: accuracy=0.813165
Epoch 7 ; Time: 1.690346 ; Training: accuracy=0.671711 ; Validation: accuracy=0.832281
Epoch 8 ; Time: 1.940082 ; Training: accuracy=0.687911 ; Validation: accuracy=0.828624
Epoch 9 ; Time: 2.170427 ; Training: accuracy=0.701316 ; Validation: accuracy=0.832945
Epoch 1 ; Time: 0.316835 ; Training: accuracy=0.313079 ; Validation: accuracy=0.629630
Epoch 2 ; Time: 0.576798 ; Training: accuracy=0.443618 ; Validation: accuracy=0.701058
Epoch 3 ; Time: 0.842701 ; Training: accuracy=0.486442 ; Validation: accuracy=0.712798
Epoch 1 ; Time: 0.311446 ; Training: accuracy=0.411102 ; Validation: accuracy=0.640293
Epoch 2 ; Time: 0.550697 ; Training: accuracy=0.541447 ; Validation: accuracy=0.683012
Epoch 3 ; Time: 0.784446 ; Training: accuracy=0.556414 ; Validation: accuracy=0.657414
Epoch 1 ; Time: 0.301954 ; Training: accuracy=0.142270 ; Validation: accuracy=0.457281
Epoch 1 ; Time: 0.410496 ; Training: accuracy=0.520632 ; Validation: accuracy=0.765950
Epoch 2 ; Time: 0.757561 ; Training: accuracy=0.680063 ; Validation: accuracy=0.811261
Epoch 3 ; Time: 1.095346 ; Training: accuracy=0.718763 ; Validation: accuracy=0.829752
Epoch 4 ; Time: 1.437277 ; Training: accuracy=0.742826 ; Validation: accuracy=0.833083
Epoch 5 ; Time: 1.767010 ; Training: accuracy=0.757132 ; Validation: accuracy=0.864734
Epoch 6 ; Time: 2.108599 ; Training: accuracy=0.766228 ; Validation: accuracy=0.851241
Epoch 7 ; Time: 2.451191 ; Training: accuracy=0.775325 ; Validation: accuracy=0.865067
Epoch 8 ; Time: 2.787077 ; Training: accuracy=0.781030 ; Validation: accuracy=0.860903
Epoch 9 ; Time: 3.116224 ; Training: accuracy=0.785827 ; Validation: accuracy=0.872231
Epoch 1 ; Time: 0.382438 ; Training: accuracy=0.387011 ; Validation: accuracy=0.659507
Epoch 2 ; Time: 0.718951 ; Training: accuracy=0.552425 ; Validation: accuracy=0.739261
Epoch 3 ; Time: 1.044293 ; Training: accuracy=0.597290 ; Validation: accuracy=0.758075
Epoch 4 ; Time: 1.396677 ; Training: accuracy=0.629431 ; Validation: accuracy=0.782218
Epoch 5 ; Time: 1.727669 ; Training: accuracy=0.639098 ; Validation: accuracy=0.797203
Epoch 6 ; Time: 2.045914 ; Training: accuracy=0.659919 ; Validation: accuracy=0.802198
Epoch 7 ; Time: 2.365279 ; Training: accuracy=0.668347 ; Validation: accuracy=0.817016
Epoch 8 ; Time: 2.690139 ; Training: accuracy=0.672643 ; Validation: accuracy=0.822178
Epoch 9 ; Time: 3.008781 ; Training: accuracy=0.678757 ; Validation: accuracy=0.816184
Epoch 1 ; Time: 0.291943 ; Training: accuracy=0.478193 ; Validation: accuracy=0.746640
Epoch 2 ; Time: 0.553220 ; Training: accuracy=0.688446 ; Validation: accuracy=0.803595
Epoch 3 ; Time: 0.829441 ; Training: accuracy=0.748025 ; Validation: accuracy=0.835685
Epoch 4 ; Time: 1.170491 ; Training: accuracy=0.781024 ; Validation: accuracy=0.855175
Epoch 5 ; Time: 1.462178 ; Training: accuracy=0.796412 ; Validation: accuracy=0.867776
Epoch 6 ; Time: 1.702321 ; Training: accuracy=0.818302 ; Validation: accuracy=0.887769
Epoch 7 ; Time: 1.940559 ; Training: accuracy=0.832949 ; Validation: accuracy=0.893481
Epoch 8 ; Time: 2.172590 ; Training: accuracy=0.841590 ; Validation: accuracy=0.898185
Epoch 9 ; Time: 2.422612 ; Training: accuracy=0.854180 ; Validation: accuracy=0.905410
Epoch 1 ; Time: 0.328024 ; Training: accuracy=0.545800 ; Validation: accuracy=0.771717
Epoch 2 ; Time: 0.655222 ; Training: accuracy=0.699322 ; Validation: accuracy=0.812121
Epoch 3 ; Time: 0.924953 ; Training: accuracy=0.732556 ; Validation: accuracy=0.823232
Epoch 4 ; Time: 1.202127 ; Training: accuracy=0.736855 ; Validation: accuracy=0.847811
Epoch 5 ; Time: 1.473680 ; Training: accuracy=0.759177 ; Validation: accuracy=0.847980
Epoch 6 ; Time: 1.742765 ; Training: accuracy=0.769511 ; Validation: accuracy=0.861616
Epoch 7 ; Time: 2.035580 ; Training: accuracy=0.776951 ; Validation: accuracy=0.873906
Epoch 8 ; Time: 2.306909 ; Training: accuracy=0.782986 ; Validation: accuracy=0.872727
Epoch 9 ; Time: 2.577575 ; Training: accuracy=0.780506 ; Validation: accuracy=0.873064
Epoch 1 ; Time: 0.434735 ; Training: accuracy=0.663435 ; Validation: accuracy=0.805472
Epoch 2 ; Time: 0.786759 ; Training: accuracy=0.831220 ; Validation: accuracy=0.867534
Epoch 3 ; Time: 1.137336 ; Training: accuracy=0.872483 ; Validation: accuracy=0.905906
Epoch 4 ; Time: 1.487284 ; Training: accuracy=0.894357 ; Validation: accuracy=0.916750
Epoch 5 ; Time: 1.841773 ; Training: accuracy=0.910017 ; Validation: accuracy=0.919419
Epoch 6 ; Time: 2.221821 ; Training: accuracy=0.921203 ; Validation: accuracy=0.926093
Epoch 7 ; Time: 2.582683 ; Training: accuracy=0.925594 ; Validation: accuracy=0.928428
Epoch 8 ; Time: 2.933392 ; Training: accuracy=0.936946 ; Validation: accuracy=0.933767
Epoch 9 ; Time: 3.300093 ; Training: accuracy=0.937609 ; Validation: accuracy=0.934935
Epoch 1 ; Time: 0.289876 ; Training: accuracy=0.479136 ; Validation: accuracy=0.693543
Epoch 2 ; Time: 0.530906 ; Training: accuracy=0.741265 ; Validation: accuracy=0.786216
Epoch 3 ; Time: 0.759281 ; Training: accuracy=0.812635 ; Validation: accuracy=0.821010
Epoch 4 ; Time: 0.987005 ; Training: accuracy=0.840454 ; Validation: accuracy=0.841251
Epoch 5 ; Time: 1.230134 ; Training: accuracy=0.863305 ; Validation: accuracy=0.863499
Epoch 6 ; Time: 1.456895 ; Training: accuracy=0.874979 ; Validation: accuracy=0.869856
Epoch 7 ; Time: 1.715792 ; Training: accuracy=0.890876 ; Validation: accuracy=0.878889
Epoch 8 ; Time: 1.988375 ; Training: accuracy=0.900563 ; Validation: accuracy=0.889428
Epoch 9 ; Time: 2.216249 ; Training: accuracy=0.907352 ; Validation: accuracy=0.891435
Epoch 1 ; Time: 1.812222 ; Training: accuracy=0.670424 ; Validation: accuracy=0.826109
Epoch 2 ; Time: 3.544398 ; Training: accuracy=0.814075 ; Validation: accuracy=0.863743
Epoch 3 ; Time: 5.316420 ; Training: accuracy=0.844496 ; Validation: accuracy=0.886089
Epoch 4 ; Time: 7.080265 ; Training: accuracy=0.864307 ; Validation: accuracy=0.903226
Epoch 5 ; Time: 8.963737 ; Training: accuracy=0.879393 ; Validation: accuracy=0.914819
Epoch 6 ; Time: 10.707708 ; Training: accuracy=0.887185 ; Validation: accuracy=0.914483
Epoch 7 ; Time: 12.473687 ; Training: accuracy=0.889755 ; Validation: accuracy=0.921707
Epoch 8 ; Time: 14.208724 ; Training: accuracy=0.899370 ; Validation: accuracy=0.930948
Epoch 9 ; Time: 16.156154 ; Training: accuracy=0.908074 ; Validation: accuracy=0.932292
Epoch 1 ; Time: 1.330742 ; Training: accuracy=0.695577 ; Validation: accuracy=0.793683
Epoch 2 ; Time: 2.508893 ; Training: accuracy=0.808731 ; Validation: accuracy=0.812164
Epoch 3 ; Time: 3.699025 ; Training: accuracy=0.843191 ; Validation: accuracy=0.831653
Epoch 4 ; Time: 5.029982 ; Training: accuracy=0.863569 ; Validation: accuracy=0.871304
Epoch 5 ; Time: 6.284560 ; Training: accuracy=0.878810 ; Validation: accuracy=0.886761
Epoch 6 ; Time: 7.667023 ; Training: accuracy=0.879473 ; Validation: accuracy=0.867103
Epoch 7 ; Time: 8.896725 ; Training: accuracy=0.885852 ; Validation: accuracy=0.895161
Epoch 8 ; Time: 10.133780 ; Training: accuracy=0.903330 ; Validation: accuracy=0.878192
Epoch 9 ; Time: 11.409727 ; Training: accuracy=0.896372 ; Validation: accuracy=0.881216
Epoch 1 ; Time: 0.453156 ; Training: accuracy=0.313961 ; Validation: accuracy=0.584418
Epoch 1 ; Time: 0.943871 ; Training: accuracy=0.480116 ; Validation: accuracy=0.743361
Epoch 2 ; Time: 1.762146 ; Training: accuracy=0.652858 ; Validation: accuracy=0.800168
Epoch 3 ; Time: 2.583119 ; Training: accuracy=0.689644 ; Validation: accuracy=0.814286
Epoch 1 ; Time: 0.638521 ; Training: accuracy=0.250517 ; Validation: accuracy=0.619910
Epoch 1 ; Time: 0.287776 ; Training: accuracy=0.518010 ; Validation: accuracy=0.756981
Epoch 2 ; Time: 0.528645 ; Training: accuracy=0.719655 ; Validation: accuracy=0.824967
Epoch 3 ; Time: 0.759621 ; Training: accuracy=0.773026 ; Validation: accuracy=0.849402
Epoch 4 ; Time: 0.997138 ; Training: accuracy=0.801974 ; Validation: accuracy=0.868517
Epoch 5 ; Time: 1.226936 ; Training: accuracy=0.822368 ; Validation: accuracy=0.881483
Epoch 6 ; Time: 1.454836 ; Training: accuracy=0.833635 ; Validation: accuracy=0.898271
Epoch 7 ; Time: 1.678437 ; Training: accuracy=0.851398 ; Validation: accuracy=0.905419
Epoch 8 ; Time: 1.925603 ; Training: accuracy=0.855510 ; Validation: accuracy=0.910073
Epoch 9 ; Time: 2.154048 ; Training: accuracy=0.866694 ; Validation: accuracy=0.916722
Analysing the results
~~~~~~~~~~~~~~~~~~~~~
The training history is stored in ``results_df``; the main fields are
the runtime and ``'best'`` (the best objective value attained so far).

**Note**: You will get slightly different curves for different pairs of
scheduler/searcher. The ``time_out`` here is a bit too short to show
the differences clearly (it would be better to set it to >1000s).
Generally speaking though, Hyperband stopping / promotion combined with
a model-based searcher will tend to significantly outperform other
combinations given enough time.
.. code:: python
results_df.head()
.. parsed-literal::
    :class: output

       bracket  elapsed_time  epoch     error  eval_time  objective   runtime  \\
    0        0      0.485194      1  0.468750   0.480406   0.531250  1.165886
    1        0      0.911648      2  0.344753   0.422225   0.655247  1.592340
    2        0      1.356211      3  0.305314   0.442450   0.694686  2.036903
    3        0      1.779227      4  0.288937   0.420393   0.711063  2.459918
    4        0      2.195071      5  0.273061   0.413210   0.726939  2.875762

       searcher_data_size  searcher_params_kernel_covariance_scale  \\
    0                 NaN                                      1.0
    1                 1.0                                      1.0
    2                 1.0                                      1.0
    3                 2.0                                      1.0
    4                 2.0                                      1.0

       searcher_params_kernel_inv_bw0  ...  searcher_params_kernel_inv_bw7  \\
    0                             1.0  ...                             1.0
    1                             1.0  ...                             1.0
    2                             1.0  ...                             1.0
    3                             1.0  ...                             1.0
    4                             1.0  ...                             1.0

       searcher_params_kernel_inv_bw8  searcher_params_mean_mean_value  \\
    0                             1.0                              0.0
    1                             1.0                              0.0
    2                             1.0                              0.0
    3                             1.0                              0.0
    4                             1.0                              0.0

       searcher_params_noise_variance  target_epoch  task_id  time_since_start  \\
    0                           0.001             9        0          1.167620
    1                           0.001             9        0          1.593296
    2                           0.001             9        0          2.037788
    3                           0.001             9        0          2.461463
    4                           0.001             9        0          2.876830

          time_step  time_this_iter      best
    0  1.614913e+09        0.513523  0.468750
    1  1.614913e+09        0.426427  0.344753
    2  1.614913e+09        0.444562  0.305314
    3  1.614913e+09        0.423017  0.288937
    4  1.614913e+09        0.415844  0.273061

    5 rows × 26 columns
.. code:: python
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 8))
runtime = results_df['runtime'].values
objective = results_df['best'].values
plt.plot(runtime, objective, lw=2)
plt.xticks(fontsize=12)
plt.xlim(0, 120)
plt.ylim(0, 0.5)
plt.yticks(fontsize=12)
plt.xlabel("Runtime [s]", fontsize=14)
plt.ylabel("Objective", fontsize=14)
.. parsed-literal::
:class: output
Text(0, 0.5, 'Objective')
Diving Deeper
-------------
Now, you are ready to try HPO on your own machine learning models (if
you use PyTorch, have a look at :ref:`sec_customstorch`). While
AutoGluon comes with well-chosen defaults, it can pay off to tune it to
your specific needs. Here are some tips which may prove useful.
Logging the Search Progress
~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, it is a good idea in general to switch on ``debug_log``, which
outputs useful information about the search progress. This is already
done in the example above.
The outputs show which configurations are chosen, stopped, or promoted.
For BO and BOHB, a range of information is displayed for every
``get_config`` decision. This log output is very useful in order to
figure out what is going on during the search.
Configuring ``HyperbandScheduler``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The most important knobs to turn with ``HyperbandScheduler`` are
``max_t``, ``grace_period``, ``reduction_factor``, ``brackets``, and
``type``. The first three determine the rung levels at which stopping or
promotion decisions are being made.
- The maximum resource level ``max_t`` (usually, resource equates to
epochs, so ``max_t`` is the maximum number of training epochs) is
typically hardcoded in ``train_fn`` passed to the scheduler (this is
``run_mlp_openml`` in the example above). As already noted above, the
value is best fixed in the ``ag.args`` decorator as ``epochs=XYZ``,
it can then be accessed as ``args.epochs`` in the ``train_fn`` code.
If this is done, you do not have to pass ``max_t`` when creating the
scheduler.
- ``grace_period`` and ``reduction_factor`` determine the rung levels,
which are ``grace_period``, ``grace_period * reduction_factor``,
``grace_period * (reduction_factor ** 2)``, etc. All rung levels must
be less or equal than ``max_t``. It is recommended to make ``max_t``
equal to the largest rung level. For example, if
``grace_period = 1``, ``reduction_factor = 3``, it is in general
recommended to use ``max_t = 9``, ``max_t = 27``, or ``max_t = 81``.
  Choosing a ``max_t`` value "off the grid" works against the
  successive halving principle that the total resources spent should be
  roughly equal across rungs. If, in the example above, you set
  ``max_t = 10``, about a third of the configurations reaching 9 epochs
  are allowed to proceed, but only for one more epoch.
- With ``reduction_factor``, you tune the extent to which successive
  halving filtering is applied. The larger this integer, the fewer
  configurations make it to higher numbers of epochs. Values 2, 3, 4
  are commonly used.
- Finally, ``grace_period`` should be set to the smallest resource
(number of epochs) for which you expect any meaningful
differentiation between configurations. While ``grace_period = 1``
should always be explored, it may be too low for any meaningful
stopping decisions to be made at the first rung.
- ``brackets`` sets the maximum number of brackets in Hyperband (make
  sure to study the Hyperband paper or follow-ups for details). For
  ``brackets = 1``, you are running successive halving (single
  bracket). Higher brackets have larger effective ``grace_period``
  values (so runs are not stopped until later), yet are also chosen
  with lower probability. We recommend always considering successive
  halving (``brackets = 1``) in a comparison.
- Finally, with ``type`` (values ``stopping``, ``promotion``) you
  choose between different ways of extending successive halving
  scheduling to the asynchronous case. The default ``stopping`` method
  is simpler and seems to perform well, but ``promotion`` is more
  careful about promoting configurations to higher resource levels,
  which can work better in some cases.
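To make the rung-level geometry concrete, here is a small helper
(illustrative only, not part of AutoGluon's API) that enumerates the rung
levels implied by ``grace_period``, ``reduction_factor``, and ``max_t``:

```python
def rung_levels(grace_period, reduction_factor, max_t):
    """Rung levels grace_period * reduction_factor**k, up to max_t."""
    levels, r = [], grace_period
    while r <= max_t:
        levels.append(r)
        r *= reduction_factor
    return levels

# The setting used in this tutorial: rungs at 1, 3, and 9 epochs
print(rung_levels(1, 3, 9))    # [1, 3, 9]
# An "off the grid" choice: max_t = 10 still yields rungs [1, 3, 9],
# so survivors of the last rung train for only one extra epoch
print(rung_levels(1, 3, 10))   # [1, 3, 9]
```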
Asynchronous BOHB
~~~~~~~~~~~~~~~~~
Finally, here are some ideas for tuning asynchronous BOHB, apart from
tuning its ``HyperbandScheduler`` component. You need to pass these
options in ``search_options``.
- We support a range of different surrogate models over the criterion
functions across resource levels. All of them are jointly dependent
Gaussian process models, meaning that data collected at all resource
levels are modelled together. The surrogate model is selected by
``gp_resource_kernel``, values are ``matern52``,
``matern52-res-warp``, ``exp-decay-sum``, ``exp-decay-combined``,
``exp-decay-delta1``. These are variants of either a joint Matern 5/2
kernel over configuration and resource, or the exponential decay
model. Details about the latter can be found
`here `__.
- Fitting a Gaussian process surrogate model to data incurs a cost
  which scales cubically with the number of datapoints. When applied to
  expensive deep learning workloads, even multi-fidelity asynchronous
  BOHB rarely accumulates more than 100 observations or so (across
  all rung levels and brackets), so the GP computations remain
  subdominant. However, if you apply it to a cheaper ``train_fn`` and
find yourself beyond 2000 total evaluations, the cost of GP fitting
can become painful. In such a situation, you can explore the options
``opt_skip_period`` and ``opt_skip_num_max_resource``. The basic idea
is as follows. By far the most expensive part of a ``get_config``
call (picking the next configuration) is the refitting of the GP
model to past data (this entails re-optimizing hyperparameters of the
surrogate model itself). The options allow you to skip this expensive
step for most ``get_config`` calls, after some initial period. Check
the docstrings for details about these options. If you find yourself
in such a situation and gain experience with these skipping features,
make sure to contact the AutoGluon developers -- we would love to
learn about your use case.
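For example, such options go into the ``search_options`` dictionary
passed at scheduler creation; the option names below are taken from the
text above, but the particular values are illustrative, not
recommendations:

```python
# Illustrative search_options for asynchronous BOHB
search_options = {
    'num_init_random': 2,
    'debug_log': True,
    # surrogate model jointly over configurations and resource levels
    'gp_resource_kernel': 'exp-decay-sum',
    # skip the expensive GP refit for most get_config calls after an
    # initial period (value chosen here purely as an example)
    'opt_skip_period': 25,
}
print(sorted(search_options))
```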