autogluon.scheduler¶
Example
Define a toy training function with searchable spaces:
>>> import numpy as np
>>> import autogluon as ag
>>> @ag.args(
... lr=ag.space.Real(1e-3, 1e-2, log=True),
... wd=ag.space.Real(1e-3, 1e-2),
... epochs=10)
>>> def train_fn(args, reporter):
... print('lr: {}, wd: {}'.format(args.lr, args.wd))
... for e in range(args.epochs):
... dummy_accuracy = 1 - np.power(1.8, -np.random.uniform(e, 2*e))
... reporter(epoch=e+1, accuracy=dummy_accuracy, lr=args.lr, wd=args.wd)
Note that the epoch value passed to the reporter must be the number of epochs completed, starting at 1. Create a scheduler and use it to run training jobs:
>>> scheduler = ag.scheduler.HyperbandScheduler(
... train_fn,
... resource={'num_cpus': 2, 'num_gpus': 0},
... num_trials=100,
... reward_attr='accuracy',
... time_attr='epoch',
... grace_period=1,
... reduction_factor=3,
... type='stopping')
>>> scheduler.run()
>>> scheduler.join_jobs()
Note that HyperbandScheduler obtains the maximum number of epochs from train_fn.args.epochs (specified by epochs=10 in the ag.args decorator above). The value can also be passed to HyperbandScheduler explicitly as max_t.
Visualize the results:
>>> scheduler.get_training_curves(plot=True)
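With the settings above (grace_period=1, reduction_factor=3, and max_t=10 inferred from epochs), configurations are compared at geometrically spaced rung levels of the resource. As a rough sketch, the usual successive-halving rule places rungs at grace_period * reduction_factor^k below max_t, i.e. at epochs 1, 3, and 9 here (the exact placement is an implementation detail and may differ between versions):
>>> grace_period, reduction_factor, max_t = 1, 3, 10
>>> rungs, r = [], grace_period
>>> while r < max_t:
...     rungs.append(r)
...     r *= reduction_factor
>>> rungs  # resource levels at which configs are compared
[1, 3, 9]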
Schedulers¶
FIFOScheduler: Simple scheduler that runs trials in submission order.
HyperbandScheduler: Implements different variants of asynchronous Hyperband.
RLScheduler: Scheduler that uses reinforcement learning with an LSTM controller built from the provided search spaces.
FIFOScheduler¶
-
class autogluon.scheduler.FIFOScheduler(train_fn, **kwargs)¶ Simple scheduler that runs trials in submission order.
- Parameters
- train_fn : callable
A task launch function for training.
- args : object, optional
Default arguments for launching train_fn.
- resource : dict
Computation resources. For example, {'num_cpus': 2, 'num_gpus': 1}
- searcher : str or BaseSearcher
Searcher (get_config decisions). If a str, it is passed to searcher_factory along with search_options.
- search_options : dict
If searcher is a str, these arguments are passed to searcher_factory.
- checkpoint : str
If a filename is given here, a checkpoint of the scheduler (and searcher) state is written to this file every time a job finishes. Note: May not be fully supported by all searchers.
- resume : bool
If True, scheduler state is loaded from the checkpoint, and the experiment starts from there. Note: May not be fully supported by all searchers.
- num_trials : int
Maximum number of jobs run in the experiment. One of num_trials, time_out must be given.
- time_out : float
If given, jobs are started only until this time_out (wall-clock time). One of num_trials, time_out must be given.
- reward_attr : str
Name of the reward (i.e., metric to maximize) attribute in data obtained from the reporter.
- time_attr : str
Name of the resource (or time) attribute in data obtained from the reporter. This attribute is optional for FIFO scheduling, but becomes mandatory in multi-fidelity scheduling (e.g., Hyperband). Note: The type of resource must be int.
- dist_ip_addrs : list of str
IP addresses of remote machines.
- training_history_callback : callable
Callback function func called every time a job finishes, if at least training_history_callback_delta_secs seconds have passed since the most recent call. The call has the form:
func(self.training_history, self._start_time)
Here, self._start_time is the time stamp of when the experiment started. Use this callback to serialize self.training_history at regular intervals; a minimal sketch is given after this parameter list.
- training_history_callback_delta_secs : float
See training_history_callback.
- delay_get_config : bool
If True, the call to searcher.get_config is delayed until a worker resource for evaluation is available. Otherwise, get_config is called just after a job has been started. For searchers which adapt to past data, True should be preferred. Otherwise, it does not matter.
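As a minimal sketch of training_history_callback (the pickle-based format and filename are assumptions; any serialization works), the callback below writes the history to disk each time it is invoked:
>>> import pickle
>>> def save_history(training_history, start_time):
...     # training_history: results reported so far (assumed picklable)
...     # start_time: time stamp of when the experiment started
...     with open('training_history.pkl', 'wb') as f:
...         pickle.dump(training_history, f)
>>> scheduler = ag.scheduler.FIFOScheduler(
...     train_fn,
...     resource={'num_cpus': 2, 'num_gpus': 0},
...     num_trials=20,
...     reward_attr='accuracy',
...     time_attr='epoch',
...     training_history_callback=save_history,
...     training_history_callback_delta_secs=60)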
Examples
>>> import numpy as np
>>> import autogluon as ag
>>> @ag.args(
...     lr=ag.space.Real(1e-3, 1e-2, log=True),
...     wd=ag.space.Real(1e-3, 1e-2))
>>> def train_fn(args, reporter):
...     print('lr: {}, wd: {}'.format(args.lr, args.wd))
...     for e in range(10):
...         dummy_accuracy = 1 - np.power(1.8, -np.random.uniform(e, 2*e))
...         reporter(epoch=e+1, accuracy=dummy_accuracy, lr=args.lr, wd=args.wd)
>>> scheduler = ag.scheduler.FIFOScheduler(train_fn,
...     resource={'num_cpus': 2, 'num_gpus': 0},
...     num_trials=20,
...     reward_attr='accuracy',
...     time_attr='epoch')
>>> scheduler.run()
>>> scheduler.join_jobs()
>>> scheduler.get_training_curves(plot=True)
- Attributes
- num_finished_tasks
Methods
add_job(self, task, **kwargs) Add a training task to the scheduler.
add_remote(self, ip_addrs) Add remote nodes to the scheduler computation resource.
add_task(self, task, **kwargs) add_task() is now deprecated in favor of add_job().
get_best_config(self) Get the best configuration from the finished jobs.
get_best_reward(self) Get the best reward from the finished jobs.
get_best_task_id(self) Get the task id that results in the best configuration/best reward.
get_training_curves(self[, filename, plot, …]) Get training curves.
join_jobs(self[, timeout]) Wait for all scheduled jobs to finish.
load_state_dict(self, state_dict) Load from the saved state dict.
run(self, **kwargs) Run multiple trials.
run_job(self, task) Run a training task synchronously.
run_with_config(self, config) Run with a fixed config for the final fit.
save(self[, checkpoint]) Save a checkpoint of the scheduler state.
schedule_next(self) Schedule the next task suggested by the searcher.
shutdown(self) shutdown() is now deprecated in favor of autogluon.done().
state_dict(self[, destination]) Returns a dictionary containing the whole state of the scheduler.
upload_files(files, **kwargs) Upload files to remote machines, so that they are accessible by import or load.
join_tasks
-
add_job(self, task, **kwargs)¶ Add a training task to the scheduler.
- Args:
task (autogluon.scheduler.Task): a new training task
- Relevant entries in kwargs:
bracket: HB bracket to be used. Has been sampled in _promote_config
new_config: If True, the task starts a new config evaluation; otherwise it promotes a config (only if type == 'promotion')
- Only if new_config == False:
config_key: Internal key for the config
resume_from: the config is promoted from this milestone
milestone: the config is promoted to this milestone (next above resume_from)
-
add_remote(self, ip_addrs)¶ Add remote nodes to the scheduler computation resource.
-
add_task(self, task, **kwargs)¶ add_task() is now deprecated in favor of add_job().
-
get_best_config(self)¶ Get the best configuration from the finished jobs.
-
get_best_reward(self)¶ Get the best reward from the finished jobs.
-
get_best_task_id(self)¶ Get the task id that results in the best configuration/best reward.
If there are duplicated configurations, we return the id of the first one.
-
get_training_curves(self, filename=None, plot=False, use_legend=True)¶ Get training curves.
- Parameters
- filename : str
- plot : bool
- use_legend : bool
Examples
>>> scheduler.run()
>>> scheduler.join_jobs()
>>> scheduler.get_training_curves(plot=True)
-
join_jobs(self, timeout=None)¶ Wait for all scheduled jobs to finish.
-
load_state_dict(self, state_dict)¶ Load from the saved state dict. This can be used to resume an experiment from a checkpoint (see state_dict for caveats).
This method must only be called as part of scheduler construction. Calling it in the middle of an experiment can leave the scheduler or searcher in an undefined inner state.
Examples
>>> scheduler.load_state_dict(ag.load('checkpoint.ag'))
-
run(self, **kwargs)¶ Run multiple trials.
-
run_job(self, task)¶ Run a training task synchronously.
-
run_with_config(self, config)¶ Run with a fixed config for the final fit. It launches a single training trial under fixed values of the hyperparameters. For example, after HPO has identified the best hyperparameter values on a hold-out dataset, one can use this function to retrain a model with the same hyperparameters on all available labeled data (including the hold-out set). It can also return other objects or states.
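For example, to retrain with the best configuration found (a minimal sketch; what run_with_config returns depends on train_fn):
>>> scheduler.run()
>>> scheduler.join_jobs()
>>> best_config = scheduler.get_best_config()
>>> result = scheduler.run_with_config(best_config)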
-
save(self, checkpoint=None)¶ Save a checkpoint of the scheduler state.
-
schedule_next(self)¶ Schedule the next task suggested by the searcher.
-
shutdown(self)¶ shutdown() is now deprecated in favor of autogluon.done().
-
state_dict(self, destination=None)¶ Returns a dictionary containing the whole state of the scheduler. This is used for checkpointing.
Note that the checkpoint only contains information which has been registered at scheduler and searcher. It does not contain information about currently running jobs, except what they reported before the checkpoint. Therefore, resuming an experiment from a checkpoint is slightly different from continuing the experiment past the checkpoint. The former behaves as if all currently running jobs are terminated at the checkpoint, and new jobs are scheduled from there, starting from scheduler and searcher state according to all information recorded until the checkpoint.
Examples
>>> ag.save(scheduler.state_dict(), 'checkpoint.ag')
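Putting save and restore together, a minimal resume sketch (assuming a new scheduler is constructed with the same train_fn and settings, and that load_state_dict is called before any jobs run; see the caveats above):
>>> ag.save(scheduler.state_dict(), 'checkpoint.ag')
>>> # ... later, in a fresh process:
>>> new_scheduler = ag.scheduler.FIFOScheduler(
...     train_fn,
...     resource={'num_cpus': 2, 'num_gpus': 0},
...     num_trials=100,
...     reward_attr='accuracy',
...     time_attr='epoch')
>>> new_scheduler.load_state_dict(ag.load('checkpoint.ag'))
>>> new_scheduler.run()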
-
classmethod upload_files(files, **kwargs)¶ Upload files to remote machines, so that they are accessible by import or load.
HyperbandScheduler¶
-
class autogluon.scheduler.HyperbandScheduler(train_fn, **kwargs)¶ Implements different variants of asynchronous Hyperband.
See type for the different variants. One implementation detail: when multiple brackets are used, tasks are allocated to brackets at random, based on a softmax probability.
Note: This scheduler requires both reward and resource (time) to be returned by the reporter. Here, resource (time) values must be positive int. If time_attr == 'epoch', this should be the number of epochs completed, starting at 1 (not the epoch index, starting at 0).
- Parameters
- train_fn : callable
A task launch function for training.
- args : object, optional
Default arguments for launching train_fn.
- resource : dict
Computation resources. For example, {'num_cpus': 2, 'num_gpus': 1}
- searcher : str or BaseSearcher
Searcher (get_config decisions). If a str, it is passed to searcher_factory along with search_options.
- search_options : dict
If searcher is a str, these arguments are passed to searcher_factory.
- checkpoint : str
If a filename is given here, a checkpoint of the scheduler (and searcher) state is written to this file every time a job finishes. Note: May not be fully supported by all searchers.
- resume : bool
If True, scheduler state is loaded from the checkpoint, and the experiment starts from there. Note: May not be fully supported by all searchers.
- num_trials : int
Maximum number of jobs run in the experiment. One of num_trials, time_out must be given.
- time_out : float
If given, jobs are started only until this time_out (wall-clock time). One of num_trials, time_out must be given.
- reward_attr : str
Name of the reward (i.e., metric to maximize) attribute in data obtained from the reporter.
- time_attr : str
Name of the resource (or time) attribute in data obtained from the reporter. Note: The type of resource must be positive int.
- max_t : int
Maximum resource (see time_attr) to be used for a job. Together with grace_period and reduction_factor, this is used to determine rung levels in Hyperband brackets. Note: If this is not given, we try to infer its value from train_fn.args, checking train_fn.args.epochs or train_fn.args.max_t. If max_t is given as an argument here, it takes precedence.
- grace_period : int
Minimum resource (see time_attr) to be used for a job.
- reduction_factor : int (>= 2)
Parameter to determine rung levels in successive halving (Hyperband).
- brackets : int
Number of brackets to be used in Hyperband. Each bracket has a different grace period; all share max_t and reduction_factor. If brackets == 1, we just run successive halving.
- training_history_callback : callable
Callback function func called every time a job finishes, if at least training_history_callback_delta_secs seconds have passed since the most recent call. The call has the form:
func(self.training_history, self._start_time)
Here, self._start_time is the time stamp of when the experiment started. Use this callback to serialize self.training_history at regular intervals (a minimal sketch is given under FIFOScheduler above).
- training_history_callback_delta_secs : float
See training_history_callback.
- delay_get_config : bool
If True, the call to searcher.get_config is delayed until a worker resource for evaluation is available. Otherwise, get_config is called just after a job has been started. For searchers which adapt to past data, True should be preferred. Otherwise, it does not matter.
- type : str
Type of Hyperband scheduler:
- stopping:
See HyperbandStopping_Manager. Tasks and config evals are tightly coupled. A task is stopped at a milestone if it is worse than most others; otherwise it continues. As implemented in Ray/Tune: https://ray.readthedocs.io/en/latest/tune-schedulers.html#asynchronous-hyperband
- promotion:
See HyperbandPromotion_Manager. A config eval may be associated with multiple tasks over its lifetime. It is never terminated, but may be paused. Whenever a task becomes available, it may promote a config to the next milestone, if that config is better than most others. If no config can be promoted, a new one is chosen. This variant may benefit from pause&resume, which is not directly supported here. As proposed in this paper (termed ASHA): https://arxiv.org/abs/1810.05934
- dist_ip_addrs : list of str
IP addresses of remote machines.
- keep_size_ratios : bool
Implemented for type 'promotion' only. If True, promotions are done only if the (current estimate of the) size ratio between a rung and the next rung is 1 / reduction_factor or better. This prevents higher rungs from becoming more populated than they would be in synchronous Hyperband. A drawback is that promotions to higher rungs take longer.
- maxt_pending : bool
Relevant only if a model-based searcher is used. If True, a pending config is registered at level max_t whenever a new evaluation is started. This has a direct effect on the acquisition function (for the model-based variant), which operates at level max_t. On the other hand, it decreases the variance of the latent process there.
- searcher_data : str
Relevant only if a model-based searcher is used, and if train_fn is such that we receive results (from the reporter) at each successive resource level, not just at the rung levels. Example: for NN tuning with time_attr == 'epoch', we receive a result for each epoch, but not all epoch values are also rung levels. searcher_data determines which of these results are passed to the searcher. As a rule, the more data the searcher receives, the better its fit, but also the more expensive get_config may become. Choices:
- 'rungs' (default): Only results at rung levels. Cheapest.
- 'all': All results. Most expensive.
- 'rungs_and_last': Results at rung levels, plus the most recent result. This means that in between rung levels, only the most recent result is used by the searcher. This is intermediate in cost between 'rungs' and 'all'.
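As an illustration of these options, a promotion-type scheduler feeding extra results to a model-based searcher might be configured as follows (the searcher string 'bayesopt' is an assumption; use whatever name your searcher_factory supports):
>>> scheduler = ag.scheduler.HyperbandScheduler(
...     train_fn,
...     resource={'num_cpus': 2, 'num_gpus': 0},
...     num_trials=100,
...     reward_attr='accuracy',
...     time_attr='epoch',
...     grace_period=1,
...     reduction_factor=3,
...     type='promotion',
...     searcher='bayesopt',
...     searcher_data='rungs_and_last')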
See also
HyperbandStopping_Manager, HyperbandPromotion_Manager
Examples
>>> import numpy as np
>>> import autogluon as ag
>>> @ag.args(
...     lr=ag.space.Real(1e-3, 1e-2, log=True),
...     wd=ag.space.Real(1e-3, 1e-2),
...     epochs=10)
>>> def train_fn(args, reporter):
...     print('lr: {}, wd: {}'.format(args.lr, args.wd))
...     for e in range(args.epochs):
...         dummy_accuracy = 1 - np.power(1.8, -np.random.uniform(e, 2*e))
...         reporter(epoch=e+1, accuracy=dummy_accuracy, lr=args.lr, wd=args.wd)
>>> scheduler = ag.scheduler.HyperbandScheduler(
...     train_fn,
...     resource={'num_cpus': 2, 'num_gpus': 0},
...     num_trials=20,
...     reward_attr='accuracy',
...     time_attr='epoch',
...     grace_period=1)
>>> scheduler.run()
>>> scheduler.join_jobs()
>>> scheduler.get_training_curves(plot=True)
- Attributes
- num_finished_tasks
Methods
add_job(self, task, **kwargs) Add a training task to the scheduler.
add_remote(self, ip_addrs) Add remote nodes to the scheduler computation resource.
add_task(self, task, **kwargs) add_task() is now deprecated in favor of add_job().
get_best_config(self) Get the best configuration from the finished jobs.
get_best_reward(self) Get the best reward from the finished jobs.
get_best_task_id(self) Get the task id that results in the best configuration/best reward.
get_training_curves(self[, filename, plot, …]) Get training curves.
join_jobs(self[, timeout]) Wait for all scheduled jobs to finish.
load_state_dict(self, state_dict) Load from the saved state dict.
run(self, **kwargs) Run multiple trials.
run_job(self, task) Run a training task synchronously.
run_with_config(self, config) Run with a fixed config for the final fit.
save(self[, checkpoint]) Save a checkpoint of the scheduler state.
schedule_next(self) Schedule the next task suggested by the searcher.
shutdown(self) shutdown() is now deprecated in favor of autogluon.done().
state_dict(self[, destination]) Returns a dictionary containing the whole state of the scheduler.
upload_files(files, **kwargs) Upload files to remote machines, so that they are accessible by import or load.
join_tasks
map_resource_to_index
-
add_job(self, task, **kwargs)¶ Add a training task to the scheduler.
- Args:
task (autogluon.scheduler.Task): a new training task
- Relevant entries in kwargs:
bracket: HB bracket to be used. Has been sampled in _promote_config
new_config: If True, the task starts a new config evaluation; otherwise it promotes a config (only if type == 'promotion')
- Only if new_config == False:
config_key: Internal key for the config
resume_from: the config is promoted from this milestone
milestone: the config is promoted to this milestone (next above resume_from)
-
add_remote(self, ip_addrs)¶ Add remote nodes to the scheduler computation resource.
-
add_task(self, task, **kwargs)¶ add_task() is now deprecated in favor of add_job().
-
get_best_config(self)¶ Get the best configuration from the finished jobs.
-
get_best_reward(self)¶ Get the best reward from the finished jobs.
-
get_best_task_id(self)¶ Get the task id that results in the best configuration/best reward.
If there are duplicated configurations, we return the id of the first one.
-
get_training_curves(self, filename=None, plot=False, use_legend=True)¶ Get training curves.
- Parameters
- filename : str
- plot : bool
- use_legend : bool
Examples
>>> scheduler.run()
>>> scheduler.join_jobs()
>>> scheduler.get_training_curves(plot=True)
-
join_jobs(self, timeout=None)¶ Wait for all scheduled jobs to finish.
-
load_state_dict(self, state_dict)¶ Load from the saved state dict.
Examples
>>> scheduler.load_state_dict(ag.load('checkpoint.ag'))
-
run(self, **kwargs)¶ Run multiple trials.
-
run_job(self, task)¶ Run a training task synchronously.
-
run_with_config(self, config)¶ Run with a fixed config for the final fit. It launches a single training trial under fixed values of the hyperparameters. For example, after HPO has identified the best hyperparameter values on a hold-out dataset, one can use this function to retrain a model with the same hyperparameters on all available labeled data (including the hold-out set). It can also return other objects or states.
-
save(self, checkpoint=None)¶ Save a checkpoint of the scheduler state.
-
schedule_next(self)¶ Schedule the next task suggested by the searcher.
-
shutdown(self)¶ shutdown() is now deprecated in favor of autogluon.done().
-
state_dict(self, destination=None)¶ Returns a dictionary containing the whole state of the scheduler. This is used for checkpointing.
Examples
>>> ag.save(scheduler.state_dict(), 'checkpoint.ag')
-
classmethod upload_files(files, **kwargs)¶ Upload files to remote machines, so that they are accessible by import or load.
RLScheduler¶
-
class autogluon.scheduler.RLScheduler(train_fn, **kwargs)¶ Scheduler that uses reinforcement learning with an LSTM controller built from the provided search spaces.
- Parameters
- train_fn : callable
A task launch function for training. Note: please add the @ag.args decorator to the original function.
- args : object, optional
Default arguments for launching train_fn.
- resource : dict
Computation resources. For example, {'num_cpus': 2, 'num_gpus': 1}
- searcher : object, optional
Autogluon searcher. For example, autogluon.searcher.RandomSearcher
- time_attr : str
A training result attribute to use for measuring progress (time). Note that you can pass in something non-temporal, such as training_epoch, as a measure of progress; the only requirement is that the attribute increases monotonically.
- reward_attr : str
The training result objective value attribute. As with time_attr, this may refer to any objective value. Stopping procedures will use this attribute.
- controller_resource : int
Batch size for training controllers.
- dist_ip_addrs : list of str
IP addresses of remote machines.
Examples
>>> import numpy as np
>>> import autogluon as ag
>>> @ag.args(
...     lr=ag.space.Real(1e-3, 1e-2, log=True),
...     wd=ag.space.Real(1e-3, 1e-2))
>>> def train_fn(args, reporter):
...     print('lr: {}, wd: {}'.format(args.lr, args.wd))
...     for e in range(10):
...         dummy_accuracy = 1 - np.power(1.8, -np.random.uniform(e, 2*e))
...         reporter(epoch=e+1, accuracy=dummy_accuracy, lr=args.lr, wd=args.wd)
>>> scheduler = ag.scheduler.RLScheduler(train_fn,
...     resource={'num_cpus': 2, 'num_gpus': 0},
...     num_trials=20,
...     reward_attr='accuracy',
...     time_attr='epoch')
>>> scheduler.run()
>>> scheduler.join_jobs()
>>> scheduler.get_training_curves(plot=True)
- Attributes
- num_finished_tasks
Methods
add_job(self, task, **kwargs) Add a training task to the scheduler.
add_remote(self, ip_addrs) Add remote nodes to the scheduler computation resource.
add_task(self, task, **kwargs) add_task() is now deprecated in favor of add_job().
get_best_config(self) Get the best configuration from the finished jobs.
get_best_reward(self) Get the best reward from the finished jobs.
get_best_task_id(self) Get the task id that results in the best configuration/best reward.
get_training_curves(self[, filename, plot, …]) Get training curves.
join_jobs(self[, timeout]) Wait for all scheduled jobs to finish.
load_state_dict(self, state_dict) Load from the saved state dict.
run(self, **kwargs) Run multiple trials.
run_job(self, task) Run a training task synchronously.
run_with_config(self, config) Run with a fixed config for the final fit.
save(self[, checkpoint]) Save a checkpoint of the scheduler state.
schedule_next(self) Schedule the next task suggested by the searcher.
shutdown(self) shutdown() is now deprecated in favor of autogluon.done().
state_dict(self[, destination]) Returns a dictionary containing the whole state of the scheduler.
upload_files(files, **kwargs) Upload files to remote machines, so that they are accessible by import or load.
join_tasks
sync_schedule_tasks
-
add_job(self, task, **kwargs)¶ Add a training task to the scheduler.
- Args:
task (autogluon.scheduler.Task): a new training task
-
add_remote(self, ip_addrs)¶ Add remote nodes to the scheduler computation resource.
-
add_task(self, task, **kwargs)¶ add_task() is now deprecated in favor of add_job().
-
get_best_config(self)¶ Get the best configuration from the finished jobs.
-
get_best_reward(self)¶ Get the best reward from the finished jobs.
-
get_best_task_id(self)¶ Get the task id that results in the best configuration/best reward.
If there are duplicated configurations, we return the id of the first one.
-
get_training_curves(self, filename=None, plot=False, use_legend=True)¶ Get training curves.
- Parameters
- filename : str
- plot : bool
- use_legend : bool
Examples
>>> scheduler.run()
>>> scheduler.join_jobs()
>>> scheduler.get_training_curves(plot=True)
-
join_jobs(self, timeout=None)¶ Wait for all scheduled jobs to finish.
-
load_state_dict(self, state_dict)¶ Load from the saved state dict.
Examples
>>> scheduler.load_state_dict(ag.load('checkpoint.ag'))
-
run(self, **kwargs)¶ Run multiple trials.
-
run_job(self, task)¶ Run a training task synchronously.
-
run_with_config(self, config)¶ Run with a fixed config for the final fit. It launches a single training trial under fixed values of the hyperparameters. For example, after HPO has identified the best hyperparameter values on a hold-out dataset, one can use this function to retrain a model with the same hyperparameters on all available labeled data (including the hold-out set). It can also return other objects or states.
-
save(self, checkpoint=None)¶ Save a checkpoint of the scheduler state.
-
schedule_next(self)¶ Schedule the next task suggested by the searcher.
-
shutdown(self)¶ shutdown() is now deprecated in favor of autogluon.done().
-
state_dict(self, destination=None)¶ Returns a dictionary containing the whole state of the scheduler. This is used for checkpointing.
Examples
>>> ag.save(scheduler.state_dict(), 'checkpoint.ag')
-
classmethod upload_files(files, **kwargs)¶ Upload files to remote machines, so that they are accessible by import or load.