AutoMM Presets#

Hyperparameters usually need to be set before the learning process begins. Deep learning models, e.g., pretrained foundation models, can have anywhere from a few hyperparameters to a few hundred. These hyperparameters affect training speed, final model performance, and inference latency, yet choosing proper values can be challenging for users with limited expertise.

In this tutorial, we will introduce the easy-to-use presets in AutoMM. Our presets can condense the complex hyperparameter setups into simple strings. More specifically, AutoMM supports three presets: medium_quality, high_quality, and best_quality.

import warnings

warnings.filterwarnings('ignore')

Dataset#

For demonstration, we use a subsampled Stanford Sentiment Treebank (SST) dataset, which consists of movie reviews and their associated sentiment. Given a new movie review, the goal is to predict the sentiment reflected in the text (in this case, binary classification, where reviews are labeled 1 if they convey a positive opinion and 0 otherwise). To get started, let’s download and prepare the dataset.

from autogluon.core.utils.loaders import load_pd

train_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/train.parquet')
test_data = load_pd.load('https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/dev.parquet')
subsample_size = 1000  # subsample data for faster demo, try setting this to larger values
train_data = train_data.sample(n=subsample_size, random_state=0)
train_data.head(10)
                                                sentence  label
43787                  very pleasing at its best moments      1
16159  , american chai is enough to make you put away...      0
59015  too much like an infomercial for ram dass 's l...      0
5108                          a stirring visual sequence      1
67052                            cool visual backmasking      1
35938                                        hard ground      0
49879  the striking , quietly vulnerable personality ...      1
51591  pan nalin 's exposition is beautiful and myste...      1
56780                                  wonderfully loopy      1
28518                         most beautiful , evocative      1
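Since the subsample is drawn at random, it is worth verifying that both classes are represented before fitting. pandas’ value_counts does this; a toy DataFrame stands in for the data below, but the same call applies to train_data directly.

```python
import pandas as pd

# Toy stand-in for the subsampled train_data loaded above.
toy = pd.DataFrame({
    "sentence": ["great movie", "dull plot", "loved it"],
    "label": [1, 0, 1],
})
# Count how many rows carry each label.
print(toy["label"].value_counts())
```

A heavily imbalanced count here would suggest increasing subsample_size or resampling.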

Medium Quality#

In some situations, we prefer fast training and inference over prediction quality. medium_quality is designed for this purpose. Among the three presets, medium_quality has the smallest model size. Now let’s fit the predictor using the medium_quality preset. Here we set a tight time budget for a quick demo.

from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label='label', eval_metric='acc', presets="medium_quality")
predictor.fit(
    train_data=train_data,
    time_limit=30, # seconds
)
No path specified. Models will be saved in: "AutogluonModels/ag-20230622_215511/"
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
	2 unique label values:  [0, 1]
	If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
Global seed set to 0
AutoMM starts to create your model. ✨

- AutoGluon version is 0.8.1b20230622.

- Pytorch version is 1.13.1+cu117.

- Model will be saved to "/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511".

- Validation metric is "acc".

- To track the learning progress, you can open a terminal and launch Tensorboard:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511
    ```

Enjoy your coffee, and let AutoMM do the job ☕☕☕ Learn more at https://auto.gluon.ai

1 GPUs are detected, and 1 GPUs will be used.
   - GPU 0 name: Tesla T4
   - GPU 0 memory: 15.74GB/15.84GB (Free/Total)
CUDA version is 11.7.

Using 16bit None Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name              | Type                         | Params
-------------------------------------------------------------------
0 | model             | HFAutoModelForTextPrediction | 13.5 M
1 | validation_metric | MulticlassAccuracy           | 0     
2 | loss_func         | CrossEntropyLoss             | 0     
-------------------------------------------------------------------
13.5 M    Trainable params
0         Non-trainable params
13.5 M    Total params
26.967    Total estimated model params size (MB)
Epoch 0, global step 3: 'val_acc' reached 0.59500 (best 0.59500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511/epoch=0-step=3.ckpt' as top 3
Epoch 0, global step 7: 'val_acc' reached 0.45500 (best 0.59500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511/epoch=0-step=7.ckpt' as top 3
Epoch 1, global step 10: 'val_acc' reached 0.57000 (best 0.59500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511/epoch=1-step=10.ckpt' as top 3
Epoch 1, global step 14: 'val_acc' reached 0.73500 (best 0.73500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511/epoch=1-step=14.ckpt' as top 3
Epoch 2, global step 17: 'val_acc' reached 0.74500 (best 0.74500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511/epoch=2-step=17.ckpt' as top 3
Epoch 2, global step 21: 'val_acc' reached 0.72500 (best 0.74500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511/epoch=2-step=21.ckpt' as top 3
Epoch 3, global step 24: 'val_acc' reached 0.81000 (best 0.81000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511/epoch=3-step=24.ckpt' as top 3
Epoch 3, global step 28: 'val_acc' reached 0.80500 (best 0.81000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511/epoch=3-step=28.ckpt' as top 3
Time limit reached. Elapsed time is 0:00:30. Signaling Trainer to stop.
Start to fuse 3 checkpoints via the greedy soup algorithm.
AutoMM has created your model 🎉🎉🎉

- To load the model, use the code below:
    ```python
    from autogluon.multimodal import MultiModalPredictor
    predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511")
    ```

- You can open a terminal and launch Tensorboard to visualize the training log:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215511
    ```

- If you are not satisfied with the model, try to increase the training time, 
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub: https://github.com/autogluon/autogluon
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7f74a4136670>

Then we can evaluate the predictor on the test data.

scores = predictor.evaluate(test_data, metrics=["roc_auc"])
scores
{'roc_auc': 0.8987039025006315}
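The roc_auc metric reported here is the area under the ROC curve, which equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties counted as half). As a sanity check, a minimal pure-Python version of this rank statistic (an illustration, not AutoGluon’s implementation) looks like:

```python
def roc_auc(labels, scores):
    """AUC as the rank statistic P(score_pos > score_neg), ties = 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count winning (and half-count tied) positive/negative pairs.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([1, 0, 1, 0], [0.9, 0.2, 0.6, 0.4]))  # 1.0: positives always outrank negatives
```

This is why roc_auc can be high even when the thresholded accuracy is modest: it measures ranking, not calibrated decisions.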

High Quality#

If you want to balance the prediction quality and training/inference speed, you can try the high_quality preset, which uses a larger model than medium_quality. Accordingly, we need to increase the time limit since larger models require more time to train.

from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label='label', eval_metric='acc', presets="high_quality")
predictor.fit(
    train_data=train_data,
    time_limit=50, # seconds
)
No path specified. Models will be saved in: "AutogluonModels/ag-20230622_215549/"
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
	2 unique label values:  [0, 1]
	If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
Global seed set to 0
AutoMM starts to create your model. ✨

- AutoGluon version is 0.8.1b20230622.

- Pytorch version is 1.13.1+cu117.

- Model will be saved to "/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549".

- Validation metric is "acc".

- To track the learning progress, you can open a terminal and launch Tensorboard:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549
    ```

Enjoy your coffee, and let AutoMM do the job ☕☕☕ Learn more at https://auto.gluon.ai

1 GPUs are detected, and 1 GPUs will be used.
   - GPU 0 name: Tesla T4
   - GPU 0 memory: 15.00GB/15.84GB (Free/Total)
CUDA version is 11.7.

Using 16bit None Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name              | Type                         | Params
-------------------------------------------------------------------
0 | model             | HFAutoModelForTextPrediction | 108 M 
1 | validation_metric | MulticlassAccuracy           | 0     
2 | loss_func         | CrossEntropyLoss             | 0     
-------------------------------------------------------------------
108 M     Trainable params
0         Non-trainable params
108 M     Total params
217.786   Total estimated model params size (MB)
Epoch 0, global step 3: 'val_acc' reached 0.51500 (best 0.51500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549/epoch=0-step=3.ckpt' as top 3
Epoch 0, global step 7: 'val_acc' reached 0.57000 (best 0.57000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549/epoch=0-step=7.ckpt' as top 3
Epoch 1, global step 10: 'val_acc' reached 0.68500 (best 0.68500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549/epoch=1-step=10.ckpt' as top 3
Epoch 1, global step 14: 'val_acc' reached 0.81000 (best 0.81000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549/epoch=1-step=14.ckpt' as top 3
Epoch 2, global step 17: 'val_acc' reached 0.90500 (best 0.90500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549/epoch=2-step=17.ckpt' as top 3
Time limit reached. Elapsed time is 0:00:50. Signaling Trainer to stop.
Epoch 2, global step 18: 'val_acc' reached 0.91500 (best 0.91500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549/epoch=2-step=18.ckpt' as top 3
Start to fuse 3 checkpoints via the greedy soup algorithm.
AutoMM has created your model 🎉🎉🎉

- To load the model, use the code below:
    ```python
    from autogluon.multimodal import MultiModalPredictor
    predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549")
    ```

- You can open a terminal and launch Tensorboard to visualize the training log:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215549
    ```

- If you are not satisfied with the model, try to increase the training time, 
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub: https://github.com/autogluon/autogluon
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7f74f8df4fa0>

Although high_quality requires more training time than medium_quality, it also brings performance gains.

scores = predictor.evaluate(test_data, metrics=["roc_auc"])
scores
{'roc_auc': 0.9530079144565127}
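To make the speed/quality trade-off concrete, the preset choice can be framed as a simple decision on compute budget. The helper below is a hypothetical rule of thumb, not part of AutoGluon, and its thresholds are chosen only for illustration:

```python
def pick_preset(seconds_available: float, need_best: bool = False) -> str:
    """Toy heuristic (not an AutoGluon API) mapping a rough time
    budget to an AutoMM preset string."""
    if need_best:
        # Accept long training and high GPU memory use.
        return "best_quality"
    # Small budgets favor the smallest backbone.
    return "medium_quality" if seconds_available < 60 else "high_quality"

print(pick_preset(30))   # medium_quality
print(pick_preset(120))  # high_quality
```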

Best Quality#

If you want the best performance and don’t care about the training/inference cost, give the best_quality preset a try. High-end GPUs with large memory are preferred in this case. Compared to high_quality, it requires much longer training time.

from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label='label', eval_metric='acc', presets="best_quality")
predictor.fit(train_data=train_data, time_limit=180)
No path specified. Models will be saved in: "AutogluonModels/ag-20230622_215708/"
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
	2 unique label values:  [0, 1]
	If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
Global seed set to 0
AutoMM starts to create your model. ✨

- AutoGluon version is 0.8.1b20230622.

- Pytorch version is 1.13.1+cu117.

- Model will be saved to "/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215708".

- Validation metric is "acc".

- To track the learning progress, you can open a terminal and launch Tensorboard:
    ```shell
    # Assume you have installed tensorboard
    tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20230622_215708
    ```

Enjoy your coffee, and let AutoMM do the job ☕☕☕ Learn more at https://auto.gluon.ai

Given a sufficient time budget, best_quality generally achieves better performance than high_quality.

scores = predictor.evaluate(test_data, metrics=["roc_auc"])
scores

HPO Presets#

The above three presets all use the default hyperparameters, which might not be optimal for your tasks. Fortunately, we also support hyperparameter optimization (HPO) with simple presets. To perform HPO, append the suffix _hpo to any of the three presets, resulting in medium_quality_hpo, high_quality_hpo, and best_quality_hpo.
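The mapping from a base preset to its HPO variant is plain string concatenation of the _hpo suffix:

```python
# The three base presets named above, plus their HPO variants.
BASE_PRESETS = ["medium_quality", "high_quality", "best_quality"]
hpo_presets = [p + "_hpo" for p in BASE_PRESETS]
print(hpo_presets)  # ['medium_quality_hpo', 'high_quality_hpo', 'best_quality_hpo']
```

Any of these strings can be passed as the presets argument of MultiModalPredictor; note that HPO runs multiple trials, so it needs a larger time_limit than the single-model presets.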

Display Presets#

If you want to see the details behind each preset, we provide a utility function to retrieve its hyperparameter setup. For example, here are the hyperparameters of the high_quality preset.

import json
from autogluon.multimodal.presets import get_automm_presets

hyperparameters, hyperparameter_tune_kwargs = get_automm_presets(problem_type="default", presets="high_quality")
print(f"hyperparameters: {json.dumps(hyperparameters, sort_keys=True, indent=4)}")
print(f"hyperparameter_tune_kwargs: {json.dumps(hyperparameter_tune_kwargs, sort_keys=True, indent=4)}")

The HPO presets make several hyperparameters tunable, such as the model backbone, batch size, learning rate, maximum number of epochs, and optimizer type. Below are the details of the high_quality_hpo preset.

import json
import yaml
from autogluon.multimodal.presets import get_automm_presets

hyperparameters, hyperparameter_tune_kwargs = get_automm_presets(problem_type="default", presets="high_quality_hpo")
print(f"hyperparameters: {yaml.dump(hyperparameters, allow_unicode=True, default_flow_style=False)}")
print(f"hyperparameter_tune_kwargs: {json.dumps(hyperparameter_tune_kwargs, sort_keys=True, indent=4)}")

Other Examples#

You may go to AutoMM Examples to explore other examples of AutoMM.

Customization#

To learn how to customize AutoMM, please refer to Customize AutoMM.