Continuous Training with AutoMM¶
Continuous training provides a method for machine learning models to refine their performance over time. It enables models to build upon previously acquired knowledge, thereby enhancing accuracy, facilitating knowledge transfer across tasks, and saving computational resources. In this tutorial, we will demonstrate three use cases of continuous training with AutoMM.
Use Case 1: Expanding Training with Additional Data or Training Time¶
Sometimes a model can benefit from more training epochs or additional training time, particularly if it is underfitting. With AutoMM, you can easily extend your model's training without starting from scratch.
It is also common to need to incorporate more data into your model. AutoMM allows you to continue training on data of the same problem type, with the same classes if it is a multiclass problem. This flexibility makes it easy to improve and adapt your models as your data grows.
We use the Stanford Sentiment Treebank (SST) dataset as an example. It consists of movie reviews and their associated sentiment. Given a new movie review, the goal is to predict the sentiment reflected in the text (here a binary classification, where a review is labeled 1 if it conveys a positive opinion and 0 otherwise). Let's first load and look at the data, noting that the labels are stored in a column called label.
from autogluon.core.utils.loaders import load_pd
train_data = load_pd.load("https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/train.parquet")
test_data = load_pd.load("https://autogluon-text.s3-accelerate.amazonaws.com/glue/sst/dev.parquet")
subsample_size = 1000 # subsample data for faster demo, try setting this to larger values
train_data_1 = train_data.sample(n=subsample_size, random_state=0)
train_data_1.head(10)
|   | sentence | label |
|---|---|---|
| 43787 | very pleasing at its best moments | 1 |
| 16159 | , american chai is enough to make you put away... | 0 |
| 59015 | too much like an infomercial for ram dass 's l... | 0 |
| 5108 | a stirring visual sequence | 1 |
| 67052 | cool visual backmasking | 1 |
| 35938 | hard ground | 0 |
| 49879 | the striking , quietly vulnerable personality ... | 1 |
| 51591 | pan nalin 's exposition is beautiful and myste... | 1 |
| 56780 | wonderfully loopy | 1 |
| 28518 | most beautiful , evocative | 1 |
Now let’s train the model. To ensure this tutorial runs quickly, we simply call fit() with a subset of 1000 training examples and limit the runtime to approximately 1 minute. To achieve reasonable performance in your applications, we recommend setting a much longer time_limit (e.g., 1 hour), or not specifying time_limit at all (time_limit=None).
from autogluon.multimodal import MultiModalPredictor
import uuid
model_path = f"./tmp/{uuid.uuid4().hex}-automm_sst"
predictor = MultiModalPredictor(label="label", eval_metric="acc", path=model_path)
predictor.fit(train_data_1, time_limit=60)
/home/ci/opt/venv/lib/python3.11/site-packages/mmengine/optim/optimizer/zero_optimizer.py:11: DeprecationWarning: `TorchScript` support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the `torch.compile` optimizer instead.
from torch.distributed.optim import \
=================== System Info ===================
AutoGluon Version: 1.2b20241213
Python Version: 3.11.9
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Sep 24 10:00:37 UTC 2024
CPU Count: 8
Pytorch Version: 2.5.1+cu124
CUDA Version: 12.4
Memory Avail: 28.40 GB / 30.95 GB (91.8%)
Disk Space Avail: 187.72 GB / 255.99 GB (73.3%)
===================================================
AutoGluon infers your prediction problem is: 'binary' (because only two unique label-values observed).
2 unique label values: [1, 0]
If 'binary' is not the correct problem_type, please manually specify the problem_type parameter during Predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression', 'quantile'])
AutoMM starts to create your model. ✨✨✨
To track the learning progress, you can open a terminal and launch Tensorboard:
```shell
# Assume you have installed tensorboard
tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst
```
Seed set to 0
GPU Count: 1
GPU Count to be Used: 1
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params | Mode
---------------------------------------------------------------------------
0 | model | HFAutoModelForTextPrediction | 108 M | train
1 | validation_metric | MulticlassAccuracy | 0 | train
2 | loss_func | CrossEntropyLoss | 0 | train
---------------------------------------------------------------------------
108 M Trainable params
0 Non-trainable params
108 M Total params
435.573 Total estimated model params size (MB)
4 Modules in train mode
225 Modules in eval mode
Epoch 0, global step 3: 'val_accuracy' reached 0.56000 (best 0.56000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst/epoch=0-step=3.ckpt' as top 3
Epoch 0, global step 7: 'val_accuracy' reached 0.62500 (best 0.62500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst/epoch=0-step=7.ckpt' as top 3
Epoch 1, global step 10: 'val_accuracy' reached 0.70500 (best 0.70500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst/epoch=1-step=10.ckpt' as top 3
Epoch 1, global step 14: 'val_accuracy' reached 0.84500 (best 0.84500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst/epoch=1-step=14.ckpt' as top 3
Epoch 2, global step 17: 'val_accuracy' reached 0.89000 (best 0.89000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst/epoch=2-step=17.ckpt' as top 3
Epoch 2, global step 21: 'val_accuracy' reached 0.82500 (best 0.89000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst/epoch=2-step=21.ckpt' as top 3
Time limit reached. Elapsed time is 0:01:01. Signaling Trainer to stop.
Start to fuse 3 checkpoints via the greedy soup algorithm.
/home/ci/autogluon/multimodal/src/autogluon/multimodal/learners/base.py:2150: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
state_dict = torch.load(path, map_location=torch.device("cpu"))["state_dict"] # nosec B614
/home/ci/autogluon/multimodal/src/autogluon/multimodal/utils/checkpoint.py:59: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
state_dict = torch.load(per_path, map_location=torch.device("cpu"))["state_dict"] # nosec B614
/home/ci/autogluon/multimodal/src/autogluon/multimodal/utils/checkpoint.py:77: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
avg_state_dict = torch.load(checkpoint_paths[0], map_location=torch.device("cpu"))["state_dict"] # nosec B614
AutoMM has created your model. 🎉🎉🎉
To load the model, use the code below:
```python
from autogluon.multimodal import MultiModalPredictor
predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst")
```
If you are not satisfied with the model, try to increase the training time,
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub (https://github.com/autogluon/autogluon/issues).
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7f4902f73a50>
After training, we can evaluate our predictor on separate test data formatted similarly to our training data:
test_score = predictor.evaluate(test_data)
print(test_score)
{'accuracy': 0.8864678899082569}
If the training completed successfully, model.ckpt can be found under model_path. If you think the model still underfits, you can continue training from this checkpoint simply by running another .fit() with the same data. If you have new data to add and don’t want to train from scratch, you can also run .fit() with the new combined dataset.
predictor_2 = MultiModalPredictor.load(model_path) # you can also use the `predictor` we assigned above
train_data_2 = train_data.drop(train_data_1.index).sample(n=subsample_size, random_state=0)
predictor_2.fit(train_data_2, time_limit=60)
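If you would rather continue training on the old and new data together, you can concatenate the two batches before calling .fit(). A minimal sketch with toy frames standing in for the SST subsets (the column names match the tutorial; the rows are made up):

```python
import pandas as pd

# Toy stand-ins for the SST subsets used above.
train_data_1 = pd.DataFrame({"sentence": ["good movie", "bad plot"], "label": [1, 0]})
train_data_2 = pd.DataFrame({"sentence": ["great acting", "dull pacing"], "label": [1, 0]})

# Combine old and new data; reset the index so row labels stay unique.
combined = pd.concat([train_data_1, train_data_2], ignore_index=True)
print(len(combined))  # 4 rows: both batches together
# predictor_2.fit(combined, time_limit=60)  # same call as above, larger dataset
```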
Load pretrained checkpoint: /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/a4c4cf664c594e629b9fbf95c1cd6e85-automm_sst/model.ckpt
A new predictor save path is created. This is to prevent you to overwrite previous predictor saved here. You could check current save path at predictor._save_path. If you still want to use this path, set resume=True
No path specified. Models will be saved in: "AutogluonModels/ag-20241213_073647"
=================== System Info ===================
AutoGluon Version: 1.2b20241213
Python Version: 3.11.9
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Sep 24 10:00:37 UTC 2024
CPU Count: 8
Pytorch Version: 2.5.1+cu124
CUDA Version: 12.4
Memory Avail: 25.58 GB / 30.95 GB (82.7%)
Disk Space Avail: 186.51 GB / 255.99 GB (72.9%)
===================================================
AutoMM starts to create your model. ✨✨✨
To track the learning progress, you can open a terminal and launch Tensorboard:
```shell
# Assume you have installed tensorboard
tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647
```
Seed set to 0
GPU Count: 1
GPU Count to be Used: 1
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params | Mode
---------------------------------------------------------------------------
0 | model | HFAutoModelForTextPrediction | 108 M | train
1 | validation_metric | MulticlassAccuracy | 0 | train
2 | loss_func | CrossEntropyLoss | 0 | train
---------------------------------------------------------------------------
108 M Trainable params
0 Non-trainable params
108 M Total params
435.573 Total estimated model params size (MB)
229 Modules in train mode
0 Modules in eval mode
Epoch 0, global step 3: 'val_accuracy' reached 0.86500 (best 0.86500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647/epoch=0-step=3.ckpt' as top 3
Epoch 0, global step 7: 'val_accuracy' reached 0.83000 (best 0.86500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647/epoch=0-step=7.ckpt' as top 3
Epoch 1, global step 10: 'val_accuracy' reached 0.86500 (best 0.86500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647/epoch=1-step=10.ckpt' as top 3
Epoch 1, global step 14: 'val_accuracy' reached 0.88500 (best 0.88500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647/epoch=1-step=14.ckpt' as top 3
Epoch 2, global step 17: 'val_accuracy' reached 0.88500 (best 0.88500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647/epoch=2-step=17.ckpt' as top 3
Epoch 2, global step 21: 'val_accuracy' reached 0.90500 (best 0.90500), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647/epoch=2-step=21.ckpt' as top 3
Epoch 3, global step 24: 'val_accuracy' reached 0.91000 (best 0.91000), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647/epoch=3-step=24.ckpt' as top 3
Time limit reached. Elapsed time is 0:01:04. Signaling Trainer to stop.
Start to fuse 3 checkpoints via the greedy soup algorithm.
AutoMM has created your model. 🎉🎉🎉
To load the model, use the code below:
```python
from autogluon.multimodal import MultiModalPredictor
predictor = MultiModalPredictor.load("/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/AutogluonModels/ag-20241213_073647")
```
If you are not satisfied with the model, try to increase the training time,
adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),
or post issues on GitHub (https://github.com/autogluon/autogluon/issues).
<autogluon.multimodal.predictor.MultiModalPredictor at 0x7f49ed2a3950>
test_score_2 = predictor_2.evaluate(test_data)
print(test_score_2)
{'accuracy': 0.9059633027522935}
Use Case 2: Resuming Training from the Last Checkpoint¶
If your training process crashed for some reason, AutoMM allows you to resume training right from where you left off: last.ckpt will be saved under model_path instead of model.ckpt. To resume training, call MultiModalPredictor.load() with the resume option:
predictor_resume = MultiModalPredictor.load(path=model_path, resume=True)
predictor_resume.fit(train_data, time_limit=60)
Use Case 3: Applying Pre-Trained Models to New Tasks¶
Often, you’ll encounter situations where a new task is related but not identical to a task you’ve previously trained a model for (e.g., training a more fine-grained sentiment analysis model, or adding more classes to your multiclass model). If you wish to leverage the knowledge the model has already learned from the old data so that it learns the new task more quickly and effectively, AutoMM supports dumping your trained models as model weights and using them as foundation models:
dump_model_path = f"./tmp/{uuid.uuid4().hex}-automm_sst"
predictor.dump_model(save_path=dump_model_path)
Model weights and tokenizer for hf_text are saved to ./tmp/58de35e9a94344da82674965d00e61a0-automm_sst/hf_text.
'./tmp/58de35e9a94344da82674965d00e61a0-automm_sst'
You can then load the weights of the trained model and continue training / fine-tuning it on the new data.
Here is an example that uses the binary text model we trained previously for a regression task. We use the Semantic Textual Similarity Benchmark dataset for illustration only, so you may want to apply this feature to more relevant datasets. In this data, the column named score contains the numerical values we would like to predict: human-annotated similarity scores for each pair of sentences.
sts_train_data = load_pd.load("https://autogluon-text.s3-accelerate.amazonaws.com/glue/sts/train.parquet")[
["sentence1", "sentence2", "score"]
]
sts_test_data = load_pd.load("https://autogluon-text.s3-accelerate.amazonaws.com/glue/sts/dev.parquet")[
["sentence1", "sentence2", "score"]
]
sts_train_data.head(10)
Loaded data from: https://autogluon-text.s3-accelerate.amazonaws.com/glue/sts/train.parquet | Columns = 4 / 4 | Rows = 5749 -> 5749
Loaded data from: https://autogluon-text.s3-accelerate.amazonaws.com/glue/sts/dev.parquet | Columns = 4 / 4 | Rows = 1500 -> 1500
|   | sentence1 | sentence2 | score |
|---|---|---|---|
| 0 | A plane is taking off. | An air plane is taking off. | 5.00 |
| 1 | A man is playing a large flute. | A man is playing a flute. | 3.80 |
| 2 | A man is spreading shreded cheese on a pizza. | A man is spreading shredded cheese on an uncoo... | 3.80 |
| 3 | Three men are playing chess. | Two men are playing chess. | 2.60 |
| 4 | A man is playing the cello. | A man seated is playing the cello. | 4.25 |
| 5 | Some men are fighting. | Two men are fighting. | 4.25 |
| 6 | A man is smoking. | A man is skating. | 0.50 |
| 7 | The man is playing the piano. | The man is playing the guitar. | 1.60 |
| 8 | A man is playing on a guitar and singing. | A woman is playing an acoustic guitar and sing... | 2.20 |
| 9 | A person is throwing a cat on to the ceiling. | A person throws a cat on the ceiling. | 5.00 |
To specify a custom model that you created, use the hyperparameters option in .fit():
hyperparameters={
"model.hf_text.checkpoint_name": dump_model_path
}
sts_model_path = f"./tmp/{uuid.uuid4().hex}-automm_sts"
predictor_sts = MultiModalPredictor(label="score", path=sts_model_path)
predictor_sts.fit(
sts_train_data, hyperparameters={"model.hf_text.checkpoint_name": f"{dump_model_path}/hf_text"}, time_limit=30
)
=================== System Info ===================
AutoGluon Version: 1.2b20241213
Python Version: 3.11.9
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Sep 24 10:00:37 UTC 2024
CPU Count: 8
Pytorch Version: 2.5.1+cu124
CUDA Version: 12.4
Memory Avail: 24.74 GB / 30.95 GB (80.0%)
Disk Space Avail: 185.70 GB / 255.99 GB (72.5%)
===================================================
AutoGluon infers your prediction problem is: 'regression' (because dtype of label-column == float and label-values can't be converted to int).
Label info (max, min, mean, stddev): (5.0, 0.0, 2.701, 1.4644)
If 'regression' is not the correct problem_type, please manually specify the problem_type parameter during Predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression', 'quantile'])
AutoMM starts to create your model. ✨✨✨
To track the learning progress, you can open a terminal and launch Tensorboard:
```shell
# Assume you have installed tensorboard
tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/387ac1665a60409db097d174a2d48fe2-automm_sts
```
Seed set to 0
GPU Count: 1
GPU Count to be Used: 1
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params | Mode
---------------------------------------------------------------------------
0 | model | HFAutoModelForTextPrediction | 108 M | train
1 | validation_metric | MeanSquaredError | 0 | train
2 | loss_func | MSELoss | 0 | train
---------------------------------------------------------------------------
108 M Trainable params
0 Non-trainable params
108 M Total params
435.570 Total estimated model params size (MB)
4 Modules in train mode
225 Modules in eval mode
Epoch 0, global step 20: 'val_rmse' reached 0.59089 (best 0.59089), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/387ac1665a60409db097d174a2d48fe2-automm_sts/epoch=0-step=20.ckpt' as top 3
Time limit reached. Elapsed time is 0:00:30. Signaling Trainer to stop.
Epoch 0, global step 34: 'val_rmse' reached 0.52262 (best 0.52262), saving model to '/home/ci/autogluon/docs/tutorials/multimodal/advanced_topics/tmp/387ac1665a60409db097d174a2d48fe2-automm_sts/epoch=0-step=34.ckpt' as top 3
Start to fuse 2 checkpoints via the greedy soup algorithm.
test_score = predictor_sts.evaluate(sts_test_data, metrics=["rmse", "pearsonr", "spearmanr"])
print("RMSE = {:.2f}".format(test_score["rmse"]))
print("PEARSONR = {:.4f}".format(test_score["pearsonr"]))
print("SPEARMANR = {:.4f}".format(test_score["spearmanr"]))
We currently support dumping timm image models, MMDetection image models, HuggingFace text models, and any fusion model that comprises the aforementioned models. Similarly, we can load a custom-trained timm image model with:
{"model.timm_image.checkpoint_name": timm_image_model_path}
and a custom trained MMDetection model with:
{"model.mmdet_image.checkpoint_name": mmdet_image_model_path}
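Putting these together, the overrides are plain hyperparameter mappings that can be combined in a single dictionary. The paths below are hypothetical placeholders for wherever you dumped the corresponding models:

```python
# Hypothetical dump locations; substitute the paths returned by dump_model().
hf_text_model_path = "./tmp/my_dump/hf_text"
timm_image_model_path = "./tmp/my_dump/timm_image"
mmdet_image_model_path = "./tmp/my_dump/mmdet_image"

overrides = {
    "model.hf_text.checkpoint_name": hf_text_model_path,        # HuggingFace text
    "model.timm_image.checkpoint_name": timm_image_model_path,  # timm image
    "model.mmdet_image.checkpoint_name": mmdet_image_model_path,  # MMDetection
}
# Any subset of these keys can be passed as `hyperparameters=overrides` to .fit(),
# exactly like the hf_text example above.
```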
This feature helps you apply the knowledge from a previously trained task to a new task, saving you time and computational power. We will not go into details in this tutorial, but keep in mind that we have not addressed a major challenge of this use case: catastrophic forgetting.
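One common, framework-agnostic mitigation for catastrophic forgetting is experience replay: mix a small slice of the old task's data into the new training set so the model keeps seeing old examples. A toy sketch with pandas (the frames and the 20% replay fraction are illustrative assumptions, not an AutoMM API):

```python
import pandas as pd

# Toy stand-ins for an old and a new task's training data.
old_task = pd.DataFrame({"sentence": [f"old {i}" for i in range(100)],
                         "label": [i % 2 for i in range(100)]})
new_task = pd.DataFrame({"sentence": [f"new {i}" for i in range(100)],
                         "label": [i % 2 for i in range(100)]})

replay_frac = 0.2  # keep 20% of the old data in the mix (a tunable choice)
replay = old_task.sample(frac=replay_frac, random_state=0)
mixed = pd.concat([new_task, replay], ignore_index=True)
print(len(mixed))  # 120 rows: all new data plus the replayed slice
```

The mixed frame would then be passed to .fit() like any other training DataFrame.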