AutoMM Detection - Quick Start on a Tiny COCO Format Dataset¶
In this section, our goal is to quickly finetune a pretrained model on a small dataset in COCO format and evaluate it on the test set. Both the training and test sets are in COCO format. See Convert Data to COCO Format for how to convert other datasets to COCO format.
Setting up the imports¶
Make sure mmcv and mmdet are installed:
#!pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 # To use object detection, downgrade the torch version if it's >=2.2
!mim install "mmcv==2.1.0" # For Google Colab, use the line below instead to install mmcv
#!pip install "mmcv==2.1.0" -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.1.0/index.html
!pip install "mmdet==3.2.0"
To start, let’s import MultiModalPredictor:
from autogluon.multimodal import MultiModalPredictor
We also import some other packages used in this tutorial:
import os
import time
from autogluon.core.utils.loaders import load_zip
Downloading Data¶
We have the sample dataset ready in the cloud. Let’s download it:
zip_file = "https://automl-mm-bench.s3.amazonaws.com/object_detection_dataset/tiny_motorbike_coco.zip"
download_dir = "./tiny_motorbike_coco"
load_zip.unzip(zip_file, unzip_dir=download_dir)
data_dir = os.path.join(download_dir, "tiny_motorbike")
train_path = os.path.join(data_dir, "Annotations", "trainval_cocoformat.json")
test_path = os.path.join(data_dir, "Annotations", "test_cocoformat.json")
When using a COCO format dataset, the input is the JSON annotation file of the dataset split.
In this example, trainval_cocoformat.json is the annotation file of the train-and-validation split,
and test_cocoformat.json is the annotation file of the test split.
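A COCO annotation file is plain JSON with three top-level keys: `images`, `annotations`, and `categories`. The snippet below is a minimal illustrative sketch of that structure (the tiny in-memory dict and the file name `motorbike_001.jpg` are hypothetical stand-ins for a real split such as `trainval_cocoformat.json`):

```python
import json

# A minimal stand-in for a COCO annotation file such as trainval_cocoformat.json.
coco = {
    "images": [{"id": 0, "file_name": "motorbike_001.jpg", "width": 640, "height": 480}],
    "annotations": [
        # bbox is [x, y, width, height] in pixels; category_id refers to "categories".
        {"id": 0, "image_id": 0, "category_id": 1, "bbox": [10, 20, 200, 150], "area": 30000, "iscrowd": 0}
    ],
    "categories": [{"id": 1, "name": "motorbike"}],
}

def summarize(ann: dict) -> dict:
    """Return basic counts and the category names of a COCO-format annotation dict."""
    return {
        "num_images": len(ann["images"]),
        "num_annotations": len(ann["annotations"]),
        "categories": [c["name"] for c in ann["categories"]],
    }

# For a real split you could do: summarize(json.load(open(train_path)))
print(json.dumps(summarize(coco)))
```

Running `summarize` on a real annotation file is a quick sanity check that a split contains the images and categories you expect before starting training.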
Creating the MultiModalPredictor¶
We select the "medium_quality" preset, which uses a YOLOX-large model pretrained on the COCO dataset. This preset is fast to finetune and run inference with,
and easy to deploy. We also provide the "high_quality" preset with a DINO-ResNet50 model and the "best_quality" preset with a DINO-SwinL model; both deliver much higher performance, but are slower and use more GPU memory.
presets = "medium_quality"
We create the MultiModalPredictor with the selected preset.
We need to specify the problem_type as "object_detection",
and provide a sample_data_path so the predictor can infer the categories of the dataset.
Here we provide the train_path; any other split of this dataset works as well.
We also provide a path to save the predictor.
If path is not specified, the predictor will be saved to an automatically generated directory with a timestamp under AutogluonModels.
# Init predictor
import uuid
model_path = f"./tmp/{uuid.uuid4().hex}-quick_start_tutorial_temp_save"
predictor = MultiModalPredictor(
problem_type="object_detection",
sample_data_path=train_path,
presets=presets,
path=model_path,
)
Finetuning the Model¶
The learning rate, number of epochs, and batch size are included in the presets, so there is no need to specify them. Note that by default we use a two-stage learning rate during finetuning, where the model head gets a 100x higher learning rate. Using a high learning rate only on the head layers makes the model converge faster during finetuning, and usually gives better performance as well, especially on small datasets with hundreds or thousands of images. We also time the fit process to get a sense of the speed. We ran it on a g4.2xlarge EC2 machine on AWS, and part of the command output is shown below:
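The two-stage learning rate idea amounts to ordinary optimizer parameter groups: head parameters get a multiplied rate while the rest of the network keeps the base rate. Below is an illustrative sketch in plain Python; the `two_stage_lr_groups` helper, the default values, and the layer names are hypothetical, not AutoGluon internals:

```python
def two_stage_lr_groups(param_names, base_lr=1e-4, head_multiplier=100.0, head_prefix="bbox_head"):
    """Split parameter names into two optimizer groups: head layers get
    base_lr * head_multiplier, everything else keeps base_lr."""
    head = [n for n in param_names if n.startswith(head_prefix)]
    backbone = [n for n in param_names if not n.startswith(head_prefix)]
    return [
        {"params": backbone, "lr": base_lr},
        {"params": head, "lr": base_lr * head_multiplier},
    ]

# Example with a few YOLOX-like parameter names:
names = [
    "backbone.stem.conv.weight",
    "neck.out_convs.0.weight",
    "bbox_head.multi_level_conv_cls.0.weight",
]
groups = two_stage_lr_groups(names)
print([g["lr"] for g in groups])  # the head group's lr is 100x the base lr
```

In a real training loop, groups like these would be passed directly to an optimizer constructor (e.g. `torch.optim.SGD(groups)`), so the head and backbone are updated at different rates within a single optimizer.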
start = time.time()
predictor.fit(train_path) # Fit
train_end = time.time()
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Downloading yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth from https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_l_8x8_300e_coco/yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth...
Loads checkpoint by local backend from path: yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth
The model and loaded state dict do not match exactly
size mismatch for bbox_head.multi_level_conv_cls.0.weight: copying a param with shape torch.Size([80, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([10, 256, 1, 1]).
size mismatch for bbox_head.multi_level_conv_cls.0.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([10]).
size mismatch for bbox_head.multi_level_conv_cls.1.weight: copying a param with shape torch.Size([80, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([10, 256, 1, 1]).
size mismatch for bbox_head.multi_level_conv_cls.1.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([10]).
size mismatch for bbox_head.multi_level_conv_cls.2.weight: copying a param with shape torch.Size([80, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([10, 256, 1, 1]).
size mismatch for bbox_head.multi_level_conv_cls.2.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([10]).
=================== System Info ===================
AutoGluon Version: 1.2b20241213
Python Version: 3.11.9
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Tue Sep 24 10:00:37 UTC 2024
CPU Count: 8
Pytorch Version: 2.5.1+cu124
CUDA Version: 12.4
Memory Avail: 28.42 GB / 30.95 GB (91.8%)
Disk Space Avail: WARNING, an exception (FileNotFoundError) occurred while attempting to get available disk space. Consider opening a GitHub Issue.
===================================================
Using default root folder: ./tiny_motorbike_coco/tiny_motorbike/Annotations/... Specify `model.mmdet_image.coco_root=...` in hyperparameters if you think it is wrong.
AutoMM starts to create your model. ✨✨✨
To track the learning progress, you can open a terminal and launch Tensorboard:
```shell
# Assume you have installed tensorboard
tensorboard --logdir /home/ci/autogluon/docs/tutorials/multimodal/object_detection/quick_start/tmp/885ef80b39b94b55a6b2dbedbfa3bc68-quick_start_tutorial_temp_save
```
Seed set to 0
GPU Count: 1
GPU Count to be Used: 1
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
`Trainer(val_check_interval=1.0)` was configured so validation will run at the end of the training epoch..
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params | Mode
-------------------------------------------------------------------------------
0 | model | MMDetAutoModelForObjectDetection | 54.2 M | train
1 | validation_metric | MeanAveragePrecision | 0 | train
-------------------------------------------------------------------------------
54.2 M Trainable params
0 Non-trainable params
54.2 M Total params
216.620 Total estimated model params size (MB)
592 Modules in train mode
0 Modules in eval mode
339 def __next__(self) -> _ITERATOR_RETURN:
340 assert self._iterator is not None
--> 341 out = next(self._iterator)
342 if isinstance(self._iterator, _Sequential):
343 return out
File ~/opt/venv/lib/python3.11/site-packages/lightning/pytorch/utilities/combined_loader.py:78, in _MaxSizeCycle.__next__(self)
76 for i in range(n):
77 try:
---> 78 out[i] = next(self.iterators[i])
79 except StopIteration:
80 self._consumed[i] = True
File ~/opt/venv/lib/python3.11/site-packages/torch/utils/data/dataloader.py:701, in _BaseDataLoaderIter.__next__(self)
698 if self._sampler_iter is None:
699 # TODO(https://github.com/pytorch/pytorch/issues/76750)
700 self._reset() # type: ignore[call-arg]
--> 701 data = self._next_data()
702 self._num_yielded += 1
703 if (
704 self._dataset_kind == _DatasetKind.Iterable
705 and self._IterableDataset_len_called is not None
706 and self._num_yielded > self._IterableDataset_len_called
707 ):
File ~/opt/venv/lib/python3.11/site-packages/torch/utils/data/dataloader.py:1465, in _MultiProcessingDataLoaderIter._next_data(self)
1463 else:
1464 del self._task_info[idx]
-> 1465 return self._process_data(data)
File ~/opt/venv/lib/python3.11/site-packages/torch/utils/data/dataloader.py:1491, in _MultiProcessingDataLoaderIter._process_data(self, data)
1489 self._try_put_index()
1490 if isinstance(data, ExceptionWrapper):
-> 1491 data.reraise()
1492 return data
File ~/opt/venv/lib/python3.11/site-packages/torch/_utils.py:715, in ExceptionWrapper.reraise(self)
711 except TypeError:
712 # If the exception takes multiple arguments, don't try to
713 # instantiate since we don't know how to
714 raise RuntimeError(msg) from None
--> 715 raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 134, in _load_item
per_ret = apply_data_processor(
^^^^^^^^^^^^^^^^^^^^^
TypeError: apply_data_processor() got an unexpected keyword argument 'feature_modalities'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 134, in _load_item
per_ret = apply_data_processor(
^^^^^^^^^^^^^^^^^^^^^
TypeError: apply_data_processor() got an unexpected keyword argument 'feature_modalities'
[... the same TypeError and "During handling of the above exception, another exception occurred:" block repeated for each sample the dataset retried ...]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ci/opt/venv/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 351, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/opt/venv/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/opt/venv/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 52, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[... the same pair of __getitem__ / _load_item retry frames repeated as _load_item fell back to the next index ...]
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 146, in _load_item
return self.__getitem__((idx + 1) % self.__len__())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 167, in __getitem__
results = copy.deepcopy(self._load_item(idx))
^^^^^^^^^^^^^^^^^^^^
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 148, in _load_item
raise e
File "/home/ci/autogluon/multimodal/src/autogluon/multimodal/data/dataset_mmlab/multi_image_mix_dataset.py", line 134, in _load_item
per_ret = apply_data_processor(
^^^^^^^^^^^^^^^^^^^^^
TypeError: apply_data_processor() got an unexpected keyword argument 'feature_modalities'
Notice that at the end of each progress bar, if the checkpoint at the current stage is saved,
the model’s save path is printed.
In this example, it is ./quick_start_tutorial_temp_save.
Print out the elapsed time, and we can see that finetuning is fast!
print("This finetuning takes %.2f seconds." % (train_end - start))
Evaluation¶
To evaluate the model we just trained, run the following code.
The evaluation results are shown in the command-line output. The first line is the mAP under the COCO standard, and the second line is the mAP under the VOC standard (also called mAP50). For more details about these metrics, see COCO’s evaluation guideline. Note that to present a fast finetuning we use the “medium_quality” preset; you can get better results on this dataset by simply using the “high_quality” or “best_quality” presets, by customizing your own model and hyperparameter settings (see Customization), or by following other examples such as Fast Fine-tune Coco or High Performance Fine-tune Coco.
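mAP50 counts a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5, whereas the COCO mAP averages over IoU thresholds from 0.5 to 0.95. As a minimal illustration (not part of AutoGluon’s API), IoU for two boxes in [x1, y1, x2, y2] format can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes in [x1, y1, x2, y2] format."""
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```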
predictor.evaluate(test_path)
eval_end = time.time()
Print out the evaluation time:
print("The evaluation takes %.2f seconds." % (eval_end - train_end))
We can load a new predictor from the previous save path, and we can also reset the number of GPUs to use if not all devices are available:
# Load and reset num_gpus
new_predictor = MultiModalPredictor.load(model_path)
new_predictor.set_num_gpus(1)
Evaluating the new predictor gives us exactly the same result:
# Evaluate new predictor
new_predictor.evaluate(test_path)
For how to set the hyperparameters and finetune the model with higher performance, see AutoMM Detection - High Performance Finetune on COCO Format Dataset.
Inference¶
Now that we have gone through model setup, finetuning, and evaluation, this section covers inference. Specifically, we lay out the steps for using the model to make predictions and visualize the results.
To run inference on the entire test set, run:
pred = predictor.predict(test_path)
print(pred)
The output pred is a pandas DataFrame with two columns, image and bboxes.
In image, each row contains the image path.
In bboxes, each row is a list of dictionaries, each representing one bounding box: {"class": <predicted_class_name>, "bbox": [x1, y1, x2, y2], "score": <confidence_score>}
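Since each bboxes entry follows this dictionary layout, a common post-processing step is dropping low-confidence boxes. A minimal sketch, assuming pred has the columns described above (the helper name and the 0.5 threshold are ours, not part of AutoGluon):

```python
import pandas as pd

def filter_predictions(pred: pd.DataFrame, conf_threshold: float = 0.5) -> pd.DataFrame:
    """Keep only boxes whose confidence score meets the threshold."""
    filtered = pred.copy()
    filtered["bboxes"] = filtered["bboxes"].apply(
        lambda boxes: [b for b in boxes if b["score"] >= conf_threshold]
    )
    return filtered

# Tiny synthetic example mimicking the output layout described above
demo = pd.DataFrame({
    "image": ["street.jpg"],
    "bboxes": [[
        {"class": "person", "bbox": [10, 20, 50, 80], "score": 0.9},
        {"class": "car", "bbox": [5, 5, 30, 30], "score": 0.2},
    ]],
})
print(filter_predictions(demo, 0.5)["bboxes"].iloc[0])  # only the high-score box remains
```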
Note that, by default, predictor.predict does not save the detection results to a file.
To run inference and save the results, run the following:
pred = predictor.predict(test_path, save_results=True)
Here, pred is also saved into a .txt file that follows exactly the same layout as the DataFrame.
You can use a predictor initialized in any way (e.g., a finetuned predictor, a predictor with a pretrained model, etc.).
Visualizing Results¶
To run visualizations, ensure that you have OpenCV installed. If you haven’t already, install OpenCV by running
!pip install opencv-python
To visualize the detection bounding boxes, run the following:
from autogluon.multimodal.utils import ObjectDetectionVisualizer
conf_threshold = 0.4 # Specify a confidence threshold to filter out unwanted boxes
image_result = pred.iloc[30]
img_path = image_result.image # Select an image to visualize
visualizer = ObjectDetectionVisualizer(img_path) # Initialize the Visualizer
out = visualizer.draw_instance_predictions(image_result, conf_threshold=conf_threshold) # Draw detections
visualized = out.get_image() # Get the visualized image
from PIL import Image
from IPython.display import display
img = Image.fromarray(visualized, 'RGB')
display(img)
Testing on Your Own Data¶
You can also predict on your own images with various input formats. The following is an example:
Download the example image:
from autogluon.multimodal import download
image_url = "https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/detection/street_small.jpg"
test_image = download(image_url)
Run inference on data in a JSON file of COCO format (see Convert Data to COCO Format for more details about COCO format). Note that since the root directory is, by default, the parent folder of the annotation file, we put the annotation file in a folder here:
import json
import os

# Create an input file for the demo
data = {"images": [{"id": 0, "width": -1, "height": -1, "file_name": test_image}], "categories": []}
os.makedirs("input_data_for_demo", exist_ok=True)
input_file = "input_data_for_demo/demo_annotation.json"
with open(input_file, "w") as f:
    json.dump(data, f)
pred_test_image = predictor.predict(input_file)
print(pred_test_image)
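Before passing such a file to the predictor, a quick sanity check can confirm that it parses and carries the minimal COCO keys. This is purely illustrative (the helper name is ours, not an AutoGluon API):

```python
import json
import os
import tempfile

def check_coco_inference_file(path):
    """Minimal sanity check for an inference-only COCO annotation file."""
    with open(path) as f:
        data = json.load(f)
    assert "images" in data and "categories" in data, "missing top-level COCO keys"
    for img in data["images"]:
        assert "id" in img and "file_name" in img, "each image needs id and file_name"
    return len(data["images"])

# Demo: write a minimal annotation file and check it
demo = {"images": [{"id": 0, "width": -1, "height": -1, "file_name": "street_small.jpg"}], "categories": []}
path = os.path.join(tempfile.mkdtemp(), "demo_annotation.json")
with open(path, "w") as f:
    json.dump(demo, f)
print(check_coco_inference_file(path))  # 1
```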
Run inference on data in a list of image file names:
pred_test_image = predictor.predict([test_image])
print(pred_test_image)
Other Examples¶
You may go to AutoMM Examples to explore other examples of AutoMM.
Customization¶
To learn how to customize AutoMM, please refer to Customize AutoMM.
Citation¶
@article{DBLP:journals/corr/abs-2107-08430,
author = {Zheng Ge and
Songtao Liu and
Feng Wang and
Zeming Li and
Jian Sun},
title = {{YOLOX:} Exceeding {YOLO} Series in 2021},
journal = {CoRR},
volume = {abs/2107.08430},
year = {2021},
url = {https://arxiv.org/abs/2107.08430},
eprinttype = {arXiv},
eprint = {2107.08430},
timestamp = {Tue, 05 Apr 2022 14:09:44 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-08430.bib},
bibsource = {dblp computer science bibliography, https://dblp.org},
}