Object Detection - Quick Start¶
Object detection is the process of identifying and localizing objects in an image and is an important task in computer vision. Follow this tutorial to learn how to use AutoGluon for object detection.
Tip: If you are new to AutoGluon, review Image Prediction - Quick Start first to learn the basics of the AutoGluon API.
Our goal is to detect motorbikes in images. A tiny dataset is collected from the VOC dataset and contains only the motorbike category. A detection model pretrained on the COCO dataset is fine-tuned on this small dataset. With the help of AutoGluon, we can automatically try many models with different hyperparameters and return the best one as our final model.
To start, import ObjectDetector:
from autogluon.vision import ObjectDetector
Tiny_motorbike Dataset¶
We collect a toy dataset for detecting motorbikes in images. From the VOC dataset, images are randomly selected for training, validation, and testing: 120 images for training, 50 for validation, and 50 for testing. This tiny dataset follows the same format as VOC.
Using the commands below, we can download the dataset, which is only about 23 MB. The unzipped folder is named tiny_motorbike. In practice, the task's dataset helper performs the download and extraction automatically and loads the dataset in the expected detection format.
url = 'https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip'
dataset_train = ObjectDetector.Dataset.from_voc(url, splits='trainval')
Downloading /home/ci/.gluoncv/archive/tiny_motorbike.zip from https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip...
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
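The loaded dataset behaves like a pandas DataFrame (the tutorial itself indexes it with .iloc later), so we can quickly inspect what was loaded. The snippet below is a quick sanity check and not part of the original tutorial output:

# Quick sanity check of the loaded annotations (Dataset is DataFrame-like).
print('training images:', len(dataset_train))
print(dataset_train.head())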
Fit Models by AutoGluon¶
In this section, we demonstrate how to apply AutoGluon to fit our detection models. AutoGluon transfers from a detector pretrained on COCO (an SSD model with a ResNet-50 backbone in the run below) and fine-tunes it on our dataset. We run two trials with different hyperparameter configurations; the best model is the one that achieves the best performance on the validation dataset. You can also try more networks and hyperparameters to create a larger search space.
We fit a detector using AutoGluon as follows. In each experiment (one trial in our search space), we train the model for 5 epochs to keep the tutorial runtime short.
time_limit = 60 * 30  # at most 0.5 hour
detector = ObjectDetector()
hyperparameters = {'epochs': 5, 'batch_size': 8}
hyperparameter_tune_kwargs = {'num_trials': 2}
detector.fit(dataset_train, time_limit=time_limit,
             hyperparameters=hyperparameters,
             hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)
=============================================================================
WARNING: ObjectDetector is deprecated as of v0.4.0 and may contain various bugs and issues!
In a future release ObjectDetector may be entirely reworked to use Torch as a backend.
This future change will likely be API breaking. Users should ensure they update their code that depends on ObjectDetector when upgrading to future AutoGluon releases.
For more information, refer to ObjectDetector refactor GitHub issue: https://github.com/awslabs/autogluon/issues/1559
=============================================================================
The number of requested GPUs is greater than the number of available GPUs. Reduce the number to 1
Randomly split train_data into train[148]/validation[22] splits.
Starting HPO experiments
modified configs(<old> != <new>): {
root.valid.batch_size 16 != 8
root.dataset_root ~/.mxnet/datasets/ != auto
root.train.epochs 20 != 5
root.train.batch_size 16 != 8
root.train.early_stop_patience -1 != 10
root.train.seed 233 != 746
root.train.early_stop_baseline 0.0 != -inf
root.train.early_stop_max_value 1.0 != inf
root.gpus (0, 1, 2, 3) != (0,)
root.dataset voc_tiny != auto
root.ssd.data_shape 300 != 512
root.ssd.base_network vgg16_atrous != resnet50_v1
root.num_workers 4 != 8
}
Saved config to /home/ci/autogluon/docs/_build/eval/tutorials/object_detection/c889a526/.trial_0/config.yaml
Using transfer learning from ssd_512_resnet50_v1_coco, the other network parameters are ignored.
Downloading /home/ci/.mxnet/models/ssd_512_resnet50_v1_coco-c4835162.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/ssd_512_resnet50_v1_coco-c4835162.zip...
Start training from [Epoch 0]
[Epoch 0] Training cost: 11.095930, CrossEntropy=3.600057, SmoothL1=0.986218
[Epoch 0] Validation:
cow=nan
person=0.7840103937930024
dog=nan
bicycle=0.06250000000000001
car=0.7846889952153111
chair=nan
boat=nan
motorbike=0.799121292523052
bus=0.06666666666666665
pottedplant=0.0
mAP=0.41616455803300534
[Epoch 0] Current best map: 0.416165 vs previous 0.000000, saved to /home/ci/autogluon/docs/_build/eval/tutorials/object_detection/c889a526/.trial_0/best_checkpoint.pkl
[Epoch 1] Training cost: 7.664943, CrossEntropy=2.547166, SmoothL1=1.024884
[Epoch 1] Validation:
cow=nan
person=0.9087662337662339
dog=nan
bicycle=0.09090909090909091
car=0.6975524475524476
chair=nan
boat=nan
motorbike=0.8361147243908319
bus=0.0
pottedplant=0.0
mAP=0.42222374943643404
[Epoch 1] Current best map: 0.422224 vs previous 0.416165, saved to /home/ci/autogluon/docs/_build/eval/tutorials/object_detection/c889a526/.trial_0/best_checkpoint.pkl
[Epoch 2] Training cost: 7.414706, CrossEntropy=2.350590, SmoothL1=1.048768
[Epoch 2] Validation:
cow=nan
person=0.7531411364168559
dog=nan
bicycle=0.33333333333333326
car=0.7362146050670644
chair=nan
boat=nan
motorbike=0.837292786893872
bus=1.0000000000000002
pottedplant=0.0
mAP=0.6099969769518543
[Epoch 2] Current best map: 0.609997 vs previous 0.422224, saved to /home/ci/autogluon/docs/_build/eval/tutorials/object_detection/c889a526/.trial_0/best_checkpoint.pkl
[Epoch 3] Training cost: 7.607863, CrossEntropy=2.512231, SmoothL1=1.045014
[Epoch 3] Validation:
cow=nan
person=0.9833576034949034
dog=nan
bicycle=0.5000000000000001
car=0.8051948051948054
chair=nan
boat=nan
motorbike=0.9134527089072544
bus=0.25000000000000006
pottedplant=0.0
mAP=0.5753341862661606
[Epoch 4] Training cost: 7.353425, CrossEntropy=2.167291, SmoothL1=1.041067
[Epoch 4] Validation:
cow=nan
person=1.0000000000000002
dog=nan
bicycle=0.33333333333333326
car=0.5757575757575759
chair=nan
boat=nan
motorbike=0.9654587836406021
bus=1.0000000000000002
pottedplant=0.0
mAP=0.6457582821219185
[Epoch 4] Current best map: 0.645758 vs previous 0.609997, saved to /home/ci/autogluon/docs/_build/eval/tutorials/object_detection/c889a526/.trial_0/best_checkpoint.pkl
Applying the state from the best checkpoint...
Downloading /home/ci/.mxnet/models/resnet50_v1-cc729d95.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/resnet50_v1-cc729d95.zip...
Finished, total runtime is 81.15 s
{ 'best_config': { 'dataset': 'auto',
'dataset_root': 'auto',
'estimator': <class 'gluoncv.auto.estimators.ssd.ssd.SSDEstimator'>,
'gpus': [0],
'horovod': False,
'num_workers': 8,
'resume': '',
'save_interval': 1,
'ssd': { 'amp': False,
'base_network': 'resnet50_v1',
'data_shape': 512,
'filters': None,
'nms_thresh': 0.45,
'nms_topk': 400,
'ratios': ( [1, 2, 0.5],
[1, 2, 0.5, 3, 0.3333333333333333],
[1, 2, 0.5, 3, 0.3333333333333333],
[1, 2, 0.5, 3, 0.3333333333333333],
[1, 2, 0.5],
[1, 2, 0.5]),
'sizes': (30, 60, 111, 162, 213, 264, 315),
'steps': (8, 16, 32, 64, 100, 300),
'syncbn': False,
'transfer': 'ssd_512_resnet50_v1_coco'},
'train': { 'batch_size': 8,
'dali': False,
'early_stop_baseline': -inf,
'early_stop_max_value': inf,
'early_stop_min_delta': 0.001,
'early_stop_patience': 10,
'epochs': 5,
'log_interval': 100,
'lr': 0.001,
'lr_decay': 0.1,
'lr_decay_epoch': (160, 200),
'momentum': 0.9,
'seed': 746,
'start_epoch': 0,
'wd': 0.0005},
'valid': { 'batch_size': 8,
'iou_thresh': 0.5,
'metric': 'voc07',
'val_interval': 1}},
'total_time': 81.15429997444153,
'train_map': 0.7192153431108157,
'valid_map': 0.6457582821219185}
<autogluon.vision.detector.detector.ObjectDetector at 0x7fe6ab6794c0>
Note that num_trials=2 above is only used to speed up the tutorial.
In normal practice, it is common to only use time_limit and drop
num_trials. Also note that hyperparameter tuning defaults to random
search.
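For example, a larger search space can be encoded directly in hyperparameters using AutoGluon search spaces. The sketch below is illustrative only (it is not run in this tutorial) and assumes your AutoGluon version supports the lr and transfer hyperparameter keys together with autogluon.core.Categorical:

import autogluon.core as ag

# Sketch of a larger search space: two pretrained source models and two
# learning rates, sampled by the default random search.
search_hyperparameters = {
    'epochs': 5,
    'batch_size': 8,
    'lr': ag.Categorical(1e-3, 5e-4),
    'transfer': ag.Categorical('ssd_512_resnet50_v1_coco', 'yolo3_darknet53_coco'),
}
detector_hpo = ObjectDetector()
detector_hpo.fit(dataset_train,
                 time_limit=60 * 60,  # rely mainly on the time budget
                 hyperparameters=search_hyperparameters,
                 hyperparameter_tune_kwargs={'num_trials': 4})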
After fitting, AutoGluon automatically returns the best model among all trials in the search space, i.e. the one with the highest validation mAP. To see how well the returned model performs on the test dataset, call detector.evaluate().
dataset_test = ObjectDetector.Dataset.from_voc(url, splits='test')
test_map = detector.evaluate(dataset_test)
print("mAP on test dataset: {}".format(test_map[1][-1]))
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
mAP on test dataset: 0.05576670334372517
Below, we select an image from the test dataset and show the predicted class, box, and probability for it, stored in the predict_class, predict_rois, and predict_score columns, respectively. You can interpret predict_rois as a dict of (xmin, ymin, xmax, ymax) coordinates expressed as fractions of the original image width and height.
image_path = dataset_test.iloc[0]['image']
result = detector.predict(image_path)
print(result)
predict_class predict_score
0 person 0.996160
1 motorbike 0.993280
2 car 0.880913
3 car 0.291228
4 car 0.118190
.. ... ...
95 person 0.023835
96 car 0.023680
97 car 0.023606
98 person 0.023347
99 car 0.023324
predict_rois
0 {'xmin': 0.3971776068210602, 'ymin': 0.2898428...
1 {'xmin': 0.3141043186187744, 'ymin': 0.4262087...
2 {'xmin': 0.005963402334600687, 'ymin': 0.64446...
3 {'xmin': 0.70596843957901, 'ymin': 0.392122000...
4 {'xmin': 0.7118950486183167, 'ymin': 0.4667857...
.. ...
95 {'xmin': 0.39900490641593933, 'ymin': 0.325196...
96 {'xmin': 0.7305207252502441, 'ymin': 0.3780144...
97 {'xmin': 0.7975859045982361, 'ymin': 0.0664861...
98 {'xmin': 0.4782518148422241, 'ymin': 0.3042334...
99 {'xmin': 0.0, 'ymin': 0.6136587858200073, 'xma...
[100 rows x 3 columns]
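Since the ROI coordinates are fractions of the image width and height, you can convert a detection back to pixel coordinates using the image size. The snippet below is a minimal sketch using PIL and is not part of the original tutorial:

from PIL import Image

# Convert the top detection's fractional ROI to pixel coordinates.
img = Image.open(image_path)
width, height = img.size

top = result.iloc[0]  # highest-scoring detection in this run
roi = top['predict_rois']
box = (roi['xmin'] * width, roi['ymin'] * height,
       roi['xmax'] * width, roi['ymax'] * height)
print(top['predict_class'], top['predict_score'], box)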
Prediction on multiple images is also supported:
bulk_result = detector.predict(dataset_test)
print(bulk_result)
predict_class predict_score
0 person 0.996160
1 motorbike 0.993280
2 car 0.880913
3 car 0.291228
4 car 0.118190
... ... ...
4521 person 0.028537
4522 person 0.028444
4523 car 0.028356
4524 person 0.028315
4525 car 0.028298
predict_rois
0 {'xmin': 0.3971776068210602, 'ymin': 0.2898428...
1 {'xmin': 0.3141043186187744, 'ymin': 0.4262087...
2 {'xmin': 0.005963402334600687, 'ymin': 0.64446...
3 {'xmin': 0.70596843957901, 'ymin': 0.392122000...
4 {'xmin': 0.7118950486183167, 'ymin': 0.4667857...
... ...
4521 {'xmin': 0.35994425415992737, 'ymin': 0.985012...
4522 {'xmin': 0.7746092081069946, 'ymin': 0.0512405...
4523 {'xmin': 0.8870271444320679, 'ymin': 0.6572265...
4524 {'xmin': 0.4946913421154022, 'ymin': 0.2265687...
4525 {'xmin': 0.8662274479866028, 'ymin': 0.6559167...
image
0 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
1 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
2 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
3 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
4 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
... ...
4521 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
4522 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
4523 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
4524 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
4525 /home/ci/.gluoncv/datasets/tiny_motorbike/tiny...
[4526 rows x 4 columns]
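Most of these rows are low-confidence detections. In practice you would usually keep only predictions above a score threshold; the 0.5 cutoff below is an arbitrary choice for illustration:

# Keep only confident detections from the bulk prediction.
confident = bulk_result[bulk_result['predict_score'] > 0.5]
print(confident[['image', 'predict_class', 'predict_score']])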
We can also save the trained model and load it again later.
savefile = 'detector.ag'
detector.save(savefile)
new_detector = ObjectDetector.load(savefile)
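The reloaded detector can be used in the same way as the original one; for example, as a quick sanity check (not part of the original tutorial output):

# Sanity check: the reloaded detector predicts just like the original one.
reloaded_result = new_detector.predict(image_path)
print(reloaded_result.head())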