Object Detection - Quick Start

Object detection is the process of identifying and localizing objects in an image and is an important task in computer vision. Follow this tutorial to learn how to use AutoGluon for object detection.

Tip: If you are new to AutoGluon, review Image Prediction - Quick Start first to learn the basics of the AutoGluon API.

Our goal is to detect motorbikes in images. A tiny dataset is collected from the VOC dataset and contains only the motorbike category. Models pretrained on the COCO dataset are fine-tuned on this small dataset. With the help of AutoGluon, we can automatically try many models with different hyperparameters and return the best one as our final model.

To start, import ObjectDetector:

from autogluon.vision import ObjectDetector
/var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/venv/lib/python3.7/site-packages/gluoncv/__init__.py:40: UserWarning: Both mxnet==1.7.0 and torch==1.9.1+cu102 are installed. You might encounter increased GPU memory footprint if both framework are used at the same time.
  warnings.warn(f'Both mxnet=={mx.__version__} and torch=={torch.__version__} are installed. '

Tiny_motorbike Dataset

We collect a toy dataset for detecting motorbikes in images. From the VOC dataset, 120 images are randomly selected for training, 50 for validation, and 50 for testing. This tiny dataset follows the same format as VOC.

Using the commands below, we can download the dataset, which is only 23 MB. The unzipped folder is named tiny_motorbike. In fact, the dataset helper performs the download and extraction automatically and loads the dataset according to the detection format.

url = 'https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip'
dataset_train = ObjectDetector.Dataset.from_voc(url, splits='trainval')
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
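
Optionally, we can peek at what was loaded. The dataset behaves like a pandas DataFrame (in autogluon.vision it carries image paths and bounding-box annotations), so standard inspection works; this is just a sanity check, not a required step:

# Quick sanity check on the loaded dataset (not required for training).
print(len(dataset_train))     # number of training images
print(dataset_train.head(3))  # first few rows: image paths and annotations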

Fit Models with AutoGluon

In this section, we demonstrate how to apply AutoGluon to fit our detection models. AutoGluon searches over a small set of preset detection models; in this run, the two trials fine-tune an SSD with a ResNet-50 backbone and a YOLOv3 with a DarkNet-53 backbone, as the training logs below show. The best model is the one that obtains the best performance on the validation dataset. You can also try more networks and hyperparameters to create a larger search space.

We fit the detector using AutoGluon as follows. In each experiment (one trial in our search space), we train the model for 5 epochs to keep the tutorial runtime short.

time_limit = 60*30  # at most 0.5 hour
detector = ObjectDetector()
hyperparameters = {'epochs': 5, 'batch_size': 8}
hyperparameter_tune_kwargs = {'num_trials': 2}
detector.fit(dataset_train, time_limit=time_limit, hyperparameters=hyperparameters, hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)
The number of requested GPUs is greater than the number of available GPUs.Reduce the number to 1
Randomly split train_data into train[159]/validation[11] splits.
Starting HPO experiments
  0%|          | 0/2 [00:00<?, ?it/s]
modified configs(<old> != <new>): {
root.num_workers     4 != 8
root.train.seed      233 != 543
root.train.early_stop_patience -1 != 10
root.train.epochs    20 != 5
root.train.batch_size 16 != 8
root.train.early_stop_baseline 0.0 != -inf
root.train.early_stop_max_value 1.0 != inf
root.gpus            (0, 1, 2, 3) != (0,)
root.dataset_root    ~/.mxnet/datasets/ != auto
root.dataset         voc_tiny != auto
root.valid.batch_size 16 != 8
root.ssd.base_network vgg16_atrous != resnet50_v1
root.ssd.data_shape  300 != 512
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_0/config.yaml
Using transfer learning from ssd_512_resnet50_v1_coco, the other network parameters are ignored.
Start training from [Epoch 0]
[Epoch 0] Training cost: 9.854390, CrossEntropy=3.488580, SmoothL1=0.980555
[Epoch 0] Validation:
person=0.7332635106828655
cow=nan
bus=nan
car=0.4727272727272728
boat=nan
bicycle=nan
motorbike=0.403202061096798
dog=nan
pottedplant=nan
chair=nan
mAP=0.5363976148356455
[Epoch 0] Current best map: 0.536398 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_0/best_checkpoint.pkl
[Epoch 1] Training cost: 8.806252, CrossEntropy=2.731563, SmoothL1=1.202553
[Epoch 1] Validation:
person=0.7917079768211273
cow=nan
bus=nan
car=0.7878787878787877
boat=nan
bicycle=nan
motorbike=0.7791802671481816
dog=nan
pottedplant=nan
chair=nan
mAP=0.7862556772826989
[Epoch 1] Current best map: 0.786256 vs previous 0.536398, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_0/best_checkpoint.pkl
[Epoch 2] Training cost: 8.800303, CrossEntropy=2.425905, SmoothL1=1.128311
[Epoch 2] Validation:
person=0.8995147255689425
cow=nan
bus=nan
car=0.7727272727272729
boat=nan
bicycle=nan
motorbike=0.7221919494646768
dog=nan
pottedplant=nan
chair=nan
mAP=0.7981446492536307
[Epoch 2] Current best map: 0.798145 vs previous 0.786256, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_0/best_checkpoint.pkl
[Epoch 3] Training cost: 8.764617, CrossEntropy=2.239421, SmoothL1=0.971916
[Epoch 3] Validation:
person=0.8425208778149956
cow=nan
bus=nan
car=0.8727272727272728
boat=nan
bicycle=nan
motorbike=0.7143358720898828
dog=nan
pottedplant=nan
chair=nan
mAP=0.8098613408773837
[Epoch 3] Current best map: 0.809861 vs previous 0.798145, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_0/best_checkpoint.pkl
[Epoch 4] Training cost: 8.537160, CrossEntropy=2.202732, SmoothL1=0.970223
[Epoch 4] Validation:
person=0.8079362836938596
cow=nan
bus=nan
car=0.8755980861244022
boat=nan
bicycle=nan
motorbike=0.6742331177374984
dog=nan
pottedplant=nan
chair=nan
mAP=0.7859224958519201
Applying the state from the best checkpoint...
modified configs(<old> != <new>): {
root.num_workers     4 != 8
root.train.seed      233 != 543
root.train.batch_size 16 != 8
root.train.early_stop_max_value 1.0 != inf
root.train.early_stop_patience -1 != 10
root.train.epochs    20 != 5
root.train.early_stop_baseline 0.0 != -inf
root.gpus            (0, 1, 2, 3) != (0,)
root.dataset_root    ~/.mxnet/datasets/ != auto
root.dataset         voc_tiny != auto
root.valid.batch_size 16 != 8
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_1/config.yaml
Using transfer learning from yolo3_darknet53_coco, the other network parameters are ignored.
Start training from [Epoch 0]
[Epoch 0] Training cost: 10.274, ObjLoss=8.609, BoxCenterLoss=7.432, BoxScaleLoss=2.745, ClassLoss=4.607
[Epoch 0] Validation:
person=0.6788842975206612
cow=nan
bus=nan
car=0.7424242424242425
boat=nan
bicycle=nan
motorbike=0.5362937252393519
dog=nan
pottedplant=nan
chair=nan
mAP=0.6525340883947518
[Epoch 0] Current best map: 0.652534 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_1/best_checkpoint.pkl
[Epoch 1] Training cost: 10.442, ObjLoss=9.408, BoxCenterLoss=7.842, BoxScaleLoss=3.043, ClassLoss=3.894
[Epoch 1] Validation:
person=0.41786916786916795
cow=nan
bus=nan
car=0.6424242424242425
boat=nan
bicycle=nan
motorbike=0.6174980322707595
dog=nan
pottedplant=nan
chair=nan
mAP=0.5592638141880567
[Epoch 2] Training cost: 15.730, ObjLoss=9.783, BoxCenterLoss=7.876, BoxScaleLoss=3.140, ClassLoss=3.403
[Epoch 2] Validation:
person=0.746064541519087
cow=nan
bus=nan
car=0.9545454545454546
boat=nan
bicycle=nan
motorbike=0.4138946280991736
dog=nan
pottedplant=nan
chair=nan
mAP=0.7048348747212384
[Epoch 2] Current best map: 0.704835 vs previous 0.652534, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_1/best_checkpoint.pkl
[Epoch 3] Training cost: 14.410, ObjLoss=9.895, BoxCenterLoss=7.993, BoxScaleLoss=3.251, ClassLoss=3.155
[Epoch 3] Validation:
person=0.818748562226823
cow=nan
bus=nan
car=1.0000000000000002
boat=nan
bicycle=nan
motorbike=0.660964035964036
dog=nan
pottedplant=nan
chair=nan
mAP=0.8265708660636197
[Epoch 3] Current best map: 0.826571 vs previous 0.704835, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/cd66fcc9/.trial_1/best_checkpoint.pkl
[Epoch 4] Training cost: 11.609, ObjLoss=9.825, BoxCenterLoss=8.038, BoxScaleLoss=3.280, ClassLoss=2.967
[Epoch 4] Validation:
person=0.8271103896103896
cow=nan
bus=nan
car=0.71900826446281
boat=nan
bicycle=nan
motorbike=0.49111607882099684
dog=nan
pottedplant=nan
chair=nan
mAP=0.6790782442980654
Applying the state from the best checkpoint...
Finished, total runtime is 162.60 s
{ 'best_config': { 'dataset': 'auto',
                   'dataset_root': 'auto',
                   'estimator': <class 'gluoncv.auto.estimators.ssd.ssd.SSDEstimator'>,
                   'gpus': [0],
                   'horovod': False,
                   'num_workers': 8,
                   'resume': '',
                   'save_interval': 1,
                   'ssd': { 'amp': False,
                            'base_network': 'resnet50_v1',
                            'data_shape': 512,
                            'filters': None,
                            'nms_thresh': 0.45,
                            'nms_topk': 400,
                            'ratios': ( [1, 2, 0.5],
                                        [1, 2, 0.5, 3, 0.3333333333333333],
                                        [1, 2, 0.5, 3, 0.3333333333333333],
                                        [1, 2, 0.5, 3, 0.3333333333333333],
                                        [1, 2, 0.5],
                                        [1, 2, 0.5]),
                            'sizes': (30, 60, 111, 162, 213, 264, 315),
                            'steps': (8, 16, 32, 64, 100, 300),
                            'syncbn': False,
                            'transfer': 'ssd_512_resnet50_v1_coco'},
                   'train': { 'batch_size': 8,
                              'dali': False,
                              'early_stop_baseline': -inf,
                              'early_stop_max_value': inf,
                              'early_stop_min_delta': 0.001,
                              'early_stop_patience': 10,
                              'epochs': 5,
                              'log_interval': 100,
                              'lr': 0.001,
                              'lr_decay': 0.1,
                              'lr_decay_epoch': (160, 200),
                              'momentum': 0.9,
                              'seed': 543,
                              'start_epoch': 0,
                              'wd': 0.0005},
                   'valid': { 'batch_size': 8,
                              'iou_thresh': 0.5,
                              'metric': 'voc07',
                              'val_interval': 1}},
  'total_time': 162.5992374420166,
  'train_map': 0.5441620981586264,
  'valid_map': 0.8265708660636197}
<autogluon.vision.detector.detector.ObjectDetector at 0x7fbdab4ebf50>

Note that num_trials=2 above is only used to speed up the tutorial. In normal practice, it is common to use only time_limit and drop num_trials. Also note that hyperparameter tuning defaults to random search; model-based variants, such as searcher='bayesopt' in hyperparameter_tune_kwargs, can be much more sample-efficient.
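
As a sketch of that more typical setup (the time budget and searcher choice here are illustrative assumptions, not part of the run above):

# Sketch: rely on a time budget alone and use a model-based searcher.
detector_full = ObjectDetector()
detector_full.fit(
    dataset_train,
    time_limit=2 * 60 * 60,  # e.g., a 2-hour budget instead of num_trials
    hyperparameter_tune_kwargs={'searcher': 'bayesopt'},  # model-based HPO
)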

After fitting, AutoGluon automatically returns the best model among all models in the search space. From the output above, the best model is the one that achieved the highest validation mAP (about 0.83 in this run). To see how well the returned model performs on the test dataset, call detector.evaluate().

dataset_test = ObjectDetector.Dataset.from_voc(url, splits='test')

test_map = detector.evaluate(dataset_test)
print("mAP on test dataset: {}".format(test_map[1][-1]))
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
mAP on test dataset: 0.20529667557722214
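
The indexing test_map[1][-1] works because evaluate() returns paired lists of metric names and values, with the overall mAP last. Assuming that structure, the per-class APs can be printed as well:

# test_map[0] holds metric names (per-class AP plus 'mAP'),
# test_map[1] the corresponding values; the final pair is the overall mAP.
for name, value in zip(test_map[0], test_map[1]):
    print('{}: {}'.format(name, value))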

Below, we randomly select an image from the test dataset and show the predicted classes, boxes, and confidence scores over the original image, stored in the predict_class, predict_rois, and predict_score columns, respectively. Each entry in predict_rois is a dict of (xmin, ymin, xmax, ymax) coordinates expressed as fractions of the original image size.

image_path = dataset_test.iloc[0]['image']
result = detector.predict(image_path)
print(result)
   predict_class  predict_score
0      motorbike       0.631014
1         person       0.498010
2      motorbike       0.307780
3      motorbike       0.237253
4            car       0.189609
5            car       0.146799
6         person       0.112611
7         person       0.111101
8        bicycle       0.087812
9    pottedplant       0.064931
10        person       0.055317
11        person       0.051224
12   pottedplant       0.041468
13     motorbike       0.040754
14           cow       0.040128
15           dog       0.030964
16         chair       0.030877
17       bicycle       0.030246
18        person       0.028742
19           dog       0.027186
20          boat       0.026147
21           bus       0.025168
22   pottedplant       0.023861
23     motorbike       0.019556
24        person       0.019490
25           cow       0.016880
26   pottedplant       0.016622
27        person       0.012806
28          boat       0.010524

                                         predict_rois
0   {'xmin': 0.3460637032985687, 'ymin': 0.4270371...
1   {'xmin': 0.3838464021682739, 'ymin': 0.2620955...
2   {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
3   {'xmin': 0.7194532155990601, 'ymin': 0.3973482...
4   {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
5   {'xmin': 0.017133750021457672, 'ymin': 0.39104...
6   {'xmin': 0.0, 'ymin': 0.0, 'xmax': 0.171323761...
7   {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
8   {'xmin': 0.3460637032985687, 'ymin': 0.4270371...
9   {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
10  {'xmin': 0.056676384061574936, 'ymin': 0.35251...
11  {'xmin': 0.41943150758743286, 'ymin': 0.306288...
12  {'xmin': 0.7194532155990601, 'ymin': 0.3973482...
13  {'xmin': 0.017133750021457672, 'ymin': 0.39104...
14  {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
15  {'xmin': 0.3460637032985687, 'ymin': 0.4270371...
16  {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
17  {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
18  {'xmin': 0.7194532155990601, 'ymin': 0.3973482...
19  {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
20  {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
21  {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
22  {'xmin': 0.3460637032985687, 'ymin': 0.4270371...
23  {'xmin': 0.3838464021682739, 'ymin': 0.2620955...
24  {'xmin': 0.3252916634082794, 'ymin': 0.2931810...
25  {'xmin': 0.3460637032985687, 'ymin': 0.4270371...
26  {'xmin': 0.36219701170921326, 'ymin': 0.269830...
27  {'xmin': 0.46081313490867615, 'ymin': 0.296700...
28  {'xmin': 0.3460637032985687, 'ymin': 0.4270371...
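
To visualize a prediction, we can convert the relative coordinates back to pixels and draw the boxes. Below is a minimal sketch, assuming matplotlib and Pillow are installed (neither is required by the tutorial itself):

# Draw detections above a 0.5 confidence threshold on the original image.
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image

img = Image.open(image_path)
w, h = img.size
fig, ax = plt.subplots()
ax.imshow(img)
for _, row in result[result['predict_score'] > 0.5].iterrows():
    roi = row['predict_rois']  # relative (xmin, ymin, xmax, ymax)
    x0, y0 = roi['xmin'] * w, roi['ymin'] * h
    x1, y1 = roi['xmax'] * w, roi['ymax'] * h
    ax.add_patch(patches.Rectangle((x0, y0), x1 - x0, y1 - y0,
                                   fill=False, edgecolor='red', linewidth=2))
    ax.text(x0, y0, '{} {:.2f}'.format(row['predict_class'], row['predict_score']),
            color='red')
plt.show()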

Prediction on multiple images is also supported:

bulk_result = detector.predict(dataset_test)
print(bulk_result)
     predict_class  predict_score
0        motorbike       0.631014
1           person       0.498010
2        motorbike       0.307780
3        motorbike       0.237253
4              car       0.189609
...            ...            ...
1474        person       0.100357
1475        person       0.021504
1476   pottedplant       0.011007
1477        person       0.010115
1478        person       0.010069

                                           predict_rois
0     {'xmin': 0.3460637032985687, 'ymin': 0.4270371...
1     {'xmin': 0.3838464021682739, 'ymin': 0.2620955...
2     {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
3     {'xmin': 0.7194532155990601, 'ymin': 0.3973482...
4     {'xmin': 0.0, 'ymin': 0.6649431586265564, 'xma...
...                                                 ...
1474  {'xmin': 0.22388924658298492, 'ymin': 0.002841...
1475  {'xmin': 0.3579971492290497, 'ymin': 0.2784602...
1476  {'xmin': 0.0, 'ymin': 0.46951761841773987, 'xm...
1477  {'xmin': 0.0, 'ymin': 0.3637838661670685, 'xma...
1478  {'xmin': 0.04689411073923111, 'ymin': 0.484895...

                                                  image
0     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
1     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
2     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
4     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
...                                                 ...
1474  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
1475  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
1476  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
1477  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
1478  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...

[1479 rows x 4 columns]
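
Because the bulk result is a single DataFrame, ordinary pandas filtering applies; for instance, we can keep only confident detections (the 0.5 threshold below is an arbitrary choice):

# Keep only detections with a confidence score above 0.5.
confident = bulk_result[bulk_result['predict_score'] > 0.5]
print('{} of {} detections above 0.5 confidence'.format(len(confident), len(bulk_result)))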

We can also save the trained model and load it back later.

savefile = 'detector.ag'
detector.save(savefile)
new_detector = ObjectDetector.load(savefile)
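
The loaded detector can be used exactly like the original one, for example:

# Quick check that the restored model predicts as before
# (assumes image_path from the earlier prediction step is still in scope).
result_loaded = new_detector.predict(image_path)
print(result_loaded.head())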