Object Detection - Quick Start

Object detection is the process of identifying and localizing objects in an image and is an important task in computer vision. Follow this tutorial to learn how to use AutoGluon for object detection.

Tip: If you are new to AutoGluon, review Image Prediction - Quick Start first to learn the basics of the AutoGluon API.

Our goal is to detect motorbikes in images. A tiny dataset is collected from the VOC dataset and contains only the motorbike category. Models pretrained on the COCO dataset are fine-tuned on this small dataset. With the help of AutoGluon, we are able to try many models with different hyperparameters automatically, and return the best one as our final model.

To start, import ObjectDetector:

from autogluon.vision import ObjectDetector
/var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/venv/lib/python3.7/site-packages/gluoncv/__init__.py:40: UserWarning: Both mxnet==1.7.0 and torch==1.9.1+cu102 are installed. You might encounter increased GPU memory footprint if both framework are used at the same time.
  warnings.warn(f'Both mxnet=={mx.__version__} and torch=={torch.__version__} are installed. '

Tiny_motorbike Dataset

We collect a toy dataset for detecting motorbikes in images. From the VOC dataset, images are randomly selected for training, validation, and testing: 120 images for training, 50 for validation, and 50 for testing. This tiny dataset follows the same format as VOC.

Using the commands below, we can download the dataset, which is only 23 MB. The unzipped folder is named tiny_motorbike. In fact, the dataset helper can perform the download and extraction automatically, and load the dataset in the expected detection format.

url = 'https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip'
dataset_train = ObjectDetector.Dataset.from_voc(url, splits='trainval')
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
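
The loaded dataset_train behaves like a pandas DataFrame with one row per image. A quick way to sanity-check it (a minimal sketch; the exact columns may vary by version, but an image path and rois annotations are expected):

# Inspect the loaded dataset: each row pairs an image path with its bounding-box annotations
print(dataset_train.head())
print('number of training images:', len(dataset_train))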

Fit Models with AutoGluon

In this section, we demonstrate how to apply AutoGluon to fit our detection models. The best model is the one that obtains the best performance on the validation dataset. As the training log below shows, the two trials here end up fine-tuning an SSD model with a ResNet-50 backbone and a YOLOv3 model with a DarkNet-53 backbone. You can also try more networks and hyperparameters to create a larger search space.

We fit a detector using AutoGluon as follows. In each experiment (one trial in our search space), we train the model for 5 epochs to keep the tutorial runtime manageable.

time_limit = 60*30  # at most 0.5 hour
detector = ObjectDetector()
hyperparameters = {'epochs': 5, 'batch_size': 8}
hyperparameter_tune_kwargs = {'num_trials': 2}
detector.fit(dataset_train, time_limit=time_limit, hyperparameters=hyperparameters, hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)
The number of requested GPUs is greater than the number of available GPUs.Reduce the number to 1
Randomly split train_data into train[158]/validation[12] splits.
Starting HPO experiments
  0%|          | 0/2 [00:00<?, ?it/s]
modified configs(<old> != <new>): {
root.valid.batch_size 16 != 8
root.train.early_stop_max_value 1.0 != inf
root.train.early_stop_patience -1 != 10
root.train.seed      233 != 120
root.train.epochs    20 != 5
root.train.early_stop_baseline 0.0 != -inf
root.train.batch_size 16 != 8
root.dataset         voc_tiny != auto
root.ssd.base_network vgg16_atrous != resnet50_v1
root.ssd.data_shape  300 != 512
root.gpus            (0, 1, 2, 3) != (0,)
root.dataset_root    ~/.mxnet/datasets/ != auto
root.num_workers     4 != 8
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/3cab7473/.trial_0/config.yaml
Using transfer learning from ssd_512_resnet50_v1_coco, the other network parameters are ignored.
Start training from [Epoch 0]
[Epoch 0] Training cost: 9.425862, CrossEntropy=3.687358, SmoothL1=1.059293
[Epoch 0] Validation:
dog=nan
bicycle=0.03636363636363636
motorbike=0.6441558441558443
pottedplant=nan
boat=nan
bus=nan
person=0.6080144792076609
chair=nan
cow=nan
car=0.6571822594880848
mAP=0.4864290548038066
[Epoch 0] Current best map: 0.486429 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/3cab7473/.trial_0/best_checkpoint.pkl
[Epoch 1] Training cost: 8.830706, CrossEntropy=2.624084, SmoothL1=1.100782
[Epoch 1] Validation:
dog=nan
bicycle=0.0
motorbike=0.7419474245561201
pottedplant=nan
boat=nan
bus=nan
person=0.7400748374432585
chair=nan
cow=nan
car=0.8909090909090911
mAP=0.5932328382271175
[Epoch 1] Current best map: 0.593233 vs previous 0.486429, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/3cab7473/.trial_0/best_checkpoint.pkl
[Epoch 2] Training cost: 8.502857, CrossEntropy=2.335668, SmoothL1=1.002768
[Epoch 2] Validation:
dog=nan
bicycle=0.0
motorbike=0.858840023612751
pottedplant=nan
boat=nan
bus=nan
person=0.789616048439578
chair=nan
cow=nan
car=0.9545454545454546
mAP=0.6507503816494459
[Epoch 2] Current best map: 0.650750 vs previous 0.593233, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/3cab7473/.trial_0/best_checkpoint.pkl
[Epoch 3] Training cost: 8.570399, CrossEntropy=2.282304, SmoothL1=1.033731
[Epoch 3] Validation:
dog=nan
bicycle=0.7272727272727274
motorbike=0.9278074866310162
pottedplant=nan
boat=nan
bus=nan
person=0.6662045256967313
chair=nan
cow=nan
car=0.5598845598845598
mAP=0.7202923248712586
[Epoch 3] Current best map: 0.720292 vs previous 0.650750, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/3cab7473/.trial_0/best_checkpoint.pkl
[Epoch 4] Training cost: 8.218557, CrossEntropy=2.121486, SmoothL1=1.015907
[Epoch 4] Validation:
dog=nan
bicycle=0.0
motorbike=0.884848484848485
pottedplant=nan
boat=nan
bus=nan
person=0.7713588767253687
chair=nan
cow=nan
car=0.9272727272727275
mAP=0.6458700222116454
Applying the state from the best checkpoint...
modified configs(<old> != <new>): {
root.valid.batch_size 16 != 8
root.train.seed      233 != 120
root.train.batch_size 16 != 8
root.train.early_stop_max_value 1.0 != inf
root.train.epochs    20 != 5
root.train.early_stop_patience -1 != 10
root.train.early_stop_baseline 0.0 != -inf
root.dataset         voc_tiny != auto
root.gpus            (0, 1, 2, 3) != (0,)
root.dataset_root    ~/.mxnet/datasets/ != auto
root.num_workers     4 != 8
}
Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/3cab7473/.trial_1/config.yaml
Using transfer learning from yolo3_darknet53_coco, the other network parameters are ignored.
Start training from [Epoch 0]
[Epoch 0] Training cost: 11.533, ObjLoss=9.027, BoxCenterLoss=7.919, BoxScaleLoss=2.328, ClassLoss=4.682
[Epoch 0] Validation:
dog=nan
bicycle=0.7272727272727274
motorbike=0.8205430932703661
pottedplant=nan
boat=nan
bus=nan
person=0.4922946037919301
chair=nan
cow=nan
car=0.7787485242030697
mAP=0.7047147371345233
[Epoch 0] Current best map: 0.704715 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/3cab7473/.trial_1/best_checkpoint.pkl
[Epoch 1] Training cost: 10.417, ObjLoss=9.346, BoxCenterLoss=8.003, BoxScaleLoss=2.544, ClassLoss=3.846
[Epoch 1] Validation:
dog=nan
bicycle=0.1818181818181818
motorbike=0.883838383838384
pottedplant=nan
boat=nan
bus=nan
person=0.7550964187327823
chair=nan
cow=nan
car=0.5
mAP=0.580188246097337
[Epoch 2] Training cost: 10.954, ObjLoss=9.608, BoxCenterLoss=7.935, BoxScaleLoss=2.742, ClassLoss=3.460
[Epoch 2] Validation:
dog=nan
bicycle=0.5454545454545455
motorbike=0.8695187165775402
pottedplant=nan
boat=nan
bus=nan
person=0.8138906547997458
chair=nan
cow=nan
car=0.21546635182998813
mAP=0.6110825671654548
[Epoch 3] Training cost: 12.552, ObjLoss=9.361, BoxCenterLoss=7.773, BoxScaleLoss=2.833, ClassLoss=3.125
[Epoch 3] Validation:
dog=nan
bicycle=0.07792207792207792
motorbike=0.8073863636363637
pottedplant=nan
boat=nan
bus=nan
person=0.4590909090909091
chair=nan
cow=nan
car=0.28099173553719003
mAP=0.4063477715466352
[Epoch 4] Training cost: 13.141, ObjLoss=9.219, BoxCenterLoss=7.733, BoxScaleLoss=2.833, ClassLoss=2.956
[Epoch 4] Validation:
dog=nan
bicycle=0.0
motorbike=0.7223873305378008
pottedplant=nan
boat=nan
bus=nan
person=0.7436073798060268
chair=nan
cow=nan
car=0.14285714285714288
mAP=0.4022129633002426
Applying the state from the best checkpoint...
Finished, total runtime is 153.97 s
{ 'best_config': { 'dataset': 'auto',
                   'dataset_root': 'auto',
                   'estimator': <class 'gluoncv.auto.estimators.ssd.ssd.SSDEstimator'>,
                   'gpus': [0],
                   'horovod': False,
                   'num_workers': 8,
                   'resume': '',
                   'save_interval': 1,
                   'ssd': { 'amp': False,
                            'base_network': 'resnet50_v1',
                            'data_shape': 512,
                            'filters': None,
                            'nms_thresh': 0.45,
                            'nms_topk': 400,
                            'ratios': ( [1, 2, 0.5],
                                        [1, 2, 0.5, 3, 0.3333333333333333],
                                        [1, 2, 0.5, 3, 0.3333333333333333],
                                        [1, 2, 0.5, 3, 0.3333333333333333],
                                        [1, 2, 0.5],
                                        [1, 2, 0.5]),
                            'sizes': (30, 60, 111, 162, 213, 264, 315),
                            'steps': (8, 16, 32, 64, 100, 300),
                            'syncbn': False,
                            'transfer': 'ssd_512_resnet50_v1_coco'},
                   'train': { 'batch_size': 8,
                              'dali': False,
                              'early_stop_baseline': -inf,
                              'early_stop_max_value': inf,
                              'early_stop_min_delta': 0.001,
                              'early_stop_patience': 10,
                              'epochs': 5,
                              'log_interval': 100,
                              'lr': 0.001,
                              'lr_decay': 0.1,
                              'lr_decay_epoch': (160, 200),
                              'momentum': 0.9,
                              'seed': 120,
                              'start_epoch': 0,
                              'wd': 0.0005},
                   'valid': { 'batch_size': 8,
                              'iou_thresh': 0.5,
                              'metric': 'voc07',
                              'val_interval': 1}},
  'total_time': 153.97012877464294,
  'train_map': 0.5863849944312046,
  'valid_map': 0.7047147371345233}
<autogluon.vision.detector.detector.ObjectDetector at 0x7f4b163b88d0>

Note that num_trials=2 above is only used to speed up the tutorial. In normal practice, it is common to only use time_limit and drop num_trials. Also note that hyperparameter tuning defaults to random search. Model-based variants, such as searcher='bayesopt' in hyperparameter_tune_kwargs, can be much more sample-efficient.
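
For example, a longer run that relies on the time budget alone and a Bayesian-optimization searcher might look like the following sketch (the one-hour limit is arbitrary):

detector_long = ObjectDetector()
detector_long.fit(
    dataset_train,
    time_limit=3600,  # let the time budget, not num_trials, end the search
    hyperparameter_tune_kwargs={'searcher': 'bayesopt'},
)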

After fitting, AutoGluon automatically returns the best model among all models in the search space. The best_config entry in the output above shows which network and hyperparameters were selected. To see how well the returned model performs on the test dataset, call detector.evaluate().

dataset_test = ObjectDetector.Dataset.from_voc(url, splits='test')

test_map = detector.evaluate(dataset_test)
print("mAP on test dataset: {}".format(test_map[1][-1]))
tiny_motorbike/
├── Annotations/
├── ImageSets/
└── JPEGImages/
mAP on test dataset: 0.0676766156218211
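
The indexing test_map[1][-1] above suggests evaluate() returns a pair of metric names and values, with the overall mAP last. Assuming that structure holds, the per-class average precision can be printed as well:

# Assumes test_map is a (names, values) pair, as the indexing above implies
for name, value in zip(test_map[0], test_map[1]):
    print('{}: {:.4f}'.format(name, value))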

Below, we randomly select an image from the test dataset and show the predicted class, box, and probability over the original image, stored in the predict_class, predict_rois, and predict_score columns, respectively. You can interpret predict_rois as a dict of (xmin, ymin, xmax, ymax) coordinates expressed as fractions of the original image width and height.

image_path = dataset_test.iloc[0]['image']
result = detector.predict(image_path)
print(result)
   predict_class  predict_score
0         person       0.947252
1      motorbike       0.946329
2      motorbike       0.592899
3        bicycle       0.142534
4            car       0.121906
..           ...            ...
74           car       0.025102
75        person       0.025037
76        person       0.024732
77        person       0.024474
78        person       0.024397

                                         predict_rois
0   {'xmin': 0.38754934072494507, 'ymin': 0.275230...
1   {'xmin': 0.3210601806640625, 'ymin': 0.4177174...
2   {'xmin': 0.0, 'ymin': 0.6606400012969971, 'xma...
3   {'xmin': 0.31595027446746826, 'ymin': 0.441645...
4   {'xmin': 0.0, 'ymin': 0.6457163691520691, 'xma...
..                                                ...
74  {'xmin': 0.31066572666168213, 'ymin': 0.440147...
75  {'xmin': 0.445857971906662, 'ymin': 0.72475284...
76  {'xmin': 0.32825109362602234, 'ymin': 0.384096...
77  {'xmin': 0.7443744540214539, 'ymin': 0.6144488...
78  {'xmin': 0.4017369747161865, 'ymin': 0.4161507...

[79 rows x 3 columns]
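
Because the coordinates are fractions of the image size, converting a detection back to pixel coordinates only requires the image dimensions. A minimal sketch using PIL (not part of the original tutorial):

from PIL import Image

img = Image.open(image_path)
width, height = img.size

top = result.iloc[0]       # highest-scoring detection
roi = top['predict_rois']
# Scale the fractional (xmin, ymin, xmax, ymax) box back to pixels
box = (roi['xmin'] * width, roi['ymin'] * height,
       roi['xmax'] * width, roi['ymax'] * height)
print(top['predict_class'], top['predict_score'], box)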

Prediction on multiple images is also supported:

bulk_result = detector.predict(dataset_test)
print(bulk_result)
     predict_class  predict_score
0           person       0.947252
1        motorbike       0.946329
2        motorbike       0.592899
3          bicycle       0.142534
4              car       0.121906
...            ...            ...
3868        person       0.023109
3869        person       0.022987
3870        person       0.022833
3871        person       0.022566
3872         chair       0.022438

                                           predict_rois
0     {'xmin': 0.38754934072494507, 'ymin': 0.275230...
1     {'xmin': 0.3210601806640625, 'ymin': 0.4177174...
2     {'xmin': 0.0, 'ymin': 0.6606400012969971, 'xma...
3     {'xmin': 0.31595027446746826, 'ymin': 0.441645...
4     {'xmin': 0.0, 'ymin': 0.6457163691520691, 'xma...
...                                                 ...
3868  {'xmin': 0.43698179721832275, 'ymin': 0.302956...
3869  {'xmin': 0.5001113414764404, 'ymin': 0.1730404...
3870  {'xmin': 0.12783662974834442, 'ymin': 0.447688...
3871  {'xmin': 0.26958897709846497, 'ymin': 0.122434...
3872  {'xmin': 0.7818522453308105, 'ymin': 0.6967631...

                                                  image
0     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
1     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
2     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
4     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
...                                                 ...
3868  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3869  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3870  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3871  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
3872  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...

[3873 rows x 4 columns]
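
Since bulk_result is an ordinary DataFrame, low-confidence detections can be filtered with standard pandas operations (the 0.5 threshold here is arbitrary):

# Keep only reasonably confident detections and count them per class
confident = bulk_result[bulk_result['predict_score'] > 0.5]
print(confident['predict_class'].value_counts())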

We can also save the trained model and load it again later.

savefile = 'detector.ag'
detector.save(savefile)
new_detector = ObjectDetector.load(savefile)
/var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/venv/lib/python3.7/site-packages/mxnet/gluon/block.py:1512: UserWarning: Cannot decide type for the following arguments. Consider providing them as input:
    data: None
  input_sym_arg_type = in_param.infer_type()[0]
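
As a quick sanity check (not part of the original tutorial output), the reloaded detector should behave exactly like the one it was saved from:

# Predictions from the reloaded model should match the original detector's
result_reloaded = new_detector.predict(image_path)
print(result_reloaded.head())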