.. _sec_object_detection_quick:

Object Detection - Quick Start
==============================

Object detection is the task of identifying and localizing objects in an
image, and it is an important problem in computer vision. Follow this
tutorial to learn how to use AutoGluon for object detection.

**Tip**: If you are new to AutoGluon, review :ref:`sec_imgquick` first to
learn the basics of the AutoGluon API.

Our goal is to detect motorbikes in images using a YOLOv3 model. A tiny
dataset is collected from the VOC dataset and contains only the motorbike
category. A model pretrained on the COCO dataset is fine-tuned on this small
dataset. With the help of AutoGluon, we can automatically try many models
with different hyperparameters and keep the best one as our final model.

To start, import ObjectDetector:

.. code:: python

    from autogluon.vision import ObjectDetector

.. parsed-literal::
    :class: output

    /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/venv/lib/python3.9/site-packages/gluoncv/__init__.py:40: UserWarning: Both `mxnet==1.7.0` and `torch==1.9.1+cu102` are installed. You might encounter increased GPU memory footprint if both framework are used at the same time.
      warnings.warn(f'Both `mxnet=={mx.__version__}` and `torch=={torch.__version__}` are installed. '
    /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/core/src/autogluon/core/scheduler/seq_scheduler.py:119: SyntaxWarning: "is" with a literal. Did you mean "=="?
      if searcher is 'auto':

Tiny\_motorbike Dataset
-----------------------

We collected a toy dataset for detecting motorbikes in images. From the VOC
dataset, images are randomly selected for training, validation, and testing:
120 images for training, 50 for validation, and 50 for testing. This tiny
dataset follows the same format as VOC.

Using the commands below, we can download the dataset, which is only about
23 MB. The unzipped folder is named ``tiny_motorbike``. The dataset helper
performs the download and extraction automatically and loads the dataset
according to the detection format.

.. code:: python

    url = 'https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip'
    dataset_train = ObjectDetector.Dataset.from_voc(url, splits='trainval')

.. parsed-literal::
    :class: output

    tiny_motorbike/
    ├── Annotations/
    ├── ImageSets/
    └── JPEGImages/
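The loaded dataset behaves like a pandas ``DataFrame``, so we can peek at a
few rows as a quick sanity check. This is a minimal sketch; the ``image``
and ``rois`` column names mentioned in the comments are assumptions based on
this AutoGluon release, so check your version if they differ:

.. code:: python

    # A minimal sketch: inspect the loaded detection dataset.
    # Each row corresponds to one image; an 'image' column holds the file
    # path and a 'rois' column holds its bounding-box annotations
    # (column names are assumptions for this release).
    print(dataset_train.head())
    print('number of training images:', len(dataset_train))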
Fit Models by AutoGluon
-----------------------

In this section, we demonstrate how to apply AutoGluon to fit our detection
models. In the run below, AutoGluon tries two COCO-pretrained networks from
its search space (an SSD with a ResNet-50 backbone and a YOLOv3 with a
DarkNet-53 backbone, as the log shows) and keeps the model that obtains the
best performance on the validation dataset. You can also try more networks
and hyperparameters to create a larger search space; a sketch of how to
declare one follows the fit results below.

We ``fit`` a detector using AutoGluon as follows. In each experiment (one
trial in our search space), we train the model for 5 epochs to avoid
bursting our tutorial runtime.

.. code:: python

    time_limit = 60*30  # at most 0.5 hour
    detector = ObjectDetector()
    hyperparameters = {'epochs': 5, 'batch_size': 8}
    hyperparameter_tune_kwargs = {'num_trials': 2}
    detector.fit(dataset_train, time_limit=time_limit,
                 hyperparameters=hyperparameters,
                 hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)

.. parsed-literal::
    :class: output

    The number of requested GPUs is greater than the number of available GPUs.Reduce the number to 1
    Randomly split train_data into train[151]/validation[19] splits.
    Starting HPO experiments
      0%|          | 0/2 [00:00<?, ?it/s]
    modified configs(<old> != <new>): {
      root.train.seed                 233 != 191
      root.train.early_stop_max_value 1.0 != inf
      root.train.epochs               20 != 5
      root.train.batch_size           16 != 8
      root.train.early_stop_baseline  0.0 != -inf
      root.train.early_stop_patience  -1 != 10
      root.dataset_root               ~/.mxnet/datasets/ != auto
      root.valid.batch_size           16 != 8
      root.ssd.data_shape             300 != 512
      root.ssd.base_network           vgg16_atrous != resnet50_v1
      root.gpus                       (0, 1, 2, 3) != (0,)
      root.dataset                    voc_tiny != auto
      root.num_workers                4 != 8
    }
    Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/09bc59fd/.trial_0/config.yaml
    Using transfer learning from ssd_512_resnet50_v1_coco, the other network parameters are ignored.
    Start training from [Epoch 0]
    [Epoch 0] Training cost: 8.878838, CrossEntropy=3.522975, SmoothL1=0.974455
    [Epoch 0] Validation: bicycle=0.0 bus=nan boat=nan motorbike=0.7645663397732807 dog=nan pottedplant=nan person=0.7748315219059898 cow=nan car=0.8236500341763502 chair=0.0 mAP=0.4726095791711241
    [Epoch 0] Current best map: 0.472610 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/09bc59fd/.trial_0/best_checkpoint.pkl
    [Epoch 1] Training cost: 8.220559, CrossEntropy=2.679690, SmoothL1=1.177056
    [Epoch 1] Validation: bicycle=0.0 bus=nan boat=nan motorbike=0.6936951978556176 dog=nan pottedplant=nan person=0.6833267742358651 cow=nan car=0.464935064935065 chair=0.0 mAP=0.3683914074053095
    [Epoch 2] Training cost: 8.024891, CrossEntropy=2.432516, SmoothL1=1.072502
    [Epoch 2] Validation: bicycle=0.16666666666666663 bus=nan boat=nan motorbike=0.7229170829170829 dog=nan pottedplant=nan person=0.7943170838823013 cow=nan car=0.6 chair=0.0 mAP=0.4567801666932102
    [Epoch 3] Training cost: 8.144258, CrossEntropy=2.423499, SmoothL1=1.070929
    [Epoch 3] Validation: bicycle=0.0 bus=nan boat=nan motorbike=0.6967079808512033 dog=nan pottedplant=nan person=0.7528246666908414 cow=nan car=0.7012987012987013 chair=0.0 mAP=0.43016626976814915
    [Epoch 4] Training cost: 8.205319, CrossEntropy=2.172361, SmoothL1=1.004024
    [Epoch 4] Validation: bicycle=0.0 bus=nan boat=nan motorbike=0.8489067858285688 dog=nan pottedplant=nan person=0.7614858260019552 cow=nan car=0.7648760330578512 chair=0.0 mAP=0.47505372897767506
    [Epoch 4] Current best map: 0.475054 vs previous 0.472610, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/09bc59fd/.trial_0/best_checkpoint.pkl
    Applying the state from the best checkpoint...
    modified configs(<old> != <new>): {
      root.train.early_stop_max_value 1.0 != inf
      root.train.epochs               20 != 5
      root.train.early_stop_patience  -1 != 10
      root.train.seed                 233 != 191
      root.train.batch_size           16 != 8
      root.train.early_stop_baseline  0.0 != -inf
      root.dataset_root               ~/.mxnet/datasets/ != auto
      root.gpus                       (0, 1, 2, 3) != (0,)
      root.valid.batch_size           16 != 8
      root.dataset                    voc_tiny != auto
      root.num_workers                4 != 8
    }
    Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/09bc59fd/.trial_1/config.yaml
    Using transfer learning from yolo3_darknet53_coco, the other network parameters are ignored.
    Start training from [Epoch 0]
    [Epoch 0] Training cost: 13.517, ObjLoss=8.578, BoxCenterLoss=7.223, BoxScaleLoss=2.062, ClassLoss=4.275
    [Epoch 0] Validation: bicycle=0.0 bus=nan boat=nan motorbike=0.4849332232645933 dog=nan pottedplant=nan person=0.5106141630205802 cow=nan car=0.5097125097125097 chair=0.0 mAP=0.30105197919953663
    [Epoch 0] Current best map: 0.301052 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/09bc59fd/.trial_1/best_checkpoint.pkl
    [Epoch 1] Training cost: 8.896, ObjLoss=8.973, BoxCenterLoss=7.585, BoxScaleLoss=3.431, ClassLoss=3.616
    [Epoch 1] Validation: bicycle=0.11111111111111108 bus=nan boat=nan motorbike=0.6988199300699302 dog=nan pottedplant=nan person=0.7474556796807436 cow=nan car=0.7045454545454545 chair=0.0 mAP=0.4523864350814478
    [Epoch 1] Current best map: 0.452386 vs previous 0.301052, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/09bc59fd/.trial_1/best_checkpoint.pkl
    [Epoch 2] Training cost: 17.688, ObjLoss=9.607, BoxCenterLoss=7.676, BoxScaleLoss=3.316, ClassLoss=3.222
    [Epoch 2] Validation: bicycle=0.0 bus=nan boat=nan motorbike=0.7319674012855831 dog=nan pottedplant=nan person=0.5683385579937303 cow=nan car=0.15810768751945223 chair=0.0 mAP=0.29168272935975315
    [Epoch 3] Training cost: 16.370, ObjLoss=9.825, BoxCenterLoss=7.827, BoxScaleLoss=3.420, ClassLoss=2.989
    [Epoch 3] Validation: bicycle=0.0 bus=nan boat=nan motorbike=0.41317163228927933 dog=nan pottedplant=nan person=0.6887445887445888 cow=nan car=0.2 chair=0.0 mAP=0.2603832442067736
    [Epoch 4] Training cost: 14.191, ObjLoss=9.675, BoxCenterLoss=7.839, BoxScaleLoss=3.321, ClassLoss=2.855
    [Epoch 4] Validation: bicycle=0.0 bus=nan boat=nan motorbike=0.7876559714795011 dog=nan pottedplant=nan person=0.7241759334236625 cow=nan car=0.7318181818181818 chair=0.0 mAP=0.4487300173442691
    Applying the state from the best checkpoint...
    Finished, total runtime is 163.15 s
    { 'best_config': { 'dataset': 'auto',
                       'dataset_root': 'auto',
                       'estimator': <class 'gluoncv.auto.estimators.ssd.ssd.SSDEstimator'>,
                       'gpus': [0],
                       'horovod': False,
                       'num_workers': 8,
                       'resume': '',
                       'save_interval': 1,
                       'ssd': { 'amp': False,
                                'base_network': 'resnet50_v1',
                                'data_shape': 512,
                                'filters': None,
                                'nms_thresh': 0.45,
                                'nms_topk': 400,
                                'ratios': ( [1, 2, 0.5],
                                            [1, 2, 0.5, 3, 0.3333333333333333],
                                            [1, 2, 0.5, 3, 0.3333333333333333],
                                            [1, 2, 0.5, 3, 0.3333333333333333],
                                            [1, 2, 0.5],
                                            [1, 2, 0.5]),
                                'sizes': (30, 60, 111, 162, 213, 264, 315),
                                'steps': (8, 16, 32, 64, 100, 300),
                                'syncbn': False,
                                'transfer': 'ssd_512_resnet50_v1_coco'},
                       'train': { 'batch_size': 8,
                                  'dali': False,
                                  'early_stop_baseline': -inf,
                                  'early_stop_max_value': inf,
                                  'early_stop_min_delta': 0.001,
                                  'early_stop_patience': 10,
                                  'epochs': 5,
                                  'log_interval': 100,
                                  'lr': 0.001,
                                  'lr_decay': 0.1,
                                  'lr_decay_epoch': (160, 200),
                                  'momentum': 0.9,
                                  'seed': 191,
                                  'start_epoch': 0,
                                  'wd': 0.0005},
                       'valid': { 'batch_size': 8,
                                  'iou_thresh': 0.5,
                                  'metric': 'voc07',
                                  'val_interval': 1}},
      'total_time': 163.14516854286194,
      'train_map': 0.7472250785485073,
      'valid_map': 0.4523864350814478}

Note that ``num_trials=2`` above is only used to speed up the tutorial. In
normal practice, it is common to only use ``time_limit`` and drop
``num_trials``. Also note that hyperparameter tuning defaults to random
search. Model-based variants, such as ``searcher='bayesopt'`` in
``hyperparameter_tune_kwargs``, can be a lot more sample-efficient.
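As mentioned above, you can enlarge the search space by declaring
hyperparameters as AutoGluon search-space objects instead of fixed values.
The sketch below assumes the ``transfer`` and ``lr`` hyperparameter names
accepted by this AutoGluon release; check the documentation of your version
for the exact names:

.. code:: python

    import autogluon.core as ag

    # A minimal sketch of a larger search space. The hyperparameter names
    # 'transfer' and 'lr' are assumptions based on this AutoGluon release:
    # 'transfer' picks the pretrained network, 'lr' the learning rate.
    hyperparameters = {
        'transfer': ag.Categorical('ssd_512_resnet50_v1_coco',
                                   'yolo3_darknet53_coco'),
        'lr': ag.Real(1e-4, 1e-2, log=True),  # sample lr log-uniformly
        'epochs': 5,
        'batch_size': 8,
    }

    # Use the model-based searcher mentioned above instead of random search.
    detector_hpo = ObjectDetector()
    detector_hpo.fit(dataset_train,
                     time_limit=time_limit,
                     hyperparameters=hyperparameters,
                     hyperparameter_tune_kwargs={'num_trials': 4,
                                                 'searcher': 'bayesopt'})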
After fitting, AutoGluon automatically returns the best model among all
models in the search space. From the logged ``best_config`` above, the best
model here is the SSD transferred from ``ssd_512_resnet50_v1_coco``. To see
how well the returned model performs on the test dataset, call
``detector.evaluate()``.

.. code:: python

    dataset_test = ObjectDetector.Dataset.from_voc(url, splits='test')

    test_map = detector.evaluate(dataset_test)
    print("mAP on test dataset: {}".format(test_map[1][-1]))

.. parsed-literal::
    :class: output

    tiny_motorbike/
    ├── Annotations/
    ├── ImageSets/
    └── JPEGImages/

    mAP on test dataset: 0.2914310762337079

Below, we select an image from the test dataset and show the predicted
class, box, and probability over the original image, stored in the
``predict_class``, ``predict_rois``, and ``predict_score`` columns,
respectively. You can interpret ``predict_rois`` as a dict of (``xmin``,
``ymin``, ``xmax``, ``ymax``) coordinates expressed as fractions of the
original image size (see the post-processing sketch after the bulk
prediction example below).

.. code:: python

    image_path = dataset_test.iloc[0]['image']
    result = detector.predict(image_path)
    print(result)

.. parsed-literal::
    :class: output

       predict_class  predict_score  \
    0      motorbike       0.969254
    1         person       0.911415
    2            car       0.782424
    3         person       0.524552
    4      motorbike       0.378225
    ..           ...            ...
    95           car       0.040758
    96        person       0.040754
    97        person       0.040627
    98        person       0.040399
    99        person       0.040316

                                             predict_rois
    0   {'xmin': 0.3113465905189514, 'ymin': 0.4561654...
    1   {'xmin': 0.39144590497016907, 'ymin': 0.290609...
    2   {'xmin': 0.004328793380409479, 'ymin': 0.62619...
    3   {'xmin': 0.8572840690612793, 'ymin': 0.3927940...
    4   {'xmin': 0.36528724431991577, 'ymin': 0.346019...
    ..                                                ...
    95  {'xmin': 0.0, 'ymin': 0.6293460726737976, 'xma...
    96  {'xmin': 0.6061198115348816, 'ymin': 0.0, 'xma...
    97  {'xmin': 0.7729136347770691, 'ymin': 0.3866939...
    98  {'xmin': 0.3768836557865143, 'ymin': 0.3073220...
    99  {'xmin': 0.9906854629516602, 'ymin': 0.2200912...

    [100 rows x 3 columns]

Prediction with multiple images is also supported:

.. code:: python

    bulk_result = detector.predict(dataset_test)
    print(bulk_result)

.. parsed-literal::
    :class: output

         predict_class  predict_score  \
    0        motorbike       0.969254
    1           person       0.911415
    2              car       0.782424
    3           person       0.524552
    4        motorbike       0.378225
    ...            ...            ...
    4342        person       0.020544
    4343        person       0.020527
    4344        person       0.020524
    4345        person       0.020416
    4346        person       0.020313

                                               predict_rois  \
    0     {'xmin': 0.3113465905189514, 'ymin': 0.4561654...
    1     {'xmin': 0.39144590497016907, 'ymin': 0.290609...
    2     {'xmin': 0.004328793380409479, 'ymin': 0.62619...
    3     {'xmin': 0.8572840690612793, 'ymin': 0.3927940...
    4     {'xmin': 0.36528724431991577, 'ymin': 0.346019...
    ...                                                 ...
    4342  {'xmin': 0.45396631956100464, 'ymin': 0.219021...
    4343  {'xmin': 0.9669079184532166, 'ymin': 0.7741760...
    4344  {'xmin': 0.9294031858444214, 'ymin': 0.6654704...
    4345  {'xmin': 0.8707968592643738, 'ymin': 0.6883020...
    4346  {'xmin': 0.5710065960884094, 'ymin': 0.1913737...

                                                      image
    0     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    1     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    2     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    3     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    ...                                                 ...
    4342  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4343  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4344  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4345  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4346  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...

    [4347 rows x 4 columns]
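Because the returned coordinates are fractions of the image size, a small
amount of post-processing is often useful before visualizing or consuming
the detections. Here is a minimal sketch: ``to_pixel_boxes`` is a
hypothetical helper (not part of the AutoGluon API) that drops
low-confidence rows and rescales the boxes to pixels, assuming ``Pillow`` is
available to read the image dimensions:

.. code:: python

    from PIL import Image

    def to_pixel_boxes(result, image_path, min_score=0.5):
        """Hypothetical helper: keep confident detections and convert the
        fractional (xmin, ymin, xmax, ymax) coordinates to pixels."""
        width, height = Image.open(image_path).size
        boxes = []
        for _, row in result.iterrows():
            if row['predict_score'] < min_score:
                continue  # drop low-confidence detections
            roi = row['predict_rois']
            boxes.append((row['predict_class'],
                          int(roi['xmin'] * width), int(roi['ymin'] * height),
                          int(roi['xmax'] * width), int(roi['ymax'] * height)))
        return boxes

    # Reuse the single-image prediction from above.
    print(to_pixel_boxes(result, image_path))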
We can also save the trained model and use it later.

.. code:: python

    savefile = 'detector.ag'
    detector.save(savefile)
    new_detector = ObjectDetector.load(savefile)

.. parsed-literal::
    :class: output

    /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/venv/lib/python3.9/site-packages/mxnet/gluon/block.py:1512: UserWarning: Cannot decide type for the following arguments. Consider providing them as input:
    	data: None
      input_sym_arg_type = in_param.infer_type()[0]
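As a quick check that the round trip worked, the reloaded detector can be
used exactly like the original one. A small sketch, reusing the
``image_path`` from the prediction example above:

.. code:: python

    # Sanity check: the reloaded detector should produce the same kind of
    # prediction frame as the original detector.
    result_reloaded = new_detector.predict(image_path)
    print(result_reloaded.head())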