.. _sec_object_detection_quick:

Object Detection - Quick Start
==============================

Object detection is the task of identifying and localizing objects in an image, and it is an important problem in computer vision. Follow this tutorial to learn how to use AutoGluon for object detection.

**Tip**: If you are new to AutoGluon, review :ref:`sec_imgquick` first to learn the basics of the AutoGluon API.

Our goal is to detect motorbikes in images. A tiny dataset containing only the motorbike category is collected from the VOC dataset, and a detection model pretrained on the COCO dataset is fine-tuned on it. With the help of AutoGluon, we are able to try many models with different hyperparameters automatically, and return the best one as our final model.

To start, import ObjectDetector:

.. code:: python

    from autogluon.vision import ObjectDetector


.. parsed-literal::
    :class: output

    /var/lib/jenkins/miniconda3/envs/autogluon-tutorial-object-detection-v3/lib/python3.9/site-packages/gluoncv/__init__.py:40: UserWarning: Both `mxnet==1.7.0` and `torch==1.10.2+cu102` are installed. You might encounter increased GPU memory footprint if both framework are used at the same time.
      warnings.warn(f'Both `mxnet=={mx.__version__}` and `torch=={torch.__version__}` are installed. '


Tiny\_motorbike Dataset
-----------------------

We collect a toy dataset for detecting motorbikes in images. From the VOC dataset, images are randomly selected for training, validation, and testing: 120 images for training, 50 for validation, and 50 for testing. This tiny dataset follows the same format as VOC.

Using the commands below, we can download the dataset, which is only about 23 MB. The unzipped folder is named ``tiny_motorbike``. In any case, the dataset helper performs the download and extraction automatically and loads the dataset in the detection format.

.. code:: python

    url = 'https://autogluon.s3.amazonaws.com/datasets/tiny_motorbike.zip'
    dataset_train = ObjectDetector.Dataset.from_voc(url, splits='trainval')


.. parsed-literal::
    :class: output

    tiny_motorbike/
    ├── Annotations/
    ├── ImageSets/
    └── JPEGImages/
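The loaded ``dataset_train`` behaves like a pandas DataFrame, with one row per image. As a quick, illustrative sanity check we can peek at the annotations; this sketch assumes the ``rois`` column produced by the VOC loader holds a list of dicts with a ``class`` key, which may vary across AutoGluon versions:

.. code:: python

    from collections import Counter

    # Peek at the first rows: each row pairs an image path with its list of
    # annotated bounding boxes (the 'rois' column).
    print(dataset_train.head())

    # Count annotated objects per class. This assumes each entry of 'rois'
    # is a list of dicts carrying a 'class' key; verify the column layout
    # against your installed AutoGluon version.
    print(Counter(box['class'] for rois in dataset_train['rois'] for box in rois))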
Fit Models by AutoGluon
-----------------------

In this section, we demonstrate how to apply AutoGluon to fit our detection models. By default, ``ObjectDetector`` searches over a set of pretrained detection networks and training hyperparameters; in the run below it selects an SSD model with a ResNet-50 backbone transferred from ``ssd_512_resnet50_v1_coco`` (see the training log). The best model is the one that obtains the best performance on the validation dataset. You can also try more networks and hyperparameters to create a larger search space.

We ``fit`` a detector using AutoGluon as follows. In each experiment (one trial in our search space), we train the model for 5 epochs to keep the tutorial runtime short.

.. code:: python

    time_limit = 60*30  # at most 0.5 hour
    detector = ObjectDetector()
    hyperparameters = {'epochs': 5, 'batch_size': 8}
    hyperparameter_tune_kwargs = {'num_trials': 2}
    detector.fit(dataset_train, time_limit=time_limit, hyperparameters=hyperparameters, hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)


.. parsed-literal::
    :class: output

    =============================================================================
    WARNING: ObjectDetector is deprecated as of v0.4.0 and may contain various bugs and issues!
    In a future release ObjectDetector may be entirely reworked to use Torch as a backend.
    This future change will likely be API breaking.Users should ensure they update their code that depends on ObjectDetector when upgrading to future AutoGluon releases.
    For more information, refer to ObjectDetector refactor GitHub issue: https://github.com/awslabs/autogluon/issues/1559
    =============================================================================
    The number of requested GPUs is greater than the number of available GPUs.Reduce the number to 1
    Randomly split train_data into train[152]/validation[18] splits.
    Starting HPO experiments


.. parsed-literal::
    :class: output

      0%|          | 0/2 [00:00<?, ?it/s]
    modified configs(<old> != <new>): {
    root.dataset_root    ~/.mxnet/datasets/ != auto
    root.train.early_stop_max_value 1.0 != inf
    root.train.early_stop_baseline 0.0 != -inf
    root.train.epochs    20 != 5
    root.train.seed      233 != 354
    root.train.batch_size 16 != 8
    root.train.early_stop_patience -1 != 10
    root.num_workers     4 != 8
    root.gpus            (0, 1, 2, 3) != (0,)
    root.valid.batch_size 16 != 8
    root.ssd.base_network vgg16_atrous != resnet50_v1
    root.ssd.data_shape  300 != 512
    root.dataset         voc_tiny != auto
    }
    Saved config to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/b4ab3cd3/.trial_0/config.yaml
    Using transfer learning from ssd_512_resnet50_v1_coco, the other network parameters are ignored.
    Start training from [Epoch 0]
    [Epoch 0] Training cost: 9.739492, CrossEntropy=3.526174, SmoothL1=1.000556
    [Epoch 0] Validation: motorbike=0.6480460980460979 dog=0.0 bicycle=0.07272727272727274 cow=nan bus=nan pottedplant=0.0 boat=nan person=0.5364828678162498 chair=nan car=0.6363636363636365 mAP=0.31560331249220946
    [Epoch 0] Current best map: 0.315603 vs previous 0.000000, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/b4ab3cd3/.trial_0/best_checkpoint.pkl
    [Epoch 1] Training cost: 7.960934, CrossEntropy=2.649148, SmoothL1=1.264558
    [Epoch 1] Validation: motorbike=0.7181168831168832 dog=0.0 bicycle=0.10606060606060605 cow=nan bus=nan pottedplant=0.0 boat=nan person=0.7205387205387206 chair=nan car=0.4242424242424242 mAP=0.32815977232643906
    [Epoch 1] Current best map: 0.328160 vs previous 0.315603, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/b4ab3cd3/.trial_0/best_checkpoint.pkl
    [Epoch 2] Training cost: 8.498090, CrossEntropy=2.618884, SmoothL1=1.374472
    [Epoch 2] Validation: motorbike=0.7379935720844811 dog=0.0 bicycle=0.36363636363636365 cow=nan bus=nan pottedplant=0.0 boat=nan person=0.7536128526645769 chair=nan car=0.7202797202797202 mAP=0.4292537514441903
    [Epoch 2] Current best map: 0.429254 vs previous 0.328160, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/b4ab3cd3/.trial_0/best_checkpoint.pkl
    [Epoch 3] Training cost: 8.231382, CrossEntropy=2.534156, SmoothL1=1.235584
    [Epoch 3] Validation: motorbike=0.7416912110877629 dog=0.0 bicycle=0.4727272727272728 cow=nan bus=nan pottedplant=0.0 boat=nan person=0.7657023499932814 chair=nan car=0.6363636363636365 mAP=0.43608074502865896
    [Epoch 3] Current best map: 0.436081 vs previous 0.429254, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/b4ab3cd3/.trial_0/best_checkpoint.pkl
    [Epoch 4] Training cost: 8.520520, CrossEntropy=2.207729, SmoothL1=1.038262
    [Epoch 4] Validation: motorbike=0.7984848484848486 dog=0.0 bicycle=0.36363636363636365 cow=nan bus=nan pottedplant=0.0 boat=nan person=0.8206487292611696 chair=nan car=0.6363636363636365 mAP=0.43652226295766966
    [Epoch 4] Current best map: 0.436522 vs previous 0.436081, saved to /var/lib/jenkins/workspace/workspace/autogluon-tutorial-object-detection-v3/docs/_build/eval/tutorials/object_detection/b4ab3cd3/.trial_0/best_checkpoint.pkl
    Applying the state from the best checkpoint...
    Finished, total runtime is 71.84 s
    { 'best_config': { 'dataset': 'auto',
                       'dataset_root': 'auto',
                       'estimator': <class 'gluoncv.auto.estimators.ssd.ssd.SSDEstimator'>,
                       'gpus': [0],
                       'horovod': False,
                       'num_workers': 8,
                       'resume': '',
                       'save_interval': 1,
                       'ssd': { 'amp': False,
                                'base_network': 'resnet50_v1',
                                'data_shape': 512,
                                'filters': None,
                                'nms_thresh': 0.45,
                                'nms_topk': 400,
                                'ratios': ( [1, 2, 0.5],
                                            [1, 2, 0.5, 3, 0.3333333333333333],
                                            [1, 2, 0.5, 3, 0.3333333333333333],
                                            [1, 2, 0.5, 3, 0.3333333333333333],
                                            [1, 2, 0.5],
                                            [1, 2, 0.5]),
                                'sizes': (30, 60, 111, 162, 213, 264, 315),
                                'steps': (8, 16, 32, 64, 100, 300),
                                'syncbn': False,
                                'transfer': 'ssd_512_resnet50_v1_coco'},
                       'train': { 'batch_size': 8,
                                  'dali': False,
                                  'early_stop_baseline': -inf,
                                  'early_stop_max_value': inf,
                                  'early_stop_min_delta': 0.001,
                                  'early_stop_patience': 10,
                                  'epochs': 5,
                                  'log_interval': 100,
                                  'lr': 0.001,
                                  'lr_decay': 0.1,
                                  'lr_decay_epoch': (160, 200),
                                  'momentum': 0.9,
                                  'seed': 354,
                                  'start_epoch': 0,
                                  'wd': 0.0005},
                       'valid': { 'batch_size': 8,
                                  'iou_thresh': 0.5,
                                  'metric': 'voc07',
                                  'val_interval': 1}},
      'total_time': 71.84064221382141,
      'train_map': 0.7027151859705439,
      'valid_map': 0.43652226295766966}

Note that ``num_trials=2`` above is only used to speed up the tutorial. In normal practice, it is common to only use ``time_limit`` and drop ``num_trials``. Also note that hyperparameter tuning defaults to random search.
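To search over a larger space, you can pass AutoGluon search spaces as hyperparameters. The following sketch is illustrative only: it assumes the ``transfer`` and ``lr`` config keys shown in the training log above are accepted as hyperparameters, and the exact keys and model names may differ across AutoGluon versions.

.. code:: python

    import autogluon.core as ag

    # Hypothetical larger search space: two candidate pretrained networks
    # and two learning rates. Key names follow the config printed in the
    # log above ('transfer', 'lr', 'epochs', 'batch_size'); verify them
    # against your installed version before relying on this.
    search_hyperparameters = {
        'transfer': ag.Categorical('ssd_512_resnet50_v1_coco',
                                   'yolo3_darknet53_coco'),
        'lr': ag.Categorical(1e-3, 5e-4),
        'epochs': 5,
        'batch_size': 8,
    }

    detector_hpo = ObjectDetector()
    detector_hpo.fit(dataset_train,
                     time_limit=time_limit,
                     hyperparameters=search_hyperparameters,
                     hyperparameter_tune_kwargs={'num_trials': 4})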
After fitting, AutoGluon automatically returns the best model among all models in the search space; the ``best_config`` printed above shows the configuration that achieved the best validation mAP. To see how well the returned model performs on the test dataset, call ``detector.evaluate()``.

.. code:: python

    dataset_test = ObjectDetector.Dataset.from_voc(url, splits='test')

    test_map = detector.evaluate(dataset_test)
    print("mAP on test dataset: {}".format(test_map[1][-1]))


.. parsed-literal::
    :class: output

    tiny_motorbike/
    ├── Annotations/
    ├── ImageSets/
    └── JPEGImages/

    mAP on test dataset: 0.2955375636355408

Below, we select an image from the test dataset and show the predicted classes, boxes, and probabilities for the original image, stored in the ``predict_class``, ``predict_rois``, and ``predict_score`` columns, respectively. You can interpret each ``predict_rois`` entry as a dict of (``xmin``, ``ymin``, ``xmax``, ``ymax``) coordinates expressed as proportions of the original image size.

.. code:: python

    image_path = dataset_test.iloc[0]['image']
    result = detector.predict(image_path)
    print(result)


.. parsed-literal::
    :class: output

       predict_class  predict_score  \
    0         person       0.968006
    1      motorbike       0.958618
    2      motorbike       0.172943
    3            car       0.155108
    4      motorbike       0.122104
    ..           ...            ...
    95        person       0.030052
    96        person       0.030051
    97        person       0.029965
    98        person       0.029713
    99        person       0.029689

                                             predict_rois
    0   {'xmin': 0.4036383628845215, 'ymin': 0.2828292...
    1   {'xmin': 0.3045811057090759, 'ymin': 0.4250262...
    2   {'xmin': 0.006702372804284096, 'ymin': 0.67529...
    3   {'xmin': 0.00579970283433795, 'ymin': 0.658171...
    4   {'xmin': 0.3792843520641327, 'ymin': 0.3439804...
    ..                                                ...
    95  {'xmin': 0.8851612210273743, 'ymin': 0.7229430...
    96  {'xmin': 1.0, 'ymin': 0.5008818507194519, 'xma...
    97  {'xmin': 0.9956833720207214, 'ymin': 0.8594512...
    98  {'xmin': 0.20750705897808075, 'ymin': 0.689306...
    99  {'xmin': 0.9666803479194641, 'ymin': 0.6918264...

    [100 rows x 3 columns]
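Because the coordinates are proportional, converting them to pixels only requires the image dimensions. The sketch below keeps detections above an illustrative score threshold of 0.5 and rescales the boxes; it assumes Pillow is available to read the image size.

.. code:: python

    from PIL import Image

    # Keep only confident detections (threshold chosen for illustration).
    confident = result[result['predict_score'] > 0.5]

    # Convert proportional rois to pixel coordinates using the image size.
    width, height = Image.open(image_path).size
    for _, row in confident.iterrows():
        roi = row['predict_rois']
        xmin, ymin = roi['xmin'] * width, roi['ymin'] * height
        xmax, ymax = roi['xmax'] * width, roi['ymax'] * height
        print(f"{row['predict_class']} ({row['predict_score']:.2f}): "
              f"({xmin:.0f}, {ymin:.0f}) -> ({xmax:.0f}, {ymax:.0f})")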
Prediction on multiple images is also supported:

.. code:: python

    bulk_result = detector.predict(dataset_test)
    print(bulk_result)


.. parsed-literal::
    :class: output

         predict_class  predict_score  \
    0           person       0.968006
    1        motorbike       0.958618
    2        motorbike       0.172943
    3              car       0.155108
    4        motorbike       0.122104
    ...            ...            ...
    4611        person       0.020639
    4612        person       0.020607
    4613     motorbike       0.020576
    4614        person       0.020542
    4615        person       0.020524

                                               predict_rois  \
    0     {'xmin': 0.4036383628845215, 'ymin': 0.2828292...
    1     {'xmin': 0.3045811057090759, 'ymin': 0.4250262...
    2     {'xmin': 0.006702372804284096, 'ymin': 0.67529...
    3     {'xmin': 0.00579970283433795, 'ymin': 0.658171...
    4     {'xmin': 0.3792843520641327, 'ymin': 0.3439804...
    ...                                                 ...
    4611  {'xmin': 0.3389773368835449, 'ymin': 0.0953838...
    4612  {'xmin': 0.497570663690567, 'ymin': 0.13796894...
    4613  {'xmin': 0.0, 'ymin': 0.3180899918079376, 'xma...
    4614  {'xmin': 0.0074115656316280365, 'ymin': 0.3797...
    4615  {'xmin': 0.22133126854896545, 'ymin': 0.388737...

                                                      image
    0     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    1     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    2     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    3     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4     /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    ...                                                 ...
    4611  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4612  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4613  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4614  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...
    4615  /var/lib/jenkins/.gluoncv/datasets/tiny_motorb...

    [4616 rows x 4 columns]

We can also save the trained model and load it again later:

.. code:: python

    savefile = 'detector.ag'
    detector.save(savefile)
    new_detector = ObjectDetector.load(savefile)


.. parsed-literal::
    :class: output

    /var/lib/jenkins/miniconda3/envs/autogluon-tutorial-object-detection-v3/lib/python3.9/site-packages/mxnet/gluon/block.py:1512: UserWarning: Cannot decide type for the following arguments. Consider providing them as input:
            data: None
          input_sym_arg_type = in_param.infer_type()[0]
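As a quick sanity check, the reloaded detector exposes the same API as the original one, so predicting on the same image should reproduce the earlier results:

.. code:: python

    # Predictions from the reloaded model should match those of the
    # original detector on the same input image.
    result_reloaded = new_detector.predict(image_path)
    print(result_reloaded.head())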