Highlights
- Supports tracking algorithms, including the multi-object tracking (MOT) algorithms SORT, DeepSORT, StrongSORT, OCSORT, ByteTrack, and QDTrack, and the video instance segmentation (VIS) algorithms MaskTrackRCNN and Mask2Former-VIS.
- Supports ViTDet
- Supports inference and evaluation of the multimodal algorithms GLIP and XDecoder, along with datasets such as COCO semantic segmentation, COCO Caption, ADE20K general segmentation, and RefCOCO. GLIP fine-tuning will be supported in the future.
- Provides a Gradio demo for the image-based tasks in MMDetection, making it easy for users to try them out.
Exciting Features
GLIP inference and evaluation
As multimodal vision algorithms continue to evolve, MMDetection now supports them as well. This section demonstrates how to use the demo and evaluation scripts for multimodal algorithms, taking GLIP as the example. In addition, MMDetection integrates a gradio_demo project, which lets developers quickly try out all of MMDetection's image-input tasks on their local devices. Check the document for more details.
Preparation
Please first make sure that you have the correct dependencies installed:
# if installed from source
pip install -r requirements/multimodal.txt
# if installed as a wheel
mim install mmdet[multimodal]
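To confirm the installation before moving on, a quick check like the one below is enough (a minimal sketch; it only verifies that MMDetection imports and reports its version):
# Sanity check: confirm that MMDetection imports and print its version.
import mmdet
print(mmdet.__version__)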
MMDetection already implements the GLIP algorithm and provides pre-trained weights, which you can download directly:
cd mmdetection
wget https://download.openmmlab.com/mmdetection/v3.0/glip/glip_tiny_a_mmdet-b3654169.pth
Inference
Once the model has been downloaded, you can use the demo/image_demo.py script to run inference.
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts bench
The demo result will be similar to this:
If you would like to detect multiple targets, declare them after --texts in the format 'xx . xx .' (category names separated by ' . '):
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts 'bench . car .'
The result will look like this:
You can also use a sentence as the input prompt for the --texts field, for example:
python demo/image_demo.py demo/demo.jpg glip_tiny_a_mmdet-b3654169.pth --texts 'There are a lot of cars here.'
The result will be similar to this:
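The same inference can also be run from Python rather than through the script. The following is only a sketch: it assumes that the DetInferencer in your installed MMDetection version accepts the texts argument that demo/image_demo.py forwards to it, and it pairs the downloaded checkpoint with the config used in the evaluation commands below.
# A minimal Python sketch of the GLIP inference above (assumes DetInferencer
# accepts a texts argument in your MMDetection version).
from mmdet.apis import DetInferencer

inferencer = DetInferencer(
    model='configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py',
    weights='glip_tiny_a_mmdet-b3654169.pth')
inferencer('demo/demo.jpg', texts='bench . car .')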
Evaluation
The GLIP implementation in MMDetection shows no performance degradation; our benchmark results are as follows:
Model | official mAP | mmdet mAP |
---|---|---|
glip_A_Swin_T_O365.yaml | 42.9 | 43.0 |
glip_Swin_T_O365.yaml | 44.9 | 44.9 |
glip_Swin_L.yaml | 51.4 | 51.3 |
Users can use the test script we provided to run evaluation as well. Here is a basic example:
# 1 GPU
python tools/test.py configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py glip_tiny_a_mmdet-b3654169.pth
# 8 GPUs
./tools/dist_test.sh configs/glip/glip_atss_swin-t_fpn_dyhead_pretrain_obj365.py glip_tiny_a_mmdet-b3654169.pth 8
The result will be similar to this:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.428
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.594
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.466
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.300
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.477
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.534
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.634
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.634
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.634
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.473
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.690
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.789
XDecoder
Installation
# if installed from source
pip install -r requirements/multimodal.txt
# if installed as a wheel
mim install mmdet[multimodal]
How to use it?
For convenience, you can download the weights to the mmdetection root directory:
wget https://download.openmmlab.com/mmdetection/v3.0/xdecoder/xdecoder_focalt_last_novg.pt
wget https://download.openmmlab.com/mmdetection/v3.0/xdecoder/xdecoder_focalt_best_openseg.pt
The above two weights were copied directly from the official repository without any modification. The original source is https://github.com/microsoft/X-Decoder
For convenience of demonstration, please download the images folder used by the demo commands below and place it in the root directory of mmdetection.
(1) Open Vocabulary Semantic Segmentation
cd projects/XDecoder
python demo.py ../../images/animals.png configs/xdecoder-tiny_zeroshot_open-vocab-semseg_coco.py --weights ../../xdecoder_focalt_last_novg.pt --texts zebra.giraffe
(2) Open Vocabulary Instance Segmentation
cd projects/XDecoder
python demo.py ../../images/owls.jpeg configs/xdecoder-tiny_zeroshot_open-vocab-instance_coco.py --weights ../../xdecoder_focalt_last_novg.pt --texts owl
(3) Open Vocabulary Panoptic Segmentation
cd projects/XDecoder
python demo.py ../../images/street.jpg configs/xdecoder-tiny_zeroshot_open-vocab-panoptic_coco.py --weights ../../xdecoder_focalt_last_novg.pt --text car.person --stuff-text tree.sky
(4) Referring Expression Segmentation
cd projects/XDecoder
python demo.py ../../images/fruit.jpg configs/xdecoder-tiny_zeroshot_open-vocab-ref-seg_refcocog.py --weights ../../xdecoder_focalt_last_novg.pt --text "The larger watermelon. The front white flower. White tea pot."
(5) Image Caption
cd projects/XDecoder
python demo.py ../../images/penguin.jpeg configs/xdecoder-tiny_zeroshot_caption_coco2014.py --weights ../../xdecoder_focalt_last_novg.pt
(6) Referring Expression Image Caption
cd projects/XDecoder
python demo.py ../../images/fruit.jpg configs/xdecoder-tiny_zeroshot_ref-caption.py --weights ../../xdecoder_focalt_last_novg.pt --text 'White tea pot'
(7) Text Image Region Retrieval
cd projects/XDecoder
python demo.py ../../images/coco configs/xdecoder-tiny_zeroshot_text-image-retrieval.py --weights ../../xdecoder_focalt_last_novg.pt --text 'pizza on the plate'
The image that best matches the given text is ../../images/coco/000.jpg and probability is 0.998
We have also prepared a gradio program in the projects/gradio_demo directory, with which you can interactively run all of the inference tasks supported by MMDetection in your browser.
Models and results
Semantic segmentation on ADE20K
Prepare your dataset according to the docs.
Test Command
Since semantic segmentation is a pixel-level task, we don't need a threshold to filter out low-confidence predictions, so we set model.test_cfg.use_thr_for_mc=False in the test command.
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-semseg_ade20k.py xdecoder_focalt_best_openseg.pt 8 --cfg-options model.test_cfg.use_thr_for_mc=False
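If you prefer to keep the override in a config file instead of passing --cfg-options, a small derived config along these lines has the same effect (a sketch; the file name is hypothetical):
# my_xdecoder_semseg_ade20k.py (hypothetical file name)
# Same effect as the --cfg-options override above: disable the score threshold
# for the pixel-level semantic segmentation task.
_base_ = ['./xdecoder-tiny_zeroshot_open-vocab-semseg_ade20k.py']
model = dict(test_cfg=dict(use_thr_for_mc=False))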
Model | mIoU | mIoU (official) | Config |
---|---|---|---|
xdecoder_focalt_best_openseg.pt | 25.24 | 25.13 | config |
Instance segmentation on ADE20K
Prepare your dataset according to the docs.
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-instance_ade20k.py xdecoder_focalt_best_openseg.pt 8
Model | Mask mAP | Mask mAP (official) | Config |
---|---|---|---|
xdecoder_focalt_best_openseg.pt | 10.1 | 10.1 | config |
Panoptic segmentation on ADE20K
Prepare your dataset according to the docs.
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-panoptic_ade20k.py xdecoder_focalt_best_openseg.pt 8
Model | PQ | PQ (official) | Config |
---|---|---|---|
xdecoder_focalt_best_openseg.pt | 19.11 | 18.97 | config |
Semantic segmentation on COCO2017
Prepare your dataset according to the "(2) use panoptic dataset" part of the docs.
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-semseg_coco.py xdecoder_focalt_last_novg.pt 8 --cfg-options model.test_cfg.use_thr_for_mc=False
Model | mIoU | mIoU (official) | Config |
---|---|---|---|
xdecoder-tiny_zeroshot_open-vocab-semseg_coco | 62.1 | 62.1 | config |
Instance segmentation on COCO2017
Prepare your dataset according to the docs.
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-instance_coco.py xdecoder_focalt_last_novg.pt 8
Model | Mask mAP | Mask mAP (official) | Config |
---|---|---|---|
xdecoder-tiny_zeroshot_open-vocab-instance_coco | 39.8 | 39.7 | config |
Panoptic segmentation on COCO2017
Prepare your dataset according to the docs.
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-panoptic_coco.py xdecoder_focalt_last_novg.pt 8
Model | PQ | PQ (official) | Config |
---|---|---|---|
xdecoder-tiny_zeroshot_open-vocab-panoptic_coco | 51.42 | 51.16 | config |
Referring segmentation on RefCOCO
Prepare your dataset according to the docs.
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_open-vocab-ref-seg_refcocog.py xdecoder_focalt_last_novg.pt 8 --cfg-options test_dataloader.dataset.split='val'
Model | text mode | cIoU | cIoU (official) | Config |
---|---|---|---|---|
xdecoder_focalt_last_novg.pt | select_first | 58.8415 | 57.85 | config |
xdecoder_focalt_last_novg.pt | original | 60.0321 | - | config |
xdecoder_focalt_last_novg.pt | concat | 60.3551 | - | config |
Note:
- If you set the scale of Resize to (1024, 512), the result will be 57.69.
- text mode is a parameter of RefCoCoDataset in MMDetection; it determines which texts are loaded into the data list. It can be set to select_first, original, concat, or random (see the config sketch after this list):
  - select_first: select the first text in the text list as the description of an instance.
  - original: use all texts in the text list as the description of an instance.
  - concat: concatenate all texts in the text list into one description of an instance.
  - random: randomly select one text in the text list as the description of an instance; usually used for training.
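To evaluate with a different text mode, you can override it with --cfg-options in the same way as the split override above, or write a small derived config. The following is a sketch; the file name is hypothetical, and it assumes the dataset parameter is spelled text_mode:
# refcocog_concat_eval.py (hypothetical file name)
_base_ = ['./xdecoder-tiny_zeroshot_open-vocab-ref-seg_refcocog.py']

# Evaluate the val split with the 'concat' text mode described above.
test_dataloader = dict(
    dataset=dict(
        split='val',
        text_mode='concat'))  # select_first, original, concat, or random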
Image Caption on COCO2014
Prepare your dataset according to the docs.
Before testing, you need to install JDK 1.8; otherwise the evaluation will report that Java cannot be found.
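A small pre-flight check can save a failed run. This is only a sketch: it verifies that a java executable is on the PATH, which the caption evaluation relies on:
# Pre-flight check: the COCO caption evaluation calls Java under the hood,
# so make sure a JDK 1.8 'java' binary is discoverable before testing.
import shutil

if shutil.which('java') is None:
    raise RuntimeError('java not found on PATH; please install JDK 1.8 first')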
./tools/dist_test.sh projects/XDecoder/configs/xdecoder-tiny_zeroshot_caption_coco2014.py xdecoder_focalt_last_novg.pt 8
Model | BLEU-4 | CIDEr | Config |
---|---|---|---|
xdecoder-tiny_zeroshot_caption_coco2014 | 35.26 | 116.81 | config |
Gradio Demo
Please refer to https://github.com/open-mmlab/mmdetection/blob/dev-3.x/projects/gradio_demo/README.md for details.
Contributors
A total of 30 developers contributed to this release.
Thanks @jjjkkkjjj, @lovelykite, @minato-ellie, @freepoet, @wufan-tb, @yalibian, @keyakiluo, @gihanjayatilaka, @i-aki-y, @xin-li-67, @RangeKing, @JingweiZhang12, @MambaWong, @lucianovk, @tall-josh, @xiuqhou, @jamiechoi1995, @YQisme, @yechenzhi, @bjzhb666, @xiexinch, @jamiechoi1995, @yarkable, @Renzhihan, @nijkah, @amaizr, @Lum1104, @zwhus, @Czm369, @hhaAndroid