Test Guidelines

The VisDrone data can be obtained from the download page. Each challenge has its own training / validation / testing set; detailed information is provided on the download page. We encourage participants to use the provided training data for each task, but also allow them to use additional training data. The use of additional training data must be indicated in the “method description” when uploading results to the server. We emphasize that any form of annotation or use of the VisDrone testing sets for either supervised or unsupervised training is strictly forbidden.

Note: participants are required to explicitly specify any and all external data used for training in the “method description” of their submission. In addition, participants are NOT allowed to train a model for one task using the training or validation sets of another task; e.g., training a detector for Task 1 (object detection in images) on the training or validation sets of Task 2 (multi-object tracking) is strictly forbidden.

Test Set Splits

To limit overfitting while giving researchers more flexibility to test their algorithms, we have divided the test set into two splits: test-challenge and test-dev.

Test-dev: The test-dev split is the default test data for general evaluation. We recommend that authors report results on test-dev for fair comparison. The annotation files for test-dev are available from the download page.

Test-challenge: The test-challenge split is used for the VisDrone challenges hosted at the relevant workshop (e.g., at ICCV 2021). If you submit multiple entries, your best result will be used as your final result for the competition. Note that we publish only a single submission per participant on the leaderboard. The test-challenge server remains open only during the competition.

A participant may NOT create multiple accounts for a single project to circumvent the submission upload limits. For challenges, a group may ONLY create multiple accounts, or multiple entries in one account, to evaluate substantially different methods (e.g., based on different papers). We encourage participants to use the validation set to debug their algorithms.

Task 1: Object Detection in Images

The test set for the object detection in images challenge consists of 3,190 images, divided into the test-dev and test-challenge splits as follows.

Split           #images
Test-dev        1,610
Test-challenge  1,580
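
For local debugging against the test-dev annotations, the sketch below parses one VisDrone-style detection file. The 8-field comma-separated convention (bbox_left, bbox_top, bbox_width, bbox_height, score, object_category, truncation, occlusion) is an assumption based on the public VisDrone-DET toolkit, and the file name is hypothetical; adjust both to your local copy.

    # Minimal sketch (not the official VisDrone toolkit): parse one
    # detection file in the assumed 8-field, comma-separated convention:
    # bbox_left,bbox_top,bbox_width,bbox_height,score,object_category,truncation,occlusion
    from pathlib import Path

    def load_detections(txt_path):
        """Return one dict per box: pixel bbox (x, y, w, h), score, category."""
        dets = []
        for line in Path(txt_path).read_text().splitlines():
            if not line.strip():
                continue  # skip blank lines
            left, top, w, h, score, cat = line.split(",")[:6]
            dets.append({
                "bbox": (float(left), float(top), float(w), float(h)),
                "score": float(score),
                "category": int(cat),
            })
        return dets

    # Hypothetical file name; point this at your local test-dev annotations.
    # dets = load_detections("VisDrone-DET-test-dev/annotations/0000001_00000_d.txt")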

Task 2: Multi-Object Tracking

The test set for the multi-object tracking challenge consists of 33 video clips (12,968 frames in total), divided into the test-dev and test-challenge splits as follows.

Split           #clips (frames)
Test-dev        17 (6,635)
Test-challenge  16 (6,333)
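
Before uploading tracking results, it can help to sanity-check that a downloaded split matches the clip and frame counts above. The sketch below assumes, without guarantee, the layout of the public VisDrone-MOT releases, in which each clip is a folder of JPEG frames under a sequences/ directory; the root path is hypothetical.

    # Minimal sanity check, assuming (unverified) each clip is a folder of
    # JPEG frames under "sequences/". Compare the printout with the split
    # sizes above, e.g. 17 clips / 6,635 frames for test-dev.
    from pathlib import Path

    def summarize_split(root):
        clips = sorted(p for p in (Path(root) / "sequences").iterdir() if p.is_dir())
        frames = {c.name: len(list(c.glob("*.jpg"))) for c in clips}
        print(f"{len(frames)} clips, {sum(frames.values())} frames in total")
        return frames

    # Hypothetical path; adjust to wherever you unpacked the split.
    # summarize_split("VisDrone-MOT-test-dev")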

Task 3: Crowd Counting

The test set for the crowd counting challenge consists of 30 video clips (900 frames in total). There is a single test split, as follows.

Split           #clips (frames)
Test-challenge  30 (900)
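
Since the crowd counting track has no test-dev split, debugging has to happen on the validation set. As an assumption (the official scoring lives in the evaluation toolkit, not on this page), counting benchmarks such as VisDrone-CC are conventionally scored with mean absolute error and root mean squared error over per-frame people counts; a minimal sketch:

    # Minimal sketch of MAE / RMSE over per-frame counts (an assumption
    # about the scoring, not the official evaluation code). Handy for
    # debugging on the validation set before the server opens.
    import math

    def counting_errors(pred_counts, gt_counts):
        """pred_counts, gt_counts: equal-length sequences of per-frame counts."""
        assert len(pred_counts) == len(gt_counts)
        diffs = [p - g for p, g in zip(pred_counts, gt_counts)]
        mae = sum(abs(d) for d in diffs) / len(diffs)
        rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
        return mae, rmse

    # Example with made-up numbers:
    # mae, rmse = counting_errors([12, 30, 7], [10, 33, 7])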