Test Guidelines

The VisDrone data can be obtained from the download page. Each challenge has its own training / validation / testing sets; detailed information is provided on the download page. We encourage participants to use the provided training data for each task, but additional training data is also allowed. The use of additional training data must be indicated in the “method description” when uploading results to the server. We emphasize that any form of annotation or use of the VisDrone testing sets for either supervised or unsupervised training is strictly forbidden.

Note: participants are required to explicitly specify any and all external data used for training in the “method description” of their submission. In addition, participants are NOT allowed to train a model for one task using the training or validation sets of other tasks; e.g., training a detector for Task 1: object detection in images on the training or validation sets of Task 3: Multi-Object Tracking is strictly forbidden.

Test Set Splits

To limit overfitting while giving researchers more flexibility to test their algorithms, we have divided the test set into two splits: test-dev and test-challenge.

Test-dev: The test-dev split is the default test data for general evaluation. We recommend that authors report results on test-dev for fair comparison. The annotation files of test-dev are available from the download page.

Test-challenge: The test-challenge split is used for the VisDrone challenges hosted at the relevant workshop (e.g., at ECCV 2020). The number of submissions per participant is limited to a maximum of 3 uploads in total per challenge. If you submit multiple entries, your best result will be used as your final result for the competition. Note that we publish only a single submission per participant on the leaderboard. The test-challenge server remains open only during the competition.

A participant may NOT create multiple accounts for a single project to circumvent the submission upload limits. For challenges, a group may ONLY create multiple accounts, or multiple entries within one account, to evaluate substantially different methods (e.g., based on different papers). We encourage participants to use the validation set to debug their algorithms.

Task 1: Object Detection in Images

The test set for the object detection in images challenge consists of 3,190 test images, divided into the test-dev and test-challenge splits. The test splits are as follows.

Split            #images   Submit limit   Scores available   Leaderboard
Test-dev         1,610     N              immediate          year-round
Test-challenge   1,580     3 total        workshop           workshop
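Before packaging a submission, it can help to confirm that a local copy of a split contains the documented number of images. A minimal sketch, assuming the images sit directly inside a split directory; the directory layout, helper names, and image extensions are illustrative assumptions, not part of the VisDrone toolkit:

```python
from pathlib import Path

# Expected split sizes taken from the Task 1 table above.
EXPECTED_COUNTS = {"test-dev": 1610, "test-challenge": 1580}

def count_images(filenames):
    """Count entries with common image extensions in an iterable of names."""
    image_exts = {".jpg", ".jpeg", ".png"}
    return sum(1 for name in filenames if Path(name).suffix.lower() in image_exts)

def check_split(split_dir, split_name):
    """Compare a local split directory against the documented image count."""
    found = count_images(p.name for p in Path(split_dir).iterdir())
    expected = EXPECTED_COUNTS[split_name]
    return found == expected, found, expected
```

For example, `check_split("VisDrone2019-DET-test-dev/images", "test-dev")` (a hypothetical path) would return whether exactly 1,610 images were found.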

Task 2: Single-Object Tracking

The test set for the single-object tracking challenge consists of 70 video clips (62,289 frames in total) and is divided into the test-dev and test-challenge splits. The test splits are as follows.

Split            #clips (frames)   Submit limit   Scores available   Leaderboard
Test-dev         31 (32,922)       N              immediate          year-round
Test-challenge   31 (29,367)       3 total        workshop           workshop

Task 3: Multi-Object Tracking

The test set for the multi-object tracking challenge consists of 33 video clips (12,968 frames in total), divided into the test-dev and test-challenge splits. The test splits are as follows.

Split            #clips (frames)   Submit limit   Scores available   Leaderboard
Test-dev         17 (6,635)        N              immediate          year-round
Test-challenge   16 (6,333)        3 total        workshop           workshop
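As a quick sanity check, the per-split clip and frame counts in the table sum to the stated totals for the multi-object tracking test set (17 + 16 = 33 clips; 6,635 + 6,333 = 12,968 frames):

```python
# Split sizes (clips, frames) taken from the Task 3 table above.
SPLITS = {
    "test-dev": (17, 6635),
    "test-challenge": (16, 6333),
}

total_clips = sum(clips for clips, _ in SPLITS.values())
total_frames = sum(frames for _, frames in SPLITS.values())

assert total_clips == 33       # matches the stated 33 video clips
assert total_frames == 12968   # matches the stated 12,968 frames
```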

Task 4: Crowd Counting

The test set for the crowd counting challenge consists of 30 video clips (900 frames in total). Only a test-challenge split is provided, as follows.

Split            #clips (frames)   Submit limit   Scores available   Leaderboard
Test-challenge   30 (900)          3 total        workshop           workshop