Each participating algorithm must predict bounding boxes for objects of both seen and unseen classes, each with a real-valued confidence score. We provide a remote sensing dataset with more than 20k static images for this task, along with a word vector for each category. Average Precision (AP) and Recall@100 at an IoU threshold of 0.5 will be used to evaluate each algorithm. 
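To make the evaluation concrete, below is a minimal sketch of how Recall@100 at an IoU threshold of 0.5 could be computed. The function names and the greedy matching strategy are illustrative assumptions, not the official evaluation code; the evaluation page remains the authoritative reference.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def recall_at_k(detections, gt_boxes, k=100, iou_thr=0.5):
    """Fraction of ground-truth boxes matched by the top-k scoring detections.

    `detections` is a list of (box, score) pairs; each ground-truth box is
    matched at most once (greedy, highest score first).
    """
    dets = sorted(detections, key=lambda d: -d[1])[:k]
    matched = [False] * len(gt_boxes)
    for box, _ in dets:
        for i, gt in enumerate(gt_boxes):
            if not matched[i] and iou(box, gt) >= iou_thr:
                matched[i] = True
                break
    return sum(matched) / len(gt_boxes)
```

For example, with two ground-truth boxes and only one detection overlapping a ground-truth box at IoU ≥ 0.5, `recall_at_k` returns 0.5.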

Note that the top three contestants must provide their final training code, which we will use to reproduce the reported results.


  • [05.20-05.31]: The organizing committee publishes competition information on the website, and participating teams register.
  • [06.01-07.01]: Dataset available for download; result submission opens
  • [07.02-07.09]: Results submission deadline
  • [07.15]: Challenge results released
  • [07.28-07.30]: Seminar convened to present competition methods and announce awards

*All times are in GMT (Greenwich Mean Time)
*The competition deadline is 24:00 GMT on July 9th, 2024

Challenge Guidelines:

(1) The object detection evaluation page lists detailed information on how submissions will be scored. Participants need only submit results for the Generalized Zero-Shot Detection (GZSD) setting, in which detections of seen and unseen objects are evaluated together.

(2) The dataset includes 21 categories: background, 16 seen categories ('airplane', 'baseballfield', 'bridge', 'chimney', 'dam', 'Expressway-Service-area', 'Expressway-toll-station', 'golffield', 'harbor', 'overpass', 'ship', 'stadium', 'storagetank', 'tenniscourt', 'trainstation', 'vehicle'), and 4 unseen categories ('airport', 'basketballcourt', 'groundtrackfield', 'windmill').
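For convenience, the category split above can be written down as constants, e.g. for building a submission file. The ordering within each list follows the text above; whether this matches the index order expected by the evaluation server is an assumption to verify against the download page.

```python
# Category names as listed in the challenge description.
SEEN_CLASSES = [
    'airplane', 'baseballfield', 'bridge', 'chimney', 'dam',
    'Expressway-Service-area', 'Expressway-toll-station', 'golffield',
    'harbor', 'overpass', 'ship', 'stadium', 'storagetank', 'tenniscourt',
    'trainstation', 'vehicle',
]
UNSEEN_CLASSES = ['airport', 'basketballcourt', 'groundtrackfield', 'windmill']

# Background plus 16 seen plus 4 unseen gives the 21 dataset categories.
ALL_CLASSES = ['background'] + SEEN_CLASSES + UNSEEN_CLASSES
```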

(3) The Test-GZSD set (3,337 images) is designed for the GZSD experiment; all of its results are used for the workshop competition.

(4) The train images and corresponding annotations as well as the images in the test-GZSD set are available on the download page.

(5) The semantic descriptor is a 21×1,024 matrix, i.e., a 1,024-dimensional word vector for each of the 21 categories.
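One common way such descriptors are used in zero-shot detection is to score a visual embedding against every class vector by cosine similarity. The sketch below assumes the descriptor loads as a NumPy array of shape (21, 1024); the random stand-in matrix and the function name are illustrative assumptions, not part of the challenge kit.

```python
import numpy as np

# Stand-in for the provided 21x1024 semantic descriptor; in practice it
# would be loaded from the file on the download page.
rng = np.random.default_rng(0)
word_vectors = rng.normal(size=(21, 1024))

def classify_embedding(visual_embedding, class_vectors):
    """Return the index of the class whose word vector is most
    cosine-similar to the given visual embedding."""
    v = visual_embedding / np.linalg.norm(visual_embedding)
    c = class_vectors / np.linalg.norm(class_vectors, axis=1, keepdims=True)
    return int(np.argmax(c @ v))
```

Scoring against all 21 vectors at once (including background) matches the GZSD setting, where seen and unseen classes compete in a single label space.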

For additional questions, please contact the BRAIN Lab (peilianghuang2017@gmail.com).