Overview

Each participating algorithm must predict bounding boxes for objects of both seen and unseen classes, each with a real-valued confidence score. We provide a remote sensing dataset with more than 20k static images for this task. We also provide a word vector for each category. Average Precision (AP) and Recall@100 at an IoU threshold of 0.5 will be used to evaluate each algorithm.
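
To illustrate the evaluation protocol, the sketch below (Python, not the official evaluation code) matches predictions to ground truth at an IoU threshold of 0.5 and computes Recall@100 for a single image; the box format and the greedy matching rule are assumptions.

    # Minimal sketch, not the official evaluation code. Boxes are assumed to be
    # [x1, y1, x2, y2] in pixel coordinates.

    def iou(box_a, box_b):
        """Intersection-over-union of two axis-aligned boxes."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def recall_at_100(pred_boxes, pred_scores, gt_boxes, iou_thr=0.5):
        """Fraction of ground-truth boxes matched by the 100 highest-scoring predictions."""
        top = sorted(range(len(pred_boxes)), key=lambda i: pred_scores[i], reverse=True)[:100]
        matched = [False] * len(gt_boxes)
        for i in top:
            for j, gt in enumerate(gt_boxes):
                if not matched[j] and iou(pred_boxes[i], gt) >= iou_thr:
                    matched[j] = True
                    break
        return sum(matched) / len(gt_boxes) if gt_boxes else 1.0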

Note that the top three contestants must provide their final training code, which we will use to reproduce their results.

Dates

  • [06.01]: Training, validation and testing data will be released
  • [06.15]: Dataset available for download; result submission opens
  • [07.15]: Results submission deadline
  • [10.03]: Challenge results released
  • [10.03]: Winner presents at ICCV 2023 Workshop

*All times are in GMT (Greenwich Mean Time).
*The submission deadline is 24:00 GMT on July 15, 2023.

Challenge Guidelines:

(1) The object detection evaluation page lists detailed information on how submissions will be scored. We only ask participants to submit results for the generalized zero-shot detection (GZSD) setting, in which detections of seen and unseen objects are evaluated together; a sketch of how per-class results might be aggregated in this setting follows below.
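
The snippet below is a hedged sketch, not the official scoring code: it summarizes per-class AP values under GZSD as a single mean over all 20 object classes, with seen/unseen breakdowns for analysis. The function name and the breakdown reporting are illustrative assumptions.

    # Hedged sketch: summarizing per-class AP under the GZSD setting, where seen
    # and unseen classes are evaluated jointly. The reported breakdowns are
    # illustrative assumptions, not the official metric definition.

    def gzsd_summary(ap_per_class, seen_classes, unseen_classes):
        """Mean AP over all classes, plus seen/unseen breakdowns for analysis."""
        def _mean(xs):
            return sum(xs) / len(xs) if xs else 0.0
        seen_aps = [ap_per_class[c] for c in seen_classes if c in ap_per_class]
        unseen_aps = [ap_per_class[c] for c in unseen_classes if c in ap_per_class]
        return {
            "mAP_all": _mean(seen_aps + unseen_aps),
            "mAP_seen": _mean(seen_aps),
            "mAP_unseen": _mean(unseen_aps),
        }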

(2) The dataset includes 21 categories: background, 16 seen categories ('airplane', 'baseballfield', 'bridge', 'chimney', 'dam', 'Expressway-Service-area', 'Expressway-toll-station', 'golffield', 'harbor', 'overpass', 'ship', 'stadium', 'storagetank', 'tenniscourt', 'trainstation', 'vehicle'), and 4 unseen categories ('airport', 'basketballcourt', 'groundtrackfield', 'windmill'); the category names are repeated below as Python lists for convenience.
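
For convenience, the seen and unseen category names above, copied verbatim, as plain Python lists:

    # Seen/unseen category names, copied verbatim from guideline (2) above.
    SEEN_CLASSES = [
        'airplane', 'baseballfield', 'bridge', 'chimney', 'dam',
        'Expressway-Service-area', 'Expressway-toll-station', 'golffield',
        'harbor', 'overpass', 'ship', 'stadium', 'storagetank',
        'tenniscourt', 'trainstation', 'vehicle',
    ]
    UNSEEN_CLASSES = ['airport', 'basketballcourt', 'groundtrackfield', 'windmill']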

(3) The Test-GZSD set (3,337 images) is used for the GZSD evaluation; all competition results are computed on this set.

(4) The training images with their corresponding annotations, as well as the images of the Test-GZSD set, are available on the download page.

(5) The semantic descriptors form a 21×1,024 matrix, i.e., a 1,024-dimensional word vector for each of the 21 categories (see the loading sketch below).
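
A minimal sketch of loading and using the descriptors is shown below; the file name 'word_vectors.npy', the NumPy format, and the row ordering are assumptions and should be checked against the files on the download page.

    import numpy as np

    # Assumed file name and format; check the files provided on the download page.
    word_vectors = np.load('word_vectors.npy')
    assert word_vectors.shape == (21, 1024)  # one 1,024-d vector per category

    # One common use (illustrative, not prescribed by the challenge): score a
    # projected 1,024-d region feature against each class embedding by cosine
    # similarity.
    def cosine_scores(region_feature, class_embeddings):
        feat = region_feature / (np.linalg.norm(region_feature) + 1e-12)
        emb = class_embeddings / (np.linalg.norm(class_embeddings, axis=1, keepdims=True) + 1e-12)
        return emb @ feat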

For additional questions, please contact the BRAIN Lab (peilianghuang2017@gmail.com).