Overview

Dates

  • [08.01]: Training, validation and testing data released
  • [09.30]: Result submission deadline
  • [10.16]: Challenge results released
  • [10.16]: Winner presents
  • The deadline for the competition is 24:00 on September 15th, Beijing time

Notice

Each team may register only one account. A quota can be obtained by joining the WeChat group.

To prevent a team from registering multiple accounts, this competition requires all members of a participating team to join the WeChat group. If the QR code expires, we will update it promptly. Old accounts cannot be used; you need to register a new account.

If you do not have WeChat, please send your application to tju.drone.vision@gmail.com. The application should include the account name, real name, institution, country, and email address, as well as the names and institutions of all team members.

Hardware

  • The sensors are rigidly mounted on an aluminium platform for handheld operation. An FPGA generates an external trigger signal to synchronize the clocks of all sensors. We install the sensor rig on various platforms to capture the distinct motion patterns of different carriers, including a handheld device with a gimbal stabilizer, a quadruped robot, and an autonomous vehicle.
Sensor                     Characteristics
3D LiDAR (not provided)    Ouster OS1-128, 128 channels, 120 m range
Frame Camera × 2           FLIR BFS-U3-31S4C, resolution: 1024 × 768
Event Camera × 2           DAVIS346, resolution: 346 × 240, 2 built-in IMUs
IMU (body_imu)             STIM300
GPS                        ZED-F9P RTK-GPS
Ground Truth               Leica BLK360
  • Calibration: The calibration file in YAML format can be downloaded here. We provide the intrinsics and extrinsics of the cameras, the noise parameters of the IMU, and the raw calibration data. Intrinsics are calibrated with the MATLAB calibration tool, and extrinsics are calibrated with Kalibr. Taking frame_cam00.yaml as an example, the parameters are provided in the following form:
image_width: 1024
image_height: 768
camera_name: stereo_left_flir_bfsu3
camera_matrix: !!opencv-matrix
  rows: 3
  cols: 3
  dt: f
  data: [ 6.05128601e+02, 0., 5.21453430e+02, 
          0., 6.04974060e+02, 3.94878479e+02, 
          0., 0., 1. ]
...
# extrinsics from the sensor (reference) to bodyimu (target)
quaternion_sensor_bodyimu: !!opencv-matrix
  rows: 1
  cols: 4
  dt: f
  data: [0.501677, 0.491365, -0.508060, 0.498754]  # (qw, qx, qy, qz)
translation_sensor_bodyimu: !!opencv-matrix
  rows: 1
  cols: 3
  dt: f
  data: [0.066447, -0.019381, -0.077907]
timeshift_sensor_bodyimu: 0.03497752745342453

Rotational and translational calibration parameters from the camera (reference frame) to the IMU (target frame) are given as a Hamilton quaternion ([qw, qx, qy, qz]) and a translation vector ([tx, ty, tz]). The time shift is estimated by Kalibr.
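
As an illustration, here is a minimal Python sketch of how such a calibration file could be loaded and turned into a 4 × 4 homogeneous transform. It assumes the file parses with OpenCV's FileStorage (the !!opencv-matrix tags suggest OpenCV's YAML dialect) and uses SciPy for the quaternion conversion; note that SciPy expects scalar-last [qx, qy, qz, qw] ordering, while the file stores [qw, qx, qy, qz].

import numpy as np
import cv2
from scipy.spatial.transform import Rotation

def load_extrinsics(yaml_path):
    # Read the sensor-to-body_imu extrinsics from a calibration YAML
    # (field names follow the frame_cam00.yaml example above).
    fs = cv2.FileStorage(yaml_path, cv2.FILE_STORAGE_READ)
    q = fs.getNode("quaternion_sensor_bodyimu").mat().flatten()   # [qw, qx, qy, qz]
    t = fs.getNode("translation_sensor_bodyimu").mat().flatten()  # [tx, ty, tz]
    dt = fs.getNode("timeshift_sensor_bodyimu").real()
    fs.release()

    # SciPy uses [qx, qy, qz, qw] ordering, so reorder the Hamilton
    # quaternion before converting it to a rotation matrix.
    R = Rotation.from_quat([q[1], q[2], q[3], q[0]]).as_matrix()

    T = np.eye(4)            # 4x4 homogeneous transform: sensor -> body_imu
    T[:3, :3] = R
    T[:3, 3] = t
    return T, dt

T_cam_imu, timeshift = load_extrinsics("frame_cam00.yaml")
p_imu = T_cam_imu @ np.array([0.1, 0.0, 1.0, 1.0])  # map a camera-frame point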

Overview

  • We are pleased to announce the VisDrone2021 Object Detection in Images Challenge (Task 1). This competition is designed to push forward the state of the art in object detection on drone platforms. Teams are required to predict the bounding boxes of objects of ten predefined classes (i.e., pedestrian, person, car, van, bus, truck, motor, bicycle, awning-tricycle, and tricycle) with real-valued confidences. Some rarely occurring special vehicles (e.g., machineshop truck, forklift truck, and tanker) are ignored in evaluation.
  • The challenge dataset contains 10,209 static images (6,471 for training, 548 for validation, and 3,190 for testing) captured by drone platforms at different locations and heights; it is available on the download page. We manually annotate the bounding boxes of different categories of objects in each image. In addition, we provide two kinds of useful annotations: the occlusion ratio and the truncation ratio. Specifically, the occlusion ratio is defined as the fraction of an object that is occluded. The truncation ratio indicates the degree to which parts of an object appear outside the frame. If an object is not fully captured within a frame, we annotate the bounding box across the frame boundary and estimate the truncation ratio from the region outside the image. Note that a target is skipped during evaluation if its truncation ratio is larger than 50% (see the sketch below). Annotations on the training and validation sets are publicly available.
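
To make the truncation rule concrete, the following is a minimal Python sketch (with hypothetical helper names) of how the truncation ratio can be estimated from a box annotated across the frame boundary, and how the 50% evaluation cutoff would be applied:

def truncation_ratio(bbox, img_w, img_h):
    # Fraction of the annotated box lying outside the image frame.
    # bbox = (left, top, width, height); the box may extend beyond the
    # image boundary, as described above for truncated objects.
    left, top, w, h = bbox
    inter_w = max(0.0, min(left + w, img_w) - max(left, 0.0))
    inter_h = max(0.0, min(top + h, img_h) - max(top, 0.0))
    inside = inter_w * inter_h            # area of the box inside the image
    return 1.0 - inside / (w * h) if w * h > 0 else 0.0

def keep_for_evaluation(bbox, img_w, img_h, max_truncation=0.5):
    # Targets with truncation ratio above 50% are skipped during evaluation.
    return truncation_ratio(bbox, img_w, img_h) <= max_truncation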

Dates

  • [06.05]: Training, validation and testing data released
  • [06.15]: Evaluation software released
  • [07.15]: Result submission deadline
  • [10.16]: Challenge results released
  • [10.16]: Winner presents at ICCV 2021 Workshop
  • The deadline for the competition is 24:00 on July 15, 2021, AoE (Anywhere on Earth) time

Challenge Guidelines

  • The object detection evaluation page lists detailed information regarding how submissions will be scored (a generic matching sketch is given after this list). To limit overfitting while giving researchers more flexibility to test their algorithms, we have divided the test set into two splits: test-challenge and test-dev.
  • Test-dev (1,610 images) is designed for debugging and validation experiments and allows unlimited submissions. Up-to-date results on the test-dev set can be viewed on the leaderboard.
  • Test-challenge (1,580 images) is used for the workshop competition, and the results will be announced during the ICCV 2021 Vision Meets Drone: A Challenge workshop. We encourage participants to use the provided training data, but also allow them to use additional training data; the use of external data must be indicated during submission.
  • The train+val images and corresponding annotations, as well as the images in the test-challenge set, are available on the download page. Before participating, every user is required to create an account using an institutional email address; if you have any problems with registration, please contact us. After registration, users should submit their results through their accounts. Submitted results will be evaluated according to the rules described on the evaluation page.
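
For intuition about how detections with real-valued confidences are typically matched to ground truth in this kind of evaluation, here is a generic Python sketch of COCO-style greedy matching at a single IoU threshold; the authoritative scoring rules are those on the evaluation page.

def iou(a, b):
    # Intersection over union for boxes in (left, top, width, height) format.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def match_detections(dets, gts, iou_thr=0.5):
    # dets: list of (box, confidence); gts: list of ground-truth boxes.
    # Detections are processed in descending confidence; each is matched
    # greedily to the best unmatched ground-truth box above the threshold.
    dets = sorted(dets, key=lambda d: d[1], reverse=True)
    matched = [False] * len(gts)
    tp_flags = []
    for box, _conf in dets:
        best, best_iou = -1, iou_thr
        for i, gt in enumerate(gts):
            if not matched[i] and iou(box, gt) >= best_iou:
                best, best_iou = i, iou(box, gt)
        if best >= 0:
            matched[best] = True
            tp_flags.append(True)    # true positive
        else:
            tp_flags.append(False)   # false positive
    return tp_flags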

Tools and Instructions

We provide extensive API support for the VisDrone images, annotations, and evaluation code. Please visit our GitHub repository to download the VisDrone API. For additional questions, please find the answers here or contact us.
