Try adding our email addresses to your email whitelist: aiskyeye@qq.com and notification@aiskyeye.com. Alternatively, you can leave a message below to request manual account activation.
Bounding boxes in the submitted results that overlap “others” objects by more than the overlap threshold, or that fall within the ignored regions, will be filtered out during evaluation.
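The filtering rule above can be sketched as follows. This is a minimal illustration, not the official evaluation toolkit: the function names, the use of intersection-over-area as the overlap measure, and the 0.5 threshold are all assumptions for the sake of the example.

```python
def box_overlap_ratio(box, region):
    """Fraction of `box` covered by `region`; both are (x, y, w, h) tuples.

    Note: this is intersection-over-detection-area, an assumed measure;
    the official toolkit may use a different overlap criterion.
    """
    x1 = max(box[0], region[0])
    y1 = max(box[1], region[1])
    x2 = min(box[0] + box[2], region[0] + region[2])
    y2 = min(box[1] + box[3], region[1] + region[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = box[2] * box[3]
    return inter / area if area > 0 else 0.0


def filter_detections(detections, ignored_regions, threshold=0.5):
    """Drop detections whose overlap with any ignored region exceeds the threshold."""
    return [
        d for d in detections
        if all(box_overlap_ratio(d, r) <= threshold for r in ignored_regions)
    ]
```

For example, a detection lying entirely inside an ignored region is removed, while one far outside all ignored regions is kept.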
NO. If you would like to report results of your algorithm with different settings (e.g., different parameters and training conditions), please use the training and validation sets for this purpose, and only submit one result to the evaluation server.
Similar to the PASCAL VOC dataset, we labeled some “ignore” regions, which indicate regions containing objects that are difficult to annotate due to low resolution or crowding. Meanwhile, some rarely occurring objects (e.g., machineshop truck, forklift truck, and tanker) are labeled as “others” and are ignored in evaluation.
Thank you! We are aware of some deficiencies in the provided annotations and are still trying to improve these. We do appreciate all kinds of feedback, so please don’t hesitate to contact us to report any findings.
The evaluation results will be displayed at the bottom of the submission page, and the leaderboard will be updated daily.
If you want to rename your algorithm, please contact us.

106 Comments

  1. How do we handle empty predictions, i.e., cases in which our model does not predict any bounding box for an image? The evaluation code assumes at least one prediction is made.

    Gateway
  2. Sorry, I have two more questions.
    On the SUBMIT page, when I choose “OBJECT DETECTION” it shows “Your upload quota has been used up.”
    But actually I have just downloaded the test dataset; I have never submitted any file.
    When I choose “Single-Object Tracking”, it goes to my account page. It’s really weird. Could you please help me?

    My second question is: could you please tell me what the files in “VisDrone2019-SOT-test-dev/attributes/*attr.txt” mean, especially numbers like “0,0,1,0,0,0,1,0,0,0,1,0”? What do these parameters mean, respectively? I didn’t find any document that gives a definition. Thank you.

  3. I have two questions.
    First, the result format shown for the crowd counting task is the object detection format, not the format mentioned above. Could you tell me the proper format?
    Second, the EVALUATE directory says “The evaluation code for crowd counting is available on the VisDrone github”, but I cannot find the evaluation code for crowd counting on that github. Could you please help me?

  4. Hi, I have three questions:

    1. If I understand correctly, each company/group has 3 test submissions per algorithm. So if totally different algorithms were used (not just some hyperparameter changes), would one company/group have 3 submissions per algorithm?

    2. Is it 3 (like in the FAQ) or 5 (like on the submission page) submissions per algorithm?

    3. How does the test-dev evaluation work? Is it done locally on my machine, or is there a separate evaluation page as well?
    Thanks in advance!

    1. Hello, we have updated the old FAQ page.

      1. In the ECCV2020 challenge, each company/group has 5 test submissions in total. We recommend that each team participate in only one task. If your team wants to participate in multiple tasks, we recommend registering a new account to get more submission opportunities, or contacting us to raise your account's upload limit.

      2. The upload quota on the submission page is your entire submission quota (shared across all tasks).

      3. Evaluation is done on our server. You need to download the test set, get the results locally, and submit the test set results to our server. The evaluation will be performed automatically. After the evaluation is completed, you will receive a notification email and the score will be displayed below the submission form.

      1. Sorry for that. Due to network issues, our evaluation server went down in the past few hours. Now the problem has been fixed, your evaluation score will be displayed on the page later. If you still have problems, please contact us.

  5. Hi, when I open the submit page of Multi-Object Tracking, the results are as follows:

    Not Found
    Apologies, but the page you requested could not be found. Perhaps searching will help.

    Could you tell me how to solve it?

  6. I have a question.
    The submission form asks for a description document of the corresponding method when I submit the results. But I don't find any document templates at your download link; there is only one download link, for the dataset.
    Could you help me?

  7. Hi. I had submitted my predictions along with a description document before the document template was released. May I know to whom I can send the new description document of our method as per the template? Thanks.

  8. Hi, the Submit page says: “Please note that when you submit the results, you must also submit the corresponding method description document. Document templates can be downloaded on the download page.”
    But the download page is not found!

  9. How do I evaluate test-dev results on the evaluation server? On the webpage you said “Number of time for testing on evaluation server for test-dev dataset is unlimited”, so testing on the test-dev set is allowed, right? But I did not see any info on how. Please let me know how to do it.

  10. Hi,
    I haven’t seen any distinction between short-term and long-term trackers. Please let me know if we are allowed to use a detector/verifier in our short-term tracker for the VisDrone2020 challenge.

    Mojtaba
    1. We use UTC time. The submission system will close at 24:00 on the deadline. Note that due to the impact of COVID-19, the submission deadline has been delayed until July 15th, and each team will have an additional 5 submission opportunities.

    1. Sorry for that. Due to network issues, our evaluation server went down in the past few hours. Now the problem has been fixed, your evaluation score will be displayed on the page later. If you still have problems, please contact us.

  11. Hello, I am a member participating in the 2020 SOT challenge. I submitted the zip in the format you gave, but the submission was unsuccessful. May I ask why it failed? See the attachment for a diagram of the submission format. Thank you.

  12. I can see that the validation and test-dev datasets should be excluded from the training process for debugging/offline evaluation purposes. Regarding the final submission on the test-challenge set, are we supposed to train on the whole dataset including all available annotations (train, val, and test-dev), or should we rely exclusively on the training-set split?

    Ioannis
  13. Hello, I have a question about paper submission. If I want to submit a paper that has good results on several standard datasets, do I have to join your challenge and report results on the VisDrone dataset, or do I just need to fit one of your topics, for instance, people and object tracking?

    yukawa
