Submission Results

Hi,

I just made a submission to test whether my submission structure is correct for the VQ2D challenge. I do not see any accuracy results (obviously the score will be 0, since it is a set of empty bounding boxes). The submission passed the structure checker.

Are the results for our submissions immediately visible to us before they become available on the leaderboard? I would like to verify that my submission structure is indeed correct before I make an actual submission.

Regards

Hi,

If I understand your question correctly, you want to verify your results before they become visible on the leaderboard. That is certainly possible. Please refer to this page: Make Submission Public — EvalAI 1.1 documentation

Unless you mark them to appear on the leaderboard, they won’t be shown.

@suyogjain
That is correct. However, my execution time shows up as “None”. I am unsure whether that is because the submission is incorrect or just an EvalAI problem. I have allowed my submissions to appear on the leaderboard, yet they are still not shown.

P.S. The screenshots in the EvalAI documentation usually show a numeric execution time, so I am unsure whether my structure is incorrect (which would explain the “None”) or whether it is a bug on EvalAI’s end.

On my end, the status for your submission shows as “Cancelled”. I am also not able to download your submission file, so it is hard to say whether it is a format issue or an EvalAI issue. For some reason, I also see that you submitted a “.gz” file, but the eval script expects a JSON file.

@suyogjain
I figured the “.json.gz” format might be the problem, so I cancelled that submission and submitted a “.json” file instead. The submission still shows a “Submitted” status with “None” as the execution time, and no results appear on the leaderboard.

Can you please check once more for me?

Just checked, it says “Failed” this time. My JSON file does pass validate_challenge_predictions.py. Can you still take a look at it, please?

@asjad.s - Thank you for reaching out. It seems that at least one of your response-track predictions has no bounding boxes. The evaluation code fails because it expects each prediction to have at least one bounding box.
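
In case it helps anyone hitting the same failure, here is a minimal sketch of a local check you could run before uploading. It deliberately does not assume the exact VQ2D submission schema, only that response-track predictions store their boxes under a key named "bboxes"; the file name is just a placeholder.

```python
import json


def find_empty_bbox_predictions(path):
    """Walk the submission JSON and report every 'bboxes' entry that is an empty list."""
    with open(path) as f:
        data = json.load(f)

    offenders = []

    def walk(node, trail):
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "bboxes" and isinstance(value, list) and len(value) == 0:
                    offenders.append(trail + [key])
                walk(value, trail + [key])
        elif isinstance(node, list):
            for i, item in enumerate(node):
                walk(item, trail + [i])

    walk(data, [])
    return offenders


if __name__ == "__main__":
    # Placeholder file name; point this at your own submission file.
    for trail in find_empty_bbox_predictions("predictions.json"):
        print("empty bboxes at:", trail)
```

If this prints anything, at least one prediction would trip the evaluation code described above.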

That might be it, yes: for debugging I only ran my code on a smaller set of videos.
Thank you!!

Now that I think about it, maybe this wasn’t the best solution here:

Maybe you can try changing bboxes: [] to bboxes: [BBox(0, 0, 0, 5, 5)], where you can import BBox from here:
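
A minimal sketch of that workaround, using the placeholder box BBox(0, 0, 0, 5, 5) exactly as suggested above. The import path is an assumption on my part (the link above was cut off), so adjust it to wherever the challenge starter code actually defines BBox.

```python
# NOTE: assumed import path; point it at the module the link above refers to.
from vq2d.structures import BBox


def pad_empty_prediction(bboxes):
    """Return the predicted boxes, inserting a dummy box when the list is empty.

    This only exists to keep the evaluation code from failing on a
    zero-length response track; it does not change non-empty predictions.
    """
    if not bboxes:
        return [BBox(0, 0, 0, 5, 5)]  # placeholder box, as suggested above
    return bboxes
```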