MQ_baseline differences

Hello, I’m confused about the published MQ_baseline results. There are two MQ_baseline entries on the leaderboard: the first one has Recall 44.24 and average_mAP 23.99, but the other has 24.25 and 5.68.

As I understand it, MQ_baseline is the official published baseline. Why does it appear twice?

I also cloned the official GitHub repo and trained the official model using SlowFast features, and I’m getting results close to 24.25 and 5.68 (the worse MQ_baseline). So which setup produces the “better” MQ_baseline? Is it the same model and the same repo? I’d like to reproduce it correctly.

Thanks!

@fedegonzal - Thanks for your interest in the MQ challenge. The MQ_baseline is based on last year’s winning entry. We have updated the details on the baseline here (including pointers to the codebase): https://eval.ai/web/challenges/challenge-page/1626/overview