Vehicle Re-Identification (ReID) aims to retrieve images of a query vehicle from a large-scale vehicle database, which is of great significance to urban security and city management. However, to the best of our knowledge, all of the existing vehicle ReID datasets are captured under constrained conditions and generally have limitations in several aspects.
Figure 1. The extensively collected VERI-Wild dataset poses practical challenges for vehicle ReID, e.g., significant viewpoint, illumination, and background variations, and severe occlusion. Another challenge of our dataset is that one vehicle may appear across numerous cameras, e.g., in an extreme case, a vehicle appears in 46 surveillance cameras.
Vehicle Re-identification (ReID) is of great significance for intelligent transportation and public security. With recent developments in deep learning, the performance of vehicle ReID algorithms has improved significantly. However, several challenging issues of vehicle ReID in real-world surveillance scenarios have not been fully investigated, e.g., high viewpoint variations, extreme illumination conditions, complex backgrounds, and different camera sources. Because most existing datasets oversimplify these practical challenges, ReID models developed and evaluated on them can be problematic in terms of generalization capability in the wild. A new vehicle ReID dataset in the Wild (VERI-Wild) [1] is released to address the aforementioned issues. Researchers and developers from academia and industry are welcome to participate in this competition, and further exploration of relevant technical and application issues is encouraged.
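The challenge itself is a retrieval task: gallery images are ranked by their similarity to each query vehicle. Purely as an illustrative sketch (not the official evaluation or submission code), ranking a gallery by cosine similarity of learned embeddings might look like the following; the feature dimensions and array names are assumptions, and any real entry would use features from a trained ReID model.

```python
import numpy as np

def rank_gallery(query_feats: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
    """Rank gallery images for each query by cosine similarity.

    query_feats:   (num_queries, dim) embedding vectors
    gallery_feats: (num_gallery, dim) embedding vectors
    Returns an index matrix of shape (num_queries, num_gallery),
    with the most similar gallery image first in each row.
    """
    # L2-normalize so that the dot product equals cosine similarity.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                       # (num_queries, num_gallery)
    return np.argsort(-sim, axis=1)     # indices sorted by descending similarity

if __name__ == "__main__":
    # Random features stand in for a real ReID model's output (hypothetical sizes).
    rng = np.random.default_rng(0)
    query = rng.normal(size=(5, 256))      # 5 query vehicles, 256-d features
    gallery = rng.normal(size=(100, 256))  # 100 gallery images
    ranking = rank_gallery(query, gallery)
    print(ranking[:, :10])                 # top-10 gallery indices per query
```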
Important Dates:
Registration open | 20-Jun-2019
Validation (on Test 1) | 20-Jun-2019 to 10-Aug-2019
Retrieval results (on Test 2) submission | 11-Aug-2019 to 25-Aug-2019
Final evaluation results announcement | 30-Aug-2019
Camera-ready paper submission | 15-Sep-2019
Organizers: Bin He, Yihang Lou, Yan Bai, Wen Ji, Zhu Li, Ling-Yu Duan
Please feel free to send any questions, comments, or models with a brief method description to cs_hebin@pku.edu.cn.
http://www.vcip2019.org