2019 IEEE International Conference on Visual Communications and Image Processing (VCIP)

December 1-4, 2019 • Sydney, AUSTRALIA

Grand Challenge on Vehicle Re-Identification

Top-3 Teams of Our Grand Challenge:

  • Rush Team (from Institute of Automation, Chinese Academy of Sciences)
  • PES (from Pensees Pte Ltd.)
  • NPUST-MIS (from National Pingtung University of Science and Technology, Department of Management Information Systems)

Vehicle Re-Identification (ReID) aims to retrieve images of a query vehicle from a large-scale vehicle database, which is of great significance to urban security and city management. However, to the best of our knowledge, all existing vehicle ReID datasets were captured under constrained conditions and generally have limitations in the following aspects:

  • The numbers of vehicle identities and images are not large enough to meet the needs of practical applications.
  • The limited number of cameras and their coverage areas do not capture the complex and varied backgrounds found in real-world scenarios.
  • The camera views are highly restricted, as shown in Figure 1. For most vehicle datasets, the samples are collected from checkpoint cameras that capture only the front and rear views, and severe occlusion is also not taken into consideration.
  • Most current datasets are constructed from short-term surveillance videos without significant illumination or weather changes.

The purpose of this competition is to promote research and development on unconstrained, in-the-wild vehicle ReID, especially in complex situations, e.g., large viewpoint variations, extreme illumination conditions, complex backgrounds, and multiple camera sources.

Figure 1. The extensively collected VERI-Wild dataset poses practical challenges for vehicle ReID, e.g., significant viewpoint, illumination, and background variations, and severe occlusion. Another challenge of our dataset is that one vehicle may appear across numerous cameras, e.g., in an extreme case, a vehicle appears in 46 surveillance cameras.
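At its core, the retrieval task described above amounts to ranking a gallery of vehicle images by their similarity to a query. As a minimal illustration (not the challenge's prescribed method), the sketch below ranks precomputed embedding vectors by cosine similarity; the `retrieve` function and the toy 4-dimensional embeddings are purely hypothetical:

```python
import numpy as np

def retrieve(query_emb, gallery_embs, top_k=5):
    """Rank gallery images by cosine similarity to the query embedding."""
    # L2-normalize so that the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q                # similarity of each gallery image to the query
    order = np.argsort(-scores)   # indices sorted, highest similarity first
    return order[:top_k], scores[order[:top_k]]

# Toy example: 4-dimensional embeddings for a gallery of 3 vehicle images.
gallery = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [0.9, 0.1, 0.0, 0.0]])
query = np.array([1.0, 0.05, 0.0, 0.0])
indices, sims = retrieve(query, gallery, top_k=2)
```

In practice the embeddings would come from a deep model trained on the VERI-Wild training split, and the gallery would contain millions of images, but the ranking step itself is this simple.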


  • Each team is asked to register prior to the submission period. Registration is now open; to register, please submit the Registration Form to cs_hebin@pku.edu.cn.
  • The dataset is open for download.

Call for Participation

Vehicle Re-identification (ReID) is of great significance for intelligent transportation and public security. With recent developments in deep learning, the efficiency of vehicle ReID algorithms has improved significantly. However, several challenging issues of vehicle ReID in real-world surveillance scenarios have not been fully investigated, e.g., large viewpoint variations, extreme illumination conditions, complex backgrounds, and multiple camera sources. Because most existing datasets oversimplify these practical challenges, ReID models developed and evaluated on them may generalize poorly in the wild. A new vehicle ReID dataset in the wild (VERI-Wild) [1] is released to address these issues. Researchers and developers from academia and industry are welcome to participate in this competition, and further exploration of relevant technical and application issues is encouraged.
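The challenge details specify the exact evaluation protocol; as a hedged illustration, ReID benchmarks commonly score a submitted ranking with mean Average Precision (mAP) and CMC top-k accuracy. The function names below are illustrative, and the ranking is a toy example:

```python
import numpy as np

def average_precision(ranked_labels, query_label):
    """AP for one query: ranked_labels are the gallery identity labels,
    ordered from most to least similar to the query."""
    matches = (np.asarray(ranked_labels) == query_label).astype(float)
    if matches.sum() == 0:
        return 0.0
    # Precision at each rank where a correct match occurs, averaged.
    cum_hits = np.cumsum(matches)
    ranks = np.arange(1, len(matches) + 1)
    precisions = cum_hits / ranks
    return float((precisions * matches).sum() / matches.sum())

def cmc_top_k(ranked_labels, query_label, k=1):
    """1 if a correct match appears within the first k results, else 0."""
    return int(query_label in list(ranked_labels)[:k])

# Toy example: gallery identities after ranking, for a query of identity 7.
ranking = [7, 3, 7, 5]
ap = average_precision(ranking, 7)   # mean of precision at ranks 1 and 3
top1 = cmc_top_k(ranking, 7, k=1)
```

mAP is then the mean of `average_precision` over all queries, and the CMC curve averages `cmc_top_k` over queries for each k.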

Download Challenge Details

Tentative Timetable

Registration opens: 20-Jun-2019
Validation (on Test 1): 20-Jun-2019 to 10-Aug-2019
Retrieval results (on Test 2) submission: 11-Aug-2019 to 25-Aug-2019
Final evaluation results announcement: 30-Aug-2019
Camera-ready paper submission: 15-Sep-2019

Host Organization

  • Peng Cheng Laboratory, Shenzhen, China
  • The National Engineering Lab for Video Technology, Peking University, China
  • Institute of Computing Technology, Chinese Academy of Sciences, China
  • University of Missouri, Kansas City, USA

Potential Participants

  • Institute of Computing Technology, Chinese Academy of Sciences
  • Tsinghua University
  • Beihang University
  • Beijing Institute of Technology
  • National University of Singapore
  • University of Toronto
  • The University of Melbourne
  • Imperial College London
  • University of Surrey
  • University of Technology Sydney
  • etc.


Bin He, Yihang Lou, Yan Bai, Wen Ji, Zhu Li, Ling-Yu Duan

Coordinator Contacts

Please feel free to send any questions, comments, or models with a brief method description to cs_hebin@pku.edu.cn
