There has been substantial progress to date in developing online methods for visual detection and tracking. However, few, if any, systems in the literature can run reliably for long periods (days, weeks, or even months) without human intervention to reset or re-initialize the tracker. The main focus of this workshop is to stimulate research toward reliable, autonomous detection and tracking of single and/or multiple objects over long-term sequences. The workshop will include oral presentations of peer-reviewed papers, invited talks, and a session in which presenters and other workshop participants demonstrate their tracking systems on long-term sequences, concluding with an open floor discussion to identify open challenges and opportunities.
Papers addressing various aspects of detection and tracking in long-term sequences are invited. For the purpose of this workshop, a “long-term sequence” is a video that is at least 2 minutes long (at 25-30 fps), but ideally 10 minutes or longer.
Possible paper topics include, but are not limited to:
- Machine learning approaches
- Handling drift and/or learning in the presence of concept drift
- Exploiting the “big data” aspects of long-term detection and tracking
- Theoretical analysis of stability and performance bounds
- Data association
- Detection and tracking over extended space and extended time
  - Multi-camera systems, e.g., stereo, distributed camera networks
  - Moving cameras, e.g., vehicles, robots, aerial platforms
  - Handheld devices
  - Wearable devices, e.g., first-person vision systems
- Quantitative evaluation
Authors of all submitted papers are asked to conduct quantitative evaluation on long-term sequences (provided on this website, and/or uploaded by the authors to this website) and to report the results.