Paper Submission

CMT Extended Abstract Submission

All submissions must follow the CVPR 2014 paper guidelines, as outlined here:

http://www.pamitc.org/cvpr14/author_guidelines.php.

Papers will be handled via the LTDT Workshop submission page at the following address:

https://cmt2.research.microsoft.com/LTDT2014/

Extended abstracts will be allocated 1-2 pages. Extended abstracts will be available on this website.

The submission deadline is April 30, 2014 (11:59 PST).
No extensions of this deadline will be granted.

CMT Paper Submission

All submissions must follow the CVPR 2014 paper guidelines, as outlined here:

http://www.pamitc.org/cvpr14/author_guidelines.php.

Papers will be handled via the LTDT Workshop submission page at the following address:

https://cmt2.research.microsoft.com/LTDT2014/

Papers will be allocated 6 pages in the proceedings, with the option of purchasing up to 2 extra pages for US$100 per page. More details can be found here.

The submission deadline is April 7, 2014 (11:59 PST).
No extensions of this deadline will be granted.

Dataset Download

Submitted papers should include results of quantitative evaluation using the long-term sequences and evaluation framework provided on this site (see below). The sequences and the MATLAB evaluation kit are available here:

http://www.micc.unifi.it/LTDT2014/LTDT2014.rar (~ 1.7 GB)

Here you can find the description of the dataset:

http://www.micc.unifi.it/LTDT2014/LTDT2014_Dataset.pdf

Papers may also report results on other extended video sequences, if these sequences and ground truth are made publicly available (copies or links to these contributed datasets and ground truth will be shared from the LTDT website).

LTDT 2014 Evaluation Kit

The LTDT2014 dataset is a collection of 6 video sequences and an evaluation kit for comparing object tracking algorithms. The figure below shows some snapshots from the sequences. The sequences are annotated with bounding boxes. Frames in which the target is more than 50% occluded, or rotated more than 90 degrees out of plane, are annotated as "not visible".

[Figure: snapshots from the dataset sequences]

To evaluate a tracker, save the list of bounding boxes in a text file, one per frame, in the following format:

[left_column, top_row, right_column, bottom_row]

and, when the target is not detected:

[NaN, NaN, NaN, NaN]

A sample is shown below:

…
284.33,72.3,305.02,91.782
286.89,40.028,307.8,59.729
286.02,31.393,307.37,51.499
287.26,16.633,308.66,36.793
NaN,NaN,NaN,NaN
NaN,NaN,NaN,NaN
NaN,NaN,NaN,NaN
NaN,NaN,NaN,NaN
287.26,16.633,308.66,36.793
292.78,6.1074,313.86,25.962
NaN,NaN,NaN,NaN
NaN,NaN,NaN,NaN
NaN,NaN,NaN,NaN
NaN,NaN,NaN,NaN
NaN,NaN,NaN,NaN
…

Assuming bbox is an N×4 matrix containing the bounding boxes or the NaN values, the MATLAB command dlmwrite('mytracker.txt', bbox) creates the required file. The created file must be placed in the folder of the specific sequence.
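As a minimal sketch of this step (the variable results and its fields are illustrative, not part of the kit), the required matrix can be assembled from per-frame tracker output and saved with dlmwrite:

```matlab
% Illustrative only: 'results' is a hypothetical struct array produced by
% your tracker, with fields .bbox = [left top right bottom] and .visible.
N = numel(results);
bbox = nan(N, 4);                      % frames with no detection stay NaN
for i = 1:N
    if results(i).visible
        bbox(i, :) = results(i).bbox;  % [left_column, top_row, right_column, bottom_row]
    end
end
dlmwrite('mytracker.txt', bbox);       % one comma-separated row per frame
```

The resulting mytracker.txt then goes into the folder of the sequence it was produced on.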

To run the comparison, go to the folder _matlab and run the MATLAB script run_comparison.

To evaluate the trackers, the following parameters can be changed:

% name of the folder containing the video frames
Sequence = {'09_carchase','08_volkswagen','07_motocross'};
% name of the sequence as printed in figures and tables
Name     = {'Carchase','Volkswagen','Motocross'};
% name of the file containing the bounding boxes of the tracked object
Tracker  = {'gt','ALIEN','PREDATOR'};
% threshold on bounding box overlap to declare a true positive
THR      = 0.5;
% threshold on the distance between bounding box centers to declare a true positive
THRD     = 20;
% Empty or a single number. If a single number, the figures and trajectories
% of that sequence are shown and saved. If empty, a LaTeX table reporting
% the performance is saved.
show     = [1];
color    = {'r','g','b','c','y','k','m'};
  • If show = 1, performance figures and a video sequence with the superimposed bounding boxes will be created. Here we provide an example where the ALIEN and PREDATOR trackers are evaluated [1,2].
  • If show = [], a LaTeX table reporting the trackers' performance will be produced.
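For reference, the overlap behind THR is the standard intersection-over-union measure between the predicted and ground-truth boxes, while THRD bounds the Euclidean distance between their centers. A minimal sketch of both tests, in the box format used by the kit (this is an illustration, not the kit's actual code, which may apply the two criteria separately), might look like:

```matlab
% Hypothetical helper illustrating the two true-positive criteria.
% Boxes are [left_column, top_row, right_column, bottom_row], as in the kit.
function tp = is_true_positive(pred, gt, THR, THRD)
    % Intersection-over-union (bounding box overlap)
    iw = max(0, min(pred(3), gt(3)) - max(pred(1), gt(1)));
    ih = max(0, min(pred(4), gt(4)) - max(pred(2), gt(2)));
    inter   = iw * ih;
    areaP   = (pred(3) - pred(1)) * (pred(4) - pred(2));
    areaG   = (gt(3) - gt(1)) * (gt(4) - gt(2));
    overlap = inter / (areaP + areaG - inter);
    % Distance between bounding box centers
    cP = [(pred(1) + pred(3)) / 2, (pred(2) + pred(4)) / 2];
    cG = [(gt(1) + gt(3)) / 2, (gt(2) + gt(4)) / 2];
    dist = norm(cP - cG);
    % A prediction counts as a true positive under both thresholds
    tp = (overlap >= THR) && (dist <= THRD);
end
```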

This evaluation kit and part of the sequences have been modified from the dataset of [1]. Other sequences are taken from [3] and [4].

References

  1. Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-Learning-Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409-1422, 2012.
  2. F. Pernici and A. Del Bimbo, "Object Tracking by Oversampling Local Features," IEEE Transactions on Pattern Analysis and Machine Intelligence, PrePrint, 2013.
  3. K. Lebeda, R. Bowden, and J. Matas, "Long-Term Tracking through Failure Cases," Visual Object Tracking Challenge VOT2013, in conjunction with ICCV 2013.
  4. A. W. M. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, "Visual Tracking: An Experimental Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, PrePrint, 2013.
