Comparison with other Databases
We have run about 20 saliency models (static and dynamic) over 8 video datasets (with eye movement information). Preliminary results are shown in the following figures.
For the human fixation predictions, we have selected five extended downloadable databases that include 376 videos, totalling 321,870 frames.
- Actions in the Eye (AE) Dataset: Fixations were obtained over the Hollywood-2 and UCF-Sports datasets. The experiment included two groups of subjects: one with 12 subjects performing action recognition and another with 4 subjects free-viewing the videos. The fixation patterns of the two groups do not differ significantly. In this work we used the sports videos, referred to hereafter under the generic heading AE-UCFS.
- Abnormal Surveillance Crowd Moving Noise (ASCMN/ACCV) Database: includes the fixations for a group of surveillance videos containing anomalous object movements, sudden appearances of people or objects, or camera movements.
- DIEM Project: this large database was built to demonstrate that motion guides saliency better than other low-level cues. It includes audio.
- GazeCom (GC) Dataset: this database was compiled to provide information about the variability in the eye movement patterns of subjects observing natural scenes.
- CRCNS Datasets: originally designed to investigate the influence of factors, such as memory, on visual attention over dynamic scenes. The observers watched videos with normal content (CRCNS-ORG), as well as a TV-cuts subset characterized by the presence of abrupt transitions (CRCNS-MTV).
We are also analysing the results of all the models on the following database.
- Hollywood-2 Dataset: collected by Ivan Laptev and colleagues. 69 movies were used to generate the clips in this dataset, which is divided into 823 movie clips for training and 884 for testing.
Comparison with Humans
The following tables show the comparison between the AWS-D model and the HumanPCT50 model, using the s-AUC and s-NSS values for all the videos of each of the external databases tested. The HumanPCT50 model represents the mean behaviour of half the subjects included in each database. It is obtained by randomly selecting half of the subjects' fixations at each instant.
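The evaluation above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the exact AWS-D evaluation pipeline): NSS z-scores the saliency map and averages it at the fixated pixels, while shuffled AUC scores this frame's fixations (positives) against fixations borrowed from other frames or videos (negatives), which controls for center bias. The function and variable names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the map, then average
    the z-values at the fixated pixels (list of (row, col) tuples)."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return float(np.mean([s[r, c] for r, c in fixations]))

def shuffled_auc(saliency, fixations, other_fixations):
    """Shuffled AUC: positives are this frame's fixations; negatives are
    fixations sampled from other frames/videos (controls for center bias).
    Computed as the probability that a positive outranks a negative."""
    pos = np.array([saliency[r, c] for r, c in fixations], dtype=float)
    neg = np.array([saliency[r, c] for r, c in other_fixations], dtype=float)
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)

def human_pct50_map(per_subject_fixations, shape):
    """Hypothetical sketch of HumanPCT50 for one frame: randomly pick half
    of the subjects and accumulate their fixations into a map (a Gaussian
    blur would normally follow; omitted here to stay dependency-free)."""
    half = rng.choice(len(per_subject_fixations),
                      size=len(per_subject_fixations) // 2, replace=False)
    m = np.zeros(shape)
    for i in half:
        for r, c in per_subject_fixations[i]:
            m[r, c] += 1.0
    return m
```

The HumanPCT50 map built from one half of the subjects is then scored with `nss` and `shuffled_auc` against the fixations of the remaining subjects, exactly as any saliency model would be.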
| ASCMN Video Database | |
|---|---|
| IMAGE | IMAGE |
| s-AUC | s-NSS |

| DIEM Video Database | |
|---|---|
| IMAGE | IMAGE |
| s-AUC | s-NSS |

| GC Video Database | |
|---|---|
| IMAGE | IMAGE |
| s-AUC | s-NSS |
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. You can use this dataset in your publications as long as you include a citation to the reference on this page. When linking to this dataset, please link to this page rather than to the file directly.