A public version of my TvL experiment for my 2021 RSI project. https://link.springer.com/article/10.3758/s13414-022-02503-5
Updated Dec 1, 2022 - JavaScript
Image captioning with Visual Attention
The scope of this research is to determine if there is any correlation between the level of experience of surgeons and their visual attention while performing surgeries.
Image captioning of Flickr 8k dataset using Attention and Merge model
The analysis pipeline for our paper 'Functional connectivity fingerprints of the frontal eye field and inferior frontal junction suggest spatial versus nonspatial processing in the prefrontal cortex'.
We present SCENE-pathy, a dataset and a set of baselines to study the visual selective attention (VSA) of people towards the 3D scene in which they are located
Tools for the paper of IEEE Journal on Emerging and Selected Topics in Circuits and Systems: Visual Attention-Aware Omnidirectional Video Streaming Using Optimal Tiles for Virtual Reality
A model of mixed neural networks for step-by-step processing of dynamic visual scenes, activity recognition, and behavioral prediction
RARE2007 is a feature-engineered bottom-up saliency model using only color information (no orientation)
RARE2012 is a feature-engineered bottom-up visual attention model
ETTO (Eye-Tracking Through Objects) and EToCVD (Eye-Tracking of Colour Vision Deficiencies) datasets are shared with all who might be interested in working on Visual Attention/Visual Saliency.
Visual Attention : what is salient in an image with DeepRare2019
Implementation of the 2016 paper "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention" on the Flickr30k dataset.
Code for "Multiple decisions about one object involve parallel sensory acquisition but time-multiplexed evidence incorporation"
Deep Neural Network Image Captioner using Visual Attention
Where do people look in images, on average? At rare, and thus surprising, things! Let's compute them automatically
Official Implementation for NeurIPS 2023 Paper "What Do Deep Saliency Models Learn about Visual Attention"
Official Code for 'Exploring Language Prior for Mode-Sensitive Visual Attention Modeling' (ACM MM 2020)