Sensor Parameters dataset

A dataset of images that were captured under varying sensor shutter speeds and gain values. The dataset was compiled and used as part of the following paper:

A. Andreopoulos, J. K. Tsotsos. "On Sensor Bias in Experimental Methods for Comparing Interest Point, Saliency and Recognition Algorithms". IEEE Transactions on Pattern Analysis and Machine Intelligence (2011, in press).

We respectfully ask that if you use the dataset, you cite the above paper as its source. DOWNLOAD

 



Cardiac MRI dataset

This webpage contains a dataset of short axis cardiac MR images and the ground truth of their left ventricles' endocardial and epicardial segmentations. The dataset was first compiled and used as part of the following paper:

Alexander Andreopoulos, John K. Tsotsos, Efficient and Generalizable Statistical Models of Shape and Appearance for Analysis of Cardiac MRI, Medical Image Analysis, Volume 12, Issue 3, June 2008, Pages 335-357. PDF


Use is free of charge; we respectfully ask that if you use this dataset, you cite the above paper as its source.

The authors would like to acknowledge Dr. Paul Babyn, Radiologist-in-Chief, and Dr. Shi-Joon Yoo, Cardiac Radiologist, of the Hospital for Sick Children, Toronto, for the data sets and their assistance with this research project.

Disclaimer: The dataset is provided for research purposes only, and there are no warranties provided, nor liabilities assumed, by York University or the researchers involved in the production of the dataset.

Downloads

  • Cardiac MR images acquired from 33 subjects. Each subject's sequence consists of 20 frames and 8-15 slices along the long axis, for a total of 7980 images. The sequence corresponding to each subject x is in a distinct .mat (MATLAB) file named sol_yxzt_patx.mat. These are the raw, unprocessed images, which were originally stored as 16-bit DICOM images. DOWNLOAD

  • Segmentations of the above sequences. We have manually segmented each of the 7980 images where both the endocardium and epicardium of the left ventricle were visible, for a total of 5011 segmented MR images and 10022 contours. The segmentation corresponding to each subject x is in a distinct .mat (MATLAB) file named manual_seg_32points_patx.mat. Each contour is described by 32 points given in pixel coordinates. DOWNLOAD

  • Two small MATLAB functions for visualizing the segmentations on their corresponding images. Please see the included README file for examples of their use. DOWNLOAD

  • Metadata containing the pixel-spacing (mm per pixel), the spacing between slices along the long axis (mm per slice) of each subject's sequence, each subject's age and diagnosis. DOWNLOAD
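Since each contour is stored as 32 points in pixel coordinates and the metadata provides each subject's pixel spacing, one common use of the segmentations is estimating the area enclosed by a contour (e.g., the endocardial cross-section) in physical units. A minimal sketch of that computation is shown below; the square contour and the pixel-spacing value are illustrative placeholders, not values taken from the dataset.

```python
# Sketch: area enclosed by a closed contour given as (x, y) pixel
# coordinates, converted to mm^2 via the per-subject pixel spacing
# from the metadata download. The contour and spacing below are
# hypothetical examples, not data from the dataset.

def polygon_area(points):
    """Shoelace formula for a closed polygon given as (x, y) pairs."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

# Example: a square "contour" of side 10 pixels.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
pixel_spacing_mm = 1.25                  # hypothetical mm-per-pixel value
area_px = polygon_area(square)           # 100.0 square pixels
area_mm2 = area_px * pixel_spacing_mm ** 2
```

The same function applies unchanged to the 32-point contours in the segmentation files once they are loaded into (x, y) pairs.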




Fixation Data and Code

Fixation data and code are available here: AIM.zip. The code is written in MATLAB and includes a variety of learned ICA bases. Note that the code expects a relatively low-resolution image, as the receptive fields are small; for a larger, high-resolution image you may wish to try larger receptive fields. If you have any questions about the code, feel free to ask.

To use it within MATLAB, you should be able to simply do something along the lines of: info = AIM('21.jpg',0.5); with the second parameter being a rescaling factor. A variety of other parameters can be varied, both on the command line and within the code itself, so feel free to experiment. There are also some comments and notes specific to the psychophysics examples within one of the included files. Note that all of these bases should result in better performance than that based on the *very* small 7x7 filters used in the original NIPS paper.

The eye-tracking data may be found at eyetrackingdata.zip. In addition to the raw data, it includes binary maps for each image indicating which pixel locations were fixated. Correspondence is best addressed to Neil.Bruce[at]sophia.inria.fr
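A common way the binary fixation maps are used is to score a saliency map by ROC AUC: the probability that a fixated pixel receives a higher saliency value than a non-fixated one. The sketch below uses tiny synthetic lists as stand-ins for flattened maps; it is not code from AIM.zip, and the function name and data are hypothetical.

```python
# Sketch: ROC AUC of a saliency map against a binary fixation map.
# `saliency` is a flat list of scores; `fixated` is a parallel 0/1 list
# (1 = pixel was fixated). The inputs below are synthetic examples,
# not data from eyetrackingdata.zip.

def fixation_auc(saliency, fixated):
    """Probability that a fixated pixel outscores a non-fixated one;
    ties count as half a win."""
    pos = [s for s, f in zip(saliency, fixated) if f]
    neg = [s for s, f in zip(saliency, fixated) if not f]
    if not pos or not neg:
        raise ValueError("need both fixated and non-fixated pixels")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

saliency = [0.9, 0.1, 0.8, 0.2]
fixated = [1, 0, 1, 0]
auc = fixation_auc(saliency, fixated)   # 1.0: perfect separation
```

A chance-level saliency map scores 0.5 under this measure, which makes it a convenient baseline when comparing models on the fixation data.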




Facial Gestures dataset

This webpage contains a dataset of images of facial gestures taken by a camera mounted on a wheelchair. The dataset was first compiled by Gregory Fine and used as part of his Master of Science thesis:

Examining the feasibility of face gesture detection using a wheelchair mounted camera.

Use is free of charge; we respectfully ask that if you use this dataset, you cite the above thesis as its source.

Disclaimer: The dataset is provided for research purposes only, and there are no warranties provided, nor liabilities assumed, by York University or the researchers involved in the production of the dataset.

Downloads:

  • Facial gestures images acquired from 10 subjects. Each subject's sequence consists of 10 gestures and 100 images for each gesture, for a total of 9140 images. DOWNLOAD

  • Images used to train the AAM algorithm to detect the eyes and mouth. The download contains images along with ground truth contours of the eyes and mouth. DOWNLOAD

  • Images of facial gestures used to test the false positive rate of the algorithm. The download contains 440 images of facial gestures produced by 5 subjects. DOWNLOAD