Proper Identification of Conventional Synapses and Ribbon Synapses
Mentor
Vignesh Jagadeesh
Abstract
The goal of this project is the proper identification of conventional synapses and ribbon synapses. Conventional synapses were detected by first applying a Laplacian of Gaussian filter to identify the unique characteristics of the synapse and then extracting features from the images using a Gabor filter. The filter responses were then classified using a k-nearest neighbor algorithm with high accuracy: 86% of conventional synapses and 88% of non-synapses were properly identified. An attempt was also made to delineate the synaptic cleft and to automate this process by detecting the cleft automatically. The cleft was filtered and masked several times, resulting in a fairly well drawn cleft. Cleft detection also yielded good results, with a k-means algorithm correctly identifying 88% of clefts and 81% of non-clefts. Finally, ribbon synapses were detected using an MSER approach that yielded good results overall. These results lay the groundwork for future, more intuitive synapse detection techniques that may be able to classify synapses not just in 2-D, but also in the context of a 3-D space.
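The k-nearest neighbor classification step described above can be sketched in a few lines; this is a minimal illustration with synthetic stand-in features, not the project's actual Gabor filter responses:

```python
import numpy as np

def knn_classify(train_feats, train_labels, query, k=3):
    """Majority vote among the k training samples nearest to `query`
    (Euclidean distance in feature space)."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique(train_labels[nearest], return_counts=True)
    return values[np.argmax(counts)]

# Synthetic stand-in features: two separated clusters play the role of
# Gabor responses for synapse (label 1) and non-synapse (label 0) patches.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(1.0, 0.1, (20, 4)),
                   rng.normal(-1.0, 0.1, (20, 4))])
labels = np.array([1] * 20 + [0] * 20)
```

A query vector near the first cluster, e.g. `knn_classify(feats, labels, np.full(4, 0.9))`, is then assigned the synapse label.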
Creating Automated Methods to Detect, Classify and Count Spines on Primary Dendritic Branches Before and After Treatment
Mentor
Aruna Jammalamadaka
Abstract
Our goal is to create automated methods to detect, classify, and count spines on primary dendritic branches before and after treatment with a particular micro-RNA, to see whether the treatment affects spine type percentages and overall counts. This is important because changes in spine populations could be related to cognitive disorders such as autism, mental retardation, and Fragile X Syndrome. We work with fetal rat hippocampal neuronal images provided by Dr. Ken Kosik's lab at UCSB and software called NeuronStudio. Using this software, Chris created an automated method of finding the volume of spines, and then analyzed the spine type classifications provided by NeuronStudio by comparing them to several other classification methods using features such as the shape context descriptor and the elliptical Fourier descriptor. He used cross-validation to classify spines into three categories using both linear discriminant analysis and a simple Euclidean distance classifier, and compared the performance of the two methods.
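The simple Euclidean distance classifier used as a baseline can be sketched as a nearest-class-mean rule evaluated with leave-one-out cross-validation; the feature vectors and labels here are illustrative placeholders, not actual spine descriptors:

```python
import numpy as np

def nearest_mean_label(train_feats, train_labels, query):
    """Assign `query` to the class whose training-set mean feature
    vector is closest in Euclidean distance."""
    classes = np.unique(train_labels)
    centroids = np.array([train_feats[train_labels == c].mean(axis=0)
                          for c in classes])
    return classes[np.argmin(np.linalg.norm(centroids - query, axis=1))]

def loo_accuracy(feats, labels):
    """Leave-one-out cross-validation: hold out each sample in turn,
    classify it from the rest, and report the fraction correct."""
    n = len(labels)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i
        hits += nearest_mean_label(feats[mask], labels[mask], feats[i]) == labels[i]
    return hits / n
```

On well-separated toy clusters this baseline reaches perfect leave-one-out accuracy; on real spine features the comparison against linear discriminant analysis is where the interesting differences appear.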
Determining Useful Images from Observational Cameras
Mentor
Jiejun Xu, Zefeng Ni
Abstract
Cameras used for observational purposes collect high volumes of video data. Unfortunately, not all of it is useful. To determine what a viewer finds interesting, we conduct experiments using an eye tracker. Video sequences from two neighboring cameras are displayed simultaneously in a split-screen format, reenacting what is done in practice, while the eye tracker records the user’s gaze patterns. Our hypothesis is that viewers will generally focus on one video while briefly referring to the other. Without the viewers’ knowledge, the sequences were edited to include several inconsistencies. When viewers encounter an inconsistency, we expect them to fixate on that video for a longer duration. It is those fixations that may provide hints about patterns of human interest. Confirming this phenomenon increases our understanding of human gaze patterns, making the question “what is interesting to a viewer” more explicit. By applying the results to an algorithm, we can automate the choice of which video segments are worth displaying to the user, establishing a more concise viewing experience.
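Identifying fixations of longer duration from raw gaze samples is typically done with a dispersion threshold; the following is a minimal sketch of the standard I-DT idea, with the threshold and window sizes chosen arbitrarily rather than taken from the project:

```python
def _dispersion(window):
    """Spread of a run of (x, y) gaze samples: x-range plus y-range."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, max_dispersion=30.0, min_samples=5):
    """Dispersion-threshold (I-DT) fixation detection: grow a window of
    consecutive samples while its dispersion stays under the threshold,
    and report each such run as a fixation (start, end) index pair."""
    fixations, start = [], 0
    while start + min_samples <= len(gaze):
        end = start + min_samples
        if _dispersion(gaze[start:end]) <= max_dispersion:
            while end < len(gaze) and _dispersion(gaze[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations
```

Longer runs between the returned indices correspond to longer fixation durations, which is the quantity the hypothesis above is concerned with.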
Evaluation of the Context-aware Saliency Detection Method
Student Interns
Christine Sawyer (SACNAS conference award)
Abstract
Visual saliency is the subjective perceptual quality that makes certain parts of an image stand out more than others. Traditional measures of visual saliency generally detect the dominant object in the image. A major drawback of this approach is that by focusing mainly on the dominant object, its context in the image is lost. The latest saliency detection method, context-aware saliency, detects not only the dominant object but also the surroundings that add semantic meaning to the scene. In this project, we provide an extensive evaluation of a recently proposed context-aware saliency detection method. The main contributions of this work are twofold: 1) a subjective evaluation framework utilizing the EyeLink 1000 eye-tracking system; 2) the creation of a data set that provides ground-truth data. A representative data set of 60 images was displayed to our 17 experiment participants for 4 seconds each. Using the eye tracker, we capture the human gaze patterns needed to understand the context of a scene and use them as ground truth to evaluate the implemented context-aware saliency detection method. By comparing the experimental results to the saliency maps created by the algorithm, we identified the strengths and weaknesses of the algorithm. In addition, we believe the human fixation data we have collected will be beneficial for evaluating various saliency detection methods.
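One common way to score a saliency map against recorded fixations, a plausible choice for the comparison described above though not confirmed by the abstract, is the normalized scanpath saliency (NSS): standardize the map to zero mean and unit variance, then average its values at the fixated pixels:

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """NSS: standardize the saliency map, then average its values at
    the fixated (row, col) pixels. Positive scores mean fixations fell
    on above-average saliency; zero is chance level."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([s[r, c] for (r, c) in fixations]))
```

For a map with a single bright region, fixations inside the region score positive and fixations on the background score negative, which makes the metric easy to sanity-check.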
Finding Ways to Show how to Color Cells in Cell Networks
Mentor
Panuakdet Suwannatat ("Mock")
Student Interns
Rotem Raviv, University of California, Santa Barbara
Abstract
Sometimes, while looking at pictures of cell networks, it is necessary to see every cell in the network. Rotem Raviv worked on finding ways to color each cell in the network so that the cells could not only each be seen effectively but would also be visually appealing to users. The problem was approached by varying different parts of the selection process, including how cells were picked to have their colors changed, how the colors were changed, and what was considered a better color. The most successful method was to visit the cells in a fixed order, so that every cell got colored, to pick candidate colors at random, and to use a condensed score to check whether the coloring as a whole had improved.
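The winning strategy, visiting cells in order, proposing random colors, and keeping a change only when a whole-network score improves, can be sketched as a simple greedy loop; the score function here is caller-supplied, and the color model (random RGB triples) is an assumption, not the project's actual representation:

```python
import random

def recolor(cells, score, n_candidates=10, seed=0):
    """Greedy recoloring: visit every cell in order, try a few random
    colors for it, and keep whichever color maximizes the supplied
    whole-network `score(colors)`. The score never decreases."""
    rng = random.Random(seed)
    colors = {c: (rng.random(), rng.random(), rng.random()) for c in cells}
    for cell in cells:
        best, best_score = colors[cell], score(colors)
        for _ in range(n_candidates):
            colors[cell] = (rng.random(), rng.random(), rng.random())
            s = score(colors)
            if s > best_score:
                best, best_score = colors[cell], s
        colors[cell] = best
    return colors
```

A natural condensed score for distinguishability is the minimum pairwise color distance across the network, so the loop pushes the most-confusable pair of cells apart.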
Developing a Novel Approach to Plant Identification from Photographic Images
Mentor
Golnaz Abdollahian, Diana Delibaltov
Student Interns
Jacob Justus, California State University San Bernardino
Abstract
Jacob worked on developing a novel approach to plant identification from photographic images. In the future this software will be implemented as a smartphone application that will allow a user to take photographs of plants and have useful taxonomic information returned to them. Thus far, ground truth data has been collected and will be used to demonstrate the effectiveness of the various algorithms employed in the plant identification task. These algorithms currently include texture feature extraction and shape registration. In the future, other algorithms will be developed or adapted to aid in the process of plant identification, including algorithms that extract various morphological and geometric features from user-supplied images.
In addition to establishing ground truth data, Jacob worked on the design and modification of an XML-based template that allows image metadata to be parsed. This template was used to annotate the plant images within the Bisque database and, in the future, will allow for accurate querying.
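Parsing image metadata from an XML annotation template can be done with Python's standard library; the tag and attribute names below are hypothetical, since the actual Bisque template is not shown in the report:

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation record in the spirit of a Bisque-style
# name/value tag template (not the project's real schema).
xml_text = """
<image name="leaf_042.jpg">
  <tag name="genus" value="Quercus"/>
  <tag name="leaf_shape" value="lobed"/>
</image>
"""

root = ET.fromstring(xml_text)
# Collect the name/value pairs into a plain dict for querying.
metadata = {t.get("name"): t.get("value") for t in root.findall("tag")}
```

Once flattened into key/value pairs like this, the annotations can be indexed and queried accurately, which is the stated goal of the template work.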
Image Analysis of Magnetic Resonance Imaging (MRI) Brain Scans
Mentor
Karthikeyan, Swapna Joshi
Abstract
The project entails image analysis of Magnetic Resonance Imaging (MRI) brain scans. The aim of the project is to identify a region, if any, that distinguishes between psychopaths and non-psychopaths. The MRI scans of these subjects are quantified by many variables, e.g. age, psychopathy score, etc. Recently, an algorithm called regression-based non-negative matrix factorization (RNMF) was developed and has proved to be a successful tool for analyzing high-dimensional data that contains a regression pattern. RNMF is also useful for identifying a localized region that changes with respect to a regression variable. This technique has been used to analyze regression patterns in brain MRI scans to understand how a subject’s score affects the anatomy of the brain. However, it was initially designed to identify the area of regression with respect to only a single pattern. A natural extension to RNMF is to enable the algorithm to recognize and isolate separate regression patterns with respect to many variables (e.g. both age and psychopathy score) that may occur within the input data. The goal of this project is to extend RNMF to multi-output RNMF, which will be able to handle input containing multiple regression patterns and identify which regions of the input data change with respect to a particular dependent regression variable. This will be applied to the MRI brain scans to identify multiple regression patterns in the brain data and determine whether these patterns are independent or interrelated.
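RNMF itself is not reproduced here, but its base, plain non-negative matrix factorization with Lee-Seung multiplicative updates, can be sketched as follows; the regression terms that RNMF adds on top of this factorization are omitted:

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Factor a non-negative matrix V into W @ H (both non-negative)
    using Lee-Seung multiplicative updates, which monotonically
    decrease the Frobenius reconstruction error ||V - W @ H||."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)  # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)  # update basis (parts)
    return W, H
```

With brain scans vectorized as the columns of V, the non-negative columns of W act as localized "parts"; RNMF constrains one of these parts to follow the regression variable, and the multi-output extension would do so for several variables at once.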
Hardware Computation in Research
Mentor
Brian Ruttenberg
Abstract
I joined this project to learn more about the capabilities of hardware computation in research, though my main interests include intelligent agents, machine learning, computer graphics, and physics modeling. Currently, I'm working to develop a database system and application to facilitate access to spatial information related to astroglia in the human retina. The goal is to provide researchers with a smooth process for comparing astrocyte structures in different retinas, making possible a biological understanding of their role in the human eye.
Understanding the spatial relationships of objects in scientific images is of immense interest to researchers, yet the analysis, comparison, and visualization of such relationships remains a bottleneck. To address this problem, we present RAKE, a highly accessible visual system to explore, query, and tabulate harvested data. RAKE addresses key concerns about global structural patterns and spatial relationships in images through the scalability of a relational database, the power of GIS spatial values, and the accessibility of direct interaction techniques.
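The core idea, keeping spatial summaries of structures in a relational table and answering region queries against it, can be sketched with SQLite and bounding boxes; real GIS spatial types are far richer, and the schema below is purely illustrative rather than RAKE's actual one:

```python
import sqlite3

# In-memory table of per-structure bounding boxes (illustrative schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE astrocyte (id INTEGER, "
           "xmin REAL, ymin REAL, xmax REAL, ymax REAL)")
db.executemany("INSERT INTO astrocyte VALUES (?, ?, ?, ?, ?)",
               [(1, 0, 0, 10, 10), (2, 50, 50, 60, 60)])

def overlapping(qxmin, qymin, qxmax, qymax):
    """Return the ids of structures whose bounding box intersects the
    query rectangle; the relational engine does the filtering."""
    rows = db.execute(
        "SELECT id FROM astrocyte "
        "WHERE xmax >= ? AND xmin <= ? AND ymax >= ? AND ymin <= ? "
        "ORDER BY id",
        (qxmin, qxmax, qymin, qymax)).fetchall()
    return [r[0] for r in rows]
```

Pushing the spatial predicate into SQL is what gives the approach its scalability: the same query works unchanged whether the table holds two structures or millions.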