Summer Internships

The UCSB Center for Bio-Image Informatics (CBI) offers fellowships to qualified undergraduate students for a summer research program in biology and information technology. Summer scholars interact with researchers and faculty involved in ongoing projects and participate in activities designed to develop the skills necessary for success at the graduate level.

CBI is supported by the National Science Foundation; its mission is to establish a searchable digital library for bio-molecular images and to develop new information processing technologies for a better understanding of complex biological processes at the cellular and molecular levels.

The Center for Bio-Image Informatics (CBI) hosted five undergraduate students from California State University, San Bernardino during the 8-week (June 22 - Aug 14) summer program at UCSB. Professor Art Concepcion helped identify the applicant pool and select the five students. This is the twelfth year in a row that the CBI has hosted this summer research program. In addition to the five CSUSB students, we also hosted one international student from the Universidade Estadual do Rio Grande do Sul, Guaíba, Brazil, and a UCSB student from the Network Science IGERT Program (PI: Ambuj Singh). Each student had a graduate student and a faculty mentor, and the whole team met weekly to review progress. The undergraduate interns gave weekly presentations, and a final end-of-program presentation was held on Aug 12 and attended by Prof. Concepcion. Students also prepared posters that were presented at the UCSB-wide event on August 14.

Micro-UAV Sensor Fusion with Latent-Dynamic Conditional Random Fields in Coronal Plane Estimation

Mentor

Amir Mohaymen Rahimi, B.S. Manjunath

Student Interns

Raphael Ruschel dos Santos
Universidade Estadual do Rio Grande do Sul, Guaíba - Brazil

Abstract

We present an autonomous unmanned aerial vehicle (UAV) system capable of performing the following tasks in real time: human detection, coronal plane estimation, and face recognition. In the challenging environment of a low-altitude hovering UAV, the on-board camera is highly susceptible to parallax effects. P-N learning, from the tracking-learning-detection (TLD) framework, is a fast and robust technique that uses many positive and negative templates to model the visual appearance of a target. We chose the P-N learning technique to model the appearance of the human body mainly because of the TLD algorithm's robustness to camera movement. We create appearance models for eight surrounding viewpoints of the human body. Each model is then evaluated on a real-time video sequence, and the UAV is automatically sent to face the front of the person. We search for the face within the top part of the human body using a cascade of Haar features; after the face has been detected, we use optical flow to continuously track it. Our current dataset consists of 124 videos captured at different altitudes, orientations, gimbal angles, locations, and times. Using the frontal-view videos, we created a face dataset containing images of eight selected people, which was used to train a Fisherface classifier for face recognition.
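The eight-viewpoint matching step can be illustrated with a much-simplified sketch: score a detected patch against one template per viewpoint using normalized cross-correlation and pick the best. The real system uses P-N-learned template ensembles; `ncc`, `best_viewpoint`, and the toy data below are illustrative stand-ins only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized grayscale patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def best_viewpoint(patch, templates):
    """Pick the viewpoint model (0..7) whose template best matches the detection."""
    scores = [ncc(patch, t) for t in templates]
    return int(np.argmax(scores)), scores

# Toy example: eight random 16x16 "viewpoint templates"; the detected patch
# is a noisy copy of viewpoint 3, so that model should win.
templates = [rng.random((16, 16)) for _ in range(8)]
patch = templates[3] + rng.normal(0, 0.01, (16, 16))
view, scores = best_viewpoint(patch, templates)
```

Once the winning viewpoint is known, the UAV can be steered toward the frontal view by minimizing the angular offset between the current and the frontal model.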

Automated Diabetic Retinopathy Detection Using Deep Neural Networks

Mentor

Oytun Ulutan, B.S. Manjunath

Student Interns

Keaton Boardman
California State University, San Bernardino

Abstract

Diabetic retinopathy (DR) is increasingly prevalent. According to the National Eye Institute, "From 2000 to 2010, the number of cases of diabetic retinopathy increased 89 percent from 4.06 million to 7.69 million" within the USA. Due to the high cost of examinations and the shortage of physicians, an automated process for early diagnosis of DR is needed. In this project we explore the utility of deep neural networks for retinal image analysis. Given retinal fundus images, our method classifies each image into one of five classes: healthy, mild, moderate, severe, or proliferative DR. Multiple neural network models are trained for this purpose, each specialized for a different objective, such as per-class classification (one vs. rest) or regression. The outputs of these models are treated as features and combined using an early feature fusion algorithm to obtain the final classification. The model is trained and tested on the Kaggle DR challenge dataset, which consists of 88,704 retinopathy images: 35,127 labeled for training and 53,577 for testing.
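Early feature fusion, as used here, concatenates the per-model outputs into one feature vector before a final classifier. A minimal sketch (the number of models, the dimensions, and the untrained linear layer are hypothetical stand-ins for the project's actual networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(model_outputs):
    """Early fusion: concatenate per-model feature vectors into one long vector."""
    return np.concatenate(model_outputs, axis=-1)

# Toy stand-in: three specialist models each emit a 5-dim output per image.
n_images = 4
outputs = [rng.random((n_images, 5)) for _ in range(3)]
fused = fuse(outputs)                      # shape (4, 15)

# A final (here untrained, purely illustrative) linear layer maps the fused
# vector onto the five DR grades: healthy, mild, moderate, severe, proliferative.
W = rng.standard_normal((15, 5))
predicted_grade = (fused @ W).argmax(axis=1)
```

In the actual system the final classifier would of course be trained jointly with, or on top of, the specialist models.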

Gaze Scanpath Prediction Using Recurrent Neural Networks

Mentor

Thuyen Ngo, Rohan Jain, B.S. Manjunath

Student Interns

Michael Monaghan
California State University, San Bernardino
Joshua Dunham
California State University, San Bernardino

Abstract

Human overt attention covers only 4 to 8 degrees of our roughly 205-degree field of view, and a significant portion of all visual processing is performed within this local region. To understand the environment, we continuously move our eyes to build up a representation of the scene. In this project we aim to model this behavior using human eye-tracking data: given an input image, we predict the most likely sequence of locations that humans look at. We build and compare two models, one based on Long Short-Term Memory and one on Reservoir Computing. Both models are trained using stochastic gradient descent on features extracted from a pretrained convolutional network, and we validate them on the MIT1003 dataset.
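A reservoir-computing model keeps a fixed random recurrent network and trains only a linear readout on its states. A minimal echo-state-style state update might look like the following (sizes, scaling, and the stand-in input features are illustrative, not the project's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

class Reservoir:
    """Minimal echo-state reservoir: fixed random weights, tanh units.
    Only a linear readout (not shown) would be trained."""
    def __init__(self, n_in, n_res, spectral_radius=0.9):
        self.W_in = 0.1 * rng.standard_normal((n_res, n_in))
        W = rng.standard_normal((n_res, n_res))
        # Rescale so the largest eigenvalue magnitude stays below 1
        # (the usual echo-state stability heuristic).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.state = np.zeros(n_res)

    def step(self, x):
        self.state = np.tanh(self.W_in @ x + self.W @ self.state)
        return self.state

# Feed a short sequence of (stand-in) CNN feature vectors through the reservoir.
res = Reservoir(n_in=8, n_res=50)
states = np.array([res.step(rng.random(8)) for _ in range(10)])
```

A fixation-location readout would then be regressed from `states` at each time step, which is what makes reservoirs cheap to train compared with a full LSTM.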

 

Deep Learning for Object Recognition

Mentor

Utkarsh Gaur, B.S. Manjunath

Student Interns

Mark Swoope
California State University, San Bernardino

Abstract

Object recognition is an important problem in computer vision, with applications such as autonomous driving, image-based search, and robotics. With the advent of large internet databases such as Flickr and YouTube, the computer vision research community now has access to terabytes of data. Recent research has shown that models known as deep neural networks (DNNs) can take advantage of such large image databases. DNNs are composed of basic linear units called "neurons" arranged in hierarchical stacks to form a deep, non-linear overall structure. They are highly scalable and have been shown to effectively model complex visual concepts. In this project, we implemented multiple machine learning algorithms and simple neural network models to classify handwritten digits from the MNIST dataset. We then extended these models to construct a deep convolutional neural network that recognizes objects in a challenging large-scale dataset called Tiny-ImageNet, which consists of 100,000 images collected from the web, spanning 200 object categories including pedestrians, vehicles, and buildings.

Understanding the Perceptual Importance of Camera Views (Best Project Award)

Mentor

S. Karthikeyan, B.S. Manjunath

Student Interns

Mark Martinez
Computer Systems, California State University, San Bernardino

Abstract

When an analyst queries a multi-camera network, selecting the most relevant camera views to transmit is a challenging problem. We quantify the relevance of events occurring in the video feeds by generating a perceptual rating on a scale of 1-10, obtained from multiple subjects who simulate analysts. The primary objective of this project is to predict the analysts' perceptual rating for a given video feed. We propose a regression-based learning algorithm that computes low-level (background subtraction, optical flow), mid-level (face/person detection), and high-level (action bank, tweets) features from videos. These features are fused using state-of-the-art early and late fusion techniques to predict the perceptual rating. Our regression methods use a leave-one-view-out testing scheme to ensure generalizability to unseen camera views. The proposed method is evaluated on a large-scale high-definition video dataset of about 45 hours of video. We demonstrate promising results, obtaining a mean absolute error of less than one when predicting the human perceptual rating.
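The leave-one-view-out scheme can be sketched in a few lines: every camera view serves as the held-out test set exactly once, so the regressor is always evaluated on a view it never saw during training (the camera names are hypothetical):

```python
def leave_one_view_out(views):
    """Yield (train_views, held_out_view) splits: each camera view is the
    test set exactly once, so results generalize to unseen views."""
    for held_out in views:
        yield [v for v in views if v != held_out], held_out

# Example with four camera views:
splits = list(leave_one_view_out(["cam1", "cam2", "cam3", "cam4"]))
```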

2014 Summer Internships group photo

In the summer of 2014, CBI hosted interns from:

California State University, San Bernardino
Worcester Polytechnic Institute
University of California, Santa Barbara

 

Unconstrained Activity Recognition in an Office Environment

Mentor

Amir M. Rahimi, Prof. B.S. Manjunath, Department of Electrical and Computer Engineering

Student Interns

Christopher Ray Ramirez
California State University San Bernardino
Parker Sankey
California State University San Bernardino

Abstract

We present a systematic approach to automatically detect and classify a limited number of human actions in an office environment. Our setup includes two pan-tilt-zoom (PTZ) network cameras that track faces and recognize people using linear discriminant analysis (LDA). Subjects are tracked in the image plane, and the PTZ camera parameters are updated in real time to keep the person at the center of the image. Our office dataset includes 863 samples covering five actions (interaction between two or more people, walking, sitting, writing on a whiteboard, and getting coffee) from two different viewpoints. A set of spatio-temporal visual features is computed to represent these actions: the DenseTrack features include Histograms of Oriented Gradients (HOG), Histograms of Optical Flow (HOF), Motion Boundary Histograms (MBH), and trajectory data. Using a support vector machine (SVM), we train and test on the DenseTrack features with k-fold cross-validation, achieving an average accuracy of 67%.
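The LDA used for person recognition reduces, in the two-class case, to Fisher's discriminant direction w ∝ Sw⁻¹(m1 − m2). A small sketch on toy feature vectors (note that `np.cov` scales each scatter matrix by a constant, which does not change the direction; the data are illustrative stand-ins for face features):

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_direction(X1, X2):
    """Two-class Fisher/LDA direction: w proportional to Sw^-1 (m1 - m2),
    where Sw is the within-class scatter (up to a constant scale)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

# Toy stand-ins for face feature vectors of two identities:
X1 = rng.normal(0, 1, (40, 5))
X2 = rng.normal(2, 1, (40, 5))
w = fisher_direction(X1, X2)
proj1, proj2 = X1 @ w, X2 @ w   # 1-D projections; the classes separate along w
```

A multi-class LDA generalizes this by maximizing between-class over within-class scatter across all identities at once.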

Multiscale Modeling of Biological Networks

Mentor

Xuan Hong Dang, Sourav Medya, Hongyuan You, Kyoungmin Roh, Professor Ambuj Singh, Department of Computer Science

Student Interns

Kara Goodman
California State University San Bernardino
Austin Piers
University of California, Santa Barbara

Abstract

A genetic network consists of gene expression levels and the genes' underlying protein-protein interaction (PPI) network. The project's goal is to identify a small number of sub-network biomarkers within three genetic networks that predict a phenotype. Our data consist of microarray data from breast and liver cancer patients, as well as cell proliferation data from Caenorhabditis elegans. The collected microarray data have features numbering in the low thousands, allowing a very large number of possible sub-networks, which in turn makes the search for discriminative sub-networks NP-hard. Our lab's machine learning algorithms MINDS (MINing Discriminative Subgraphs) and SNL (Sub-Network spectral Learning) are two methods that overcome this intractability. MINDS performs Metropolis-Hastings (MH) sampling to discover discriminative sub-networks that are used to create Network Constrained Decision Trees (NCDTs), which classify network snapshots. SNL uses regularized subspace learning under network topology constraints to discover discriminative sub-networks. Both SNL and MINDS reveal influential genetic biomarkers of the underlying phenotype with accuracies above 70 percent.
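The Metropolis-Hastings sampling at the heart of MINDS accepts a candidate with probability min(1, target(candidate)/target(current)). A generic 1-D sketch of the sampler (the Gaussian target here is just a stand-in for a subgraph score landscape; MINDS itself proposes moves in subgraph space):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(log_target, x0, n_steps, proposal_std=1.0):
    """Random-walk Metropolis-Hastings sampler for a 1-D target density."""
    samples, x = [], x0
    for _ in range(n_steps):
        cand = x + rng.normal(0, proposal_std)
        # Accept with probability min(1, target(cand) / target(x)).
        if np.log(rng.random()) < log_target(cand) - log_target(x):
            x = cand
        samples.append(x)
    return np.array(samples)

# Example target: a Gaussian centred at 3.
chain = metropolis_hastings(lambda x: -0.5 * (x - 3.0) ** 2, x0=0.0, n_steps=20000)
estimate = float(chain[5000:].mean())    # discard burn-in
```

In MINDS the "state" is a candidate subgraph and the proposal adds or removes nodes, but the accept/reject rule is the same.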

Object Recognition Online Demo

Mentor

Niloufar Pourian, Prof. B.S. Manjunath, Department of Electrical and Computer Engineering

Student Interns

Alex Krause
California State University San Bernardino

Abstract

Object recognition is an important problem in computer vision research. In this project we explore a new image-segmentation-based visual feature representation for detecting and localizing objects of interest. New methods are used to incorporate spatial information into visual features to better represent an image or object. An online tool for visualizing these features is developed on the BISQUE (Bio-Image Semantic Query User Environment) platform. The application allows users to select an image from a database and displays the corresponding segmented regions as an overlay outlined by vertices, with localized features mapped to those segments. These features are used to retrieve the database images with the highest visual similarity scores. This enables one to better understand the features and guides feature enhancement. Improved localized visual features benefit many computer vision applications, such as image classification, search and retrieval, and activity recognition.

Segmentation of Light Microscopy Images

Mentor

Renuka Shenoy, Prof. B.S. Manjunath, Department of Electrical and Computer Engineering, Prof. Kenneth Rose, Department of Electrical and Computer Engineering

Student Interns

Marsha Perez
California State University San Bernardino

Abstract

Due to the large volume of data present in many biological datasets, manual annotation is often impractical. This project focuses on accurate semi-automated segmentation of cells in light microscopy images. The data consists of six consecutive sections from the RC1 connectome, a volume of rabbit retinal data from a 0.25 mm diameter section imaged at 70 nm resolution using the Computational Molecular Phenotyping (CMP) paradigm. Each section is probed using a unique molecular marker, resulting in an intensity image indicating protein activity.  The six images together give information on the different cell types and are all used in our formulation. The images are downsampled from their original size (65536-by-65536 pixels) to 2048-by-2048 pixels for ease of computation.

We preprocessed the images using median filtering to denoise the data. Standard k-means with simple manual initialization was used to obtain clusters corresponding to each cell type. The result from k-means suffered from both over- and under-segmentation in different regions. Different strategies were used to address these problems.  We constructed a region adjacency graph to indicate neighboring cell segments. Similar adjacent cell segments were combined based on both size- and shape-based merging criteria. Further, we used marker-based watershed along with the distance transform to separate clumped cells.
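The k-means step with simple manual initialization can be sketched as plain Lloyd iterations; the toy six-channel "pixels" below mimic the six molecular-marker images, and all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, init_centers, n_iter=20):
    """Plain Lloyd's k-means; init_centers plays the role of the
    manual initialization described above."""
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(n_iter):
        # Assign each sample to its nearest center, then recompute centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy data: two well-separated "cell types" in a 6-channel intensity space.
X = np.vstack([rng.normal(0, 0.3, (30, 6)), rng.normal(4, 0.3, (30, 6))])
labels, centers = kmeans(X, 2, init_centers=[X[0], X[-1]])
```

The over- and under-segmentation fixes then operate on the label image this step produces, via the region adjacency graph and the watershed.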

We manually annotated ground truth for cell boundaries using labels obtained from the proprietary Viking software. When validating our results against the ground truth, we observed accurate segmentation of cells.

Open Source Hardware-Software Interface for a Pressure Mat

Mentor

Archith Bency, Prof. B.S. Manjunath, Department of Electrical and Computer Engineering

Student Interns

Sebastian Rojas
Worcester Polytechnic Institute

Abstract

Pressure mats are useful sensors in a variety of applications that range from screening for high pressure body sores and pressure wounds to monitoring for sleep therapy. The pressure mats themselves are affordable, but the currently available hardware-software interfaces required to extract data are expensive, and their proprietary nature impedes improvements. The main objective of this project is to develop an open-source alternative that works with the Beaglebone Black, a widely available single-board computer that runs Linux. The developed interface visualizes the mat measurements as a grayscale image. This alternative will provide an important impetus for a wider adoption of pressure mats in research to solve healthcare and elderly care problems.
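Visualizing the mat as a grayscale image is, at its core, a normalization of the raw sensor matrix. The sketch below assumes 12-bit ADC readings (the BeagleBone Black's on-board ADC is 12-bit, but the actual mat interface and scaling may differ):

```python
import numpy as np

def to_grayscale(readings, max_value=4095):
    """Map a 2-D array of raw ADC pressure readings to an 8-bit grayscale image."""
    r = np.clip(np.asarray(readings, dtype=float), 0, max_value)
    return (r / max_value * 255).astype(np.uint8)

# Example: a 4x4 mat frame with one high-pressure point.
frame = np.zeros((4, 4), dtype=int)
frame[1, 2] = 4095
img = to_grayscale(frame)
```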

In the summer of 2013, CBI hosted interns from:

Dos Pueblos High School in Santa Barbara
California State University, San Bernardino
Brown University
University of California, Santa Barbara

Confocal Image Analysis

Mentor

Renuka Shenoy

Student Interns

Brandon Ringlstetter
California State University, San Bernardino

Abstract

The purpose of this project is to analyze the spatial statistics of cells in confocal images of rabbit retina. These images allow examination of different cell types at various optical slices in a section of rabbit retina. We seek insight into the presence and location of these cell types in the ganglion cell layer and the inner nuclear layer of the retina, using images of sections stained with various macromolecule and micromolecule markers. First, we segment the images using a combination of the mean-shift algorithm and morphological processing. The segmented cells are then classified based on cell signatures from the markers; depending on the markers used, classification is done either by examining the intensity of specific markers in the segmented cell or by clustering. Statistics are collected from all the classified cells, making it possible to analyze spatial patterns that occur among the cells in the images. We approximate our segmented, classified data as a marked point process and use Ripley's K function to examine the magnitude of clustering at various separations, both between cells of a given type and between cells of different types. Further, we can inspect the K function curves to determine which combination of markers is optimal for subsequent analysis.
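Ripley's K compares the number of neighbors found within radius r to what a completely random pattern would give. A naive estimate, without the edge correction a real analysis would apply, can be sketched as:

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive estimate of Ripley's K for a 2-D point pattern (no edge correction)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    lam = n / area                           # intensity: points per unit area
    k = []
    for r in radii:
        pairs = (d <= r).sum() - n           # exclude the n self-pairs
        k.append(pairs / (n * lam))
    return np.array(k)

rng = np.random.default_rng(0)
pts = rng.random((200, 2))                   # random pattern in the unit square
K = ripley_k(pts, radii=[0.05, 0.10, 0.15], area=1.0)
```

For a completely random pattern K(r) is close to πr², so clustering or inhibition shows up as K curves above or below that reference.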

TAQOS: A Tweet Analysis Query Ontology System for Topic-Specific Social Media Investigation

Mentor

Petko Bogdanov

Student Interns

Daniel Richman
Dos Pueblos High School

Abstract

The spread of information on social media reflects large-scale trends, such as influenza pandemics and the Arab Spring. Prior research at UCSB has shown that topic-specific networks on Twitter (subnetworks involved in categories like business or sports) exhibit distinct behaviors. Analyzing the characteristics of a given topic-specific network can, for instance, yield more accurate predictions of how a new piece of information will spread. For each Twitter user, we can compute a genotype, a numeric representation of interest in each of several topics. To aid in genotyping users, we develop a system to automatically categorize their tweets into one of five topics: Arts & Culture, Business, Politics, Science & Technology, or Sports. Using the tweet text, we query a database of Wikipedia articles. Based on the results, our system scores the tweet’s connection with each topic. We conducted a large-scale survey online to obtain ground truth information for 6,000 tweets. We then evaluated our system by comparing its results with this ground truth [insert numbers here]. The system can next be applied to deeper analysis of topic-specific networks. As one possible future direction, the classifier’s accuracy can be improved by incorporating additional data, such as text from articles linked to from the tweets.

Embedded 3D Systems For Human Action Recognition​

Mentor

Carlos Torres

Student Interns

Isaac Flores
University of California, Santa Barbara

Abstract

Advances in embedded technology allow sensor networks to be easily deployed, improved, and used in many applications. In particular, cameras can be used intelligently and efficiently by performing application-specific computations on each image and then sending only the interpreted information. The focus of this project is to perform human action recognition within such a sensor network using OMAP/ARM technology and existing frameworks. Kinect cameras are used along with a BeagleBoard-xM microcomputer to track human joints, with the goal of creating a model of how a particular human action can be represented and recognized in real time. Four basic human actions (standing, bending over, walking, and sitting) are used initially to create such a model. The human joints are tracked using PrimeSense NiTE algorithms and OpenNI libraries, which retrieve data from the Kinect. An embedded network provides a low-cost and non-intrusive method for action recognition, which is especially applicable in ICU rooms, where patient monitoring and real-time feedback are very important.

Accurate GPS Image Location Based On Natural Features

Mentor

Dmitry Fedorov

Student Interns

Ryan Kashi
University of California, Santa Barbara

Abstract

By recognizing particular parts of an image, one is often able to determine the exact location where a photo was taken. Applications include geotagging images that carry no GPS information, as well as getting directions based on an image rather than an address. The user provides an image as input, and the feature service of UCSB's Bisque (an environment for handling and analyzing images) is used to compute the scale-invariant feature transform (SIFT) features of the image. Using a nearest neighbor search built on an algorithm designed to quickly search n-dimensional spaces, these features are matched against a database of features collected from 3,016 public geotagged images taken in a 22-by-24 square kilometer region around Santa Barbara. The GPS coordinates with the highest numbers of matches are then weighted, and a location for the photo is produced along with a certainty measure.
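The matching-and-voting stage can be sketched with a nearest-neighbour search plus Lowe's ratio test. The actual system uses an index structure over SIFT descriptors to search quickly, whereas this toy version searches exhaustively; `match_and_vote` and the data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def match_and_vote(query_desc, db_desc, db_gps, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test, followed
    by a vote over the GPS tags of the matched database features."""
    votes = {}
    for q in query_desc:
        d = np.linalg.norm(db_desc - q, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:            # keep only unambiguous matches
            votes[db_gps[i1]] = votes.get(db_gps[i1], 0) + 1
    if not votes:
        return None, 0.0
    best = max(votes, key=votes.get)
    return best, votes[best] / sum(votes.values())   # location + certainty

# Toy example: 10 database features, half tagged at location A, half at B.
db = rng.normal(0, 1, (10, 8))
gps = ["A"] * 5 + ["B"] * 5
location, certainty = match_and_vote(db[:3] + 0.01, db, gps)
```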

Modeling Epistatic Interactions Using Machine Learning Techniques

Mentor

Petko Bogdanov, Nick Beck

Student Interns

Sky Adams
Brown University
Regie Felix
California State University, San Bernardino

Abstract

Epistasis involves interactions within the genome that contribute to the phenotype of an organism. Our main goal is to accurately identify regions of the genome that cause a genetic disorder, such as Alzheimer's disease or autism. Two factors in particular are involved in these interactions: single nucleotide polymorphism (SNP) genotypes and gene expression levels. We compared the ability of five machine learning methods to find the subset of SNPs that significantly correlates with the phenotype. Our results demonstrate that on small synthetic data sets, four of the five methods found the causal SNPs. However, on more realistic data sets, the methods either became infeasible due to long computation times or yielded inaccurate results due to the large number of uncorrelated SNPs in the data. We are currently developing more refined methods that can find a model in which a set of SNPs and gene expression levels in real datasets correlate with the phenotype. This is a complex problem because an extremely large number of SNPs and genes potentially affect disease. By creating a more efficient and accurate method, we hope to better predict the genetic causes of various diseases.

Interactive Cell Segmentation Tool

Mentor

Dmitry Fedorov, Diana Delibaltov

Student Interns

Ryan Williams
California State University, San Bernardino
Bryan Johnson
California State University, San Bernardino

Abstract

The purpose of this project is to produce "...a tool for cell analysis in 3-D confocal microscopy membrane volumes." The tool uses the seeded watershed technique to provide the segmentation and flags uncertain areas for easier identification of regions to be manually corrected by the user. The volumes are of ascidian embryos because of their close relationship to human embryos and their simple layout. We are extending this tool to a web-based format accessible through the BISQUE system, whose many existing analysis modules will make the transition to a web-based tool smooth and standardized. The tool's capabilities will include adding seeds to both existing and new labels (a label is a segmented nucleus), merging labels, and saving data in a lineage for later comparison across multiple 3-D volumes. The project uses the ExtJS and EaselJS libraries to support these capabilities and improve the performance of the JavaScript in the browser. The tool is important for manual correction of the watershed output, providing better nucleus detection and segmentation.

2012 Summer Internship group photo

In the summer of 2012, CBI hosted interns from:

Dos Pueblos High School in Santa Barbara
California State University, San Bernardino
École polytechnique de l'université de Nantes in Nice, France

 

Botanicam system

Mentor

Dmitry Fedorov

Student Interns

Mike Korcha

Abstract

The Botanicam system performs plant image identification backed by the Bisque database. Botanicam's workflow allows a user to upload an image of a plant to the server via the web interface or the mobile application and receive back the plant's information, such as genus, species, and Wikipedia entry. Identification is performed on the server by first computing various image features and then using a trained model to classify the input image. We use a local dataset of bushes from the Coal Oil Point Reserve containing 11 classes, and are adding a new publicly available dataset from CLEF 2011 consisting of several thousand images of leaves, trees, and bushes. Our project consists of improving classification speed and accuracy, automating the model-training process, and accommodating new datasets and data types.

Probabilistic spatial object representation in databases

Mentor

James Schaffer

Student Interns

Cristobal Guerrero

Abstract

Raster-to-vector conversion has traditionally been used to speed up spatial queries, but there has been no work on the case where objects are modeled with spatial uncertainty. Our project is to design and evaluate methods to convert 2D uncertain spatial representations of image objects from raster to vector format. The work will be done in Java, JavaScript, SQL, and GIS relational databases, and the final chosen 2D representation and algorithms will be implemented in the BISQUE system. The main issue in converting a raster image segmentation to a vector data structure is minimizing the error of the represented region. Our uncertainty model builds on the work of Erlend Tøssebro and Mads Nygård; using their methods and models as a base, we fit uncertain vector representations to the original raster data. This conversion will allow us to visualize and query the uncertain extent and center of a cell more effectively than the naive raster representation. The results from this project will be used to design and evaluate generalized models that can capture and query spatial/morphological uncertainty in three or more dimensions, and to assess their impact on traditional biological analysis.

Predicting Visual Attention Under Varying Camera Focus

Mentor

Karthikeyan S.

Student Interns

Taylor Sanchez

Abstract

A saliency map predicts the regions of a photograph (or any visual scene) that capture the visual attention of the viewer. Until recently, most of these predictions were bottom-up approaches using low-level features, which can be reliably computed from images and include bright colors, hard edges, and strong contrast. Relatively new algorithms make use of high-level semantic information, such as face, text, people, and other object detections, to predict visual attention; some of the recent state-of-the-art advances come from Tilke Judd's work at MIT. Apart from high-level semantics, we observe that camera focus plays a significant role in directing visual attention, and our work targets understanding and quantifying that role. With the recently available Lytro camera we can take a snapshot of the complete light field of the scene, which essentially contains multiple images, each with a different focused region. We will have users view all the images while we track their eye movements and fixations, and we then compare the resulting visual attention maps with our predicted saliency maps. The predicted pixelwise saliency map is learned using a support vector machine. Finally, we will discern the role of focus in directing the user's attention, separate from other semantics. This technique could also be applied to create futuristic autofocus algorithms once object detectors are built into commercial cameras.

Computer Vision and Robot Control

Mentor

Carlos Torres

Student Interns

Daniel Richman
Brenna Hensley

Abstract

The Microsoft Kinect is a small, mountable device with both a standard (RGB) camera and an infrared sensor that produces a point cloud. The goal of our project is to implement computer vision algorithms that use both types of image data to detect and track various objects. Ultimately we will track objects (e.g., obstacles and game tokens) in real time to autonomously control an iRobot Create, a small and inexpensive robot intended for educational purposes. A second goal is to incorporate gesture recognition using skeletal tracking so that human users may control the robot.

​Time Series Analysis and Classification

Mentor

Nazli Dereli

Student Interns

Regie Felix and Sophie Darcy

Abstract

This summer we aim to gain a better understanding of time series analysis and classification. A time series is a sequence of data points taken at consistent time intervals. Using the statistical software R, we will cover topics such as decomposition, classification, transformations, model fitting, forecasting, and machine learning techniques such as decision trees and clustering. We will apply these techniques to a variety of datasets to determine significant trends and predict future observations.
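One of the listed topics, decomposition, typically starts with a moving-average trend estimate whose window matches the seasonal period, so the seasonal component averages away. The course work uses R; an equivalent sketch in Python, on a synthetic series with an assumed period of 4:

```python
import numpy as np

def moving_average_trend(series, window):
    """Classical trend estimate in time series decomposition: a moving average
    whose window matches the seasonal period cancels the seasonal component."""
    return np.convolve(series, np.ones(window) / window, mode="valid")

# Toy series: linear trend plus a period-4 seasonal component.
t = np.arange(40, dtype=float)
series = 0.5 * t + np.tile([1.0, -1.0, 2.0, -2.0], 10)
trend = moving_average_trend(series, window=4)
detrended = series[: len(trend)] - trend   # rough seasonal + noise residual
```

Subtracting the trend leaves the seasonal-plus-noise residual, which is the starting point for seasonal estimation and forecasting.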

Improving Part Detection Algorithms using Functional MRI

Mentor

Carter De Leo

Student Interns

Doriane Peresse

Abstract

The literature shows that humans can detect people in images better than machines can. After breaking person detection into a four-step algorithm, we hypothesize that several combinations of humans and/or machines performing these different steps will show that detection is especially effective when humans do the feature extraction.

Based on this analysis, we are trying to find out whether the human brain reacts any differently when it sees human bodies (or human body parts) compared to any other kind of image (objects, blur, etc.). Using functional MRI, we record the brain activity of a subject viewing different types of images.

The next step is to extract features from the functional MRI so as to create our own detection model and, hopefully, obtain better results than existing detection algorithms.

Instance Search on a Large Scale Data Set of Videos

Mentor

Niloufar Pourian

Student Interns

Michael Shabasin

Abstract

An important need in many situations involving video collections (archive video search, personal video organization, surveillance, law enforcement, protection of brand/logo use) is to find more video segments of a specific person, object, or place, given a visual example. We are developing a system that, given a collection of test clips and a collection of queries that each delimit a person, object, or place in some example video, locates for each query the clips most likely to contain a recognizable instance of that entity. The algorithm should be invariant to changes in illumination, viewpoint, and scale. We are investigating a system that works on a large-scale database containing 70,000 video clips taken from different cameras, with 21 topics.

2011 group photo

Priming Effects on Visual Scene Search

Mentor

Carlos Torres

Student Interns

Jared Bruhn

Abstract

The objective of this user study is to determine priming effects on searching cluttered images. In this context, priming is defined as the information given to a subject prior to performing a given task. The experiment is concerned with users' eye movements during search as well as the success of the search, in terms of both accurately finding the object and the time taken to find it. We hypothesize that one of the primer configurations presented to a subject may have a greater impact on search time, pattern, and accuracy; finding such a configuration is an inherent objective. Furthermore, if certain eye patterns are more effective than others, this could support the design of computer algorithms that simulate the successful pattern and improve performance.

The user data is acquired using the SR-Eyelink1000 eye tracker, presenting each subject with one of three primers: a single image of the object to be searched for, a collection of images related to it, or a text description of it. For each of the ten objects, one of ten scenes containing the object is presented. Each object of interest is limited to appear in only one of the scenes; this prevents viewers from guessing what is to come and eliminates bias.

The experimental setup allows for distinct observations on the effect of peripheral vision on the patterns a subject’s eye follows. This may allow researchers to more accurately compare the process a person uses to search an image with the process a computer uses. In addition, the experimental setup allows the proctor to control how much of the image a subject sees: the whole scene or a bounding box (one of two sizes) that allows the subjects to see only the area around their visual focus point.

Evaluation of Video Summarization Algorithms

Mentor

Carter De Leo

Student Interns

Christopher Goldsmith photo
Christopher Goldsmith

Abstract

Video summarization is a technique to discover how much of a video is necessary to show a viewer in order to convey its key content, or similarly, to find the limits of removing video before essential information is lost. Multi-view video summarization seeks to extend this problem beyond a single video to a set of related videos, such as those coming from a surveillance network. Because of the subjective nature of preserving "important" parts of a video, and because of the challenges in presenting content from many different views to a user, it can be difficult to quantitatively compare the quality of results from different summarization approaches. In this work, we present experiments for assessing summarization quality using human feedback. We define a summary as "good" if it captures the common behaviors observed in the network while also pointing out deviations. In our experiments, we create synthetic videos of a road network with objects representing people traversing the paths concurrently. The video is then summarized using several different approaches. Via a website, participants are asked to convey typical paths and anomalies, while indicating whether the summary is an effective representation of the original. Based on the results, conclusions can be drawn about how effective the given summarization algorithms are relative to each other, which can help optimize video summarization techniques for multi-view video.

Update: Christopher Goldsmith received an award for his poster on the work done during his internship at The Emerging Researchers National (ERN) Conference in Science, Technology, Engineering and Mathematics (STEM). This conference is hosted by the American Association for the Advancement of Science (AAAS), Education and Human Resources Programs (EHR) and the National Science Foundation (NSF) Division of Human Resource Development (HRD), within the Directorate for Education and Human Resources (EHR). The conference is aimed at college and university undergraduate and graduate students who participate in programs funded by the NSF HRD Unit, including underrepresented minorities and persons with disabilities. He received 2nd place in the Poster Presentations in the category of Computer Sciences and Information Systems and Computer Engineering. More information can be found here: http://www.emerging-researchers.org

Christopher also went on to receive an LSAMP-BD fellowship (http://www.calstatela.edu/centers/moreprograms/lsamp.html) from the NSF, which entails completion of a master's degree at CSULA in two years with fully paid tuition and a $30,000 yearly salary, and application to PhD programs in the fall of 2013 with the intention of starting a program in the fall of 2014.

GeoMapping The Image Database

Mentor

Golnaz Abdollahian, Dmitry Fedorov

Student Interns

Alex Tovar photo
Alex Tovar

Abstract

GeoMapping the Image Database is a web application that geographically displays the contents of an image repository. The repository is updated with images of plants and their associated annotations by the Botanicam app. Utilizing the Google Maps API, the page communicates with the image server to get the information required to show a plant picture, its position, and all its relevant annotations. It will serve as a platform for rendering the scientific documentation of the plants in a local habitat. The GeoMap web application can also be used as a digital flora walking guide and a visualizer of plant distributions, making it a record-keeper useful to plant researchers and enthusiasts alike.
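
The kind of record the map page needs from the server can be sketched as GeoJSON, a format that mapping front ends (including Google Maps) can consume. The field names below are hypothetical stand-ins, not the actual Bisque/Botanicam schema.

```python
import json

def to_geojson(records):
    """Convert image-repository records (hypothetical schema) into a
    GeoJSON FeatureCollection that a mapping front end can plot."""
    features = []
    for r in records:
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point",
                         # GeoJSON coordinate order is [longitude, latitude]
                         "coordinates": [r["lon"], r["lat"]]},
            "properties": {"image_url": r["url"],
                           "species": r["species"]},
        })
    return {"type": "FeatureCollection", "features": features}

records = [{"lat": 34.41, "lon": -119.85,
            "url": "http://example.org/img/1.jpg",
            "species": "Quercus agrifolia"}]
geo = to_geojson(records)
as_text = json.dumps(geo)   # serialized form sent to the map page
```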

Automatic Plant Recognition for Mobile Applications

Mentor

Golnaz Abdollahian

Student Interns

Kenneth Williams photo
Kenneth Williams
Chris Patten phot
Chris Patten

Abstract

"Plant blindness" is a term introduced by Wandersee and Schussler in 1999 to describe "the inability to see or notice the plants in one's own environment, leading to the inability to recognize the importance of plants in the biosphere and in human affairs." The goal of this project is to develop a mobile application aimed toward increasing awareness and appreciation for plants in our environment. We are developing the mobile application entitled "Botanicam," which is a front-end for handheld mobile devices (e.g, mobile phones, PDAs, and tablets) used to interact with an autonomous plant recognizer located on an external server. The server is capable of identifying the genus, species, and common name of a plant that is sent back to the mobile device which then produces textual and visual results that are useful to the user. Once identified, all relevant plant information (genus, species, common name, a link to further information about the species, etc.) are displayed on the device's screen as the output of the application. This provides a convenient user interface that facilitates the process of image collection and annotation for botanists and enables amateur users to learn about the plants in their environment.

2010 Summer Internship group photo

Proper Identification of Conventional Synapses and Ribbon Synapses

Mentor

Vignesh Jagadeesh

Student Interns

Mark Mata

Abstract

The goal of this project is the proper identification of conventional synapses and ribbon synapses. Conventional synapses were detected by first using a Laplacian of Gaussian filter to identify the unique characteristics of the synapse and then extracting features from the images with a Gabor filter. The filter responses were then classified using a k-nearest neighbor algorithm with high accuracy: 86% of conventional synapses and 88% of non-synapses were properly identified. An attempt was also made to automate the drawing of the synaptic cleft by detecting it automatically. The cleft was filtered and masked several times, resulting in a fairly well drawn cleft. Cleft detection also yielded good results using a k-means algorithm, with 88% of clefts and 81% of non-clefts correctly identified. Finally, ribbon synapses were detected using an MSER approach that yielded good results overall. These results help lay the groundwork for future, more intuitive synapse detection techniques that may be able to classify synapses not just in 2-D, but also in the context of 3-D space.
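
The classification step, k-nearest neighbors over filter-response features, can be sketched as follows. The two-dimensional feature vectors are invented stand-ins for the actual Gabor responses.

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance in filter-response feature space)."""
    d = np.linalg.norm(train_X - x, axis=1)   # distance to every training point
    nearest = np.argsort(d)[:k]               # indices of the k closest
    votes = train_y[nearest]
    return np.bincount(votes).argmax()        # majority label

# Toy feature space: label 1 = synapse-like responses, 0 = background.
train_X = np.array([[0.1, 0.2], [0.0, 0.1], [0.9, 0.8], [1.0, 1.0], [0.2, 0.0]])
train_y = np.array([0, 0, 1, 1, 0])
pred = knn_classify(train_X, train_y, np.array([0.95, 0.9]), k=3)
```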

Creating Automated Methods to Detect, Classify and Count Spines on Primary Dendritic Branches Before and After Treatment

Mentor

Aruna Jammalamadaka

Student Interns

Christopher Douglas

Abstract

Our goal is to create automated methods to detect, classify, and count spines on primary dendritic branches before and after treatment with a particular micro-RNA, to see whether the treatment affects spine type percentages and overall counts. This is important because spine population changes could be related to cognitive disorders such as autism, mental retardation, and Fragile X Syndrome. We work with fetal rat hippocampal neuronal images provided by Dr. Ken Kosik's lab at UCSB and with software called NeuronStudio. Using this software, Chris created an automated method for finding the volume of spines, and then analyzed the spine type classifications provided by NeuronStudio by comparing them to several other classification methods using features such as the shape context descriptor and the elliptical Fourier descriptor. He used cross-validation to classify spines into three categories using both linear discriminant analysis and a simple Euclidean distance classifier, and compared the performance of the two methods.
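
The simple Euclidean distance classification mentioned above can be sketched as a nearest-centroid rule evaluated with leave-one-out cross-validation. The two-feature "spine" vectors are invented for illustration, not real measurements.

```python
import numpy as np

def nearest_centroid_predict(X, y, x_new):
    """Assign x_new to the class whose mean feature vector is closest
    in Euclidean distance (the simple baseline described above)."""
    classes = np.unique(y)
    cents = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(cents - x_new, axis=1))]

def loo_accuracy(X, y):
    """Leave-one-out cross-validation accuracy."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i      # hold out sample i
        hits += nearest_centroid_predict(X[mask], y[mask], X[i]) == y[i]
    return hits / len(y)

# Toy 'spine' features (e.g. head diameter, neck length), 3 classes.
X = np.array([[1.0, 0.1], [1.1, 0.2], [0.1, 1.0],
              [0.2, 1.1], [1.0, 1.0], [1.1, 1.1]])
y = np.array([0, 0, 1, 1, 2, 2])
acc = loo_accuracy(X, y)
```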

Determining Useful Images from Observational Cameras

Mentor

Jiejun Xu, Zefeng Ni

Student Interns

Joseph Minter

Abstract

Cameras used for observational purposes collect high volumes of video data. Unfortunately, not all that is collected is useful. To determine what a viewer finds interesting, we conduct experiments using an eye tracker. Video sequences from two neighboring cameras are displayed simultaneously in a split-screen format, reenacting what is done in practice, and the eye tracker records the user's gaze patterns. Our hypothesis is that viewers will generally focus on one video while briefly referring to the other. Without the viewers' knowledge, the sequences are edited to include several inconsistencies. When a viewer encounters an inconsistency, we expect them to fixate on that video for a longer duration. It is those fixations that may provide hints about patterns of human interest. Demonstrating this phenomenon increases our understanding of human gaze patterns, making the question "what is interesting to a viewer" more explicit. Applying the results in an algorithm, we can automate choosing which video segments are worth displaying to the user, establishing a more concise viewing experience.
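
Segmenting the recorded gaze into fixations and saccades is commonly done with a dispersion threshold (the I-DT family of algorithms). The sketch below is a generic version of that idea, not the eye tracker's own algorithm; the thresholds are arbitrary toy values.

```python
def _dispersion(points):
    """Bounding-box dispersion of a run of gaze points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_disp=30.0, min_len=5):
    """Dispersion-threshold (I-DT style) fixation detection.

    samples: (x, y) gaze points at a fixed sampling rate. A fixation is
    a run of >= min_len samples whose dispersion stays within max_disp;
    returns (start, end) sample-index pairs.
    """
    fixations, i = [], 0
    while i + min_len <= len(samples):
        if _dispersion(samples[i:i + min_len]) <= max_disp:
            j = i + min_len
            # Grow the window while the gaze stays compact.
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

# Toy trace: steady gaze, a saccade, then steady gaze elsewhere.
trace = [(100, 100)] * 6 + [(300, 120), (500, 140)] + [(520, 150)] * 6
fixes = detect_fixations(trace)
```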

Evaluation of the Context-aware Saliency Detection Method

Student Interns

Christine Sawyer photo
Christine Sawyer (SACNAS conference award)

Abstract

 

Visual Saliency is the subjective perceptual quality which makes certain parts of an image to stand out more than others. The traditional measurement of visual saliency generally detects the dominant object in the image. A major drawback of this method is that by mainly focusing on the dominant object, its context in the image is lost. The latest saliency detection method – context-aware saliency – detects not only the dominant object but also its surroundings that adds semantic meaning of the scene. In this project, we provide an extensive evaluation of a recently proposed context-aware saliency detection method. The main contributions of this work are in two folds: 1) subjective evaluation framework utilizing EyeLink 1000 eye-tracking system; 2) creation of a data set to provide ground truth data. A representative data set of 60 images was chosen to display to our 17 experiment participants for 4 seconds each. By using the eye tracker, we capture the human gaze pattern needed to understand the context of a scene and use it as ground-truth to evaluate the implemented context-aware saliency detection method. Through comparing the experiment results to the saliency maps created by the algorithm, we identified the strength and the weakness of the algorithm. In addition, we believe that the human fixation data we have collected will be beneficial to the evaluation of various saliency detection methods.
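
One standard way to score a saliency map against recorded fixations is Normalized Scanpath Saliency (NSS); whether this is the exact measure used here is not stated, so the sketch below is illustrative, with a toy map and toy fixation coordinates.

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the map, then average the
    values at human fixation locations. Higher = better agreement."""
    s = (saliency - saliency.mean()) / saliency.std()
    return float(np.mean([s[r, c] for r, c in fixations]))

# Toy saliency map: a salient blob in the top-left corner.
sal = np.zeros((4, 4))
sal[0, 0] = sal[0, 1] = 1.0
score_on = nss(sal, [(0, 0), (0, 1)])    # fixations landed on the blob
score_off = nss(sal, [(3, 3), (2, 2)])   # fixations landed elsewhere
```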

Finding Ways to Show how to Color Cells in Cell Networks

Mentor

Panuakdet Suwannatat ("Mock")

Student Interns

Rotem Raviv photos
Rotem Raviv
University of California, Santa Barbara

Abstract

Sometimes, while looking at pictures of cell networks, it is necessary to see every cell in the network. Rotem Raviv worked on finding ways to color each cell in the network so that the cells could not only each be seen effectively, but also be visually appealing to users. The problem was approached by varying different parts of the selection process, including how cells were picked to have their colors changed, how the colors were changed, and what was considered a better color. The most successful method was to pick cells in a fixed order, so that all the cells got colored, to pick candidate colors at random, and to use a condensed score to check whether the coloring of the network as a whole had improved.
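
That search loop can be sketched as follows: visit cells in order, try random replacement colors, and keep a change only when a condensed score over the whole network improves. The score used here (the smallest color distance between adjacent cells) is one plausible choice, not necessarily the one used in the project.

```python
import random

def color_dist(a, b):
    """Euclidean distance between two RGB colors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def score(colors, adjacency):
    """Condensed score: the smallest color distance over all adjacent
    cell pairs. Larger is better (neighbors are easier to tell apart)."""
    return min(color_dist(colors[u], colors[v]) for u, v in adjacency)

def recolor(colors, adjacency, tries=200, seed=0):
    """Visit every cell in a fixed order; for each, try random
    replacement colors and keep one only if the overall score improves."""
    rng = random.Random(seed)
    colors = dict(colors)
    best = score(colors, adjacency)
    for cell in sorted(colors):
        for _ in range(tries):
            old = colors[cell]
            colors[cell] = tuple(rng.randrange(256) for _ in range(3))
            s = score(colors, adjacency)
            if s > best:
                best = s            # keep the improvement
            else:
                colors[cell] = old  # revert
    return colors, best

# Two adjacent cells that start out nearly identical in color.
start = {0: (10, 10, 10), 1: (12, 10, 10)}
new_colors, new_score = recolor(start, [(0, 1)])
```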

Developing a Novel Approach to Plant Identification from Photographic Images

Mentor

Golnaz Abdollahian, Diana Delibaltov

Student Interns

Jacob Justus
California State University San Bernardino

Abstract

Jacob worked on developing a novel approach to plant identification from photographic images. In the future this software will be implemented as a smartphone application that will allow a user to take photographs of plants and have useful taxonomical information returned to them. Thus far, ground truth data has been collected and will be used to demonstrate the effectiveness of the various algorithms that have been employed in the plant identification task. These algorithms currently include texture feature extraction and shape registration. In the future, other algorithms will be developed or adapted to aid in the process of plant identification, including algorithms that extract various morphological and geometric features from end-user-supplied images.

In addition to establishing ground truth data, Jacob worked on the design and modification of an XML-based template that allows for the parsing of image metadata. This template was used to annotate the plant images within the Bisque database and, in the future, will allow for accurate querying.
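
Parsing such a template can be sketched with the standard library's ElementTree. The tag names and document below are made up for illustration; the actual Bisque schema differs.

```python
import xml.etree.ElementTree as ET

# A made-up metadata template in the spirit of the one described.
xml_doc = """
<image name="oak_0042.jpg">
  <tag name="genus" value="Quercus"/>
  <tag name="species" value="agrifolia"/>
  <tag name="latitude" value="34.41"/>
</image>
"""

def parse_tags(doc):
    """Return a {tag name: value} dictionary for one annotated image."""
    root = ET.fromstring(doc)
    return {t.get("name"): t.get("value") for t in root.findall("tag")}

meta = parse_tags(xml_doc)
```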

Image Analysis of Magnetic Resonance Imaging (MRI) Brain Scans

Mentor

Karthikeyan, Swapna Joshi

Student Interns

Michael Stephens

Abstract

The project entails image analysis of Magnetic Resonance Imaging (MRI) brain scans. The aim of the project is to identify a region, if any, that distinguishes between psychopaths and non-psychopaths. The MRI scans are characterized by many variables, e.g., age, psychopathy score, etc. Recently, an algorithm called regression-based non-negative matrix factorization (RNMF) was developed and has proved to be a successful tool for analyzing high-dimensional data that contains a regression pattern. RNMF is also useful in identifying a localized region that changes with respect to a regression variable. This technique has been used to analyze regression patterns in brain MRI scans to understand how a subject's score affects the anatomy of the brain. However, it was initially designed to identify the area of regression with respect to only a single pattern. A natural extension of RNMF is to enable the algorithm to recognize and isolate separate regression patterns with respect to many variables (e.g., both age and psychopathy score) that may occur within the input data. The goal of this project is to extend RNMF to multi-output RNMF, which will handle input with multiple regression patterns and identify which regions of the input data change with respect to a particular dependent regression variable. This will be applied to the MRI brain scans to identify multiple regression patterns in the brain data and determine whether these patterns are independent or interrelated.
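
RNMF builds on standard non-negative matrix factorization, V ≈ WH with non-negative factors. The sketch below shows plain NMF via the classic Lee-Seung multiplicative updates on a toy matrix, without the regression term that RNMF adds.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Plain NMF: find W (n x k) and H (k x m), both non-negative,
    with V ~= W @ H, using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        # Multiplicative updates preserve non-negativity by construction.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy data with an exact non-negative rank-2 structure.
V = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 3.0]])
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H)
```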

Hardware Computation in Research

Mentor

Brian Ruttenberg

Student Interns

James Schaffer photo
James Schaffer

Abstract

I joined this project to learn more about the capabilities of hardware computation in research, though my main interests include intelligent agents, machine learning, computer graphics, and physics modeling. Currently, I'm working to develop a database system and application to facilitate access to spatial information related to astroglia in the human retina. The goal is to provide researchers with a smooth process for comparing astrocyte structures in different retinas, making possible a biological understanding of their role in the human eye.

Understanding the spatial relationships of objects in scientific images is of immense interest to researchers, but the analysis, comparison, and visualization of such relationships remains a bottleneck. To address this problem, we present RAKE, a highly accessible visual system to explore, query, and tabulate harvested data. RAKE addresses key concerns about global structural patterns and spatial relationships in images through the scalability of a relational database, the power of GIS spatial values, and the accessibility of direct interaction techniques.
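
The flavor of such spatial queries can be sketched with the standard library's sqlite3; the table and columns below are a minimal stand-in, not the actual RAKE schema (which uses richer GIS spatial types).

```python
import sqlite3

# Minimal stand-in for a spatial table: one row per astrocyte,
# storing its centroid coordinates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cells (id INTEGER, x REAL, y REAL)")
conn.executemany("INSERT INTO cells VALUES (?, ?, ?)",
                 [(1, 10.0, 10.0), (2, 55.0, 40.0), (3, 90.0, 95.0)])

def cells_in_box(conn, xmin, ymin, xmax, ymax):
    """Return ids of cells whose centroid lies inside the query box."""
    rows = conn.execute(
        "SELECT id FROM cells "
        "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ? ORDER BY id",
        (xmin, xmax, ymin, ymax))
    return [r[0] for r in rows]

hits = cells_in_box(conn, 0, 0, 60, 60)
```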

2009 Internship Group photo

Four high school and four undergraduate students participated as part of the Apprentice Researchers (AR) program.

Object Recognition

Mentor

Aruna Jammalamadaka

Student Interns

Chris Wiest

Abstract

The project will focus on existing methods of object recognition. The bag-of-words model, parts-based models, and models based on boosting texton features have gained popularity in the vision community over the past decade. Model performance is tied very closely to the type of descriptors (local and global) driving these models. The goal is to evolve a MATLAB toolbox bringing together open-source implementations of the different models, along with the descriptors driving them. Time permitting, the project will also investigate the applicability of manifold learning in the above-mentioned models of object recognition. The project is part of our effort to investigate the applicability of object recognition models for natural images to biological images (to recognize recurring structures, e.g., synapses).
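
The heart of the bag-of-words model is quantizing local descriptors against a learned codebook and histogramming the assignments. A minimal sketch, with a tiny made-up codebook instead of one learned by clustering:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Bag-of-words: assign each local descriptor to its nearest
    codeword and return the normalized histogram of assignments."""
    # Pairwise distances: (n_descriptors, n_codewords).
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy codebook of 3 'visual words' and 4 local descriptors.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
desc = np.array([[0.1, 0.0], [0.9, 0.1], [1.1, 0.0], [0.0, 0.8]])
h = bow_histogram(desc, codebook)
```

The resulting histogram is the image-level feature fed to a classifier; in practice the codebook comes from k-means over many training descriptors.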

Image Forensics and Tamper Detection

Mentor

Anindya Sarkar

Student Interns

Erick Spaan

Abstract

Digital image forensics is a topic of enormous current interest and involves various challenges, especially with regard to authenticating images and estimating the reliability of image content. With easy-to-use image editing tools, portions of an image are easily cropped and inserted into other images; image resizing is also done, followed by suitable blending, so as to make the insertion of external image content appear perceptually transparent. Seam carving is another state-of-the-art content-aware image resizing method that can be used for removing local regions of interest (e.g., objects). During the course of the summer project, we will strive to improve upon various state-of-the-art tamper detection methods and study their performance even under severe compression attacks. For example, most current re-sampling detection methods fail when the re-sampled image is subjected to mild or severe JPEG compression. Also, when JPEG images at different quality factors are combined, the change is easily captured when a coarser-quality image (lower JPEG quality factor) is inserted into a finer-quality image (higher quality factor). We will look at avenues to tackle the reverse and more difficult problem of inserting a higher-quality image into a relatively poorer-quality JPEG image. Apart from seam-carving-based object removal, there are other techniques that seamlessly remove salient content, e.g., image inpainting approaches. We aim to come up with generic methods to detect and localize image regions that are more likely to correspond to a removed object. The final aim is to develop a holistic view of the challenges that lie ahead in image forensics and to identify the image editing functions (e.g., Photoshop filters) that can be detected using our proposed schemes.

Rapidly-deployed Sensor Networks

Mentor

Carter De Leo

Student Interns

Anina Cooter

Abstract

This project is focused on the development of rapidly-deployed sensor networks. The concept is to be able to enter a new environment without any special modifications and quickly drop any number of self-contained sensors (currently wireless-enabled smart video cameras) without much care in their placement. When the sensors are in place, they should be able to automatically discover their positions relative to each other and start exchanging information about what they can see. This collaboration should enable automatic tracking of interesting objects, like people, through the environment and allow the network to report its results in real time.

An important part of this effort is that each sensor needs to reliably discover when and how its view overlaps with the views of the other sensors in the network. Traditionally, this is accomplished by moving a known calibration pattern, such as a large chessboard, through the scene. Each camera can look for the pattern and report to the network when it is in view. When two or more cameras see the pattern at the same time, they can extract features in their images, such as the corners of the chessboard blocks, and share the results with the other cameras. This allows the network to discover the correspondences between sensors with overlapping views, which is necessary for later computer vision tasks.

In the rapidly-deployable setting, however, moving a calibration pattern through the area is not feasible. To help solve this problem, this project will use infrared lasers to give our sensors the ability to briefly project a pattern onto the scene, allowing overlapping cameras to find their correspondences without relying on outside objects.
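
Once two cameras see the same set of feature points (chessboard corners or projected laser dots), the mapping between their views of a plane can be modeled as a homography. The sketch below shows the standard direct linear transform (DLT) estimate from point correspondences, using toy points rather than real camera data.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous)
    from >= 4 point correspondences, via the standard DLT system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in H's entries.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (flattened) is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# A pure translation by (5, 3) is itself a homography; recover it.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(x + 5, y + 3) for x, y in src]
H = homography_dlt(src, dst)
```

With real detections, the correspondences are noisy, so this estimate would typically be wrapped in RANSAC and refined nonlinearly.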

Smart Camera Network

Mentor

Thomas Kuo

Student Interns

Eli Flores

Abstract

One goal of a network of smart cameras is to track objects across the views of the cameras. This means that a person appearing in one camera can be identified in another camera even after leaving the views of both for a short period of time. Part of this project involves investigating methods for this type of tracking. Another part involves the physical implementation of the cameras. Our network consists of both ground cameras and aerial cameras mounted on helicopters. Currently the helicopters are remote-controlled, which makes them difficult to operate, so they are being retrofitted with better sensors that will allow them to fly autonomously. The project will include working on the control system that allows a helicopter to hold its position.

Modern Tomographic Imaging Methods

Mentor

Swapna Joshi

Student Interns

Natalie Williams

Abstract

Modern tomographic imaging methods are playing an increasingly important role in understanding brain structure and function, as well as the way in which these change during development, aging, and pathology. Information obtained through the analysis of brain images can be used to explain anatomical differences between normal and pathological populations, as well as to potentially help in the early diagnosis of pathology. Recent studies have shown that approximately 5% of males are characterized by a pattern of antisocial behavior that onsets in early childhood and remains stable across the life span; these men are responsible for 50% to 70% of all violent crimes. The goal of this project is to help psychologists identify patterns that can distinguish psychopaths' brains from normal brains. It is not known whether such men present abnormalities in brain structure: to our knowledge, no quantitative data have been reported on the neuroanatomy of persistent violent offenders with a history of antisocial behavior going back to at least mid-adolescence.

Magnetic Resonance Imaging (MRI)

Mentor

Emre Sargin

Student Interns

Ellen Feldman

Abstract

Recent research suggests that there is a link between psychopathic behavior and brain structure. One method of analyzing this relationship is Magnetic Resonance Imaging (MRI), an imaging technique that allows certain regions of the brain to be visualized. This provides useful information about the structural differences between people exhibiting normal behavior and those who exhibit psychopathic behavior. Furthermore, current computer vision tools can mark these regions on the MRI image. Given these regions, we are interested in measuring their thickness, because thickness is one way of representing the structure. This information is fundamental in identifying people with psychopathic behavior from their MRI images. This project focuses on the extraction of interfaces between brain regions in images taken with the structural MRI technique. The interfaces will then be used to measure the thickness of these regions. We will be working with three main brain regions: gray matter, white matter, and cerebrospinal fluid (CSF).

Prediction and Modeling of the Cytoplasm of Retinal Astrocytes

Mentor

Brian Ruttenberg

Student Interns

Cari

Abstract

Cari is assisting Brian Ruttenberg with the prediction and modeling of the cytoplasm of retinal astrocytes. Astrocytes are a type of glial cell in the retina, and visualizing the complete morphology of the cell is a difficult and cumbersome process. Cari is helping to develop and test a neighborhood classification scheme to predict the extent of astrocyte cytoplasm from GFAP-labeled cells, so that astrocyte interactions can be modeled on a large scale. Cari will quantify and present the results on a series of hand-injected ground truth images.

2008 Summer Interns

A total of 4 undergraduates from the universities below and 4 local high school students participated in the summer internship in 2008.

  • California State University, San Bernardino
  • University of California, San Diego

2008 Research Report

Querying Significant Patterns in Image Database

Mentor

Ambuj Singh (faculty advisor), Vishwakarma Singh

Student Interns

Deepak Bali
California State University, San Bernardino

Abstract

Images are becoming an important data source in many scientific and commercial domains. Analyzing such images often requires retrieving the best subregions matching a given query. A content-based similarity search finds the images closest to a given query image based on the distances between specified feature vectors of the query image and of the images in the collection searched. This is especially useful for biological images when used with the feature extraction methods developed by researchers in the UCSB Algorithms lab. The Quip algorithm approaches the problem in a dynamic way and demonstrates efficient, fast searches.
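
The core of a content-based similarity search can be sketched as a brute-force nearest-neighbor ranking over feature vectors; this is only the naive baseline that indexed approaches like Quip are designed to accelerate. The feature vectors are toy values.

```python
import numpy as np

def top_k(query, features, k=2):
    """Return indices of the k database images whose feature vectors
    are closest (Euclidean) to the query's feature vector."""
    d = np.linalg.norm(features - query, axis=1)
    return np.argsort(d)[:k].tolist()

# Toy 3-D feature vectors (e.g. texture/color statistics).
db = np.array([[0.0, 0.0, 1.0],
               [0.9, 0.1, 0.0],
               [1.0, 0.0, 0.1],
               [0.0, 1.0, 0.0]])
query = np.array([1.0, 0.0, 0.0])
nearest = top_k(query, db, k=2)
```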

A Benchmarking Website

Mentor

B.S. Manjunath (faculty advisor), Boguslaw Obara

Student Interns

Matthew Strader
California State University, San Bernardino

Abstract

Evaluating image analysis algorithms makes it possible to compare the effectiveness of algorithms that attempt to solve the same class of problem. With a comprehensive score that incorporates all of the metrics important to a particular class of problem, one algorithm may be deemed more effective than another. Competition among algorithms facilitates the sharing of ideas and the progression toward better algorithms. A benchmarking website makes image datasets available for algorithm designers to test their image analysis algorithms and receive feedback on how they compare to others.

Evaluating the Performance of 3D Nuclei Detection Algorithms

Mentor

B.S. Manjunath (faculty advisor), Elisa Drelie Gelasca

Student Interns

Aram Acemyan
California State University, San Bernardino

Abstract

In the growing field of biological computing, it has become important to create standard methods of evaluating segmentation and detection algorithms. With many researchers devising their own methods of segmentation, it is important to have a way to score each algorithm and determine whose algorithms perform best. These evaluation algorithms are important for creating benchmarks for evaluation and analysis. The 3D nuclei detection evaluation will be used to rate the results of 3D nuclei detection algorithms.
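
A simple form of such a detection score matches each detected nucleus to an unused ground-truth nucleus within a distance tolerance and reports precision and recall. This is a generic sketch with toy coordinates, not the project's exact scoring rule.

```python
def evaluate_detections(detections, ground_truth, tol=2.0):
    """Greedily match each detection to an unused ground-truth nucleus
    within tol (Euclidean, 3-D); report (precision, recall)."""
    unused = list(ground_truth)
    tp = 0
    for d in detections:
        for g in unused:
            dist = sum((a - b) ** 2 for a, b in zip(d, g)) ** 0.5
            if dist <= tol:
                unused.remove(g)   # each ground-truth nucleus matches once
                tp += 1
                break
    precision = tp / len(detections)
    recall = tp / len(ground_truth)
    return precision, recall

gt = [(10, 10, 5), (30, 12, 5), (50, 40, 6)]
dets = [(11, 10, 5), (51, 41, 6), (80, 80, 2)]   # last one is spurious
p, r = evaluate_detections(dets, gt)
```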

Ganglion Cell Ground Truths

Mentor

Steven Fisher (faculty advisor), Chris Banna

Student Interns

Jonathan Okerblom
University of California, San Diego

Abstract

Retinal ganglion cells carry the pre-processed impulses of the retina to the brain where vision is finally perceived. Our project aims to identify changes in retinal ganglion cell morphology induced by an experimental mouse model of retinal detachment. Historically, immunohistochemical (IHC) techniques have labeled growth associated protein upregulation in ganglion cells, but have had minimal success identifying morphological changes, especially in mice. This project aims to take a much closer look at ganglion cells in the mouse model to determine their true morphological nature after detachment.

group photo of the 2007 interns

A total of 12 undergraduates from the universities below and 2 local high school students participated in the 2007 summer internship program.

  • Polytechnic University of Puerto Rico
  • California State University, San Bernardino
  • California Polytechnic Institute
  • Allen Hancock College
  • University of California, Santa Barbara

2007 Research Report

Studying Abnormal Phosphorylation of Microtubule Associated Protein Tau

Mentor

Stu Feinstein (faculty advisor), Erkan Kiris

Student Interns

Jonathan Okerblom
Allen Hancock College

Abstract

Studying abnormal phosphorylation of the microtubule-associated protein tau, which has long been associated with Alzheimer's disease and related dementias.

Final Poster 

Develop a Method for the Segmentation of Cone Photoreceptors in Cross Section of Retinal Images Based on Hough Transform and Fast Marching

Mentor

B.S. Manjunath (faculty advisor), Luca Bertelli

Student Interns

Nicholas Navaroli
California State University, San Bernardino

Abstract

Final Poster 

Creating Ground Truth for Horizontal Cells from 3-D Confocal Microscope Retinal Images.

Mentor

Ambuj Singh (faculty advisor) , Nick Larusso

Student Interns

Albert Garcia
California State University, San Bernardino

Abstract

Final Poster

Developing an Evaluation Method for Various Segmentation Algorithms to Assess the Performance of the Segmentation Algorithms

Mentor

B.S. Manjunath (faculty advisor), Elisa Drelie Gelasca

Student Interns

Jose Freire
Jose Freire
California State University, San Bernardino

Abstract

Developing an evaluation method for various segmentation algorithms to assess their performance, by comparing the segmentation results to manually obtained segmentations through the implemented quality evaluation measures.

Final Poster

Testing R-tree package — a data structure designed to index data with multiple dimensions

Mentor

B.S. Manjunath (faculty advisor), Zhiqiang Bi

Student Interns

Matthew Strader
Matthew Strader
California State University, San Bernardino

Abstract

Final Poster

Creating Ground Truth for Microtubule Tracking and Testing Various Microtubule Tracing/Tracking Methods to Improve the Performance of the Methods

Mentor

Leslie Wilson (faculty advisor), Emin Oroudjev

Student Interns

Stephanie Perez
Stephanie Perez
California State University, San Bernardino

Abstract

Final Poster

Integrating Currently Developed Image Analysis Tools to the Graphical User Interface for Microtubule Tracking/Tracing

Mentor

B.S. Manjunath (faculty advisor), Ken Rose (faculty advisor), Emre Sargin, Alphan Altinok

Student Interns

Sachitra Udunuwarage
Sachithra Udunuwarage and Steven Parker
California State University, San Bernardino

Abstract

Final Poster

Developing a Search Engine that Queries Databases and Extracting Information on a Specific Word Relating to the Biological Images

Mentor

B.S. Manjunath (faculty advisor), Boguslaw Obara, Austin Peck

Student Interns

mar-iam nieves
Mar-Iam Nieves​
Polytechnic University of Puerto Rico
Sadot Banuet
California State University, San Bernardino

Abstract

Final Poster

Ground Truth for Evaluating an Automated Program to Segment Nuclei in the Inner Nuclear Layer of the Retina

Mentor

Chris Banna

Student Interns

Miranda Kapin (high school student)

Abstract

The goal was to provide ground truth for evaluating an automated program to segment nuclei in the inner nuclear layer of the retina. Miranda started by learning the different layers of the retina. Next, she progressed to enucleating the eye, sectioning the eye, and finally applying antibodies to visualize the different layers of the retina. She then watched and learned how the images were taken on a laser scanning confocal microscope. To provide the ground truth, Miranda painstakingly outlined nucleus after nucleus within the inner nuclear layer of the retina in 10 images. She then repeated the process a second time. This ground truth will be compared to ground truth created by others on the same data set and used to compute intra-person and inter-person errors. This will provide the range of error that an automated program needs to achieve in order to be useful.
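One common way to quantify the intra-person and inter-person errors described above is an overlap score between two binary annotations, such as the Dice coefficient. The sketch below illustrates the idea with two hypothetical 3x3-pixel outlines of the same nucleus; it is an illustrative measure, not necessarily the one used in the project:

```python
def dice(a, b):
    """Dice overlap between two sets of (row, col) pixels; 1.0 = identical."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical outlines of one nucleus: the same annotator's two passes,
# the second shifted one pixel to the right.
first_pass  = {(r, c) for r in range(1, 4) for c in range(1, 4)}  # 9 pixels
second_pass = {(r, c) for r in range(1, 4) for c in range(2, 5)}  # 9 pixels

intra_error = 1.0 - dice(first_pass, second_pass)
print(round(intra_error, 3))  # → 0.333
```

Averaging such scores over all nuclei and annotators gives the intra-person and inter-person error ranges that an automated segmenter must match to be considered useful.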

Final Presentation

Image Segmentation Using Graph Cuts and Its Application to Segmenting Retinal Layers in Confocal Images

Mentor

Nhat Vu

Student Interns

Wei Wu (high school student)

Abstract

The goal was to learn about image segmentation using graph cuts and to apply the algorithm to segment retinal layers in confocal images. Starting with little knowledge of image processing, Wei quickly learned fundamental concepts such as 2D Fourier transforms, image filtering, and high-dimensional feature spaces. Having only minimal programming experience before the apprenticeship program, she applied these concepts in the Matlab programming environment to define edge weights for graph cuts. By the conclusion of the program, Wei successfully applied graph cuts to segment images based on intensity and color.
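As a sketch of the underlying idea (not the Matlab code used in the project), the following minimal Python example segments a 1-D row of pixel intensities by building a graph with terminal (data) weights and neighbor (smoothness) weights, then computing a minimum cut via Edmonds-Karp max-flow. The intensities and weights here are illustrative assumptions:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow; returns the residual capacity matrix."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if residual[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return residual
        # Find the bottleneck along the path, then augment.
        flow, v = float('inf'), t
        while v != s:
            flow = min(flow, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            residual[parent[v]][v] -= flow
            residual[v][parent[v]] += flow
            v = parent[v]

def segment(intensities, smoothness=2):
    """Binary segmentation of a 1-D 'image' via min cut.
    Data term: bright pixels attach to the source (foreground), dark to the
    sink (background); smoothness term: penalty for cutting between neighbors."""
    n = len(intensities)
    S, T = n, n + 1
    cap = [[0] * (n + 2) for _ in range(n + 2)]
    for i, v in enumerate(intensities):
        cap[S][i] = v              # cost of labeling pixel i background
        cap[i][T] = 255 - v        # cost of labeling pixel i foreground
        if i + 1 < n:
            cap[i][i + 1] = cap[i + 1][i] = smoothness
    residual = max_flow(cap, S, T)
    # Pixels still reachable from the source in the residual graph are foreground.
    reachable, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if residual[u][v] > 0 and v not in reachable:
                reachable.add(v)
                q.append(v)
    return [int(i in reachable) for i in range(n)]

labels = segment([200, 210, 90, 30, 20])
print(labels)  # → [1, 1, 0, 0, 0]
```

The smoothness weight discourages isolated label flips; with a large enough smoothness term the cut would prefer a single contiguous boundary, which is what makes graph cuts attractive for layered structures such as the retina.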

Final Presentation

CBI mentors and faculty mentors worked on projects with a teacher from Westlake High School and students from the following universities:

  • California State University
  • University of the Virgin Islands
  • University of California, Santa Barbara

 

A New Image Retrieval System with Efficient Indexing and Active Learning

Mentor

B.S. Manjunath (faculty mentor), Zhiqiang Bi

Student Interns

George Kamaz
California State University, San Bernardino

Local Image Processing Using a Mosaicking Based Framework

Mentor

B.S. Manjunath (faculty mentor), Dmitry Fedorov

Student Interns

Greg Sparks
California State University, San Bernardino

An Extensible Digital Notebook for Scientific Data Gathering

Mentor

B.S. Manjunath (faculty mentor), Kris Kvilekval, August Black, Dmitry Fedorov

Student Interns

Glenn Hollander
California State University, San Bernardino

Imaging the Roles of Tau and Kinesin in Neurodegenerative Disease

Mentor

Stuart Feinstein (faculty mentor), Leslie Wilson (faculty mentor), Austin Peck, Erkan Kiris

Student Interns

Emi Suzuki
University of California, Santa Barbara

Database Support for Retinal Ontologies

Mentor

B.S. Manjunath (faculty mentor), Kris Kvilekval, Jiyun Byun

Student Interns

David Hollingsworth
California State University, San Bernardino
Verleen McSween
University of the Virgin Islands

Improving Segmentation Analysis and Evaluation

Mentor

B.S. Manjunath (faculty mentor), Luca Bertelli, Jiyun Byun

Student Interns

Julius Aubain
University of the Virgin Islands

Visual Vocabulary (VIVO): Online Biomedical Image Mining

Mentor

Ambuj Singh (faculty mentor), Arnab Bhattacharya, Vebjorn Ljosa

Student Interns

Adrian Westall and Brian Strader
California State University, San Bernardino

Glial Scar Formation in Retinoschisin (RS) Knockout Mice

Mentor

Steven K. Fisher (faculty mentor), Geoffrey P. Lewis

Student Interns

Gabe Luna
University of California, Santa Barbara

Retinal Penetration of Avastin (Bevacizumab) Following Intravitreal Injection

Mentor

Steven K. Fisher (faculty mentor), Geoffrey P. Lewis, Ethan Chapin

Student Interns

Carol Cortez (teacher)
Westlake High School

Nine undergraduate students from the four universities below and two high school students participated in the 2005 internship program:

  • California State University, San Bernardino
  • California State University, Fresno
  • University of the Virgin Islands
  • University of California, Santa Barbara

2005 Research Report

Invisible Embedding of Meta-Data into Biological Images

Mentor

B.S. Manjunath (faculty advisor), Kaushal Solanki, Kenneth Sullivan

Student Interns

George Kaymaz
CSU San Bernardino

Edge Detection and Feature Extraction of Retinal Images

Mentor

B.S. Manjunath (faculty advisor), Baris Sumengen

Student Interns

Alexander David
University of the Virgin Islands

Visual Vocabulary Construction Using Principal Component Analysis

Mentor

Ambuj Singh (faculty advisor), Vebjorn Ljosa

Student Interns

David Renteria
CSU San Bernardino

Retinal Image Production & Analysis: Sample Preparation and Nuclei Counting Tool

Mentor

B.S. Manjunath (faculty advisor), Steven Fisher (faculty advisor), Mark Verardo, Jiyun Byun

Student Interns

Danielle Izaak
University of the Virgin Islands

Time Series Analysis on Microtubule Behavior

Mentor

Ambuj Singh (faculty advisor), Arnab Bhattacharya

Student Interns

Richard Rivera
CSU San Bernardino

Accurate Detection of Microtubules in Image Series

Mentor

Kenneth Rose (faculty advisor), Alphan Altinok

Student Interns

JieJun Xu
CSU Fresno

High Resolution Imaging of Microtubules using DIC and AFM

Mentor

Stu Feinstein (faculty advisor), Austin Peck

Student Interns

Elliot Meeer
UC Santa Barbara

Enabling Microscopy in the Macro Scale: The Imaging Wall in the Biological Laboratory

Mentor

B.S. Manjunath (faculty advisor), Dmitry Fedorov, Kristian Kvilekval

Student Interns

Daniel Wee-Kuok Chieng
CSU San Bernardino

Enabling Rapid Data Ingest: The Scientist Digital Notebook

Mentor

B.S. Manjunath (faculty advisor), Dmitry Fedorov, Kristian Kvilekval

Student Interns

Daniel Havey
CSU San Bernardino

Ten students from the four universities below participated in the 2004 Summer Intern Program:

  • California State University, San Bernardino
  • California State University, Channel Islands
  • University of California, Santa Barbara
  • California State University, Fresno

2004 Research Report

Automated Feature Extraction from Retinal Images

Mentor

Sreenivasa Rao Jammalamadaka (faculty advisor), Samuel Frame

Student Interns

Victor Arellano
California State University, San Bernardino

Creating a Web Accessible Interface to an Image Database

Mentor

B.S. Manjunath (faculty advisor), Zhiqiang Bi

Student Interns

Timothy Berger
California State University, San Bernardino

A Data Storage, Processing, and Retrieval System for Microtubule Tracking Data

Mentor

Ambuj Singh (faculty advisor), Arnab Bhattacharya

Student Interns

Robert Coulier
California State University, Channel Islands

Effects of Different Tau Isoforms on Kinesin-driven Motility: Implications for Axonal Transport and Neurodegenerative Disease

Mentor

Stu Feinstein (faculty advisor), Austin Peck

Student Interns

Riki Stevenson
California State University, Channel Islands

Characterizing Highly Ordered Pyrolytic Graphite Using Atomic Force Microscopy

Mentor

Sanjoy Banerjee (faculty advisor), Brian Piorek

Student Interns

Michael Conry
California Institute of Technology

Images of Retinal Detachment on a Mouse Model

Mentor

Steven K. Fisher (faculty advisor), Mark Verardo

Student Interns

Lorraine Dansie
California State University, San Bernardino

Warehousing and Integration of Biological Databases

Mentor

Ambuj Singh (faculty advisor), Vebjorn Ljosa

Student Interns

Kevin Hawkins
University of California, Santa Barbara

Search Engines for Images of Retinas Exposed to Hyperoxic and Normoxic Conditions

Mentor

B.S. Manjunath (faculty advisor), Laura Boucheron

Student Interns

Joriz De Guzman
California State University, San Bernardino

Clustering Bio-images based on Texture

Mentor

Ken Rose (faculty advisor), Alphan Altinok

Student Interns

Jie Jun Xu
California State University, Fresno

Data Hiding in Biology Images

Mentor

B.S. Manjunath (faculty advisor), Ken Sullivan

Student Interns

David Wheland
University of California, Santa Barbara