2014 Internship Projects

2014 Summer Internships group photo

In the summer of 2014, CBI hosted interns from:

California State University, San Bernardino
Worcester Polytechnic Institute
University of California, Santa Barbara


Projects

Unconstrained Activity Recognition in an Office Environment

Mentor

Amir M. Rahimi, Prof. B.S. Manjunath, Department of Electrical and Computer Engineering

Student Interns

Christopher Ray Ramirez
California State University San Bernardino
Parker Sankey
California State University San Bernardino

Abstract

We present a systematic approach to automatically detect and classify a limited number of human actions in an office environment. Our setup includes two Pan-Tilt-Zoom (PTZ) network cameras that track faces and recognize people using linear discriminant analysis (LDA). The subjects are tracked in the image plane, and the PTZ camera parameters are updated in real time to keep the person at the center of the image. Our office dataset includes 863 samples covering five actions (interaction between two or more people, walking, sitting, writing on a whiteboard, and getting coffee) from two different viewpoints. A set of spatio-temporal visual features is computed to represent these actions. The DenseTrack features include Histogram of Oriented Gradients (HOG), Histogram of Optical Flow (HOF), Motion Boundary Histogram (MBH), and trajectory data. Using a support vector machine (SVM), we train and test on the DenseTrack features with k-fold cross-validation, achieving an average accuracy of 67%.
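
For illustration, a minimal sketch of the classification stage is shown below: a k-fold cross-validated SVM trained on precomputed feature vectors with scikit-learn. The file names, feature dimensionality, and number of folds are assumptions for the example, not settings taken from the project.

```python
# Sketch: k-fold cross-validated SVM on precomputed DenseTrack-style features.
# Assumes feature extraction has already been done; file names, feature
# dimensions, and the fold count are illustrative, not project settings.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one row of spatio-temporal features per video clip; y: action label 0..4
X = np.load("densetrack_features.npy")   # hypothetical file, shape (863, feature_dim)
y = np.load("action_labels.npy")         # hypothetical file, shape (863,)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print("Per-fold accuracy:", scores)
print("Mean accuracy: %.2f" % scores.mean())
```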

Multiscale Modeling of Biological Networks

Mentor

Xuan Hong Dang, Sourav Medya, Hongyuan You, Kyoungmin Roh, Professor Ambuj Singh, Department of Computer Science

Student Interns

Kara Goodman
California State University San Bernardino
Austin Piers
University of California, Santa Barbara

Abstract

A genetic network consists of gene expression levels and the genes’ underlying protein-protein interaction (PPI) network. The project’s goal is to identify a small number of sub-network biomarkers within three genetic networks that predict a phenotype. Our data consists of microarray data from breast and liver cancer patients, as well as cell-proliferation data from Caenorhabditis elegans. The collected microarray data has features numbering in the low thousands, allowing for a large number of possible sub-networks, which in turn makes the search for discriminative sub-networks NP-hard. Our lab’s machine learning algorithms, MINDS (MINing Discriminative Subgraphs) and SNL (Sub-Network spectral Learning), are two methods that overcome this intractability. MINDS performs Metropolis-Hastings (MH) sampling to discover discriminative sub-networks that are used to create Network Constrained Decision Trees (NCDTs), which classify network snapshots. SNL uses regularized subspace learning under network topology constraints to discover discriminative sub-networks. Both SNL and MINDS reveal influential genetic biomarkers of the underlying phenotype with accuracies above 70 percent.
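
To make the sampling step concrete, here is a minimal Metropolis-Hastings sketch over sub-networks of a toy PPI graph. The discriminative score, the add/remove proposal, and the toy data are illustrative assumptions, not the MINDS implementation itself.

```python
# Sketch: Metropolis-Hastings sampling of sub-networks, in the spirit of MINDS.
# The adjacency dict, scoring function, and proposal are illustrative only.
import random

def discriminative_score(nodes, expr_case, expr_control):
    """Toy score: mean absolute expression difference between phenotypes
    over the genes in the candidate sub-network (illustrative assumption)."""
    return sum(abs(expr_case[n] - expr_control[n]) for n in nodes) / len(nodes) + 1e-6

def propose(adj, nodes):
    """Proposal: randomly drop a member gene or add a neighboring gene."""
    nodes = set(nodes)
    if len(nodes) > 2 and random.random() < 0.5:
        nodes.discard(random.choice(sorted(nodes)))
    else:
        frontier = {nb for n in nodes for nb in adj[n]} - nodes
        if frontier:
            nodes.add(random.choice(sorted(frontier)))
    return nodes

def mh_sample(adj, expr_case, expr_control, start, steps=10000):
    """Metropolis-Hastings walk over sub-networks; returns the best one seen."""
    current = set(start)
    best = current
    best_score = discriminative_score(current, expr_case, expr_control)
    for _ in range(steps):
        candidate = propose(adj, current)
        s_cur = discriminative_score(current, expr_case, expr_control)
        s_new = discriminative_score(candidate, expr_case, expr_control)
        # Accept with probability min(1, s_new / s_cur); treating the proposal
        # as symmetric is a simplifying assumption here.
        if random.random() < min(1.0, s_new / s_cur):
            current = candidate
        if s_new > best_score:
            best, best_score = candidate, s_new
    return best, best_score

# Example with a toy PPI network and expression values:
adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
case = {"A": 2.0, "B": 5.0, "C": 4.5, "D": 1.0}
ctrl = {"A": 2.1, "B": 1.0, "C": 1.2, "D": 0.9}
print(mh_sample(adj, case, ctrl, start={"A", "B"}, steps=2000))
```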

Object Recognition Online Demo

Mentor

Niloufar Pourian, Prof. B.S. Manjunath, Department of Electrical and Computer Engineering

Student Interns

Alex Krause
California State University San Bernardino

Abstract

Object recognition is an important problem in computer vision research. In this project, we explore a new segmentation-based visual feature representation for detecting and localizing objects of interest. New methods are used to encode spatial information into the visual features to better represent an image or object. An online tool for visualizing these features was developed using the BISQUE (Bio-Image Semantic Query User Environment) platform. The application allows users to select an image from a database; it then displays the corresponding segmented regions, outlined by their vertices, with the localized features mapped onto those segments. These features are used to retrieve the database images with the highest visual similarity scores. This makes the features easier to interpret and guides their further enhancement. Improved localized visual features benefit many computer vision applications, such as image classification, search and retrieval, and activity recognition.
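
The retrieval step can be pictured as ranking database images by the similarity of their feature vectors to those of the query. The cosine-similarity sketch below is an illustrative stand-in rather than the BISQUE module itself; the feature arrays (random stand-ins here) are assumed to come from the segmentation-based descriptors described above.

```python
# Sketch: rank database images by visual similarity to a query image,
# using cosine similarity over precomputed feature vectors.
# The segmentation-based feature extraction is assumed to happen elsewhere;
# array shapes and the random data are illustrative only.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_by_similarity(query_feat, db_feats):
    """Return database indices sorted from most to least similar."""
    scores = [cosine_similarity(query_feat, f) for f in db_feats]
    return sorted(range(len(db_feats)), key=lambda i: scores[i], reverse=True)

# Example with random stand-in features (one 256-D vector per image)
rng = np.random.default_rng(0)
db = rng.random((100, 256))
query = rng.random(256)
print(rank_by_similarity(query, db)[:5])  # indices of the 5 most similar images
```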

Segmentation of Light Microscopy Images

Mentor

Renuka Shenoy, Prof. B.S. Manjunath, Department of Electrical and Computer Engineering, Prof. Kenneth Rose, Department of Electrical and Computer Engineering

Student Interns

Marsha Perez
California State University San Bernardino

Abstract

Due to the large volume of data in many biological datasets, manual annotation is often impractical. This project focuses on accurate semi-automated segmentation of cells in light microscopy images. The data consist of six consecutive sections from the RC1 connectome, a volume of rabbit retina 0.25 mm in diameter imaged at 70 nm resolution using the Computational Molecular Phenotyping (CMP) paradigm. Each section is probed with a unique molecular marker, resulting in an intensity image that indicates protein activity. Together, the six images provide information on the different cell types, and all of them are used in our formulation. The images are downsampled from their original size (65536-by-65536 pixels) to 2048-by-2048 pixels for ease of computation.

We preprocessed the images with median filtering to denoise the data. Standard k-means clustering with simple manual initialization was then used to obtain clusters corresponding to each cell type. The k-means result suffered from both over- and under-segmentation in different regions, and several strategies were used to address these problems. We constructed a region adjacency graph to identify neighboring cell segments and merged similar adjacent segments using size- and shape-based criteria. Finally, we applied marker-based watershed on the distance transform to separate clumped cells.
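
A condensed version of this pipeline, using off-the-shelf scikit-image, SciPy, and scikit-learn routines, might look like the sketch below; the filter size, number of clusters, chosen cluster, and peak-finding parameters are placeholders rather than the values used in the project.

```python
# Sketch of the segmentation pipeline: median filtering, k-means clustering on
# the six CMP channels, then marker-based watershed on the distance transform
# to split clumped cells. Parameter values are illustrative only.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation, feature
from sklearn.cluster import KMeans

def segment_section(stack):
    """stack: (6, H, W) array, one downsampled CMP channel per section."""
    # 1. Denoise each channel with a median filter.
    denoised = np.stack([filters.median(ch, morphology.disk(3)) for ch in stack])

    # 2. Cluster pixels by their 6-channel intensity profile (one cluster per cell type).
    h, w = stack.shape[1:]
    pixels = denoised.reshape(6, -1).T                      # (H*W, 6)
    labels = KMeans(n_clusters=5, n_init=10).fit_predict(pixels).reshape(h, w)

    # 3. Split clumped cells in one cluster of interest with marker-based watershed.
    mask = labels == 0                                      # hypothetical cell-type cluster
    distance = ndi.distance_transform_edt(mask)
    peaks = feature.peak_local_max(distance, labels=mask, min_distance=10)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    cells = segmentation.watershed(-distance, markers, mask=mask)
    return cells
```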

We manually annotated ground truth for cell boundaries using labels obtained from the proprietary Viking software. When validating our results against the ground truth, we observed accurate segmentation of cells.

Open Source Hardware-Software Interface for a Pressure Mat

Mentor

Archith Bency, Prof. B.S. Manjunath, Department of Electrical and Computer Engineering

Student Interns

Sebastian Rojas
Worcester Polytechnic Institute

Abstract

Pressure mats are useful sensors in a variety of applications, ranging from screening for high-pressure sores and pressure wounds on the body to monitoring for sleep therapy. The mats themselves are affordable, but the currently available hardware-software interfaces required to extract their data are expensive, and their proprietary nature impedes improvement. The main objective of this project is to develop an open-source alternative built around the BeagleBone Black, a widely available single-board computer that runs Linux. The developed interface visualizes the mat measurements as a grayscale image. This alternative will provide an important impetus for wider adoption of pressure mats in research on healthcare and elderly care problems.
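
As a rough sketch of the software side, the snippet below polls a hypothetical frame-reading function and renders each frame as a normalized grayscale image with matplotlib. The read_frame stub, mat dimensions, and ADC range are assumptions, since the actual acquisition code depends on the mat's row/column multiplexing and the BeagleBone Black's analog inputs.

```python
# Sketch: visualize pressure-mat frames as a grayscale image.
# read_frame() is a hypothetical stand-in for the BeagleBone Black acquisition
# code (ADC reads plus row/column multiplexing); mat size and value range are assumed.
import numpy as np
import matplotlib.pyplot as plt

ROWS, COLS = 32, 32          # assumed sensor grid size
MAX_READING = 4095           # assumed 12-bit ADC full scale

def read_frame():
    """Placeholder: return one (ROWS, COLS) array of raw sensor readings."""
    return np.random.randint(0, MAX_READING, size=(ROWS, COLS))

plt.ion()
img = plt.imshow(np.zeros((ROWS, COLS)), cmap="gray", vmin=0.0, vmax=1.0)
for _ in range(100):
    frame = read_frame() / MAX_READING   # normalize raw readings to [0, 1]
    img.set_data(frame)
    plt.pause(0.05)                      # roughly 20 frames per second
```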