2011 Internship Projects

2011 group photo

Projects

Priming Effects on Visual Scene Search

Mentor

Carlos Torres

Student Interns

Jared Bruhn

Abstract

The objective of this user study is to determine priming effects on searching cluttered images. In this context, priming is defined as the information given to a subject prior to performing a given task. The experiment examines the subject's eye movements during search as well as the success of the search, measured by whether the object is found accurately and by how long it takes to find it. It is hypothesized that one of the primer configurations presented to a subject has a greater impact than the others on search time, search pattern, and accuracy; finding that configuration is therefore an inherent objective. Furthermore, if certain eye-movement patterns prove more effective than others, this could support the design of computer algorithms that simulate the successful pattern to improve automated search performance.

User data are acquired with an SR EyeLink 1000 eye tracker while each subject is shown one of three primers: a single image of the object to be searched for, a collection of images related to it, or a text description of it. For each of the ten target objects, one of ten scenes containing that object is presented. Each object of interest appears in only one of the scenes, which prevents viewers from guessing what is to come and eliminates bias.
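
Since the abstract leaves the exact counterbalancing unspecified, the following is only one plausible sketch of how a session could be assembled; the primer labels, object names, and scene filenames are placeholders, not the study's actual stimuli:

```python
import random

PRIMER_TYPES = ["single_image", "related_images", "text_description"]

def build_trials(objects, scenes_by_object, seed=0):
    """Assemble one randomized session: each target object is shown once,
    preceded by one primer type, in one scene that contains that object."""
    rng = random.Random(seed)
    trials = []
    for obj in objects:
        primer = rng.choice(PRIMER_TYPES)          # primer condition for this object
        scene = rng.choice(scenes_by_object[obj])  # one of the candidate scenes
        trials.append({"object": obj, "primer": primer, "scene": scene})
    rng.shuffle(trials)  # randomize presentation order across objects
    return trials

# Hypothetical stimuli: 10 target objects, each with 10 candidate scenes.
objects = [f"object_{i}" for i in range(10)]
scenes = {o: [f"{o}_scene_{j}.png" for j in range(10)] for o in objects}
print(build_trials(objects, scenes)[0])
```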

The experimental setup also permits distinct observations of the effect of peripheral vision on the patterns a subject’s eye follows, which may allow researchers to compare more accurately the process a person uses to search an image with the process a computer uses. In addition, the setup lets the proctor control how much of the image a subject sees: either the whole scene, or a bounding box (one of two sizes) that restricts the subject's view to the area around the current visual focus point.
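
A gaze-contingent display of this kind can be expressed compactly in code. The sketch below is a minimal illustration, not the actual experiment software; the function name, array layout, and the two box sizes are assumptions:

```python
import numpy as np

def gaze_window(scene, gaze_x, gaze_y, box=None):
    """Return the scene as the subject would see it: the full image, or only
    a box-sized region around the tracker-reported fixation point.

    scene          -- HxWx3 image array
    gaze_x, gaze_y -- current fixation point from the eye tracker (pixels)
    box            -- None for whole-scene viewing, or (width, height) in pixels
    """
    if box is None:
        return scene                      # whole-scene condition
    h, w = scene.shape[:2]
    bw, bh = box
    x0, x1 = max(0, gaze_x - bw // 2), min(w, gaze_x + bw // 2)
    y0, y1 = max(0, gaze_y - bh // 2), min(h, gaze_y + bh // 2)
    masked = np.zeros_like(scene)         # everything outside the box is blanked
    masked[y0:y1, x0:x1] = scene[y0:y1, x0:x1]
    return masked

# Two illustrative window sizes standing in for the experiment's two box conditions:
SMALL_BOX, LARGE_BOX = (200, 150), (400, 300)
```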

Evaluation of Video Summarization Algorithms

Mentor

Carter De Leo

Student Interns

Christopher Goldsmith

Abstract

Video summarization is a technique for discovering how much of a video must be shown to a viewer to convey its key content, or, equivalently, for finding how much video can be removed before essential information is lost. Multi-view video summarization extends this problem beyond a single video to a set of related videos, such as those coming from a surveillance network. Because preserving the “important” parts of a video is inherently subjective, and because presenting content from many different views to a user is challenging, it can be difficult to quantitatively compare the quality of results from different summarization approaches. In this work, we present experiments for assessing summarization quality using human feedback. We define a summary as “good” if it captures the common behaviors observed in the network while also pointing out deviations from them. In our experiments, we create synthetic videos of a road network with objects representing people traversing the paths concurrently. The videos are then summarized using several different approaches. Through a website, participants are asked to identify typical paths and anomalies, and to indicate whether each summary is an effective representation of the original. Based on the results, conclusions can be drawn about how effective the summarization algorithms are relative to one another, which can help optimize video summarization techniques for multi-view video.
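
One plausible way to turn the website responses into a per-algorithm score, sketched below, is a precision/recall comparison against the anomalies scripted into the synthetic videos; the response format and scoring rule here are assumptions, not the study's actual protocol:

```python
def score_summary(reported_anomalies, true_anomalies):
    """Compare the anomalies a participant reported after watching a summary
    against the anomalies scripted into the synthetic video.
    Returns (precision, recall, F1)."""
    reported, truth = set(reported_anomalies), set(true_anomalies)
    tp = len(reported & truth)                    # correctly spotted anomalies
    precision = tp / len(reported) if reported else 0.0
    recall = tp / len(truth) if truth else 1.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical responses for one participant viewing two algorithms' summaries:
truth = {"wrong_way_walker", "loiterer"}
print(score_summary({"wrong_way_walker"}, truth))            # algorithm A
print(score_summary({"wrong_way_walker", "jogger"}, truth))  # algorithm B
```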

Update: Christopher Goldsmith received an award for his poster on the work done during his internship at the Emerging Researchers National (ERN) Conference in Science, Technology, Engineering and Mathematics (STEM). The conference is hosted by the American Association for the Advancement of Science (AAAS) Education and Human Resources Programs and the National Science Foundation (NSF) Division of Human Resource Development (HRD), within the Directorate for Education and Human Resources (EHR). It is aimed at college and university undergraduate and graduate students who participate in programs funded by the NSF HRD unit, including underrepresented minorities and persons with disabilities. Christopher took second place in the poster presentations in the category of Computer Sciences and Information Systems and Computer Engineering. More information can be found at http://www.emerging-researchers.org

Christopher also went on to receive an LSAMP-BD fellowship (http://www.calstatela.edu/centers/moreprograms/lsamp.html) from the NSF, which entails completing a master's degree at CSULA in two years with fully paid tuition and a $30,000 yearly salary, and applying to PhD programs in the fall of 2013 with the intention of starting a program in the fall of 2014.

GeoMapping the Image Database

Mentor

Golnaz Abdollahian, Dmitry Fedorov

Student Interns

Alex Tovar

Abstract

GeoMapping the Image Database is a web application that geographically displays the contents of an image repository. The repository is populated with plant images and their associated annotations by the Botanicam app. Using the Google Maps API, the page queries the image server for the information required to show each plant picture, its position, and all of its relevant annotations. It will serve as a platform for rendering scientific documentation of the plants in a local habitat. The GeoMap web application can also be used as a digital flora walking guide and as a visualizer of plant distributions, making it a record-keeping tool useful to plant researchers and enthusiasts alike.
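
The exchange between the map page and the image server might look roughly like the following minimal sketch; the endpoint path, record fields, and Flask framing are assumptions for illustration, not the actual image-server interface:

```python
# Sketch of an image-server endpoint the GeoMap page could query for markers.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the image repository populated by the Botanicam app.
REPOSITORY = [
    {"image_url": "/images/1234.jpg",
     "lat": 34.4140, "lng": -119.8489,   # where the photo was taken
     "annotation": {"common_name": "coast live oak",
                    "species": "Quercus agrifolia"}},
]

@app.route("/geomap/markers")
def markers():
    """Return every annotated, geotagged image as a JSON list;
    the map page places one Google Maps marker per entry."""
    return jsonify(REPOSITORY)

if __name__ == "__main__":
    app.run()
```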

Automatic Plant Recognition for Mobile Applications

Mentor

Golnaz Abdollahian

Student Interns

Kenneth Williams
Chris Patten

Abstract

"Plant blindness" is a term introduced by Wandersee and Schussler in 1999 to describe "the inability to see or notice the plants in one's own environment, leading to the inability to recognize the importance of plants in the biosphere and in human affairs." The goal of this project is to develop a mobile application aimed toward increasing awareness and appreciation for plants in our environment. We are developing the mobile application entitled "Botanicam," which is a front-end for handheld mobile devices (e.g, mobile phones, PDAs, and tablets) used to interact with an autonomous plant recognizer located on an external server. The server is capable of identifying the genus, species, and common name of a plant that is sent back to the mobile device which then produces textual and visual results that are useful to the user. Once identified, all relevant plant information (genus, species, common name, a link to further information about the species, etc.) are displayed on the device's screen as the output of the application. This provides a convenient user interface that facilitates the process of image collection and annotation for botanists and enables amateur users to learn about the plants in their environment.