Figure 1: Target wireless sensor network deployment with traditional sensors, our multimedia sensors (Cj), storage bricks (Si) and compute hubs (Hk)

The ultimate goal of this project is to develop technologies that connect a number of uncoordinated views of a scene in order to deduce a global threat. An uncertain recognition from any single view is checked and validated against potential captures from other angles in order to improve recognition accuracy. Our infrastructure widely deploys high-fidelity video sensors and stores all the captured streams for well-defined durations. These sensors are deployed as necessary in an ad hoc fashion. Smarter sensors and compute hubs can then analyze the stored streams to detect and review emerging threats, or to capture events that are easily missed by simpler, real-time algorithms. Like the mythical creature Hydra, our system is designed to be robust against the loss of video sensing and processing components. We describe a few representative application scenarios enabled by our proposed infrastructure to further motivate this research:

These target applications point to a need to liberally capture the scene. The captured scenes can either be analyzed in real time or stored for retrospective analysis. Real-time processing is inherently not scalable: the system must already be actively processing every view of interest. Storing the streams, on the other hand, affords the luxury of not having to evaluate and recognize a threat quickly; the recognition algorithm can take its time and analyze as many views as necessary. This is especially true for threats such as loitering that unfold over a period of time. Video streams also consume tremendous amounts of storage; a high-fidelity video stream easily consumes 4-5 GB/hour. It is not feasible to transport all of this data out of the deployment for real-time processing (for lack of wireless link capacity, battery resources, etc.). There is a need for in-situ storage so that only interesting scenes need be transported out of the deployment. Given the scale of these deployments, we advocate a fully distributed storage approach that is resilient to the failure of any component.
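To make the data-volume argument concrete, the following back-of-envelope sketch (in Python) compares the bit rate of one such stream against 802.11b link capacity. Only the 4-5 GB/hour figure comes from the text above; the effective link throughput is an assumption:

    # Back-of-envelope check: can a sensor stream its capture out of the
    # deployment in real time? Only the 4-5 GB/hour figure comes from the
    # text; the effective link throughput is an assumption.

    STREAM_GB_PER_HOUR = 4.5                            # midpoint of 4-5 GB/hour
    stream_mbps = STREAM_GB_PER_HOUR * 8 * 1024 / 3600  # ~10.2 Mbps sustained

    WIFI_11B_EFFECTIVE_MBPS = 5.0  # nominal 11 Mbps 802.11b, ~5 Mbps goodput (assumed)

    streams_per_link = WIFI_11B_EFFECTIVE_MBPS / stream_mbps
    print(f"One stream needs ~{stream_mbps:.1f} Mbps; "
          f"an 802.11b link sustains ~{streams_per_link:.2f} full-rate streams")
    # => less than half of a single stream fits on a link,
    #    hence the case for in-situ storage
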

Note that we expect all of the recognition tasks outlined above to be extremely resource intensive, both in their computational and their network requirements. Our goal, then, is to first build the infrastructure that will allow us to explore the nature and complexity of these recognition tasks.

Prof. Surendar Chandra leads the research efforts to build the scalable storage infrastructure, and Prof. Pat Flynn will build the recognition applications on this infrastructure.

Our Multimedia Sensor Storage architecture

Figure 2: Component functionality of the multimedia sensor (Cj), the storage brick (Si) and the compute hub (Hk)

The target system consists of a number of multimedia sensors (Cj), storage bricks (Si) and compute hubs (Hk), potentially connected using ad hoc wireless networking technologies for quick deployment of the system components. We advocate a distributed approach; storage bricks are freely deployed alongside the multimedia sensors to spatially localize the streams and allow for incrementally scalable storage. The sensors and storage bricks self-organize so that the sensors can identify suitable bricks to which content can be replicated and migrated. The compute hubs use these same location mechanisms to find the various streams and build the recognition tasks outlined above.
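The proposal does not fix a particular location mechanism at this point. Purely as an illustration, the following Python sketch shows one plausible self-organization scheme in which bricks periodically advertise their free capacity and a sensor caches these advertisements to pick replication targets; all class names, fields and the 30-second staleness threshold are hypothetical:

    import heapq
    import time
    from dataclasses import dataclass, field

    @dataclass
    class BrickAdvert:
        """Hypothetical advertisement periodically broadcast by a storage brick."""
        brick_id: str
        free_gb: float
        heard_at: float = field(default_factory=time.time)  # receipt timestamp

    class BrickDirectory:
        """Sensor-side cache of recently heard adverts (illustrative only)."""
        TTL = 30.0  # seconds before an advert is considered stale (assumed)

        def __init__(self):
            self.adverts = {}  # brick_id -> most recent BrickAdvert

        def on_advert(self, advert: BrickAdvert):
            self.adverts[advert.brick_id] = advert

        def pick_replicas(self, n: int):
            """Return up to n live bricks with the most free space."""
            now = time.time()
            live = [a for a in self.adverts.values()
                    if now - a.heard_at < self.TTL]
            return heapq.nlargest(n, live, key=lambda a: a.free_gb)

In such a scheme, a sensor would call pick_replicas(k) before writing each new segment, so that failed or departed bricks naturally age out of the directory.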

The minimum functionality required of the sensors, storage bricks and compute hubs is illustrated in Figure 2.

The availability of inexpensive, high-capacity storage, wireless networks and multimedia sensors makes such deployment scenarios increasingly feasible.

Key Research Components and Research Plan

Deploying the system components in our target environments brings its own set of unique challenges: components can fail transiently (e.g., from lack of energy or network connectivity), fail permanently, be moved to new locations, or be deployed in non-ideal locations in the first place. Given the scale of the expected deployment, managing the infrastructure to provide resilient capture and storage for the compute hubs is of the utmost importance. We are developing a fully distributed mechanism: the capture sensors locate bricks that can provide the requisite resiliency and replicate the captured streams to them, while the bricks independently manage stream replication, rejuvenation and migration in order to manage their local resources. The system must not depend on any single component for its correct functioning.
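As a concrete, entirely illustrative rendering of the brick-side behavior described above, the sketch below models rejuvenation: each brick periodically checks that its segments still have the target number of live replicas and re-replicates to the emptiest live peers when a holder has failed. The in-memory model, the replication factor of three and every name here are assumptions, not the project's actual protocol:

    from dataclasses import dataclass, field

    TARGET_REPLICAS = 3  # assumed resiliency requirement

    @dataclass
    class Segment:
        name: str                 # e.g. "Cj03/20050205T0030"
        size_gb: float
        replica_holders: set = field(default_factory=set)  # ids of holding bricks

    class Brick:
        def __init__(self, brick_id: str, capacity_gb: float, peers: dict):
            self.id = brick_id
            self.capacity_gb = capacity_gb
            self.peers = peers    # brick_id -> Brick; stand-in for the network
            self.store = {}       # segment name -> Segment

        def free_gb(self) -> float:
            return self.capacity_gb - sum(s.size_gb for s in self.store.values())

        def maintenance_pass(self, is_alive):
            """Rejuvenate: re-replicate segments whose holders have failed."""
            for seg in self.store.values():
                live = {b for b in seg.replica_holders if is_alive(b)}
                candidates = sorted(
                    (p for pid, p in self.peers.items()
                     if pid not in live and pid != self.id and is_alive(pid)),
                    key=lambda p: p.free_gb(), reverse=True)
                while len(live) < TARGET_REPLICAS and candidates:
                    target = candidates.pop(0)
                    target.store[seg.name] = seg  # "copy" the segment to the peer
                    live.add(target.id)
                seg.replica_holders = live

Migration under local space pressure would follow the same pattern: pick a target, copy the segment, then delete the local copy.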

Research Challenges Addressed

The Hydra system consists of two important components: a self-managing media capture and storage system, and the innovative retrospective analysis algorithms that it makes possible.

Self-managing multimedia capture and storage system

We advocate three important mechanisms: a) an abstraction to manage the voluminous data from media capture; b) a mechanism for the various system components to locate each other, allowing for efficient media transfers while reducing maintenance overhead; and c) mechanisms that allow each component to balance its local resource requirements against global requirements. The size of the multimedia segments drives these policy choices.
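As one example of mechanism (a), a segment abstraction could name each fixed-duration capture by its sensor and time window, so that any component can locate the scene covering a given sensor and time of interest. The duration, naming format and file extension below are assumptions for illustration only:

    from datetime import datetime, timedelta

    SEGMENT_MINUTES = 5  # assumed fixed segment duration

    def segment_name(sensor_id: str, start: datetime) -> str:
        """Name a segment by its sensor and time window (hypothetical scheme)."""
        end = start + timedelta(minutes=SEGMENT_MINUTES)
        return f"{sensor_id}/{start:%Y%m%dT%H%M}-{end:%H%M}.mjpeg"

    print(segment_name("Cj03", datetime(2005, 2, 5, 0, 30)))
    # -> Cj03/20050205T0030-0035.mjpeg
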

Retrospective analysis algorithms explored

Building on the capture and storage infrastructure, we will investigate the following recognition tasks:

Test bed: Continuous multimedia capture and storage

Sensor platform

We are building our own multimedia sensor platform to test our software techniques, with the goal of deploying these sensors throughout our lab space to gain practical experience. The sensors are assembled from inexpensive, off-the-shelf commodity components: a VIA processor and motherboard (a VIA EPIA MII 6000E fanless Mini-ITX motherboard with a 600 MHz Eden processor for the storage bricks, and a VIA EPIA MII 12000 with a 1.2 GHz processor and similar peripheral resources for the sensing nodes), Bluetooth, and a CompactFlash IEEE 802.11b wireless NIC. This particular processor board was chosen because a) the Mini-ITX motherboards are small; b) the x86-compatible processor lets us run our OS of choice, FreeBSD/Linux; c) it supports FireWire/IEEE 1394 for connecting high-fidelity image and audio capture devices; d) it has a CompactFlash slot on the motherboard itself; and e) it is inexpensive (this hardware costs around $500, including the case). We built our first prototypes in the summer of 2004. We are also investigating upcoming Nano-ITX motherboards as well as Oxford AV940 multimedia processor boards for our storage and capture platforms, respectively.

Presently, these sensors continuously monitor Surendar's office and the Experimental Systems lab. Check out mmsensor01, mmsensor02 and mmsensor03.

Sponsors

We gratefully acknowledge generous support from the National Science Foundation (NSF) and Defense Intelligence Agency (DIA).


Surendar Chandra
Last modified: 02/05/2005 0:30