Common Test Case
Koen Debosschere - firstname.lastname@example.org
Abstract : This document reports on task 1.1: visits at the companies of the user group by the partners of the consortium. One of the objectives of this task is: Choice of one common test case that will be worked out in detail during the project. This case should be chosen to be as representative as possible, and its size should be such that it can be developed in the foreseen timeframe, integrating the results obtained during the first 2 years. The output of this activity is deliverable D1.3: description of typical applications that will be used as lighthouse and of the chosen common test case.
Keywords: video surveillance, SEESCOA, common test case
The objective of the SEESCOA case study has been further refined to:
Reproduce in a controlled environment a representative number of situations observed during the various interviews with members of the embedded systems user group.
Have an operational environment in which to experiment with methods and tools that are conventionally used to develop embedded systems.
Have full access to a real life embedded system in an operational setting in order to evaluate a high level development system (e.g. Java based) and a software architecture based methodology.
Have a showcase in which to illustrate the results of the project to the embedded systems community.
The typical applications that we have seen during our visits are described in deliverable D1.2: State of the Art in Software Engineering for Embedded Systems in Flanders. This list is not complete, in the sense that the companies visited build systems other than these, and that there are more than six companies building embedded systems in Flanders.
Digital video + audio
TVs; set-top boxes
High-end printers; printer servers
Battery management systems
This list can be clustered in a number of domains:
Communication: modems, network devices, telephone switches
Signal processing: digital audio, TVs, set-top boxes, hearing devices
Imaging: digital video, high-end printers, printer servers, display systems
Wireless: GPS, remote controls, battery management systems
These are the lighthouse applications we want to use in our project: an embedded system that processes images or signals, can be mobile, and communicates via a network. Furthermore, the functionality of these systems largely depends on the software, has some real-time constraints, and is developed by teams that are not necessarily all at the same location. Future systems might require live updates and remote diagnostics over the network.
Given these characteristics of the lighthouse applications, we have selected a common case that features most of them: a smart video surveillance system.
Conventional, commercially available video surveillance systems can hardly be called smart. In fact, they tend to consist of a closed circuit TV network with generally very poor intelligence in the cameras, and most of the control centralized in the controlling unit. Remote control of some elementary functions (zooming, panning) is typically available. The more sophisticated systems perform bandwidth optimization (communication and/or recording) by applying compression techniques, using motion detection, etc.
It is our conjecture that this relative lack of intelligence in existing video surveillance systems offers an excellent opportunity to define a case that makes sense from a project perspective, but which might ultimately constitute a meaningful commercial venture.
The basic concept of the smart video surveillance system is depicted in the figure. An intelligent surveillance system should consist of pluggable nodes, i.e. hardware units that can be configured dynamically (hot-plugged) in a network that supports a variety of protocols (e.g., a LAN). This means that it must be possible to add any type of node to the network; a newly added node should identify itself to the other nodes connected to the network and start cooperating with them.
At this stage we limit ourselves to three types of nodes: camera nodes, controller nodes and archival nodes. The architecture should however provide a framework for extension of the node concept (e.g. security nodes).
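The hot-plug behaviour described above can be sketched in code. The following Java fragment is only an illustration: the names (Node, Network, plugIn) are our own and are not taken from any SEESCOA deliverable. When a node is plugged in, it identifies itself to all peers already on the network, and learns about them in turn.

```java
import java.util.ArrayList;
import java.util.List;

enum NodeType { CAMERA, CONTROLLER, ARCHIVAL }

/** A pluggable node announces its type and identity when it joins. */
interface Node {
    String id();
    NodeType type();
    /** Called by the network when another node joins or leaves. */
    void peerChanged(Node peer, boolean joined);
}

/** Minimal registry standing in for the shared network. */
class Network {
    private final List<Node> nodes = new ArrayList<>();

    /** Hot-plug: the new node and the existing peers learn about each other. */
    public void plugIn(Node n) {
        for (Node peer : nodes) {
            peer.peerChanged(n, true);   // existing nodes learn about n
            n.peerChanged(peer, true);   // n learns about existing nodes
        }
        nodes.add(n);
    }

    public int size() { return nodes.size(); }
}

/** Example node: a camera that keeps track of its known peers. */
class CameraNode implements Node {
    private final String id;
    final List<String> knownPeers = new ArrayList<>();

    CameraNode(String id) { this.id = id; }
    public String id() { return id; }
    public NodeType type() { return NodeType.CAMERA; }
    public void peerChanged(Node peer, boolean joined) {
        if (joined) knownPeers.add(peer.id());
        else knownPeers.remove(peer.id());
    }
}
```

The same registration mechanism would serve controller and archival nodes, and later extensions such as security nodes, without changes to the network side.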
Every camera node should be aware of its identity and its location/orientation. Our architecture should provide procedures to keep this information up to date (e.g., by adding access to a GPS system, or the ability to download this information from a controller unit). It should also have access to a model of its immediate physical surroundings, including the positions of the other camera nodes in the system.
The software in a camera node is component-based and open to accept a variety of communication protocols, image compression techniques, camera operation modes, peripheral input/output extensions (audio communication, access control, etc.) and geographical algorithms. The latter represent the real intelligence of the system; they can be used to implement guarantees for spatial coverage, elimination of redundancy, spatial reconstruction etc. Interpretation of movement would seem to be too ambitious a task at this stage.
A camera node should provide the necessary wrappers and interfaces to integrate available image processing components. It is not the objective of this project to investigate image processing; we will reuse existing software. It will be an excellent example of integrating legacy components.
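As an illustration of how an existing image-processing routine could be wrapped behind a uniform component interface, consider the following Java sketch. The LegacyMotionDetector is a stand-in for reused legacy software; all names and the pixel-difference logic are our own illustrative assumptions, not project results.

```java
/** Uniform interface the camera node expects from any image component. */
interface ImageComponent {
    /** Processes a frame (grey values 0-255); true if an event was detected. */
    boolean process(int[][] frame);
}

/** Stand-in for a legacy motion detector with its own, incompatible API. */
class LegacyMotionDetector {
    private int[][] previous;

    /** Returns how many pixels changed by more than the threshold. */
    int changedPixels(int[][] frame, int threshold) {
        int changed = 0;
        if (previous != null) {
            for (int y = 0; y < frame.length; y++)
                for (int x = 0; x < frame[y].length; x++)
                    if (Math.abs(frame[y][x] - previous[y][x]) > threshold)
                        changed++;
        }
        previous = frame;  // remember this frame for the next comparison
        return changed;
    }
}

/** Adapter (wrapper) that makes the legacy detector usable as a component. */
class MotionComponent implements ImageComponent {
    private final LegacyMotionDetector legacy = new LegacyMotionDetector();

    public boolean process(int[][] frame) {
        return legacy.changedPixels(frame, 10) > 0;
    }
}
```

The camera node only sees the ImageComponent interface; the wrapper hides the legacy API, which is the integration style intended here.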
The camera will only contact the controller node to report events or to signal that it is still alive.
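The keep-alive side of this contract could be tracked on the controller as follows; the class name, the timeout value and the millisecond timestamps are illustrative assumptions, not a specified protocol.

```java
import java.util.HashMap;
import java.util.Map;

/** Controller-side bookkeeping: a camera is presumed down after silence. */
class KeepAliveMonitor {
    private final long timeoutMillis;
    private final Map<String, Long> lastSeen = new HashMap<>();

    KeepAliveMonitor(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

    /** Called whenever a camera reports an event or an "I am alive" message. */
    void heard(String cameraId, long nowMillis) {
        lastSeen.put(cameraId, nowMillis);
    }

    /** A camera is alive if it was heard within the timeout window. */
    boolean isAlive(String cameraId, long nowMillis) {
        Long t = lastSeen.get(cameraId);
        return t != null && nowMillis - t <= timeoutMillis;
    }
}
```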
The other nodes (e.g. controller units and archival nodes) are straightforward; they implement user interfacing and data persistency requirements. A less than obvious experiment would involve a symbolic visualization of the area observed by a given camera; this would require the representation in virtual space of a pre-established domain, and the merging in real time with digital images transmitted by the camera.
We believe that such a system is an ideal subject for a case study as set out in the introduction. The hardware components are readily available over the counter: cameras, recording equipment, monitoring stations, an interconnection network, etc. In fact, a setup using ordinary web cameras, PCs and the Internet is a reasonable lower end of the spectrum of products that we envisage.
So we have on the one hand a configuration of all the proper hardware components that one expects in a video surveillance system, while on the other hand we propose a sophisticated software system, built according to current software engineering practice. This does seem a proper setting in which to explore the main tenets of the SEESCOA project.
For all practical purposes, a smart video surveillance system as proposed here constitutes a network of collaborating software agents. Each node is indeed a self-contained unit, exhibiting the persistency and autonomy of a proper agent; mobility would be another useful feature (consider fault tolerance and automatic reconfiguration) but is beyond the scope of this case.
The basic system can be extended by expanding its simple behavior. Here are some possibilities:
Other capturing devices: the surveillance system could be extended by adding smart microphones to the sensing subsystem. These devices could be smart in that they react to uncommon noises or voices.
Other controller units: it could also be interesting to add actuators to the system. Imagine using the surveillance system as an electronic butler – if someone enters a room, the light in the room is switched on; if the last person leaves the room, the light is switched off again. Such a lightweight controller unit could ask a camera node to provide it with the proper signals. The component required to produce these signals could be downloaded into the camera on demand.
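A minimal sketch of such a butler controller, assuming simple ENTER/LEAVE events reported by a camera node (the event names and the occupancy-counting logic are our own illustration):

```java
/** Lightweight controller: the light follows the room's occupancy. */
class ButlerController {
    private int occupants = 0;
    private boolean lightOn = false;

    /** A camera node reports that a person entered or left the room. */
    void onEvent(String event) {
        if ("ENTER".equals(event)) occupants++;
        else if ("LEAVE".equals(event) && occupants > 0) occupants--;
        lightOn = occupants > 0;  // on while anyone is present
    }

    boolean isLightOn() { return lightOn; }
}
```

The controller itself stays trivial; the harder part, producing reliable ENTER/LEAVE signals, would live in a component downloaded into the camera node.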
We would first advocate the elaboration of an empty smart video surveillance system, i.e. bare of any of the modules that render it smart. It should contain all of the hooks and stubs to extend it into any of the directions mentioned in the above section. We would then consider exploring this system in a limited number of its dimensions of variation. An obvious challenge is the notion of equipping a camera node with symbolic processing capabilities (and the potential of exchanging sophisticated symbolic information with the monitoring node). At this stage, the interpretation of digital images in a 3D model of a pre-established environment (e.g. based on building plans) seems a true challenge.
This surveillance system would cover the following important SEESCOA topics:
A great deal of this case has to do with communication: the cameras communicate with the controller units via a network.
The camera system works on video signals and images.
As an extension, the cameras could be made wireless. In that case they will have to be battery powered, and have access to a wireless network. We will however not take this requirement into account in our first prototype.
Furthermore, the intelligence of the case depends entirely on the software; it has soft real-time constraints, and is developed by teams at four different universities. The proposed case allows for remote diagnostics over the network, and for live updates.
Components: The software for the smart cameras must be component based. This approach should allow us to build more sophisticated systems, and to dynamically adapt them to a changing working environment (e.g., when different kinds of nodes are added to the network).
Connected to a network: The system uses a network to connect the real-time cameras to the controlling unit. The network could be private or the Internet. Protocol stacks will be needed to allow the capturing devices and the controlling unit to communicate with each other.
User interface: The camera will be equipped with a small LCD display and a few buttons to initialize some basic parameters (e.g., location). A second user interface, which allows the camera to be controlled remotely, will also be investigated.
We believe that a smart surveillance system as described in this report is a good case to produce a proof of concept of the software engineering methodology and tools that will be developed in this project. It also features several properties of the systems that were presented during our company visits.