Demonstration videos for Object Tracking Station
In the next couple of videos we demonstrate our basic technologies: the image processing and video content analysis algorithms applied by the Object Tracking Stations, which form the core of the whole IDENTRACE system.
Using a single-camera set-up, we show how IDENTRACE uses synthesized background information to detect and track objects, in the form of moving shapes, in surveillance camera images.
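The idea of detecting moving shapes against a synthesized background can be illustrated with a minimal pure-Python sketch. The function names, the running-average background update, and the threshold value below are our own illustrative assumptions, not the actual IDENTRACE algorithms:

```python
def detect_moving_pixels(frame, background, threshold=30):
    """Return a binary mask marking pixels that differ from the
    synthesized background by more than `threshold` grey levels."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def update_background(background, frame, alpha=0.05):
    """Running-average background synthesis: slowly blend the current
    frame into the background model, so gradual lighting changes are
    absorbed while fast-moving objects leave the model untouched."""
    return [[(1 - alpha) * b + alpha * f
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

In a real system the thresholded mask would then be cleaned up and grouped into connected shapes, which are the objects tracked from frame to frame.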
In the video below we see a trick picture in which the recently segmented shapes are projected back onto the background image and fade out over time. This example demonstrates how the extracted objects are tracked from frame to frame.
The second part of this trick picture shows the result of processing the same sequence, but here the moving person is represented only by a blurred shape. This demonstrates that the system can, in effect, see behind people moving in front of the camera.
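The fading-trail effect can be sketched as simple alpha blending with an opacity that decays with a shape's age. This is an illustrative sketch only; the function name, the `decay` parameter, and the representation of shapes as (mask, age) pairs are our assumptions:

```python
def composite_with_fade(background, shapes, decay=0.8):
    """Blend previously segmented shapes onto the background with an
    opacity that decays per frame, producing the fading-trail effect.
    `shapes` is a list of (mask, age) pairs; older shapes are fainter.
    Shapes are drawn as white (grey level 255) for simplicity."""
    out = [row[:] for row in background]
    for mask, age in shapes:
        opacity = decay ** age  # older shape -> lower opacity
        for y, row in enumerate(mask):
            for x, m in enumerate(row):
                if m:
                    # alpha-blend the shape pixel over the background pixel
                    out[y][x] = (1 - opacity) * out[y][x] + opacity * 255
    return out
```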
An additional demonstration video shows some further tricks allowed by this technology.
Once the shapes are extracted and tracked from frame to frame, we can project them back onto the background and make them, for example, shorter than they appear in the real world, or taller.
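Making a re-projected shape shorter or taller amounts to rescaling its mask vertically before compositing. A minimal nearest-neighbour sketch, anchored at the bottom row so the figure's feet stay in place (the function name and anchoring choice are our assumptions):

```python
def rescale_height(mask, factor):
    """Nearest-neighbour vertical rescaling of a shape mask: factor < 1
    makes the figure shorter, factor > 1 makes it taller. Rows are
    sampled from the bottom up so the feet stay anchored."""
    h = len(mask)
    new_h = max(1, round(h * factor))
    rows = []
    for i in range(new_h):  # i = 0 is the top output row
        dist_from_bottom = (new_h - 1 - i) / factor if new_h > 1 else 0
        src = h - 1 - round(dist_from_bottom)
        rows.append(list(mask[max(0, min(h - 1, src))]))
    return rows
```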
Using a set-up with four cameras, the demonstration video below shows how information from the different views is merged to build an understanding of the spatial behavior of the objects detected and tracked in the observed scene.
The live camera images are shown in the three small frames in the lower left corner of the screen, while the position of a person moving around the area is marked with a green sign on the map of the room, i.e. the top view of its model.
Please note that no other sensors are involved in this demonstration: the position of the person moving around the scene is calculated purely by IDENTRACE's multiple-view tracking technology.
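One way such a ground-plane position can be obtained from multiple views is to treat each camera's detection as a bearing on the floor plan and intersect the bearings. The sketch below is a simplified two-camera illustration under that assumption; IDENTRACE's actual multiple-view fusion is not published here, and the function name and parameters are ours:

```python
import math

def bearing_intersection(p1, a1, p2, a2):
    """Estimate a ground-plane position from two cameras: each camera
    at position p = (x, y) observes the person along bearing angle a
    (radians). Solves p1 + t1*d1 = p2 + t2*d2 for the crossing point."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2D cross product d1 x d2
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    t1 = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

With more than two cameras, a practical system would combine all pairwise intersections (or solve a least-squares problem) to tolerate measurement noise.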