DIGITAL IMAGE PROCESSING: APPLICATION FOR ABNORMAL INCIDENT DETECTION
A PRESENTATION BY
D.ARAVIND
III/IV B.Tech
Sri Sarathi Institute Of Engineering & Technology
Nuzvid
Ph:9885501050
Email:aravind_devarapalli@yahoo.co.in
P.NAVEEN
III/IV B.Tech
Sri Sarathi Institute Of Engineering & Technology
Ph:9866221733
Email:navin_pasupuleti@yahoo.co.in
Sri Sarathi Institute Of Engineering & Technology
Nuzvid-521201
ANDHRA PRADESH
Abstract- Intelligent vision systems (IVS) represent an exciting part of modern sensing, computing, and engineering systems. The principal information source in IVS is the image, a two-dimensional representation of a three-dimensional scene. The main advantage of using IVS is that the information is in a form that can be interpreted by humans.
Our paper presents an image processing application for abnormal incident detection, which can be used in high-security installations, subways, etc. In our work, motion cues are used to classify dynamic scenes and subsequently allow the detection of abnormal movements, which may be related to critical situations.
Successive frames are extracted from the video stream and compared. By subtracting the second image from the first, a difference image is obtained. This is then segmented to aid error measurement and thresholding. If the threshold is exceeded, the human operator is alerted so that he or she may take remedial action. Thus, by processing the input image suitably, our system alerts operators to any abnormal incidents which might lead to critical situations.
1. Introduction
1.1. Need for automated Surveillance
Motion-based automated surveillance or intelligent scene-monitoring systems were introduced in the recent past. Video motion detection and other similar systems aim to alert operators or start a high-resolution video recording when the motion conditions of a specific area in the scene are changed.
In recent years, interest in automated surveillance systems has grown dramatically, as advances in image processing and computer hardware technologies have made it possible to design intelligent incident detection algorithms and implement them as real-time systems. The need for such equipment has been obvious for quite some time now, as human operators are unreliable, fallible and expensive to employ.
1.2. Motion analysis for incident detection
Interest in motion processing has increased with advances in motion analysis methodology and processing capabilities. The concept of automated incident detection is based on the idea of finding suitable image cues that can represent the specific event of interest with minimal overlap with other classes. In this paper, motion is adopted as the main cue for abnormal incident detection.
1.3. Image acquisition
Obtaining the images is the first step in implementing the system.
1.4. Camera position
The camera is placed at a fixed height in the subway or corridor. This position need not be changed during the course of operation.
Fig. 1. Camera position
1.5. Frame extraction
In this system, motion is used as the main cue for abnormal incident detection. The first concern is therefore obtaining the required images from the source. In the circumstances described (subways, high-security installations), a closed-circuit television system is usually employed.
Ordinary video systems use 25 frames per second. The system described here extracts scene motion information at a rate of 8.33 times per second, which amounts to capturing one frame in every three from the video camera. In practical real-time operation, a hardware block-matching motion detector is used for frame extraction.
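For clarity, the following is a minimal sketch of this extraction step in software, assuming a recent MATLAB release and a hypothetical recorded file 'subway.avi'; in the deployed system this step is performed by the hardware motion detector.

% Minimal sketch: extract one frame in three from a 25 fps recording.
% 'subway.avi' is a hypothetical file name used only for illustration.
v = VideoReader('subway.avi');       % open the recorded CCTV stream
frames = {};                         % extracted frames (about 8.33 per second)
k = 0;
while hasFrame(v)
    f = readFrame(v);                % read the next frame
    k = k + 1;
    if mod(k, 3) == 1                % keep every third frame
        frames{end+1} = rgb2gray(f); % store as a grey-scale image
    end
end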
2. THE DIFFERENCE IMAGE:
There are two major approaches to extracting two-dimensional motion from image sequences: optical flow and motion correspondence. Simple subtraction of images acquired at different instants in time makes motion detection possible when there is a stationary camera and constant illumination. Both of these conditions are satisfied in the areas of application of our system.
A difference image is a binary image d(i, j) in which non-zero values represent image areas with motion, that is, areas where there was a substantial difference between the grey levels in consecutive images p1 and p2:
d(i, j) = 0   if |p1(i, j) - p2(i, j)| <= ε
d(i, j) = 1   otherwise
where ε is a small positive number. Figure 3 shows the resultant image obtained by subtracting the second image from the first. The threshold level used in the system is 0.8, which is found to be sufficient for obtaining a good binary difference image.
Fig. 2. Initial position and final position; the second image is a slightly displaced version of the first.
Fig. 3. Difference image
The system errors mentioned above must be suppressed. If it is required to find the direction of motion, this can be done by constructing a cumulative difference image from a sequence of images. This, however, is not necessary in our system, as the direction of motion is invariably the same.
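For completeness, a simple unweighted version of such a cumulative difference image could be built as sketched below, assuming the cell array 'frames' from the earlier extraction sketch; as noted above, this step is not required in our system.

% Minimal sketch: cumulative difference image against the first frame.
% 'frames' is assumed to hold the extracted grey-scale frames.
cum = zeros(size(frames{1}));                        % accumulator
for k = 2:numel(frames)
    cum = cum + abs(double(frames{k}) - double(frames{1}));
end
% Larger values of 'cum' indicate image areas with more motion over time.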
Obtaining the difference image is simplified by the MATLAB Image Processing Toolbox. The input images are read using the 'imread' function and converted to binary images using the 'im2bw' function. The 'im2bw' function first converts the input image to a grey-scale image and then thresholds it: the output binary image BW is 0 (black) for all pixels in the input image with luminance less than a user-defined level and 1 (white) for all other pixels.
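A minimal sketch of this step is given below, assuming the two consecutive frames are stored in hypothetical files 'frame1.png' and 'frame2.png' and using the 0.8 threshold level mentioned earlier.

% Minimal sketch: compute the binary difference image of two frames.
p1 = im2bw(imread('frame1.png'), 0.8);   % binarise frame 1 at level 0.8
p2 = im2bw(imread('frame2.png'), 0.8);   % binarise frame 2 at level 0.8
d = xor(p1, p2);                         % 1 where the frames differ, 0 elsewhere
imshow(d);                               % display the difference image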
2.1. Segmentation details
This subsection describes the segmentation of the difference image into regions. There are two basic forms of segmentation:
- Complete Segmentation.
- Partial Segmentation.
2.1.1. Complete and partial segmentation
Complete segmentation results in a set of disjoint regions corresponding uniquely with objects in the input image. In partial segmentation, the regions may not correspond directly with the image objects.
If partial segmentation is the goal, an image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, color, reflectivity, texture etc.
Segmentation methods can be divided into three groups according to the dominant features they employ: the first uses global knowledge about an image or its parts; edge-based segmentation forms the second group; and region-based segmentation the third. In the second and third groups, each region can be represented by its closed boundary, and each closed boundary describes a region. Edge-based segmentation methods find the borders between regions, while region-based methods construct the regions directly.
Region growing techniques are generally better in noisy images where borders are not easy to detect. Homogeneity is an important property of regions and is used as the main segmentation criterion in region growing, where the basic idea is to divide an image into zones of maximum homogeneity.
A complete segmentation of an image R is a finite set of regions R1, ..., Rs such that

R = R1 ∪ R2 ∪ ... ∪ Rs,   Ri ∩ Rj = ∅ for i ≠ j
Further, for region-based segmentation, the following conditions must be satisfied:

H(Ri) = TRUE for i = 1, 2, ..., s
H(Ri ∪ Rj) = FALSE for i ≠ j, Ri adjacent to Rj

where s is the total number of regions in the image and H(Ri) is a binary homogeneity evaluation of region Ri. The resulting regions of the segmented image must be both homogeneous and maximal, where 'maximal' means that the homogeneity criterion would no longer hold after merging a region with any adjacent region.
2.2. Region merging and splitting
The basic approaches to region-based segmentation are
- Region Merging
- Region Splitting
- Split-and-Merge processing.
Region merging starts with an over-segmented image and merges similar or homogeneous regions to form larger regions until no further merging is possible. Region splitting is the opposite of region merging: it begins with an under-segmented image whose regions are not homogeneous, and the existing regions are sequentially split until homogeneous regions are formed.
2.3. Region growing and segmentation
Our system uses the region growing segmentation method to divide the image into regions. In region growing segmentation, a seed point is first chosen in the image. The eight neighbours of that pixel are then checked against a threshold condition; each neighbour that satisfies the condition is incorporated into the region. This process is repeated for each newly added pixel and continues until every pixel has been checked and the whole image has been segmented into regions.
In our system, the MATLAB function 'bwlabel' performs this segmentation. The function accepts the image to be segmented as input and returns a matrix representing the segmented image along with the number of segments. Note that the image at this stage of processing is a binary image with only two levels: black (0) and white (1).
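A minimal sketch of this call, assuming 'd' is the binary difference image obtained in the previous step:

% Minimal sketch: label the segments of the binary difference image.
[L, num] = bwlabel(d);    % L: label matrix (8-connectivity by default),
                          % num: number of segments found
fprintf('Image segmented into %d regions\n', num);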
2.3.1. Segmentation algorithm
- An initial set of small areas is iteratively merged according to similarity constraints.
- Start by choosing an arbitrary seed pixel and compare it with neighbouring pixels.
- A region is grown from the seed pixel by adding in neighbouring pixels that are similar, increasing the size of the region.
- When the growth of one region stops, simply choose another seed pixel which does not yet belong to any region and start again.
- The whole process is continued until all pixels belong to some region (see the sketch below).
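The following is a from-scratch MATLAB sketch of these steps for a binary image, assuming 'd' is the binary difference image; it is shown for illustration only, since the actual system relies on 'bwlabel' for this task.

% Minimal sketch: grow 8-connected regions of foreground pixels in 'd'.
[rows, cols] = size(d);
labels = zeros(rows, cols);          % 0 = not yet assigned to any region
next = 0;                            % label of the most recent region
for r = 1:rows
    for c = 1:cols
        if d(r, c) == 1 && labels(r, c) == 0
            next = next + 1;         % unassigned seed pixel: start a new region
            labels(r, c) = next;
            stack = [r c];           % pixels whose neighbours are still unchecked
            while ~isempty(stack)
                p = stack(end, :);
                stack(end, :) = [];
                for dr = -1:1        % visit the eight neighbours of p
                    for dc = -1:1
                        nr = p(1) + dr;  nc = p(2) + dc;
                        if nr >= 1 && nr <= rows && nc >= 1 && nc <= cols ...
                                && d(nr, nc) == 1 && labels(nr, nc) == 0
                            labels(nr, nc) = next;        % grow the region
                            stack(end + 1, :) = [nr nc];
                        end
                    end
                end
            end
        end
    end
end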
Example of a difference image:
Segment 1:
0 0 0 0 0 0 0 0 1 1 1 1
0 0 0 0 0 0 0 0 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 0 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1
0 0 0 0 0 0 1 1 1 1 1 1
0 0 0 0 0 1 0 1 1 1 1 1
0 0 0 0 0 1 1 0 0 0 0 0
Segment 2:
0 0 0 0 2 2 2 2 0 0 0 0
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 2 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
0 0 2 0 2 2 2 2 2 2 2 2
0 0 2 2 2 2 2 2 2 2 2 2
0 0 0 2 2 2 2 2 2 2 2 2
0 0 0 0 2 2 2 2 2 2 2 2
A portion of the corresponding segment matrix
3. THRESHOLDING AND ABNORMAL INCIDENT DETECTION
3.1. Need for thresholding
The process of segmentation aids in extracting the required information separately: in this case, the segments are a representation of the amount of motion of the subjects in the scene from one frame to the next.
Difference image
3.2. Thresholding algorithm
1. Get the total number of segments, k.
2. Repeat steps 3 to 7 for all k segments.
3. Scan the matrix to find the k-th segment.
4. Store the column indices of the k-th segment.
5. Find the maximum and minimum index values; subtract to find their difference.
6. If the difference is greater than or equal to 16 pixels, sound an alarm to alert the human operator (a sketch of this loop is given after the list).
7. Continue with the next segment.
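A minimal MATLAB sketch of this loop, assuming 'L' and 'num' are the label matrix and segment count returned by 'bwlabel':

% Minimal sketch: alert the operator when a segment is too wide.
for k = 1:num
    [~, c] = find(L == k);           % column indices of all pixels in segment k
    width = max(c) - min(c);         % horizontal extent of the segment in pixels
    if width >= 16                   % the system's threshold of 16 pixels
        beep;                        % sound the alarm
        disp('Abnormal movement detected - alerting the operator');
    end
end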
An example of how the differences are stored in the form of a column vector is shown below. If any value in the difference matrix is greater than or equal to 16, the human operator is alerted.
Sample difference matrix
6 20
8 20
13 20
17 20
20 20
20 20
20 20
20 20
20 20
20 18
20 11
In the sample matrix for the difference image shown above, the threshold of 16 pixels is exceeded and the human operator is alerted.
RESULTS
First image
Second image
4. ADVANTAGES:
The system we have described can be used, as mentioned earlier, as an efficient and easily implementable pedestrian monitoring system in subways. It can quickly detect any fast or abnormal movement which may lead to dangerous situations. Further, surveillance by humans depends on the quality of the human operator, and factors such as operator fatigue and negligence may degrade performance. These factors make an intelligent vision system the better option.
5. CONCLUSION:
Our system processes successive video frames to detect abnormal movement and alert the human operator, offering a reliable alternative to surveillance that depends wholly on the attentiveness of a human observer. The motion-based techniques it employs extend naturally to related applications, as in systems that use gait signatures for recognition and in-vehicle video sensors for driver assistance.