- Crowd Modeling using Temporal Association Rules
Imran N. Junejo, University of Sharjah, U.A.E.
Understanding crowd behavior has attracted tremendous attention from researchers in recent years. In this work, we propose an unsupervised approach for crowd scene modeling and anomaly detection using association rule mining. Using tracklets of tracked objects, we identify the different paths/routes, i.e., the distinct events taking place at various locations in the scene. Interval-based frequent temporal patterns characterizing the scene model are mined using a temporal mining algorithm based on Allen’s interval-based temporal logic. The resulting frequent patterns are used to generate temporal association rules, which convey the semantic information contained in the scene. Our overall aim is to generate rules that govern the dynamics of the scene. Finally, anomalies, both spatial and spatio-temporal, are found by considering the behavioral interactions among the different objects. We apply the proposed approach to a publicly available dataset and demonstrate its effectiveness.
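The mining step builds on Allen's interval algebra, which classifies how two time intervals relate. A minimal sketch of that classification (the interval tuples below are illustrative, not from the paper):

```python
def allen_relation(a, b):
    """Classify the relation between intervals a=(s1,e1) and b=(s2,e2).

    Covers one relation from each of Allen's inverse pairs plus
    'equals'; the remaining six relations fall out by swapping.
    """
    s1, e1 = a
    s2, e2 = b
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 == s2 and e1 == e2:
        return "equals"
    if s1 == s2 and e1 < e2:
        return "starts"
    if e1 == e2 and s1 > s2:
        return "finishes"
    if s1 > s2 and e1 < e2:
        return "during"
    if s1 < s2 < e1 < e2:
        return "overlaps"
    # any remaining case is the inverse of one of the relations above
    return "inverse-of(" + allen_relation(b, a) + ")"
```

A temporal pattern miner would count how often a given relation (e.g. event A "overlaps" event B) recurs across tracklets before promoting it to a rule.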
- A Robust Distributed Notch Filtering Algorithm For Frequency Estimation
Wael M. Bazzi1, Amir Rastegarnia2, Azam Khalili2, and Milad Latifi2, 1American University in Dubai, UAE, 2Malayer University, Iran
In this paper we consider the frequency estimation problem over networks, where a set of nodes collaborate to estimate the frequency of a single-frequency signal from measurements corrupted by impulsive noise. To reduce the effects of impulsive noise, the proposed algorithm uses the maximum correntropy criterion (MCC) as the cost function, a robust optimality criterion for non-Gaussian signal processing. Each node employs an adaptive notch filter to filter the noisy input measurements, and the nodes collaborate to optimize a cost function (given in terms of the MCC) so that the filter output resembles the desired signal as closely as possible. To derive the proposed algorithm, we first formulate the distributed frequency estimation problem in terms of the MCC. We then resort to an iterative gradient-ascent approach to solve it and obtain the proposed algorithm, referred to as the diffusion notch filter-MCC (dNFMCC) algorithm. We also present simulation results that show the effectiveness of the proposed algorithm.
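The robustness of the MCC comes from its Gaussian kernel, which down-weights large (impulsive) errors instead of squaring them. A sketch of the cost and one gradient-ascent step for a generic linear parameter vector; the paper's actual update acts on adaptive notch-filter coefficients, and the kernel width `sigma` and step size `mu` here are assumed tuning parameters:

```python
import math

def correntropy(errors, sigma):
    """Empirical correntropy of an error sequence under a Gaussian kernel."""
    return sum(math.exp(-e * e / (2 * sigma * sigma)) for e in errors) / len(errors)

def mcc_step(w, xs, ds, sigma, mu):
    """One gradient-ascent step maximising correntropy of e_i = d_i - w.x_i."""
    n = len(xs)
    grad = [0.0] * len(w)
    for x, d in zip(xs, ds):
        e = d - sum(wi * xi for wi, xi in zip(w, x))
        # the exp factor shrinks the contribution of impulsive errors
        g = math.exp(-e * e / (2 * sigma * sigma)) * e
        for k in range(len(w)):
            grad[k] += g * x[k]
    return [wi + mu * gk / (n * sigma * sigma) for wi, gk in zip(w, grad)]
```

In the diffusion setting each node would run such a step locally and then combine its estimate with those of its neighbours.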
- Segmentation and classification of brain tumor CT images using SVM with weighted kernel width
Kimia Rezaei, Hamed Agahi, Islamic Azad University, Iran
In this article, a method is proposed for the segmentation and classification of benign and malignant tumor slices in brain Computed Tomography (CT) images. Image noise is removed using median and Wiener filters, and brain tumors are segmented using a Support Vector Machine (SVM). A two-level discrete wavelet decomposition of the tumor image is then performed, and the second-level approximation replaces the original image for texture analysis. Seventeen features are extracted, of which six are selected using Student’s t-test. Dominant gray-level run-length and gray-level co-occurrence texture features are used for SVM training. Malignant and benign tumors are classified using an SVM with a single kernel width, an SVM with a weighted kernel width (WSVM), and a k-Nearest Neighbors (k-NN) classifier. The classification accuracy of the classifiers is evaluated using 10-fold cross-validation. The segmentation results are also compared against ground truth from an experienced radiologist. The experimental results show that the proposed WSVM classifier achieves high classification accuracy and effectiveness as measured by sensitivity and specificity.
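The core idea behind a weighted kernel width is an RBF kernel with a separate width per feature, so informative features contribute on a finer scale. A minimal sketch; the per-feature widths here are assumed hyper-parameters, not the paper's learned values:

```python
import math

def weighted_rbf(x, y, widths):
    """K(x, y) = exp(-sum_k (x_k - y_k)^2 / (2 * widths[k]^2)).

    A small width for feature k makes the kernel highly sensitive to
    differences in that feature; a large width effectively ignores it.
    """
    s = sum((xi - yi) ** 2 / (2 * wk * wk)
            for xi, yi, wk in zip(x, y, widths))
    return math.exp(-s)
```

Setting all widths equal recovers the standard single-width RBF kernel, which is the baseline SVM the abstract compares against.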
- A Novel Adaptive Wavelet-Based Detection Algorithm for Chipless RFID System
Meriam A. Bibile, Nemai C. Karmakar, Monash University, Australia
In this paper, a novel wavelet-based detection algorithm is introduced for the detection of chipless RFID tags. Each chipless RFID tag has a frequency signature that is unique to it. A vector network analyser is used, and the received backscatter signal is analysed in the frequency domain. The frequency signature is decoded by comparing wavelet coefficients, which identifies the bits accurately. The detection algorithm has further been applied to tag detection under different dynamic environments to check its robustness. The new method does not rely on calibration tags and shows robust detection under different environments and movement.
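One simple way to realise wavelet-coefficient comparison is a Haar decomposition of the sampled frequency response, with detail coefficients thresholded to flag the resonances that encode bits. A sketch under assumed sampling and an assumed threshold (the paper's exact wavelet and decision rule are not given in the abstract):

```python
import math

def haar_step(signal):
    """One Haar DWT level: returns (approximation, detail) coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))  # local average -> smooth trend
        detail.append((a - b) / math.sqrt(2))  # local difference -> sharp dips
    return approx, detail

def decode_bits(detail, threshold):
    """A detail coefficient above threshold marks a resonance -> bit 1."""
    return [1 if abs(d) > threshold else 0 for d in detail]
```

Because resonance dips are abrupt in frequency, they concentrate energy in the detail coefficients, while slow environment-dependent variations stay in the approximation, which is what makes the scheme insensitive to calibration.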
- Early Mass Detection System For Mammography Image Based On Fuzzy Classifier
Vikas Upadhyay1, Surbhi2, 1IIT, India, 2IARI, India
Breast cancer is the most common cancer found in women over the last decade. In India, it accounts for about 27% of all cancers in women, and the mortality rate is more than 50% of the total number of breast cancer cases. A key reason for this in India is the lack of early detection mechanisms, which alone can reduce the mortality rate. Mammographic images are the most prevalent medium for early detection of cancer, yet in the early stages of breast cancer masses can be difficult to detect on a mammogram. In this paper, a CAD system for automatic detection has been designed based on the radiologist’s feedback. The designed CAD system consists of three major steps: pre-processing, segmentation, and, most importantly, feature extraction and classification using a fuzzy classifier. The fuzzy classifier is based on four major features: mean, standard deviation, kurtosis and skewness; the Haar discrete wavelet transform is used for feature extraction from the segmented regions. The system has been applied to 100 cancerous cases taken from the database of PGI Chandigarh and the DMSS online database, Florida. The mass detection system identified around 2000 ROIs after pre-processing and segmentation; the fuzzy classifier reduces the ROIs presented to the radiologist to about 250 and increases detection accuracy to 93%.
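The four features feeding the fuzzy classifier are standard intensity statistics of an ROI. A plain sketch of their computation (the fuzzy membership functions and rule base themselves are not specified in the abstract and are omitted):

```python
import math

def roi_features(pixels):
    """Mean, standard deviation, skewness and kurtosis of ROI intensities."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    # third and fourth central moments, normalised by std
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3) if std else 0.0
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * std ** 4) if std else 0.0
    return mean, std, skew, kurt
```

Each candidate ROI would be mapped to this 4-tuple, and the fuzzy rules decide whether it is passed on to the radiologist.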
- Handwritten Character Recognition using Structural Shape Decomposition
Abdullah A. Al-Shaher1, Edwin R. Hancock2, 1Public Authority for Applied Education and Training, Kuwait, 2University of York, United Kingdom
This paper presents a statistical framework for recognising 2D shapes which are represented as an arrangement of curves or strokes. The approach is a hierarchical one which mixes geometric and symbolic information in a three-layer architecture. Each curve primitive is represented using a point-distribution model which describes how its shape varies over a set of training data. We assign stroke labels to the primitives and these indicate to which class they belong. Shapes are decomposed into an arrangement of primitives and the global shape representation has two components. The first of these is a second point distribution model that is used to represent the geometric arrangement of the curve centre-points. The second component is a string of stroke labels that represents the symbolic arrangement of strokes. Hence each shape can be represented by a set of centre-point deformation parameters and a dictionary of permissible stroke label configurations. The hierarchy is a two-level architecture in which the curve models reside at the nonterminal lower level of the tree. The top level represents the curve arrangements allowed by the dictionary of permissible stroke combinations. The aim in recognition is to minimise the cross-entropy between the probability distributions for geometric alignment errors and curve label errors. We show how the stroke parameters, shape-alignment parameters and stroke labels may be recovered by applying the expectation maximization (EM) algorithm to the utility measure. We apply the resulting shape-recognition method to Arabic character recognition.
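A point-distribution model represents each shape as a landmark vector, modelled as the mean shape plus a small number of variation modes. A minimal sketch that finds the dominant mode by power iteration on the sample covariance; the training shapes below are toy data, not the paper's curve primitives:

```python
import math

def pdm(shapes, iters=100):
    """Return (mean shape, leading variation mode) for landmark vectors."""
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[k] for s in shapes) / n for k in range(d)]
    dev = [[s[k] - mean[k] for k in range(d)] for s in shapes]
    # sample covariance C[i][j] = (1/n) * sum over shapes of dev_i * dev_j
    c = [[sum(v[i] * v[j] for v in dev) / n for j in range(d)]
         for i in range(d)]
    # power iteration converges to the leading eigenvector of C
    p = [1.0] * d
    for _ in range(iters):
        p = [sum(c[i][j] * p[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in p)) or 1.0
        p = [x / norm for x in p]
    return mean, p

def reconstruct(mean, p, b):
    """Shape instance x = mean + b * p for deformation parameter b."""
    return [m + b * pi for m, pi in zip(mean, p)]
```

In the paper's two-component representation, one such model captures each curve primitive and a second captures the arrangement of curve centre-points; recognition then searches over the deformation parameters `b`.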
- Diving Performance Assessment By Means Of Video Processing
Stefano Frassinelli, Alessandro Niccolai and Riccardo E. Zich, Polytechnic University of Milan, Italy
In this paper, a procedure for video analysis is applied in an innovative way to diving performance assessment. Sport performance analysis is a discipline that is growing rapidly for athletes at all levels. The technique shown here is based on two important requirements: flexibility and low cost. These two requirements raise a number of video processing problems, which are addressed and solved in this paper.