As the second step, period detection estimates the gait period, i.e., the number of image frames in each gait cycle. Toby H. W. Lam et al. proposed obtaining the gait period as part of the GFI generation process. The silhouette contains the smallest number of foreground pixels at the mid-stance position and the greatest number at the double-support position, so the gait period can be obtained by counting the pixels in each silhouette image. In their research, the gait period is computed as the median of the distances of three consecutive gait cycles, and to generate the GFIs the binary silhouette sequence is divided into cycles according to the estimated period [5]. Negin K. Hosseini et al. suggested a simpler approach to period detection that counts the white pixels of the frames in the subject's gait sequence. The number of white pixels is minimal during the initial swing phase, when the legs are closer together than in the other phases; they therefore count the white pixels in every frame and select the frames lying between three minimum-white-pixel frames [7].

Shajina T. et al. use key-pose extraction instead of period detection. They represent the gait cycle with key poses extracted using K-means clustering and an eigenspace projection algorithm. In the eigenspace projection, each silhouette image is represented as a column vector and the mean of the column vectors is computed; the mean column vector is then subtracted from each silhouette column vector to obtain the normalized silhouette images. In the second step, the weight vectors are computed and K-means clustering is applied to these weight vectors [6].

Chao Li et al. proposed deep gait generation for appearance-based gait recognition. They calculate the NAC (normalized auto-correlation) of each normalized gait sequence along the temporal axis. A state-of-the-art deep convolutional model (VGG-D) is also adopted in their paper; it consists of 16 convolutional layers and 3 fully connected layers (19 parameterized layers) and uses very small convolution filters to evaluate a very deep convolutional network [1]. F. M. Castro et al. also proposed a CNN architecture for gait signature extraction. They use optical flow (OF) as the input representation of the video for the CNN and work with simple, low-resolution optical flow. Because the original videos may have different temporal lengths, a fixed-size input is required by the CNN architecture, so input volumes of size 60×60×50 are built from the OF frames [8]. Zhang et al. also suggested a CNN architecture in their research. They combine the raw video sequences from a surveillance camera into GEIs (Gait Energy Images) as the input to a DCNN (deep convolutional neural network), and they introduce a Siamese network to learn a sufficient feature representation of the gait. The Siamese network contains two parallel CNN architectures that share the same parameters. Because a CNN architecture focuses more on the classification problem in recognition, and there is a large gap between recognition and classification, they proposed the Siamese network to address this problem [2].
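To make the pixel-count idea behind the methods of Lam et al. and Hosseini et al. concrete, the following is a minimal sketch (not the authors' implementation) of estimating the gait period from the foreground-pixel counts of binary silhouette frames; the smoothing window and the "twice the median gap between minima" rule are illustrative assumptions.

```python
import numpy as np

def gait_period_from_silhouettes(frames, smooth=3):
    """Estimate the gait period from a sequence of binary silhouette frames.

    frames: iterable of 2-D arrays (0 = background, nonzero = foreground).
    Returns the estimated period in frames, or None if the sequence is too short.
    """
    # Count foreground pixels per frame; the count is smallest when the legs
    # are together (mid-stance / initial swing) and largest at double support.
    counts = np.array([np.count_nonzero(f) for f in frames], dtype=float)

    # Light moving-average smoothing to suppress segmentation noise.
    kernel = np.ones(smooth) / smooth
    counts = np.convolve(counts, kernel, mode="same")

    # Local minima of the pixel-count signal mark half-cycle boundaries.
    minima = [i for i in range(1, len(counts) - 1)
              if counts[i] < counts[i - 1] and counts[i] <= counts[i + 1]]
    if len(minima) < 2:
        return None

    # One full gait cycle covers two minima-to-minima gaps (left and right
    # swing), so the period is roughly twice the median gap.
    gaps = np.diff(minima)
    return int(2 * np.median(gaps))
```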
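The key-pose idea of Shajina T. et al. can be sketched as follows, assuming the silhouettes are stacked in a (frames, H, W) array. The eigenspace projection is approximated here with an SVD and the clustering uses scikit-learn's KMeans; the component and cluster counts are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_key_poses(silhouettes, n_components=20, n_key_poses=5):
    """silhouettes: array of shape (num_frames, H, W) with binary values."""
    num_frames = silhouettes.shape[0]

    # Represent each silhouette as a (flattened) column vector and
    # subtract the mean vector to normalize the images.
    X = silhouettes.reshape(num_frames, -1).astype(float)
    X_centered = X - X.mean(axis=0)

    # Eigenspace projection via SVD: rows of Vt are eigen-silhouettes, and
    # the projections (weight vectors) describe each frame compactly.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    weights = X_centered @ Vt[:n_components].T          # (frames, n_components)

    # K-means on the weight vectors; cluster centres stand for key poses.
    km = KMeans(n_clusters=n_key_poses, n_init=10, random_state=0).fit(weights)

    # For each cluster, pick the real frame closest to its centre.
    key_frames = [int(np.argmin(np.linalg.norm(weights - c, axis=1)))
                  for c in km.cluster_centers_]
    return sorted(key_frames)
```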
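For the NAC step described by Chao Li et al., the exact normalization used in their paper is not reproduced here; this sketch computes one common form of normalized temporal autocorrelation over flattened, mean-removed frames, whose first strong peak after lag 0 indicates the gait period.

```python
import numpy as np

def normalized_autocorrelation(frames, max_lag=60):
    """Normalized autocorrelation (NAC) of a gait sequence along the temporal axis.

    frames: array of shape (T, H, W) of normalized silhouettes.
    Returns nac[lag] for lag = 0 .. max_lag-1 (nac[0] == 1 by construction).
    """
    T = frames.shape[0]
    X = frames.reshape(T, -1).astype(float)
    X -= X.mean(axis=0)                      # remove the per-pixel mean

    denom = np.sum(X * X)
    n_lags = min(max_lag, T - 1)
    nac = np.zeros(n_lags)
    for lag in range(n_lags):
        num = np.sum(X[: T - lag] * X[lag:])
        nac[lag] = num / denom if denom > 0 else 0.0
    return nac
```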
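One plausible reading of Castro et al.'s 60×60×50 input volume (an assumption, not stated explicitly above) is 25 consecutive frame pairs with horizontal and vertical flow channels stacked (25 × 2 = 50). The sketch below builds such a volume with OpenCV's Farneback optical flow; the function name and parameters are illustrative.

```python
import cv2
import numpy as np

def of_input_volume(gray_frames, start, size=60, n_flow_frames=25):
    """Build a fixed-size optical-flow input volume from grayscale frames.

    Requires at least start + n_flow_frames + 1 frames. Stacks the x and y
    Farneback flow of n_flow_frames consecutive frame pairs, resized to
    size x size, giving a size x size x (2 * n_flow_frames) volume.
    """
    channels = []
    for t in range(start, start + n_flow_frames):
        flow = cv2.calcOpticalFlowFarneback(
            gray_frames[t], gray_frames[t + 1], None,
            0.5, 3, 15, 3, 5, 1.2, 0)
        channels.append(cv2.resize(flow[..., 0], (size, size)))  # horizontal flow
        channels.append(cv2.resize(flow[..., 1], (size, size)))  # vertical flow
    return np.stack(channels, axis=-1)       # shape (60, 60, 50) by default
```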
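Finally, a minimal PyTorch sketch of the shared-parameter Siamese idea on GEIs described for Zhang et al.: two inputs pass through the same CNN, and the distance between their embeddings is trained with a contrastive-style loss. The layer sizes, embedding dimension, and loss are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GaitSiamese(nn.Module):
    """Two parallel CNN branches that share the same parameters.

    Each branch embeds a single-channel GEI; the distance between the two
    embeddings indicates whether the GEIs belong to the same subject.
    """
    def __init__(self, embed_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(            # one CNN, reused for both inputs
            nn.Conv2d(1, 16, 7, stride=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 64, 5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 256, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, gei_a, gei_b):
        # Parameter sharing comes from passing both inputs through self.cnn.
        emb_a = self.cnn(gei_a)
        emb_b = self.cnn(gei_b)
        return torch.norm(emb_a - emb_b, dim=1)   # Euclidean distance per pair

def contrastive_loss(distance, same_label, margin=1.0):
    """Pull same-subject pairs together, push different-subject pairs apart."""
    pos = same_label * distance.pow(2)
    neg = (1 - same_label) * torch.clamp(margin - distance, min=0).pow(2)
    return (pos + neg).mean()
```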
