
2.2 Neural network classification


The methods implemented for neural network classification are explained below.


Multilayer Perceptrons:


A backpropagation network, or multilayer perceptron, consists of at least three layers of units: an input layer, at least one intermediate hidden layer, and an output layer. The units are connected in a feed-forward fashion. With backpropagation networks, learning occurs during a training phase. After a backpropagation network has learned the correct classification for a set of inputs, it can be tested on a second set of inputs to see how well it classifies untrained patterns.
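The train-then-test procedure above can be sketched as follows. This is a minimal illustration assuming scikit-learn is available; the blob data is a hypothetical stand-in for EEG feature vectors, not the data used in this study.

```python
# Sketch: a multilayer perceptron trained by backpropagation, then evaluated
# on a held-out set of patterns it was not trained on.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical two-class data standing in for EEG feature vectors.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer between the input and output layers; weights are
# adjusted by backpropagation during the training phase.
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)                  # training phase
test_accuracy = mlp.score(X_test, y_test)  # classification of untrained patterns
```

The held-out accuracy measures how well the learned classification generalizes beyond the training set.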


Radial-Basis Function Networks:


Radial-basis-function (RBF) networks are also feed-forward, but have only one hidden layer. Each RBF hidden unit has a receptive field with a center, that is, a particular input value at which its output is maximal; the output tails off as the input moves away from this point. The hidden-unit function is typically a Gaussian.


Support Vector Machines:


The support vector machine (SVM) is a popular technique for classification. Given a training set of instance-label pairs (x_i, y_i), i = 1, ..., l, where x_i ∈ R^n and y_i ∈ {1, -1}, the training vectors are mapped into a higher- (possibly infinite-) dimensional space by a function φ. The SVM then finds a linear separating hyperplane with maximal margin in this higher-dimensional space; C > 0 is the penalty parameter of the error term. The function K(x_i, x_j) = φ(x_i)^T φ(x_j) is called the kernel function.


K-Nearest Neighbor Classifier:


Among statistical approaches, a k-nearest neighbor classifier was selected because it does not assume any underlying distribution of the data. In the k-nearest neighbor rule, a test sample is assigned the class most frequently represented among its k nearest training samples. If two or more such classes are tied, the test sample is assigned the class with the minimum average distance to it.
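Both classifiers above can be sketched with scikit-learn: an SVM with a Gaussian (RBF) kernel, where C is the penalty parameter of the error term, alongside a k-nearest-neighbor rule that assigns the majority class among the k closest training samples. This is an illustrative sketch on hypothetical blob data, not the study's actual pipeline.

```python
# Sketch: RBF-kernel SVM and k-nearest-neighbor classifier side by side.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=1)  # stand-in features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# kernel="rbf" realizes K(x_i, x_j) as a Gaussian kernel; C penalizes errors.
svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
# Majority vote among the k = 5 nearest training samples; no distributional
# assumptions about the data are made.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

svm_accuracy = svm.score(X_test, y_test)
knn_accuracy = knn.score(X_test, y_test)
```

On well-separated data both rules perform similarly; they differ in how they generalize, the SVM through a maximal-margin boundary and kNN through local neighborhoods.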