


Kohonen Network - Background Information


The following short survey of Kohonen maps is not intended to be an extensive introduction to this technology. Rather, it gives a brief outline of the algorithm involved; more details can be found in the literature.

The Kohonen network (or "self-organizing map", SOM for short) was developed by Teuvo Kohonen. The basic idea behind the Kohonen network is to set up a structure of interconnected processing units ("neurons") which compete for the input signal. While the structure of the map may be quite arbitrary, this package supports only rectangular and linear maps.

[Figure: KOHONEN1.gif]

The SOM defines a mapping from the input data space spanned by x1..xn onto a one- or two-dimensional array of nodes. The mapping is performed in such a way that the topological relationships of the n-dimensional input space are preserved on the map. In addition, the local density of the data is reflected by the map: regions of the input space which contain more data are mapped onto a larger area of the SOM.

Each node of the map is defined by a weight vector wij whose elements are adjusted during the training. The basic training algorithm is quite simple (a sketch of the loop follows the list below):

  1. select an object from the training set
  2. find the node which is closest to the selected object (i.e. the node whose weight vector wij has the minimum distance to the selected object)
  3. adjust the weight vectors of the closest node and the nodes around it so that the wij move towards the training object
  4. repeat from step 1 for a fixed number of iterations
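
The following Python/NumPy sketch illustrates this training loop on a rectangular map. It is only an illustration of the algorithm, not the implementation of the SDL Component Suite; the map size, decay schedules, Gaussian neighborhood and number of iterations are arbitrary example choices.

    import numpy as np

    def train_som(data, rows=10, cols=10, n_iter=5000, alpha0=0.5, sigma0=None, rng=None):
        # 'data' is an array of shape (n_samples, n_features).
        # All parameter values are illustrative, not SDL defaults.
        rng = np.random.default_rng() if rng is None else rng
        n_features = data.shape[1]
        sigma0 = max(rows, cols) / 2.0 if sigma0 is None else sigma0
        w = rng.random((rows, cols, n_features))           # weight vectors wij
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                    indexing="ij"), axis=-1)
        for t in range(n_iter):
            frac = 1.0 - t / n_iter                        # shrinks towards the end of training
            alpha = alpha0 * frac                          # learning rate
            sigma = max(sigma0 * frac, 0.5)                # neighborhood radius
            x = data[rng.integers(len(data))]              # step 1: pick a training object
            dist = np.linalg.norm(w - x, axis=-1)          # step 2: distances of all nodes to x
            winner = np.unravel_index(np.argmin(dist), dist.shape)
            d2 = np.sum((grid - np.array(winner)) ** 2, axis=-1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))           # Gaussian neighborhood weights
            w += alpha * h[..., None] * (x - w)            # step 3: move nodes towards x
        return w                                           # step 4: loop runs n_iter times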

The amount of adjustment in step 3, as well as the radius of the neighborhood, decreases during the training. This ensures coarse adjustments in the first phase of the training, while fine-tuning occurs towards the end of the training.

A special feature of this particular implementation is the availability of cyclic maps. This means that the neighborhood is extended beyond the map borders and wrapped around to the opposite boundary. In this case a rectangular map becomes a torus and a linear map becomes a circle.
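
In terms of the sketch above, a cyclic map only changes how the grid distance between the winning node and its neighbors is measured. A possible wrapped distance could look like this (the function name and signature are chosen for illustration):

    def grid_distance2(node_a, node_b, rows, cols, cyclic=False):
        # Squared distance between two nodes on the map grid.
        # With cyclic=True the neighborhood wraps around the borders, so a
        # rectangular map behaves like a torus and a linear map like a circle.
        dr = abs(node_a[0] - node_b[0])
        dc = abs(node_a[1] - node_b[1])
        if cyclic:
            dr = min(dr, rows - dr)    # take the shorter way around vertically
            dc = min(dc, cols - dc)    # ... and horizontally
        return dr * dr + dc * dc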

The Kohonen map reflects the inner structure of the training data. However, one cannot tell in advance which neurons will be activated by which input vectors. In addition, the neurons corresponding to a given set of input vectors after one training run may correspond to a different set of vectors after another run. The SOM therefore has to be calibrated. This can be achieved by presenting well-known examples to the net and recording which neuron is activated by each example vector. As Kohonen maps tend to form a kind of elastic surface over the range of input vectors of the training data, neurons which are not activated during calibration may be interpreted by interpolation.
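
Calibration can be sketched as follows, again assuming the trained weight array w from the example above and a set of reference vectors with known class labels (names and data layout are illustrative assumptions):

    import numpy as np

    def calibrate_som(w, examples, labels):
        # 'w' is the trained weight array (rows x cols x n_features),
        # 'examples' holds well-known reference vectors, 'labels' their classes.
        rows, cols = w.shape[:2]
        node_labels = np.full((rows, cols), None, dtype=object)
        for x, label in zip(examples, labels):
            dist = np.linalg.norm(w - x, axis=-1)
            winner = np.unravel_index(np.argmin(dist), dist.shape)
            node_labels[winner] = label      # record which neuron this example activates
        return node_labels                   # nodes that stay None were never activated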



Last Update: 2023-Feb-06