Reputation: 144
What is the activation function used by a self-organizing map? Is it the same sigmoid or tansig activation?
Upvotes: 2
Views: 590
Reputation: 121
The activation function for individual neurons is a pure linear function, i.e. f(net) = net. The SOM then applies a competitive transfer function over the entire layer (called compet() in MATLAB). This transfer function calculates the layer's output from its net inputs: it returns 1 for the neuron with the maximum net input and 0 for all the others.
The details can be found in the MATLAB help documentation at this link: https://in.mathworks.com/help/deeplearning/ref/compet.html
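To illustrate what the competitive layer does, here is a minimal NumPy sketch of a compet-like function. It is not MATLAB's actual implementation, and breaking ties by taking the first maximum is a simplifying assumption:

```python
import numpy as np

def compet(net_inputs):
    """Competitive transfer function: returns 1 for the neuron with the
    largest net input and 0 for every other neuron (ties broken by the
    first maximum, as a simplifying assumption)."""
    out = np.zeros_like(net_inputs, dtype=float)
    out[np.argmax(net_inputs)] = 1.0
    return out

# Example: four neurons' net inputs -> only the strongest one "wins"
print(compet(np.array([0.2, 1.5, 0.7, -0.3])))  # [0. 1. 0. 0.]
```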
Upvotes: 0
Reputation: 118
Self-organizing maps are a bit different from standard ANNs; for starters, they are unsupervised. A SOM works by assigning to each node in a low-dimensional map a weight vector with the same dimensionality as the input data. You then train the map by adjusting these weights toward the input data, which eventually creates regions on the map that reflect the structure of the data. In that sense it does not have an activation function; it is really just based on calculating Euclidean distances and adjusting weights. See here for a great walkthrough, and the sketch below for a concrete example.
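To make the "no activation function, just distances and weight updates" point concrete, here is a minimal sketch of a single SOM training step in NumPy. The learning rate, the Gaussian neighborhood, and the grid shape are illustrative assumptions, not the only way to implement it:

```python
import numpy as np

def som_train_step(weights, x, learning_rate=0.1, sigma=1.0):
    """One SOM update: find the best-matching unit (BMU) by Euclidean
    distance, then pull each node's weights toward the input, scaled by
    a Gaussian neighborhood around the BMU on the 2-D grid."""
    rows, cols, _ = weights.shape
    # Euclidean distance from every node's weight vector to the input
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Squared grid distance of every node to the BMU
    r, c = np.indices((rows, cols))
    grid_dist_sq = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
    neighborhood = np.exp(-grid_dist_sq / (2 * sigma ** 2))
    # Move weights toward the input, most strongly at and near the BMU
    weights += learning_rate * neighborhood[..., None] * (x - weights)
    return weights

# Example: a 5x5 map of 3-dimensional weights, trained on one sample
rng = np.random.default_rng(0)
w = rng.random((5, 5, 3))
w = som_train_step(w, rng.random(3))
```

Repeating this step over many samples, while shrinking the learning rate and neighborhood width, is what produces the topologically ordered regions described above.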
Upvotes: 2