The type double function is imsls_d_mlff_initialize_weights.
Required Arguments
Imsls_f_NN_Network *network (Input/Output) Pointer to a structure of type Imsls_f_NN_Network containing the parameters that define the feedforward network's architecture, including network weights and bias values. For more details, see imsls_f_mlff_network. When network training is successful, the weights and bias values in network are replaced with the values calculated for the optimum trained network.
int n_patterns (Input) Number of training patterns.
int n_nominal (Input) Number of unencoded nominal attributes.
int nominal[] (Input) Array of size n_patterns by n_nominal containing the nominal input variables.
int n_continuous (Input) Number of continuous attributes, including ordinal attributes encoded using cumulative percentage.
float continuous[] (Input) Array of size n_patterns by n_continuous containing the continuous and scaled ordinal input variables.
Return Value
Pointer to an array of length network->n_links + (network->n_nodes - network->n_inputs) containing the initialized weights. See the Description section for details on weight ordering. This space can be released by using the imsls_free function.
Optional Arguments
IMSLS_METHOD, int method (Input) Specifies the algorithm to use for initializing weights. Valid values for method are:
method                        Algorithm
IMSLS_EQUAL                   Equal weights
IMSLS_RANDOM                  Random weights
IMSLS_PRINCIPAL_COMPONENTS    Principal component weights
IMSLS_DISCRIMINANT            Discriminant analysis weights
The discriminant weights method can only be used to initialize weights for classification networks without binary encoded nominal attributes. See the Description section for details.
Default: method = IMSLS_RANDOM.
IMSLS_PRINT, (Input) Initial weights are printed.
Default: No printing is performed.
IMSLS_CLASSIFICATION, int classification[] (Input) An array of length n_patterns containing the encoded training target classifications, which must be integers from 0 to n_classes-1. Here n_classes = network->n_outputs, except when n_outputs = 1, in which case n_classes = 2. classification[i] is the target classification for the i-th training pattern described by nominal[i] and continuous[i]. This option is used by the discriminant analysis weight initialization; it is ignored for all other methods.
IMSLS_RETURN_USER, float weights[] (Output) If specified, the initialized weights are returned in a user-provided array of length network->n_links + (network->n_nodes - network->n_inputs).
Description
Function imsls_f_mlff_initialize_weights calculates initial values for the weights of a feedforward neural network using one of the following algorithms:
method                        Algorithm
IMSLS_EQUAL                   Equal weights
IMSLS_RANDOM                  Random weights
IMSLS_PRINCIPAL_COMPONENTS    Principal component weights
IMSLS_DISCRIMINANT            Discriminant analysis weights
The keyword IMSLS_METHOD can be used to select the algorithm for weight initialization. By default, the random weights algorithm will be used.
The 3-layer feedforward network with three input attributes and six perceptrons in Figure 13.33 is used to describe the initialization algorithms. In this example, one of the input attributes is continuous (X3) and the others are nominal (X1 and X2).
Figure 13.33 — A 3-layer, Feed Forward Network with 3 Input Attributes and 6 Perceptrons
This network has a total of 23 weights. The first nine weights, labeled W1, W2, …, W9, are the weights assigned to the links connecting the network inputs to the perceptrons in the first hidden layer. Note that W1, W2, W4, W5, W7, and W8 are assigned to the two nominal attributes and W3, W6 and W9 are assigned to the continuous attribute. All neural network functions in the C Stat Library use this weight ordering. Weights for all nominal attributes are placed before the weights for any continuous attributes.
[Table: perceptron potential calculations for perceptrons H1,1, H1,2, H1,3, H2,1, H2,2, and Z1 (potential formulas omitted).]
The next six weights, W10, W11, …, W15, are the weights between the first and second hidden layers, and W16 and W17 are the weights for the links connecting the second hidden layer to the output layer. The last six elements in the weights array are the perceptron bias weights. These weights, W18, W19, …, W23, are the bias weights for perceptrons H1,1, …, H1,3, H2,1, H2,2, and Z1, respectively.
The perceptron potential calculations for this network are described in the table above. Following the notation presented in the introduction to this chapter, h1, h2, …, h5 are the perceptron activations from perceptrons H1,1, …, H1,3, H2,1, and H2,2, respectively.
All initialization algorithms in mlff_initialize_weights set the weights for perceptrons not linked directly to the input perceptrons in the same manner. Bias weights for perceptrons not directly linked to input attributes are set to zero. All non-bias weights for these same perceptrons are assigned a value of 1/k where k=the number of links into that perceptron (network->nodes[i].n_inlinks).
For example, in this network, the last three bias weights, W21, W22, and W23, are initialized to zero since perceptrons H2,1, H2,2, and Z1 are not directly connected to the input attributes. The other weights to perceptrons H2,1 and H2,2 are assigned a value of one third since these perceptrons each have three input links. The weights to the output perceptron, Z1, are set to one half since Z1 has two input links.
The weights for the links between the input attributes and the first-layer perceptrons are initialized differently by each of the four algorithms. All algorithms, however, scale these weights so that the average potential for the first-layer perceptrons is zero. This reduces the possibility of saturation or numerical overflow during the initial stages of optimization.
Equal Weights (method=IMSLS_EQUAL)
In this algorithm, the non-bias weights for each link between the input attributes and the perceptrons in the first layer are initialized to:

Wi = 1/(n Si)

where Wi is the weight for all links between the i-th input attribute and the first-layer perceptrons, n is the total number of input attributes, and Si is equal to the standard deviation of the potential for the i-th input attribute. In the above example, the values for weights W1, W2, …, W9 would each be set to:

Wi = 1/(3 Si)

since this network has three input attributes.
Next, the bias weight for each perceptron connected to the input layer is set to the negative of that perceptron's average potential:

Wbias = −(W1X̄1 + W2X̄2 + … + WnX̄n)

where X̄i is equal to the average value of the i-th input attribute. This makes the average potential for each first-layer perceptron zero. All other bias weights are set to zero.
Random Weights (method=IMSLS_RANDOM)
This algorithm first generates random values for the input-layer weights using the Uniform [−0.5, +0.5] distribution. These are then scaled using the standard deviation of the input-layer potentials:

Wi = U/Si

where U is a random number uniformly distributed on the interval [−0.5, +0.5] and Si is equal to the standard deviation of the potential for the i-th input attribute.
Next, the bias weight for each perceptron connected to the input layer is set to the negative of that perceptron's average potential:

Wbias = −(W1X̄1 + W2X̄2 + … + WnX̄n)

where X̄i is equal to the average value of the i-th input attribute. This makes the average potential for each first-layer perceptron zero. All other bias weights are set to zero.
Principal Component Weights (method=IMSLS_PRINCIPAL_COMPONENTS)
This method uses principal component analysis to generate weights. The arrays nominal and continuous are combined into a single data matrix, and the correlation matrix of this matrix is decomposed using principal component analysis. The elements of the principal components from this analysis are used to initialize the weights associated with the network inputs. As with the other methods, the principal component weights are scaled using the standard deviation of the potential for the perceptrons connected to the input layer:

Wij = ξij/Si

where Wij is the weight for the link between the i-th input attribute and the j-th perceptron, ξij is the i-th element of the j-th principal component, and Si is equal to the standard deviation of the potential for the i-th input attribute.
If the number of principal components is less than the number of perceptrons in the first layer, i.e., (n_continuous+n_nominal) < n_layer1, where n_layer1 is the number of perceptrons in the first layer, then it is not possible to initialize all weights with principal components. In this case, the first (n_continuous + n_nominal) perceptrons are initialized using the principal components and then the remainder are initialized using random weights (method=IMSLS_RANDOM).
As with the other methods, the bias weight for each of the first-layer perceptrons is set to ensure that the average potential in this layer is equal to zero:

Wbias,j = −(ḡ1j + ḡ2j + … + ḡnj)

where ḡij = WijX̄i is equal to the average potential for the link between the i-th input attribute and the j-th first-layer perceptron.
Discriminant Weights (method=IMSLS_DISCRIMINANT)
This method is similar to the principal component method, except that elements from a discriminant analysis replace the principal component elements. The weights between the i-th input attribute and the j-th perceptron in the first layer are calculated by:

Wij = θij/Si

where Wij is the weight for the link between the i-th input attribute and the j-th perceptron, θij is the i-th element of the j-th discriminant component, and Si is equal to the standard deviation of the potential for the i-th input attribute.
If the number of discriminant components is less than the number of perceptrons in the first layer, i.e., (n_continuous + n_nominal) < n_layer1, where n_layer1 is the number of perceptrons in the first layer, then it is not possible to initialize all weights with components from the discriminant analysis. In this case, the first (n_continuous + n_nominal) perceptrons are initialized using the discriminant components and then the remainder are initialized using random weights (method=IMSLS_RANDOM).
As with the other methods, the bias weight for each of the first-layer perceptrons is set to ensure that the average potential in this layer is equal to zero:

Wbias,j = −(ḡ1j + ḡ2j + … + ḡnj)

where ḡij = WijX̄i is equal to the average potential for the link between the i-th input attribute and the j-th first-layer perceptron.
Examples
Example 1
This example illustrates the random initialization algorithm for a three-layer network with one output. The first and second hidden layers contain three and two perceptrons, respectively, for a total of five network perceptrons.
The nine input attributes consist of two continuous attributes plus seven binary attributes encoded from two nominal attributes using binary encoding.
The weights are initialized using the random weights algorithm. This results in different weights for every perceptron in the first hidden layer. The weights in other layers are initialized using equal weights. It should be noted that the bias weights in the first layer are not random. Except for the discriminant weights algorithm, the bias weights are always calculated to ensure that the average potential for each perceptron in the first layer is zero.
#include <stdio.h>
#include <imsls.h>

int main() {
    Imsls_f_NN_Network *network;
    int i, j, k, m;
    int n_patterns = 24;   /* no. of training patterns */
    int n_nvars = 2;       /* 2 nominal unencoded variables */
    int n_nominal = 7;     /* 7 inputs for the binary encoded
                              nominal vars */
    int n_continuous = 2;  /* 2 continuous input attributes */
    int nominalIn[24];     /* work arrays used to encode */
Example 2
This example illustrates the discriminant weights initialization algorithm for a three-layer network with one output. The first and second hidden layers contain three and two perceptrons, respectively, for a total of five network perceptrons.
The data are the same as in Example 1, and the network structure is the same except that all nominal input attributes are removed. This is necessary because the discriminant weights algorithm works only when all input attributes are continuous.
The discriminant weights algorithm initializes the weights in the first hidden layer to the coefficients of the discriminant functions. Since this is a binary classification example, the number of discriminant functions equals the number of classes, two, but there are three perceptrons in the first layer. The weights for the first two perceptrons in this layer are the discriminant function coefficients, including the bias weight. The weights for the last perceptron in this layer are determined randomly.
#include <stdio.h>
#include <imsls.h>

int main() {
    Imsls_f_NN_Network *network;
    int i, j, k, m;
    int n_patterns = 24;   /* no. of training patterns */
    int n_continuous = 2;  /* 2 continuous input attributes */