mlffNetwork¶
Creates a multilayered feedforward neural network.
Synopsis¶
mlffNetwork (network)
Required Arguments¶
Imsls_d_NN_Network network (Input/Output)
The structure containing the neural network that was initialized by mlffNetworkInit. On output, the data structure is updated according to the optional arguments used.
Optional Arguments for mlffNetwork¶
createHiddenLayer, int nPerceptrons (Input)
Creates a hidden layer with nPerceptrons perceptrons. To create one or more hidden layers, mlffNetwork must be called multiple times with the optional argument createHiddenLayer.
Default: No hidden layer is created.
activationFcn, int layerId, int activationFcn[] (Input)
Specifies the activation function for each perceptron in a hidden layer or the output layer, indicated by layerId. layerId must be between 1 and the number of layers. If a hidden layer has been created, layerId set to 1 indicates the first hidden layer. If there are zero hidden layers, layerId set to 1 indicates the output layer. Argument activationFcn is an array of length nPerceptrons, where nPerceptrons is the number of perceptrons in layer layerId. activationFcn[i] contains the activation function for the i-th perceptron. Valid values for activationFcn are:

Activation Function | Description
---|---
LINEAR | Linear
LOGISTIC | Logistic
TANH | Hyperbolic tangent
SQUASH | Squash

Default: Output layer activationFcn[i] = LINEAR; all hidden layers activationFcn[i] = LOGISTIC.
bias, int layerId, float bias[] (Input)
Specifies the bias values for each perceptron in a hidden layer or the output layer, indicated by layerId. layerId must be between 1 and the number of layers. If a hidden layer has been created, layerId set to 1 indicates the first hidden layer. If there are zero hidden layers, layerId set to 1 indicates the output layer. Argument bias is an array of length nPerceptrons, where nPerceptrons is the number of perceptrons in layer layerId. bias[i] contains the initial bias value for the i-th perceptron.
Default: bias[i] = 0.0

linkAll (Input)
Connects all nodes in a layer to each node in the next layer, for all layers in the network. To create a valid network, use linkAll, linkLayer, or linkNode.
or
linkLayer, int to, int from (Input)
Creates a link between all nodes in layer from to all nodes in layer to. Layers are numbered starting at zero with the input layer, then the hidden layers in the order they are created, and finally the output layer. To create a valid network, use linkAll, linkLayer, or linkNode.
or
linkNode, int to, int from (Input)
Links node from to node to. Nodes are numbered starting at zero with the input nodes, then the hidden layer perceptrons, and finally the output perceptrons. To create a valid network, use linkAll, linkLayer, or linkNode.
or
removeLink, int to, int from (Input)
Removes the link between node from and node to. Nodes are numbered starting at zero with the input nodes, then the hidden layer perceptrons, and finally the output perceptrons.

nLinks (Output)
Returns the number of links in the network.
displayNetwork (Input)
Displays the contents of the network structure.
Default: No printing is done.
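As a quick illustration of these arguments, the following sketch builds a fully linked network and retrieves the link count; it assumes PyIMSL's usual convention of passing an empty list to receive an output argument:

from pyimsl.stat.mlffNetwork import mlffNetwork
from pyimsl.stat.mlffNetworkInit import mlffNetworkInit

network = mlffNetworkInit(3, 2)  # 3 inputs, 2 outputs
nLinks = []  # populated on output, assuming the empty-list output convention
mlffNetwork(network, linkAll=True, nLinks=nLinks)
print(nLinks[0])  # 6 links: each of 3 inputs connected to both outputs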
Description¶
A multilayered feedforward network contains an input layer, an output layer, and zero or more hidden layers. The input and output layers are created by the function mlffNetworkInit. The hidden layers are created by one or more calls to mlffNetwork with the keyword createHiddenLayer, where nPerceptrons specifies the number of perceptrons in the hidden layer.
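For instance, creating a network with two hidden layers takes two calls; this is a minimal sketch with arbitrary layer sizes:

from pyimsl.stat.mlffNetwork import mlffNetwork
from pyimsl.stat.mlffNetworkInit import mlffNetworkInit

network = mlffNetworkInit(3, 2)  # 3 inputs, 2 outputs
mlffNetwork(network, createHiddenLayer=4)  # first hidden layer: 4 perceptrons
mlffNetwork(network, createHiddenLayer=2, linkAll=True)  # second hidden layer, then link all nodes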
The network also contains links or connections between nodes. Links are created using one of three optional arguments to the mlffNetwork function: linkAll, linkLayer, or linkNode. The most useful is linkAll, which connects every node in each layer to every node in the next layer. A feedforward network is a network in which links are only allowed from one layer to a following layer.
Each link has a weight and gradient value. Each perceptron node has a bias value. When the network is trained, the weight and bias values are used as initial guesses. After the network is trained using mlffNetworkTrainer, the weight, gradient, and bias values are updated in the Imsls_d_NN_Network structure.
Each perceptron has an activation function g and a bias μ. The value of the perceptron is given by g(Z), where g is the activation function and Z is the potential calculated using

\[Z = \sum_i w_i x_i + \mu\]

where \(x_i\) are the values of nodes input to this perceptron with weights \(w_i\).
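As a concrete illustration of the formula (plain arithmetic, not PyIMSL code), the potential and a logistic activation could be computed as:

import math

x = [0.2, 0.5, 1.0]  # values of the nodes input to the perceptron
w = [0.5, 0.5, 0.5]  # weights on the corresponding links
mu = 1.0             # bias of the perceptron

Z = sum(wi * xi for wi, xi in zip(w, x)) + mu  # potential Z
g = 1.0 / (1.0 + math.exp(-Z))                 # logistic activation g(Z)
print(Z, g)  # 1.85 and roughly 0.864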
All information for the network is stored in the structure called
Imsls_d_NN_Network. This structure describes the network that is trained
by mlffNetworkTrainer.
The following code gives a detailed description of Imsls_d_NN_Network:
class Imsls_d_NN_Network(Structure):
    _fields_ = [('n_inputs', c_int),
                ('n_outputs', c_int),
                ('n_layers', c_int),
                ('layers', POINTER(Imsls_NN_Layer)),
                ('n_links', c_int),
                ('next_link', c_int),
                ('links', POINTER(Imsls_d_NN_Link)),
                ('n_nodes', c_int),
                ('nodes', POINTER(Imsls_d_NN_Node))]
where Imsls_NN_Layer is:

class Imsls_NN_Layer(Structure):
    _fields_ = [('n_nodes', c_int),
                ('nodes', POINTER(c_int))]
Imsls_d_NN_Link is:

class Imsls_d_NN_Link(Structure):
    _fields_ = [('weight', c_double),
                ('gradient', c_double),
                ('to_node', c_int),
                ('from_node', c_int)]
and Imsls_d_NN_Node is:

class Imsls_d_NN_Node(Structure):
    _fields_ = [('layer_id', c_int),
                ('n_inLinks', c_int),
                ('n_outLinks', c_int),
                ('inLinks', POINTER(c_int)),
                ('outLinks', POINTER(c_int)),
                ('gradient', c_double),
                ('bias', c_double),
                ('ActivationFcn', c_int)]
In particular, if network is the structure of type Imsls_d_NN_Network, then:

Structure member | Description
---|---
network.n_layers | Number of layers in network. Layers are numbered starting at 0 for the input layer.
network.n_nodes | Total number of nodes in network, including the input attributes.
network.n_links | Total number of links or connections between input attributes and perceptrons and between perceptrons from layer to layer.
network.layers[0] | Input layer with n_inputs attributes.
network.layers[network.n_layers-1] | Output layer with n_outputs perceptrons.
network.layers[0].n_nodes | n_inputs (number of input attributes).
network.layers[network.n_layers-1].n_nodes | n_outputs (number of output perceptrons).
network.layers[1].n_nodes | Number of perceptrons in the first hidden layer, or number of output perceptrons if there is no hidden layer.
network.links[i].weight | Initial weight for the i-th link in network. After training has completed, this member contains the weight used for forecasting.
network.nodes[i].bias | Initial bias value for the i-th node. After training has completed, the bias value is updated.
Nodes are numbered starting at zero with the input nodes, followed by the hidden layer perceptrons and finally the output perceptrons.
Layers are numbered starting at zero with the input layer, followed by the hidden layers and finally the output layer. If there are no hidden layers, the output layer is numbered one.
Links are numbered starting at zero in the order the links were created. If
the linkAll
option was used, the first link is the input link from the
first input node to the first node in the next layer. The second link is the
input link from the first input node to the second node in the next layer,
continuing to the link from the last node in the next to last layer to the
last node in the output layer. However, due to the possible variations in
the order the links may be created, it is advised to initialize the weights
using the mlffInitializeWeights routine or use the
optional argument weightInitializationMethod
in functions
mlffNetworkTrainer and
mlffClassificationTrainer. Alternatively, the
weights can be initialized in the Imsls_d_NN_Network data structure. The
following code is an example of how to initialize the network weights in an
Imsls_d_NN_Network variable created with the name network:
for (j=network.n_inputs; j < network.n_nodes; j++)
{
for (k=0; k < network.nodes[j].n_inLinks; k++)
{
wIdx = network.nodes[j].inLinks[k];
/* set specific layer weights */
if (network.nodes[j].layer_id == 1) {
network.links[wIdx].weight = 0.5;
} else if (network.nodes[j].layer_id == 2) {
network.links[wIdx].weight = 0.33;
} else {
network.links[wIdx].weight = 0.25;
}
}
}
The first for loop, over j, iterates through each perceptron in the network. Since input nodes are not perceptrons, they are excluded. The second for loop, over k, iterates through each of the perceptron’s input links, network.nodes[j].inLinks[k]. network.nodes[j].n_inLinks is the number of input links for network.nodes[j]. network.nodes[j].inLinks[k] contains the index of each input link to network.nodes[j] in the network.links array.
This example also illustrates how to set the weights based on the layer_id number. network.nodes[j].layer_id contains the layer identification number. This is used to set the weights for the first hidden layer to 0.5, the second hidden layer weights to 0.33, and all others to 0.25.
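Because this is the PyIMSL interface, the same initialization can also be written in Python. This is a sketch that assumes network exposes the ctypes fields listed above:

for j in range(network.n_inputs, network.n_nodes):  # perceptrons only; input nodes excluded
    for k in range(network.nodes[j].n_inLinks):     # each input link of node j
        wIdx = network.nodes[j].inLinks[k]
        # set specific layer weights
        if network.nodes[j].layer_id == 1:
            network.links[wIdx].weight = 0.5
        elif network.nodes[j].layer_id == 2:
            network.links[wIdx].weight = 0.33
        else:
            network.links[wIdx].weight = 0.25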
Examples¶
Example 1¶
This example creates a single-layer feedforward network. The network inputs
are directly connected to the output perceptrons using the linkAll
argument. The output perceptrons use the default linear activation function
and default bias values of 0.0. The displayNetwork
argument is used to
show the default settings of the network.
Figure 13.9 — A Single-Layer Feedforward Neural Net
from numpy import *
from readMlffNetworkData import readMlffNetworkData
from pyimsl.stat.mlffNetwork import mlffNetwork
from pyimsl.stat.mlffNetworkFree import mlffNetworkFree
from pyimsl.stat.mlffNetworkInit import mlffNetworkInit
from pyimsl.stat.mlffNetworkTrainer import mlffNetworkTrainer
from pyimsl.util.VersionFacade import VersionFacade
network = mlffNetworkInit(3, 2)
if (VersionFacade.getCnlVersion().majorVersion <= 6):
mlffNetwork(network, linkAll=True)
else:
mlffNetwork(network, linkAll=True, displayNetwork=True)
mlffNetworkFree(network)
Output¶
Input Layer
-----------------
NODE_0
Activation Fcn = 0
Bias = 0.000000
Output Links : 0 1
NODE_1
Activation Fcn = 0
Bias = 0.000000
Output Links : 2 3
NODE_2
Activation Fcn = 0
Bias = 0.000000
Output Links : 4 5
Output Layer
-----------------
NODE_3
Activation Fcn = 0
Bias = 0.000000
Input Links : 0 2 4
NODE_4
Activation Fcn = 0
Bias = 0.000000
Input Links : 1 3 5
******* Links ********
network->links[0].weight = 0.00000000000000000000
network->links[0].gradient = 1.00000000000000000000
network->links[0].to_node = 3
network->links[0].from_node = 0
network->links[1].weight = 0.00000000000000000000
network->links[1].gradient = 1.00000000000000000000
network->links[1].to_node = 4
network->links[1].from_node = 0
network->links[2].weight = 0.00000000000000000000
network->links[2].gradient = 1.00000000000000000000
network->links[2].to_node = 3
network->links[2].from_node = 1
network->links[3].weight = 0.00000000000000000000
network->links[3].gradient = 1.00000000000000000000
network->links[3].to_node = 4
network->links[3].from_node = 1
network->links[4].weight = 0.00000000000000000000
network->links[4].gradient = 1.00000000000000000000
network->links[4].to_node = 3
network->links[4].from_node = 2
network->links[5].weight = 0.00000000000000000000
network->links[5].gradient = 1.00000000000000000000
network->links[5].to_node = 4
network->links[5].from_node = 2
Example 2¶
This example creates a two-layer feedforward network with four inputs, one hidden layer with three perceptrons, and two outputs.
Since the default activation function is linear for the output layer and logistic for hidden layers, creating a network that uses only linear activations requires specifying the linear activation function for each hidden layer. This example demonstrates how to change the activation function and bias values for hidden and output layer perceptrons as shown in Figure 13.10 below.
Figure 13.10 — A 2-layer Feedforward Network with 4 Inputs and 2 Outputs
from numpy import *
from readMlffNetworkData import readMlffNetworkData
from pyimsl.stat.mlffNetwork import mlffNetwork, LINEAR
from pyimsl.stat.mlffNetworkFree import mlffNetworkFree
from pyimsl.stat.mlffNetworkInit import mlffNetworkInit
from pyimsl.stat.mlffNetworkTrainer import mlffNetworkTrainer
nominal, continuous, output = readMlffNetworkData()
hidActFcn = [LINEAR, LINEAR, LINEAR]
outBias = [1.0, 1.0]
hidBias = [1.0, 1.0, 1.0]
network = mlffNetworkInit(4, 2)
mlffNetwork(network, createHiddenLayer=3,
activationFcn={'layerId': 1, 'activationFcn': hidActFcn},
bias={'layerId': 2, 'bias': outBias}, linkAll=True)
mlffNetwork(network, bias={'layerId': 1, 'bias': hidBias})
mlffNetworkFree(network)
Example 3¶
This example creates a three-layer feedforward network with six input nodes, which are not all connected to every node in the first hidden layer.
Note also that the four perceptrons in the first hidden layer are not connected to every node in the second hidden layer, and the perceptrons in the second hidden layer are not all connected to the two outputs:
Figure 13.11 — A network that uses a total of nine perceptrons to produce two forecasts from six input attributes
This network uses a total of nine perceptrons to produce two forecasts from six input attributes.
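For reference, the numbering rules in the Description give the following node indices for this network; the sketch below makes the linkNode and removeLink calls easier to follow:

# Node numbering: 6 inputs, hidden layers of 4 and 3, 2 outputs
inputs = range(0, 6)     # input nodes 0-5
hidden1 = range(6, 10)   # first hidden layer perceptrons 6-9
hidden2 = range(10, 13)  # second hidden layer perceptrons 10-12
outputs = range(13, 15)  # output perceptrons 13-14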
Links among the input nodes and perceptrons can be created using one of several approaches. If all inputs are connected to every perceptron in the first hidden layer, and if all perceptrons are connected to every perceptron in the following layer, which is a standard architecture for feedforward networks, then a single call with the linkAll argument can be used to create these links.
However, this example does not use that standard configuration. Some links are missing. The keyword linkNode can be used to construct individual links; alternatively, all links can be created first and then the unneeded ones removed. This example illustrates the latter approach.
from numpy import *
from readMlffNetworkData import readMlffNetworkData
from pyimsl.stat.mlffNetwork import mlffNetwork
from pyimsl.stat.mlffNetworkFree import mlffNetworkFree
from pyimsl.stat.mlffNetworkInit import mlffNetworkInit
from pyimsl.stat.mlffNetworkTrainer import mlffNetworkTrainer
network = mlffNetworkInit(6, 2)
# Create 2 hidden layers and link all nodes
mlffNetwork(network, createHiddenLayer=4)
mlffNetwork(network, createHiddenLayer=3, linkAll=True)
# Remove unwanted links from input 0
mlffNetwork(network, removeLink={'to': 8, 'from': 0})
mlffNetwork(network, removeLink={'to': 9, 'from': 0})
# Remove unwanted links from input 1
mlffNetwork(network, removeLink={'to': 9, 'from': 1})
# Remove unwanted links from input 2
mlffNetwork(network, removeLink={'to': 6, 'from': 2})
mlffNetwork(network, removeLink={'to': 9, 'from': 2})
# Remove unwanted links from input 3
mlffNetwork(network, removeLink={'to': 6, 'from': 3})
mlffNetwork(network, removeLink={'to': 7, 'from': 3})
mlffNetwork(network, removeLink={'to': 8, 'from': 3})
# Remove unwanted links from input 4
mlffNetwork(network, removeLink={'to': 6, 'from': 4})
mlffNetwork(network, removeLink={'to': 7, 'from': 4})
mlffNetwork(network, removeLink={'to': 8, 'from': 4})
# Remove unwanted links from input 5
mlffNetwork(network, removeLink={'to': 6, 'from': 5})
mlffNetwork(network, removeLink={'to': 7, 'from': 5})
mlffNetwork(network, removeLink={'to': 8, 'from': 5})
# Add link from input 0 to output perceptron 0
mlffNetwork(network, linkNode={'to': 13, 'from': 0})
# Remove unwanted links between hidden layer 1 & hidden layer 2
mlffNetwork(network, removeLink={'to': 11, 'from': 8})
mlffNetwork(network, removeLink={'to': 12, 'from': 9})
# Remove unwanted links between hidden layer 2 & output layer
mlffNetwork(network, removeLink={'to': 14, 'from': 10})
mlffNetworkFree(network)
Another approach is to use the keywords linkNode and linkLayer together: create the individual links from the input nodes, link entire layers where the connections are dense, and then remove the links that are not needed. This example illustrates this approach:
# Another approach is to use keywords linkNode and linkLayer
network = mlffNetworkInit(6, 2)
# Create 2 hidden layers and link all nodes
mlffNetwork(network, createHiddenLayer=4)
mlffNetwork(network, createHiddenLayer=3)
# Link input attributes to first hidden layer
mlffNetwork(network, linkNode={'to': 6, 'from': 0})
mlffNetwork(network, linkNode={'to': 7, 'from': 0})
mlffNetwork(network, linkNode={'to': 6, 'from': 1})
mlffNetwork(network, linkNode={'to': 7, 'from': 1})
mlffNetwork(network, linkNode={'to': 8, 'from': 1})
mlffNetwork(network, linkNode={'to': 7, 'from': 2})
mlffNetwork(network, linkNode={'to': 8, 'from': 2})
mlffNetwork(network, linkNode={'to': 9, 'from': 3})
mlffNetwork(network, linkNode={'to': 9, 'from': 4})
mlffNetwork(network, linkNode={'to': 9, 'from': 5})
# Link hidden layer 1 to hidden layer 2 then remove unwanted links
mlffNetwork(network, linkLayer={'to': 2, 'from': 1})
mlffNetwork(network, removeLink={'to': 11, 'from': 8})
mlffNetwork(network, removeLink={'to': 12, 'from': 9})
# Link hidden layer 2 to output layer then remove unwanted links
mlffNetwork(network, linkLayer={'to': 3, 'from': 2})
mlffNetwork(network, removeLink={'to': 14, 'from': 10})
mlffNetworkFree(network)