Encog Backpropagation Error not changing


The total error for the network did not change over 100,000 iterations. The input is 22 values and the output is a single value; the input array is [195][22] and the output array is [195][1].

    BasicNetwork network = new BasicNetwork();
    network.addLayer(new BasicLayer(null, true, 22));
    network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 10));
    network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
    network.getStructure().finalizeStructure();
    network.reset();


    MLDataSet training_data = new BasicMLDataSet(input, target_output);
    final Backpropagation train = new Backpropagation(network, training_data);

    int epoch = 1;

    do {
        train.iteration();

        System.out.println("Epoch #" + epoch + " Error:" + train.getError());

        epoch++;
    } while (train.getError() > 0.01);

    train.finishTraining();

What is wrong with this code?

There is 1 answer

Answered by Aaron Baker:

Depending on the data you are trying to classify, your network may be too small to transform the search space into a linearly separable problem. Try adding more neurons or more layers; this will probably take longer to train. If the problem is already linearly separable, though, a neural network may be an inefficient way to solve it.
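As a sketch of "more neurons or layers", a wider and deeper version of the network from the question might look like this (the layer sizes 30 and 10 are illustrative guesses, not tuned values):

```java
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;

public class BiggerNetwork {
    public static BasicNetwork build() {
        BasicNetwork network = new BasicNetwork();
        // Input layer: 22 inputs, as in the question.
        network.addLayer(new BasicLayer(null, true, 22));
        // A wider first hidden layer and an extra second hidden layer
        // (sizes are illustrative only).
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 30));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 10));
        // Single sigmoid output, as in the question.
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.getStructure().finalizeStructure();
        network.reset(); // randomize the weights
        return network;
    }
}
```

More capacity lets the hidden layers carve up the input space more finely, at the cost of slower training and a higher risk of overfitting 195 samples.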

Also, you don't have a training strategy: if the network falls into a local minimum on the error surface, it will be stuck there. See the Encog user guide, https://s3.amazonaws.com/heatonresearch-books/free/Encog3Java-User.pdf; page 166 has a list of training strategies.

// Reset the weights if the error is still above strategyError
// after strategyCycles iterations.
final int strategyCycles = 50;
final double strategyError = 0.25;
train.addStrategy(new ResetStrategy(strategyError, strategyCycles));
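Wired into the training loop from the question, the strategy would be attached right after constructing the trainer and before iterating (a sketch, assuming the same `network`, `input`, and `target_output` variables):

```java
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.training.propagation.back.Backpropagation;
import org.encog.neural.networks.training.strategy.ResetStrategy;

// Sketch: same loop as the question, with a reset strategy attached.
MLDataSet trainingData = new BasicMLDataSet(input, target_output);
Backpropagation train = new Backpropagation(network, trainingData);

// If the error is still above 0.25 after 50 iterations,
// the weights are re-randomized to escape the local minimum.
train.addStrategy(new ResetStrategy(0.25, 50));

int epoch = 1;
do {
    train.iteration();
    System.out.println("Epoch #" + epoch + " Error:" + train.getError());
    epoch++;
} while (train.getError() > 0.01);

train.finishTraining();
```

The strategy runs as part of each `train.iteration()` call, so no other changes to the loop are needed.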