Multiclass classification using neural network


I have about 200 3D face meshes and I would like to classify their vertices based on only two parameters: Gaussian and mean curvature. I decided to use a TensorFlow neural network for this purpose.

I marked 18 distinctive vertices on all faces, e.g. the nose tip, the chin, the eye corners and so on. All of these points are easily recognizable by looking at a heatmap of Gaussian and mean curvature. So I decided to compute histograms of these parameters at three scales (6 mm, 4 mm, 2 mm) around each point and use them as input for my neural network. The input is a vector of 608 features (101 integers for each of the 6 histograms, plus 2 floats for the point's mean and Gaussian curvature). The output should be a vector describing which class the vertex belongs to.
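To make the input format concrete, this is roughly how one input vector is assembled (the function and variable names are just illustrative placeholders for my own preprocessing, not part of the code below):

import numpy as np

def build_feature_vector(hists, mean_curv, gauss_curv):
    # hists: 6 histograms (Gaussian and mean curvature at 6 mm, 4 mm, 2 mm),
    # each with 101 integer bins -> 6 * 101 = 606 values
    parts = [np.asarray(h, dtype=np.float32) for h in hists]
    parts.append(np.array([mean_curv, gauss_curv], dtype=np.float32))
    return np.concatenate(parts)  # shape (608,)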

Here are the visualized 6 mm histograms for 10 faces (each column corresponds to one landmark point and each row to one face; there are 18 columns but only 11 classes, because for example the inner eye corners - columns 5 and 6 - are symmetric, so both columns belong to the same class): [histogram image]

I modified this example. First I tried to build a binary classifier. The resulting network can distinguish a nose tip from the remaining points, or a chin from the remaining points, very well, with an accuracy of about 98%. I then let the neural network classify all vertices of the mesh. To my surprise the output vector was always [1.0, 0.0] or [0.0, 1.0] - I would expect it to be uncertain sometimes and return, for example, [0.5, 0.5]. Question 1: Why?
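For reference, softmax only returns something close to [0.5, 0.5] when the two logits are nearly equal; once the logits differ a lot it saturates to an (almost) one-hot vector, which is exactly what I see. A quick numerical check, independent of my network:

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # shift by max for numerical stability
    return e / e.sum()

print(softmax(np.array([0.1, -0.1])))    # ~[0.55, 0.45] - near-equal logits
print(softmax(np.array([20.0, -20.0])))  # ~[1.00, 0.00] - saturated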

Now I would like to create a single neural network which tells me: "This point is the nose tip with probability 0.2, the chin with 0.7, the inner eye corner with 0.05, ...". But the accuracy gets worse as I add more classes to the output layer; with 11 classes it is only about 30%. Question 2: Why, and how can I fix it? The output probability vectors also still contain just one 1 and ten 0s.
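To check whether the hard 0/1 outputs come from very large logits, the raw logits can be inspected next to the softmax output inside the session (vertex_features is a (1, 608) row for one vertex; the variable name is mine, while prediction, probabilities and x are the tensors from the code below):

logits, probs = s.run([prediction, probabilities],
                      feed_dict={x: vertex_features})
print(logits)  # very large magnitudes here make the softmax saturate to 0/1
print(probs)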

I would appreciate any help.

Here are the train and test data. Here is my code:

import numpy as np
import tensorflow as tf

# Parameters
learning_rate = 0.001
training_epochs = 100
batch_size = 100

# Network Parameters
n_hidden_1 = 100 # 1st layer number of features
n_hidden_2 = 100 # 2nd layer number of features
n_input = 608   # input number of features
n_classes = 11  # output number of labels

# Input files
train_data_filename = "data/train.csv"
test_data_filename = "data/test.csv"
model_name = "model"

# Extract numpy representations of the labels and features given rows of the form:
#   label_0;...;label_10;feat_0;...;feat_607   (one-hot labels first, then features)
def extract_data(filename):
    labels = []
    fvecs = []

    # Iterate over the rows, splitting the label from the features. Convert labels
    # to integers and features to floats.
    with open(filename) as file:
        for line in file:
            row = list(filter(None, line.strip().split(";")))

            labels.append([int(x) for x in row[:n_classes]])
            fvecs.append([float(x) for x in row[n_classes:]])

    fvecs_np = np.array(fvecs, dtype=np.float32)
    labels_np = np.array(labels, dtype=np.uint8)

    return fvecs_np, labels_np

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.add(tf.matmul(layer_2, weights['out']), biases['out'])
    return out_layer

def main(argv=None):

    train_data, train_labels = extract_data(train_data_filename)
    test_data, test_labels = extract_data(test_data_filename)

    train_size, num_features = train_data.shape
    test_size, _ = test_data.shape

    x = tf.placeholder("float", [None, n_input], name="input_node")
    y = tf.placeholder("float", [None, n_classes])

    weights = {
        'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
        'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
        'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
    }
    biases = {
        'b1': tf.Variable(tf.random_normal([n_hidden_1])),
        'b2': tf.Variable(tf.random_normal([n_hidden_2])),
        'out': tf.Variable(tf.random_normal([n_classes]))
    }

    prediction = multilayer_perceptron(x, weights, biases)
    probabilities = tf.nn.softmax(prediction, name="output_node")

    # Define probabilities, loss and optimizer
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

    # Evaluation.
    correct_prediction = tf.equal(tf.argmax(probabilities, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

    # Create a local session to run this computation.
    with tf.Session() as s:

        tf.global_variables_initializer().run()

        print('Initialized!')
        print('Training.')

        for step in range(training_epochs * train_size // batch_size):

            offset = (step * batch_size) % train_size
            batch_data = train_data[offset:(offset + batch_size), :]
            batch_labels = train_labels[offset:(offset + batch_size)]

            _, c = s.run([optimizer, cost], feed_dict={x: batch_data, y: batch_labels})

            if step % 100 == 0:
                print('Training Step:' + str(step) + '  Accuracy =  ' + str(
                        s.run(accuracy, feed_dict={x: test_data, y: test_labels})) + '  Loss = ' + str(
                        s.run(cost, {x: train_data, y: train_labels})))

        print("Optimization Finished!")
        print("Accuracy - test:", accuracy.eval({x: test_data, y: test_labels}))
        print("Accuracy - train:", accuracy.eval({x: train_data, y: train_labels}))

if __name__ == '__main__':
    tf.app.run()
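For completeness, classifying all mesh vertices with the trained network looks roughly like this inside the same session (all_vertex_features is an (N, 608) array built the same way as the training features; the name is hypothetical):

all_probs = s.run(probabilities, feed_dict={x: all_vertex_features})
predicted_class = np.argmax(all_probs, axis=1)  # class index per vertex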