Error in Dimension for LSTM in tflearn


I am training on the PTB dataset to predict characters (i.e. a character-level LSTM).
The dimension of the training batches is [len(dataset) x vocabulary_size], where vocabulary_size = 27 (26 letters plus 1 for unknown tokens, spaces, and full stops).
This is the code that converts both the inputs (arrX) and the labels (arrY) to one-hot:

arrX = np.zeros((len(train_data), vocabulary_size), dtype=np.float32)
arrY = np.zeros((len(train_data) - 1, vocabulary_size), dtype=np.float32)
for i, x in enumerate(train_data):
    arrX[i, x] = 1
arrY = arrX[1:, :]  # labels are the inputs shifted one character ahead
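(For reference, the shift construction above can be sanity-checked on a toy sequence; the variable names mirror the snippet, and the five character ids are made up.)

```python
import numpy as np

vocabulary_size = 27
train_data = [3, 0, 7, 7, 12]  # toy character ids

arrX = np.zeros((len(train_data), vocabulary_size), dtype=np.float32)
for i, x in enumerate(train_data):
    arrX[i, x] = 1
arrY = arrX[1:, :]  # label for step i is the character at step i + 1

print(arrX.shape, arrY.shape)  # (5, 27) (4, 27)
print(arrY[0].argmax())        # 0, the second character of train_data
```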

I make placeholders for the input (X) and the labels (Y) in the graph and pass them to the tflearn LSTM. Following is the code for the graph and session:

batch_size = 256
with tf.Graph().as_default():
    X = tf.placeholder(shape=(None, vocabulary_size), dtype=tf.float32)       
    Y = tf.placeholder(shape=(None, vocabulary_size), dtype=tf.float32)      
    print (utils.get_incoming_shape(tf.concat(0, Y)))
    print (utils.get_incoming_shape(X))
    net = tflearn.lstm(X, 512, return_seq=True)
    print (utils.get_incoming_shape(net))
    net = tflearn.dropout(net, 0.5)
    print (utils.get_incoming_shape(net))
    net = tflearn.lstm(net, 256)
    net = tflearn.fully_connected(net, vocabulary_size, activation='softmax')
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(net, Y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

init = tf.initialize_all_variables()

with tf.Session() as sess:
    sess.run(init)
    offset=0
    avg_cost = 0
    total_batch = (train_length - 1) // batch_size  # integer division
    print ("No. of batches:", '%d' %total_batch)
    for i in range(total_batch) :
        batch_xs, batch_ys = trainX[offset : batch_size + offset], trainY[offset : batch_size + offset]
        sess.run(optimizer, feed_dict={X: batch_xs, Y: batch_ys})
        cost = sess.run(loss, feed_dict={X: batch_xs, Y: batch_ys})
        avg_cost += cost/total_batch
        if i % 20 == 0:
            print("Step:", '%03d' % i, "Loss:", str(cost))
        offset += batch_size    

So I get the following error:

    assert ndim >= 3, "Input dim should be at least 3."
    AssertionError: Input dim should be at least 3.

How can I resolve this error? Is there an alternative solution? Should I write a separate LSTM definition?


There are 2 answers

Overasyco:

I'm not used to this kind of dataset, but have you tried using tflearn.input_data(shape) with the tflearn.embedding layer? If you use an embedding, I suppose you won't have to reshape your data into 3 dimensions.
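A rough sketch of that idea, assuming the usual tflearn input_data/embedding signatures (the tflearn lines are left as comments; the data-prep step is plain NumPy). The key point is that an embedding layer consumes integer token ids rather than one-hot rows, and its output is already 3-D:

```python
import numpy as np

vocabulary_size = 27
seq_len = 25  # hypothetical fixed window length for training sequences

# One-hot data as in the question: one (vocabulary_size,) row per step.
# For an embedding layer, collapse each one-hot row back to its id.
one_hot_batch = np.eye(vocabulary_size, dtype=np.float32)[
    np.array([[3, 0, 7], [7, 12, 1]])]       # shape (2, 3, 27)
token_ids = one_hot_batch.argmax(axis=-1)    # shape (2, 3), integer ids

print(token_ids)

# tflearn side (sketch, not executed here):
# net = tflearn.input_data(shape=[None, seq_len])
# net = tflearn.embedding(net, input_dim=vocabulary_size, output_dim=128)
# net = tflearn.lstm(net, 512)  # embedding output is already 3-D
```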

Nilesh Birari:

The lstm layer takes its input as a 3-D tensor of shape [samples, timesteps, input dim], so you can reshape your input data to 3-D. In your problem, the shape of trainX is [len(dataset) x vocabulary_size]. Using trainX = trainX.reshape(trainX.shape + (1,)), the shape changes to [len(dataset), vocabulary_size, 1]. This data can be passed to the lstm with a simple change to the input placeholder X: add one more dimension to the placeholder with X = tf.placeholder(shape=(None, vocabulary_size, 1), dtype=tf.float32).
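A minimal sketch of that reshape in plain NumPy (the placeholder line is shown as a comment since it uses the old TF 0.x API from the question; the 1000-row array is stand-in data):

```python
import numpy as np

vocabulary_size = 27
trainX = np.zeros((1000, vocabulary_size), dtype=np.float32)  # stand-in data

# Append a trailing axis so each one-hot vector becomes a sequence of
# vocabulary_size timesteps with a single scalar feature per step:
# (samples, timesteps, input_dim).
trainX = trainX.reshape(trainX.shape + (1,))
print(trainX.shape)  # (1000, 27, 1)

# Matching placeholder change:
# X = tf.placeholder(shape=(None, vocabulary_size, 1), dtype=tf.float32)
```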