tensorboard with numpy array


Can someone give an example of how to use TensorBoard to visualize numpy array values?

There is a related question here, but I don't really get it: Tensorboard logging non-tensor (numpy) information (AUC)

For example, If I have

for i in range(100):
    foo = np.random.rand(3,2)

how can I keep tracking the distribution of foo with TensorBoard over the 100 iterations? Can someone give a code example? Thanks.

7

There are 7 answers

2
Kaixiang Lin On

Found a way to work around it: create a variable, assign the value of the numpy array to that variable, and use TensorBoard to track the variable.

import numpy as np
import tensorflow as tf

mysummary_writer = tf.train.SummaryWriter("./tmp/test/")
a = tf.Variable(tf.zeros([3, 2]), name="a")
sum1 = tf.histogram_summary("nparray1", a)
summary_op = tf.merge_all_summaries()
sess = tf.Session()

sess.run(tf.initialize_all_variables())

for ii in range(10):
    foo = np.random.rand(3, 2)
    assign_op = a.assign(foo)
    summary, _ = sess.run([summary_op, assign_op])
    mysummary_writer.add_summary(tf.Summary.FromString(summary), global_step=ii)
    mysummary_writer.flush()
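
One caveat with this workaround: a.assign(foo) inside the loop adds a new assign op to the graph on every iteration. A minimal sketch of the same idea with the assign op built once, feeding the array through a placeholder (same deprecated TF 0.x API as above; the placeholder name is made up), would be:

import numpy as np
import tensorflow as tf

mysummary_writer = tf.train.SummaryWriter("./tmp/test/")
a = tf.Variable(tf.zeros([3, 2]), name="a")
new_value = tf.placeholder(tf.float32, shape=[3, 2], name="new_value")
assign_op = a.assign(new_value)  # built once, outside the loop
tf.histogram_summary("nparray1", a)
summary_op = tf.merge_all_summaries()

sess = tf.Session()
sess.run(tf.initialize_all_variables())

for ii in range(10):
    foo = np.random.rand(3, 2)
    sess.run(assign_op, feed_dict={new_value: foo})  # update the variable first
    summary = sess.run(summary_op)                   # then log its histogram
    mysummary_writer.add_summary(summary, global_step=ii)
mysummary_writer.flush()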
1
Yaroslav Bulatov On

For simple (scalar) values, you can use this recipe:

summary_writer = tf.train.SummaryWriter(FLAGS.logdir)
summary = tf.Summary()
summary.value.add(tag=tagname, simple_value=value)
summary_writer.add_summary(summary, global_step)
summary_writer.flush()

As for the array, perhaps you can add its 6 values one at a time, i.e.

for value in foo.flatten():
  summary.value.add(tag=tagname, simple_value=float(value))
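
Putting the recipe into a loop for the array in the question, a minimal sketch (assuming a hard-coded log directory in place of FLAGS.logdir, and one tag per element so the six values show up as separate scalar curves) could look like:

import numpy as np
import tensorflow as tf

summary_writer = tf.train.SummaryWriter("./logs")

for step in range(100):
    foo = np.random.rand(3, 2)
    summary = tf.Summary()
    # one tag per element, e.g. foo/0 ... foo/5
    for idx, value in enumerate(foo.flatten()):
        summary.value.add(tag="foo/%d" % idx, simple_value=float(value))
    summary_writer.add_summary(summary, global_step=step)
summary_writer.flush()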
3
fr_andres On

If you install this package via pip install tensorboard-pytorch, it becomes as straightforward as it can get:

import numpy as np
from tensorboardX import SummaryWriter
writer = SummaryWriter()
for i in range(50):
    writer.add_histogram("moving_gauss", np.random.normal(i, i, 1000), i, bins="auto")
writer.close()

This will generate the corresponding histogram data in the runs directory:

(screenshot of the resulting moving_gauss histogram in TensorBoard)
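
Applied to the array from the question, a minimal sketch (the tag name "foo" is made up; the 3x2 array is flattened when the histogram is built) would be:

import numpy as np
from tensorboardX import SummaryWriter

writer = SummaryWriter()
for i in range(100):
    foo = np.random.rand(3, 2)
    # the 3x2 array is treated as 6 values for the histogram
    writer.add_histogram("foo", foo, i)
writer.close()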

0
Sung Kim On

Another (and the simplest) way is just to use placeholders. First, make a placeholder matching the shape of the value you want to log.

# Some place holders for summary
summary_reward = tf.placeholder(tf.float32, shape=(), name="reward")
tf.summary.scalar("reward", summary_reward)

Then just run the merged summary with session.run, passing the value through feed_dict.

# Summary
summ = tf.summary.merge_all()
...
s = sess.run(summ, feed_dict={summary_reward: reward})
writer.add_summary(s, i)
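
For the array in the question, the same pattern should work with tf.summary.histogram and a (3, 2) placeholder; a minimal, self-contained sketch (log directory and tag names are made up) might be:

import numpy as np
import tensorflow as tf

foo_ph = tf.placeholder(tf.float32, shape=(3, 2), name="foo")
tf.summary.histogram("foo", foo_ph)
summ = tf.summary.merge_all()

writer = tf.summary.FileWriter("./logs")
with tf.Session() as sess:
    for i in range(100):
        foo = np.random.rand(3, 2)
        s = sess.run(summ, feed_dict={foo_ph: foo})
        writer.add_summary(s, i)
writer.close()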
0
Anjum Sayed On

You could define a function like this (taken from gyglim's gist):

import numpy as np
import tensorflow as tf


def add_histogram(writer, tag, values, step, bins=1000):
    """
    Logs the histogram of a list/vector of values.
    From: https://gist.github.com/gyglim/1f8dfb1b5c82627ae3efcfbbadb9f514
    """

    # Create histogram using numpy
    counts, bin_edges = np.histogram(values, bins=bins)

    # Fill fields of histogram proto
    hist = tf.HistogramProto()
    hist.min = float(np.min(values))
    hist.max = float(np.max(values))
    hist.num = int(np.prod(values.shape))
    hist.sum = float(np.sum(values))
    hist.sum_squares = float(np.sum(values ** 2))

    # The proto expects one bucket_limit per bucket, and the first bucket
    # implicitly runs from -DBL_MAX to bin_edges[1]
    # See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/summary.proto#L30
    # Therefore we drop the left edge of the first bin
    bin_edges = bin_edges[1:]

    # Add bin edges and counts
    for edge in bin_edges:
        hist.bucket_limit.append(edge)
    for c in counts:
        hist.bucket.append(c)

    # Create and write Summary
    summary = tf.Summary(value=[tf.Summary.Value(tag=tag, histo=hist)])
    writer.add_summary(summary, step)

And then add to the summary writer like this:

add_histogram(summary_writer, "Histogram_Name", your_numpy_array, step)
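
For the question's setup, a minimal usage sketch (the log directory is made up; the function above accepts the 3x2 array directly, since np.histogram flattens it) could be:

import numpy as np
import tensorflow as tf

summary_writer = tf.summary.FileWriter("./logs")
for step in range(100):
    foo = np.random.rand(3, 2)
    add_histogram(summary_writer, "foo", foo, step)
summary_writer.close()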
0
Grigory Sizov On

You can plot the vector with matplotlib, convert the plot to a numpy array along the lines of https://stackoverflow.com/a/35362787/10873169, and then add it to TensorBoard as an image:

import numpy as np
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/example")

for step in range(1, 10):
    # Time-dependent vector we want to plot
    example_vector = np.sin(np.arange(100) / step)

    # Plot it in matplotlib first. Default DPI doesn't look good in Tensorboard
    fig = Figure(figsize=(5, 2), dpi=200)
    canvas = FigureCanvasAgg(fig)
    fig.gca().plot(example_vector)
    canvas.draw()

    # Get the rendered image as a flat array of bytes
    image_as_string = np.frombuffer(canvas.tostring_rgb(), dtype='uint8')
    # Need to reshape to (height, width, channels)
    target_shape = canvas.get_width_height()[::-1] + (3,)
    reshaped_image = image_as_string.reshape(target_shape)

    # Write to Tensorboard logs
    writer.add_image("example_vector", reshaped_image,
                     dataformats="HWC", global_step=step)

writer.close()

(screenshot of the resulting plots rendered as images in TensorBoard)
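
If you are on torch.utils.tensorboard anyway, add_figure can apparently do the canvas-to-array conversion for you by logging a matplotlib figure directly; a minimal sketch of that variant (the run directory name is made up) would be:

import numpy as np
import matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/example_figure")
for step in range(1, 10):
    example_vector = np.sin(np.arange(100) / step)
    fig = plt.figure(figsize=(5, 2), dpi=200)
    plt.plot(example_vector)
    # add_figure renders the figure and logs it as an image (closing it by default)
    writer.add_figure("example_vector", fig, global_step=step)
writer.close()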

0
FireFLO On
import tensorflow as tf

sess = tf.Session()
writer = tf.summary.FileWriter('tensorboard_test')
var = tf.Variable(0.0, trainable=False, name='loss')
sess.run(var.initializer)
summary_op = tf.summary.scalar('scalar1', var)

for i, value in enumerate(array):
    sess.run(var.assign(value))
    summary = sess.run(summary_op)
    writer.add_summary(summary, i)

It works, but it is slow.
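
A likely reason for the slowness is that var.assign(value) inside the loop adds a new op to the graph on every iteration. A minimal sketch of the same approach with the assign op built once, feeding each value through a placeholder (the array here is a made-up stand-in for your data), would be:

import numpy as np
import tensorflow as tf

array = np.random.rand(100)  # stand-in for the values to log

sess = tf.Session()
writer = tf.summary.FileWriter('tensorboard_test')
var = tf.Variable(0.0, trainable=False, name='loss')
new_value = tf.placeholder(tf.float32, shape=())
assign_op = var.assign(new_value)  # built once, not once per iteration
summary_op = tf.summary.scalar('scalar1', var)
sess.run(var.initializer)

for i, value in enumerate(array):
    sess.run(assign_op, feed_dict={new_value: value})
    writer.add_summary(sess.run(summary_op), i)
writer.flush()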