When I start training, the computation runs only on the CPU, and this is the message I get when it starts:

Ignoring device specification /device:GPU:0 for node 'prefetch_queue_Dequeue' because the input edge from 'prefetch_queue' is a reference connection and already has a device field set to /device:CPU:0

While GPU memory is filled up, there are only occasional spikes of GPU load; most of the time it sits at 0%. The performance, needless to say, is extremely slow.
[Screenshot: GPU load graph, mostly at 0% with occasional spikes]

It seems everything is being run on the CPU instead of the GPU, since CPU utilization is nearly 100%:

[Screenshot: CPU utilization near 100%]

I also noticed that the same issue exists on Ubuntu, but there it is at least 4 times faster than on Windows (each step takes about 400 ms versus 1300 ms on Windows).
I'm using TensorFlow 1.3.0 on both Ubuntu (14.04) and Windows, and both were installed with the pip install --upgrade tensorflow-gpu command.
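
For what it's worth, the GPU build itself seems fine: TensorFlow detects the card (see the log below). A minimal check using the TF 1.x device_lib helper confirms which devices TensorFlow can see:

from tensorflow.python.client import device_lib

# Prints every device TensorFlow can see; with a working tensorflow-gpu
# install this should include /cpu:0 and /gpu:0 (the GTX 1080 here).
print([d.name for d in device_lib.list_local_devices()])
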
Here is the whole log:

G:\Tensorflow_section\models-master\object_detection>python train.py  --logtostderr --train_dir=training_stuff --pipeline_config_path=ssd_mobilenet_v1_pets.config
INFO:tensorflow:Summary name Learning Rate is illegal; using Learning_Rate instead.
WARNING:tensorflow:From C:\Users\Master\Anaconda3\envs\anaconda35\lib\site-packages\object_detection-0.1-py3.5.egg\object_detection\meta_architectures\ssd_meta_arch.py:607: all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Please use tf.global_variables instead.
INFO:tensorflow:Summary name /clone_loss is illegal; using clone_loss instead.
2017-09-18 03:44:08.545358: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-18 03:44:08.545474: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-18 03:44:09.121357: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.835
pciBusID 0000:01:00.0
Total memory: 8.00GiB
Free memory: 6.63GiB
2017-09-18 03:44:09.121483: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:976] DMA: 0
2017-09-18 03:44:09.122196: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:986] 0:   Y
2017-09-18 03:44:09.133158: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
INFO:tensorflow:Restoring parameters from training_stuff\model.ckpt-0
2017-09-18 03:44:15.528390: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\simple_placer.cc:697] Ignoring device specification /device:GPU:0 for node 'prefetch_queue_Dequeue' because the input edge from 'prefetch_queue' is a reference connection and already has a device field set to /device:CPU:0
INFO:tensorflow:Starting Session.
INFO:tensorflow:Saving checkpoint to path training_stuff\model.ckpt
INFO:tensorflow:Starting Queues.
INFO:tensorflow:global_step/sec: 0
INFO:tensorflow:Recording summary at step 0.
INFO:tensorflow:global step 1: loss = 20.1465 (18.034 sec/step)
INFO:tensorflow:global step 2: loss = 15.8647 (1.601 sec/step)
INFO:tensorflow:global step 3: loss = 13.3987 (1.540 sec/step)
INFO:tensorflow:global step 4: loss = 11.5424 (1.562 sec/step)
INFO:tensorflow:global step 5: loss = 10.8328 (1.337 sec/step)
INFO:tensorflow:global step 6: loss = 10.7179 (1.317 sec/step)
INFO:tensorflow:global step 7: loss = 9.7616 (1.369 sec/step)
INFO:tensorflow:global step 8: loss = 8.5631 (1.336 sec/step)
INFO:tensorflow:global step 9: loss = 7.2683 (1.384 sec/step)

What is wrong and how can I resolve it?
By the way, here is some complementary information:

  • OS Platform and Distribution: Windows 10 x64 1703, Build 15063.540
  • TensorFlow installed from (source or binary): binary (used pip install )
  • TensorFlow version (use command below): 1.3.0
  • Python version: 3.5.3
  • CUDA/cuDNN version: CUDA 8.0 / cuDNN v6.0
  • GPU model and memory: GTX 1080, 8 GB

Update:
The output of the snippet suggested in the comments is as follows:

import tensorflow as tf

sess = tf.InteractiveSession()

# Put the queue (and its enqueue op) on the CPU.
with tf.device('/cpu:0'):
    q = tf.FIFOQueue(3, 'float')
    add_op = q.enqueue(42)

# Request the dequeue op on the GPU; this is what triggers the
# "Ignoring device specification" message, because the queue is a
# reference connection already pinned to /device:CPU:0.
with tf.device('/gpu:0'):
    get_op = q.dequeue()

sess.run(add_op)
print(sess.run(get_op))

Output:

2017-09-17 23:30:59.539728: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-17 23:30:59.539857: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-17 23:30:59.856904: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.835
pciBusID 0000:01:00.0
Total memory: 8.00GiB
Free memory: 6.63GiB
2017-09-17 23:30:59.857034: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:976] DMA: 0
2017-09-17 23:30:59.858320: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:986] 0:   Y
2017-09-17 23:30:59.858688: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
2017-09-17 23:30:59.879245: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\simple_placer.cc:697] Ignoring device specification /device:GPU:0 for node 'fifo_queue_Dequeue' because the input edge from 'fifo_queue' is a reference connection and already has a device field set to /job:localhost/replica:0/task:0/device:CPU:0
42.0
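
For completeness, a more direct way to see where each op actually ends up is to turn on device-placement logging when creating the session. This is a generic TF 1.x sketch, not part of the Object Detection scripts:

import tensorflow as tf

# Log the device assigned to every op when the graph runs.
config = tf.ConfigProto(log_device_placement=True)
sess = tf.Session(config=config)

a = tf.constant([1.0, 2.0], name='a')
b = tf.constant([3.0, 4.0], name='b')
print(sess.run(a + b))  # placement of 'a', 'b' and the add op is printed to stderr
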

1 Answer (accepted), by Hossein:

OK, I ran some tests on Ubuntu as well and here is what I found:
On Ubuntu the computation runs on the GPU and, compared to Windows (TensorFlow 1.3 installed using pip), it is 4 to 6 times faster.
It utilizes both the GPU and the CPU, and the load is spread fairly evenly across the CPU cores. This is not the case with the Windows version.
Under Windows it does seem to be using the GPU, since it takes up 7.1 gigabytes of VRAM, but the load on the CPU is crazy! Unlike the GPU load, which is minuscule, all CPU cores are maxed out and it chokes the whole system's performance.
Further investigation led me to this similar problem: Object detection using GPU on Windows is about 5 times slower than on Ubuntu. The answer there says it is due to some dependency issues on Windows, which TensorFlow has not done anything about since version 1.2.
So the bottom line is: if you intend to use the Object Detection API and want to utilize your GPU and resources efficiently, use Ubuntu and run your experiments there. TensorFlow doesn't seem to give a damn about Windows!