Keras verbose training progress bar writes a new line on each batch

I'm running a Dense feed-forward neural net in Keras. There are class_weights for two outputs and sample_weights for a third output. For some reason it prints the verbose progress display on a new line for each batch instead of updating the same line as it's supposed to. Has this ever happened to you? How can it be fixed? From the shell:

42336/747322 [====>.........................] - ETA: 79s - loss: 20.7154 - x1_loss: 9.5913 - x2_loss: 10.0536 - x3_loss: 1.0705 - x1_acc: 0.6930 - x2_acc: 0.4433 - x3_acc: 0.6821
143360/747322 [====>.........................] - ETA: 78s - loss: 20.7387 - x1_loss: 9.6131 - x2_loss: 10.0555 - x3_loss: 1.0702 - x1_acc: 0.6930 - x2_acc: 0.4432 - x3_acc: 0.6820
144384/747322 [====>.........................] - ETA: 78s - loss: 20.7362 - x1_loss: 9.6067 - x2_loss: 10.0608 - x3_loss: 1.0687 - x1_acc: 0.6930 - x2_acc: 0.4429 - x3_acc: 0.6817
145408/747322 [====>.........................] - ETA: 78s - loss: 20.7257 - x1_loss: 9.5985 - x2_loss: 10.0571 - x3_loss: 1.0702 - x1_acc: 0.6929 - x2_acc: 0.4428 - x3_acc: 0.6815
146432/747322 [====>.........................] - ETA: 78s - loss: 20.7145 - x1_loss: 9.5849 - x2_loss: 10.0605 - x3_loss: 1.0691 - x1_acc: 0.6932 - x2_acc: 0.4429 - x3_acc: 0.6816
147456/747322 [====>.........................] - ETA: 78s - loss: 20.7208 - x1_loss: 9.5859 - x2_loss: 10.0662 - x3_loss: 1.0688 - x1_acc: 0.6931 - x2_acc: 0.4429 - x3_acc: 0.6815
148480/747322 [====>.........................] - ETA: 78s - loss: 20.7078 - x1_loss: 9.5762 - x2_loss: 10.0636 - x3_loss: 1.0680 - x1_acc: 0.6932 - x2_acc: 0.4430 - x3_acc: 0.6815
149504/747322 [=====>........................] - ETA: 77s - loss: 20.6987 - x1_loss: 9.5749 - x2_loss: 10.0555 - x3_loss: 1.0683 - x1_acc: 0.6931 - x2_acc: 0.4430 - x3_acc: 0.6817

etc...
There are 9 answers

1
androst On

I had a similar issue but have not had the time to investigate it further. The problem seems to be related to the Progbar class in Keras' generic_utils.py (see link), and perhaps to Python >= 3.3.

The following lines are found in the update function of the class:

Line 107: sys.stdout.write('\b' * prev_total_width)
Line 108: sys.stdout.write('\r')

I simply removed line 107 as a quick fix, so instead of backspacing over the previous line and then shifting to the beginning of the line, I only perform the shift. I guess there are better ways than altering the source code, though.
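
For illustration, here is a minimal sketch (plain Python, independent of Keras) of the '\r'-only behavior this edit falls back to; it also shows why a line wider than the terminal still wraps:

    import sys
    import time

    # In-place progress printing with '\r' alone. If the printed line is
    # wider than the terminal, '\r' only returns to the start of the last
    # wrapped row, which is why Keras' bar breaks onto new lines in
    # narrow terminals.
    for i in range(101):
        bar = '=' * (i // 2)
        sys.stdout.write('\r[%-50s] %3d%%' % (bar, i))
        sys.stdout.flush()
        time.sleep(0.02)
    sys.stdout.write('\n')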

1
MacGregor On

Just make the terminal window wider. Make sure the whole "progress expression" fits on one line.

The same thing happened to me while using VS Code. I was moving the terminal window when it randomly started printing the progress status on a new line every time. When I put the terminal window back (or just made it wider), the problem was solved.
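
If you want to check programmatically whether the line fits, here is a small sketch (the 170-character figure is just a rough estimate of the length of the progress lines in the question):

    import shutil

    # shutil.get_terminal_size() falls back to 80x24 when stdout is not
    # attached to a real terminal.
    cols = shutil.get_terminal_size().columns
    needed = 170  # approximate length of the progress line above
    print(f"terminal: {cols} columns; progress line needs ~{needed}")
    if cols < needed:
        print("the bar will wrap, printing what looks like a new line per batch")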

0
TzviLederer On

In the Keras file keras/utils/generic_utils.py, lines 354-359:

            if self._dynamic_display:
                sys.stdout.write('\b' * prev_total_width)
                sys.stdout.write('\r')
            else:
                sys.stdout.write('\n')

I replaced sys.stdout.write('\n') with sys.stdout.write('\r'), and that worked for me.

3
casper.dcl On

I've added built-in support for Keras in tqdm, so you can use it instead (pip install "tqdm>=4.41.0"):

from tqdm.keras import TqdmCallback
...
model.fit(..., verbose=0, callbacks=[TqdmCallback(verbose=2)])

This turns off Keras' own progress display (verbose=0) and uses tqdm instead. For the callback, verbose=2 means separate progress bars for epochs and batches; 1 means clear batch bars when done; 0 means show only epochs (never show batch bars).
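
A fuller, self-contained version might look like this (a sketch assuming a TensorFlow-backed Keras; the toy model and random data are placeholders, not part of the original answer):

    import numpy as np
    from tensorflow import keras
    from tqdm.keras import TqdmCallback

    # Placeholder data and model, just to make the example runnable.
    x = np.random.rand(256, 8)
    y = np.random.rand(256, 1)
    model = keras.Sequential([keras.layers.Dense(1, input_shape=(8,))])
    model.compile(optimizer="adam", loss="mse")

    # verbose=0 silences Keras' own bar; TqdmCallback(verbose=2) shows
    # one bar for epochs and one per-batch bar.
    model.fit(x, y, epochs=5, batch_size=32, verbose=0,
              callbacks=[TqdmCallback(verbose=2)])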

If there are problems with it please open an issue at https://github.com/tqdm/tqdm/issues

0
KST_331 On

The workaround from androst (4 Jan 2017) did not work for me. However, I found that the lines androst cited from generic_utils.py were never executed when I ran my code, because the condition in the surrounding if-clause was always False. When I commented out the if-clause and set the corresponding variable manually to True, it worked.

This is what I changed (for me: lines 311-314 of generic_utils.py):

    #self._dynamic_display = ((hasattr(sys.stdout, 'isatty') and
    #                          sys.stdout.isatty()) or
    #                         'ipykernel' in sys.modules)
    self._dynamic_display = True   # inserted to overwrite the above (workaround by KS)

Then the progress bar worked nicely :-)
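
If you would rather not edit the installed sources, the same override can be applied as a monkey-patch from your own code (a sketch against the Keras version discussed in this thread; the module path and the _dynamic_display attribute may differ in other releases):

    import keras.utils.generic_utils as gu

    _original_init = gu.Progbar.__init__

    def _patched_init(self, *args, **kwargs):
        _original_init(self, *args, **kwargs)
        self._dynamic_display = True  # same effect as the manual edit above

    gu.Progbar.__init__ = _patched_init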

0
Gabriel Chung On

Import the library:

from tqdm.keras import TqdmCallback

Specify using tqdm library:

model.fit(xs, ys, epochs=10000, verbose=0, callbacks=[TqdmCallback(verbose=1)])

Result:

100%|██████████████| 10000/10000 [01:34<00:00, 105.57epoch/s, loss=1.56e+3]
 29%|███▌          |  2891/10000 [00:28<01:05, 108.02epoch/s, loss=1.57e+3]

In my case, verbose=1 is what actually worked. My answer is based on casper.dcl's answer.

0
tomwesolowski On

It was mentioned before, but I will rewrite it to be more visible for future users.

Your terminal is too narrow to print all these values. Set the width argument of the Progbar constructor to a smaller number, or remove/rename some of the reported metrics (see the sketch below).
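
For reference, this is how the width argument looks when driving Progbar yourself (a sketch; model.fit() constructs its own Progbar internally, so for training you would still need one of the patches from the other answers):

    from keras.utils.generic_utils import Progbar

    bar = Progbar(target=100, width=10)  # width = number of bar characters
    for i in range(100):
        bar.update(i + 1, values=[('loss', 0.5)])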

0
Panos On

This seems to be a consistent problem with Keras. I tried to find the lines

sys.stdout.write('\b' * prev_total_width)

sys.stdout.write('\r')

in the keras/utils/generic_utils.py file; they are (as of the current version) at lines 258 and 259 respectively. I commented out line 258, but that did not resolve the issue. I did manage to make the progress bar work by commenting out the line:

line 303: sys.stdout.write(info)

It seems as if the info makes the bar too long for the terminal and so it breaks to a new line.

So I finally resolved the issue. It turned out to be rather simple in the end...

Just make the terminal wider...

Note: Tested on Linux Ubuntu 16.04 | Keras Version 2.0.5

0
Antonin G. On

The solutions from @user11353683 and @Panos can be complemented so that you do not need to modify the sources (which can create future problems): just install ipykernel and import it in your code:

    pip install ipykernel

Then:

    import ipykernel

In fact, in the Keras generic_utils.py file, the problematic lines were:

            if self._dynamic_display:
                sys.stdout.write('\b' * prev_total_width)
                sys.stdout.write('\r')
            else:
                sys.stdout.write('\n')

And the value of self._dynamic_display was initialized as:

        self._dynamic_display = ((hasattr(sys.stdout, 'isatty') and
                                  sys.stdout.isatty()) or
                                 'ipykernel' in sys.modules)

So loading ipykernel added it to sys.modules and fixed the problem for me.
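
Putting it together, the workaround reduces to an import done purely for its side effect (a sketch; the Progbar import path matches the Keras version discussed in this thread):

    import ipykernel  # noqa: F401 -- imported only to populate sys.modules

    # ipykernel must be imported before any Progbar is constructed, since
    # Progbar.__init__ checks sys.modules for 'ipykernel'.
    from keras.utils.generic_utils import Progbar

    bar = Progbar(target=10)
    for i in range(10):
        bar.update(i + 1)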