Liquidsoap 1.3.1 — Mountpoint in use


When I shut down my Icecast server, there is occasionally a problem restarting it that forces me to reboot my computer.

The logs look like this:

14:52:22 soap.1 | started with pid 9817
14:52:22 soap.1 | Warning: ignored expression at line 12, char 20-96.
14:52:22 soap.1 | 2017/09/12 14:52:22 >>> LOG START
14:52:22 soap.1 | 2017/09/12 14:52:22 [main:3] Liquidsoap 1.3.1 (git://github.com/savonet/liquidsoap.git@3adeff73df0cd369401c7b46caaab058ef80880b:20170608:111503)
14:52:22 soap.1 | 2017/09/12 14:52:22 [main:3] Using: bytes=[distributed with OCaml 4.02 or above] pcre=7.2.3 dtools=0.3.3 duppy=0.6.0 duppy.syntax=0.6.0 cry=0.5.0 mm=0.3.0 xmlplaylist=0.1.4 lastfm=0.3.1 ogg=0.5.1 opus=0.1.2 speex=0.2.1 mad=0.4.5 flac=0.1.2 flac.ogg=0.1.2 dynlink=[distributed with Ocaml] lame=0.3.3 gstreamer=0.2.2 fdkaac=0.2.1 theora=0.3.1 bjack=0.1.5 alsa=0.2.3 ao=0.2.1 samplerate=0.1.4 taglib=0.3.3 camomile=0.8.5 faad=0.3.3 soundtouch=0.1.8 portaudio=0.2.1 pulseaudio=0.1.3 ladspa=0.1.5 dssi=0.1.2 lo=0.1.1
14:52:22 soap.1 | 2017/09/12 14:52:22 [gstreamer.loader:3] Loaded GStreamer 1.2.4 0
14:52:22 soap.1 | 2017/09/12 14:52:22 [frame:3] Using 44100Hz audio, 25Hz video, 44100Hz master.
14:52:22 soap.1 | 2017/09/12 14:52:22 [frame:3] Frame size must be a multiple of 1764 ticks = 1764 audio samples = 1 video samples.
14:52:22 soap.1 | 2017/09/12 14:52:22 [frame:3] Targetting 'frame.duration': 0.04s = 1764 audio samples = 1764 ticks.
14:52:22 soap.1 | 2017/09/12 14:52:22 [frame:3] Frames last 0.04s = 1764 audio samples = 1 video samples = 1764 ticks.
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "generic queue #1".
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "generic queue #2".
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "non-blocking queue #1".
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "non-blocking queue #2".
14:52:22 soap.1 | 2017/09/12 14:52:22 [ogr:3] Connecting mount ogr for source@localhost...
14:52:22 soap.1 | 2017/09/12 14:52:22 [ogr:2] Connection failed: 403, Mountpoint in use (HTTP/1.0)
14:52:22 soap.1 | 2017/09/12 14:52:22 [ogr:3] Will try again in 3.00 sec.
14:52:22 soap.1 | strange error flushing buffer ... 
14:52:22 soap.1 | strange error flushing buffer ... 
14:52:22 soap.1 | 2017/09/12 14:52:22 [threads:3] Created thread "wallclock_main" (1 total).
14:52:22 soap.1 | 2017/09/12 14:52:22 [clock.wallclock_main:3] Streaming loop starts, synchronized with wallclock.
14:52:22 soap.1 | 2017/09/12 14:52:22 [fallback_9219:3] Switch to sine_9218.

My guess is that sometimes when it shuts down, the old mountpoint isn't properly removed.

Is there a way to manually delete this mountpoint, or some other way to resolve this?

Many thanks.


1 Answer

miknik (Best Answer)

I sometimes have the same problem. For whatever reason the first instance hasn't exited cleanly and is still listening on the address/port combination of the mountpoint, preventing the new instance from binding to it. You can fix it without rebooting: find the process causing the problem, then kill it.

For example, let's say your mountpoint is listening on port 8800. You can use the lsof command to identify the old process: the -i option restricts the output to a given address/port, so you'll get something like this:

lsof -i:8800

COMMAND     PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
liquidsoa 30511 liquid   20u  IPv4 947691      0t0  TCP 192.168.1.5:8800 (LISTEN)

So here the offending PID is 30511. If you kill that process with kill -9 30511, liquidsoap should restart properly.
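(Side note, and my own suggestion rather than part of the original fix: if you'd rather not jump straight to SIGKILL, you can send the default SIGTERM first and only force-kill if the process is still around a couple of seconds later. The PID below is just the one from the example above.)

kill 30511                                     # polite shutdown request (SIGTERM)
sleep 2                                        # give it a moment to exit
kill -0 30511 2>/dev/null && kill -9 30511     # still alive? force it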

That's the basic concept covered; now let's make it a one-liner.

We can add -t to throw the terse option into the mix, telling lsof to drop the header and the fields we don't need and print only the information we're interested in: the PID(s) we want to kill:

lsof -ti:8800

30511

Our command now returns only the PID. Perfect; let's pipe it:

lsof -ti:8800 | xargs kill -9

Job done. lsof -ti:8800 should now return nothing, and liquidsoap/icecast/whatever should start properly.
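If this happens to you regularly, you could wrap the whole thing in a small pre-start script so the port is cleared automatically before liquidsoap launches. This is just a sketch of my own, not something from the original setup; the port number and the path to the .liq script are placeholders you'd need to adjust:

#!/bin/sh
# Hypothetical pre-start wrapper: free the port, then launch liquidsoap.
# PORT and the .liq path are placeholders for your own setup.
PORT=8800
PIDS=$(lsof -ti:"$PORT")
if [ -n "$PIDS" ]; then
    echo "Killing stale process(es) on port $PORT: $PIDS"
    kill -9 $PIDS          # word-splitting on $PIDS is intentional: one arg per PID
fi
exec liquidsoap /path/to/your/script.liq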