FFMPEG - RTMP to HLS no audio output


I am currently developing a dynamic HLS segmenter for our livecam application. For this, I grab the external RTMP stream and convert it into segments with ffmpeg.

The following command works:

ffmpeg -i rtmp://"$serverip"/"$application"/mp4:"$stream_name".f4v -c:v libx264 -profile:v baseline -level 5.1 \
-c:a aac -strict experimental -flags +global_header -f mpegts - | ffmpeg -i - -c copy -map 0 -f segment \
-segment_list /tmp/hls/"$id"/"$stream_name".m3u8 -segment_format libmp3lame -segment_time 10 \
-segment_wrap 4 /tmp/hls/"$id"/"$stream_name"%03d.ts

But with this command I do have a huge latency between the livestream and the HLS output (around 1-2 minutes!).

So I tried another command, which results in a latency of only 20-30 seconds! My only problem is that the audio stream is not recognized and therefore not written to the HLS files (I get the video, but no audio at all):

ffmpeg -probesize 50k -i rtmp://"$serverip"/"$application"/mp4:"$stream_name".f4v \
-c:v libx264 -b:v 128k -g 90 -c:a aac -strict experimental -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list /tmp/hls/"$id"/"$stream_name".m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts /tmp/hls/"$id"/"$stream_name"%d.ts

I thought the -c:a aac flag should take care of muxing the audio as well.

Do you have any suggestions as to what went wrong with the second command? I definitely have to segment the audio stream as well!
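In case it helps, this is how the same command would look with the streams selected explicitly instead of -map 0 (a sketch only; I am assuming the input exposes exactly one video and one audio stream):

ffmpeg -probesize 50k -i rtmp://"$serverip"/"$application"/mp4:"$stream_name".f4v \
-map 0:v:0 -map 0:a:0 -c:v libx264 -b:v 128k -g 90 -c:a aac -strict experimental -flags -global_header \
-f segment -segment_time 3 -segment_list /tmp/hls/"$id"/"$stream_name".m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts /tmp/hls/"$id"/"$stream_name"%d.ts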

Thanks in advance

Update:

Some outputs of the FFMPEG command:

I started command (2) once and got audio output, but it does not seem to work every time.

Output from command 2, audio working: http://pastebin.com/bhxfNQBg

Output from command 2, audio not working (nothing else changed): http://pastebin.com/engEuSdn

What is strange to me is the line:

[flv @ 0x1eed900] New audio stream 0:1 at pos:716680 and DTS:0s

This line only appears when the audio on the HLS side is NOT working.
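To see which streams ffmpeg actually detects within the probe window, the input itself can be checked with ffprobe (a sketch, assuming ffprobe is installed alongside ffmpeg):

ffprobe -probesize 50k -show_streams rtmp://"$serverip"/"$application"/mp4:"$stream_name".f4v

If only the video stream shows up here, the audio is not being found during probing.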

Any help will be appreciated

Update 2:
It seems like there is a problem when I start the ffmpeg command after the stream is already published.
If I follow these steps, everything works fine:
1. Start Stream (nc Connection to AMS is established)
2. Start FFMPEG command (it will idle until the stream publishes)
3. Start publishing

But if I do it this way (which is what we will need), no audio is present:
1. Start Stream
2. User join, start publishing
3. Trigger ffmpeg command

1 Answer

DerHighland:

Because more and more people have to implement HLS features, I quickly want to post my "answer" to my original question.

I figured out that the problem has something to do with the probesize.

As you can see in the documentation, probesize defines how much input data ffmpeg analyzes to detect stream information before it starts segmenting.

On a running live stream I have to set this to around 1000k for the audio stream to be detected correctly.

Now everything works as expected. But note that a higher probesize will result in higher latency!
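For reference, a sketch of command 2 with the larger probe size applied (only the probesize value changes; everything else is as in the question):

ffmpeg -probesize 1000k -i rtmp://"$serverip"/"$application"/mp4:"$stream_name".f4v \
-c:v libx264 -b:v 128k -g 90 -c:a aac -strict experimental -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list /tmp/hls/"$id"/"$stream_name".m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts /tmp/hls/"$id"/"$stream_name"%d.ts

The related -analyzeduration option (specified in microseconds) can also influence stream detection on live inputs, but the probesize change alone is what fixed it for me.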