Capturing a CMSampleBuffer using an RTCAudioSource on iOS


I'm trying to stream a combined video/audio feed of CMSampleBuffers using WebRTC on iOS, but I'm running into trouble capturing the audio. Video works just fine:

guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    print("couldn't get image from buffer :~(")
    return
}

// Wrap the pixel buffer in WebRTC's buffer type and hand it to the video source.
let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
let rtcVideoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: ._0, timeStampNs: timeStampNs)

videoSource.capturer(videoCapturer, didCapture: rtcVideoFrame)

When it comes to audio, though, I can't find any method on the RTCAudioSource class for pushing captured samples into it. Any help would be appreciated!


1 Answer

Answered by William:

I found a fork of the WebRTC codebase that solves this by adding a way for the app to feed captured audio samples into an RTCAudioDeviceModule:

https://github.com/pixiv/webrtc/blob/87.0.4280.142-pixiv0/README.pixiv.en.md
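The exact delivery API is specific to that fork, but the app-side first step is the same either way: pulling raw PCM out of the audio CMSampleBuffer so it can be handed to whatever capture entry point your WebRTC build exposes. Here is a minimal sketch using only standard CoreMedia/AVFoundation calls; what you do with the returned AVAudioPCMBuffer afterwards depends on the fork's module and is not shown:

```swift
import AVFoundation
import CoreMedia

// Convert an audio CMSampleBuffer into an AVAudioPCMBuffer using standard
// CoreMedia / AVFoundation APIs. The result holds the raw PCM samples that
// a custom audio-capture API (such as the one added in the fork above)
// would consume.
func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard
        let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
        let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription),
        let format = AVAudioFormat(streamDescription: asbd)
    else { return nil }

    let frameCount = CMSampleBufferGetNumSamples(sampleBuffer)
    guard let pcm = AVAudioPCMBuffer(pcmFormat: format,
                                     frameCapacity: AVAudioFrameCount(frameCount))
    else { return nil }
    pcm.frameLength = AVAudioFrameCount(frameCount)

    // Copy the PCM samples straight out of the sample buffer into the
    // AVAudioPCMBuffer's underlying AudioBufferList.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer,
        at: 0,
        frameCount: Int32(frameCount),
        into: pcm.mutableAudioBufferList)
    return status == noErr ? pcm : nil
}
```

Note that stock WebRTC expects 16-bit integer PCM internally, so depending on the format your capture session produces (often 32-bit float), a conversion step via AVAudioConverter may also be needed before delivery.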