Feed frames one at a time into WebRTC iOS

I am trying to make an iOS app that does some pre-processing on video from the camera and then sends it out over WebRTC. I am doing the pre-processing on each individual frame using the AVCaptureVideoDataOutputSampleBufferDelegate protocol, capturing each frame in the captureOutput method.
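
For context, here is a minimal sketch of that capture setup; the FrameSource class name, queue label, and configuration details are illustrative rather than taken from my actual app:

    import AVFoundation

    final class FrameSource: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let videoQueue = DispatchQueue(label: "video.capture")

        func start() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            session.addInput(try AVCaptureDeviceInput(device: camera))

            let output = AVCaptureVideoDataOutput()
            output.setSampleBufferDelegate(self, queue: videoQueue)
            session.addOutput(output)
            session.startRunning()
        }

        // AVFoundation calls this once per captured frame.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            // ... pre-process pixelBuffer here, then hand it off to WebRTC ...
        }
    }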

Now I need to figure out how to send it out on WebRTC. I am using the Google WebRTC library: https://webrtc.googlesource.com/src/.

There is a class called RTCCameraVideoCapturer [(link)][1] that most iOS example apps using this library seem to use. That class accesses the camera itself, so I won't be able to use it. Internally it uses AVCaptureVideoDataOutputSampleBufferDelegate, and in captureOutput it does this:

    RTC_OBJC_TYPE(RTCCVPixelBuffer) *rtcPixelBuffer =
        [[RTC_OBJC_TYPE(RTCCVPixelBuffer) alloc] initWithPixelBuffer:pixelBuffer];
    int64_t timeStampNs = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) *
        kNanosecondsPerSecond;
    RTC_OBJC_TYPE(RTCVideoFrame) *videoFrame =
        [[RTC_OBJC_TYPE(RTCVideoFrame) alloc] initWithBuffer:rtcPixelBuffer
                                                    rotation:_rotation
                                                 timeStampNs:timeStampNs];
    [self.delegate capturer:self didCaptureVideoFrame:videoFrame];

`[self.delegate capturer:self didCaptureVideoFrame:videoFrame]` appears to be the call that feeds a single frame into WebRTC.


How can I write Swift code that lets me feed frames into WebRTC one at a time, similar to how it is done in the `RTCCameraVideoCapturer` class?


  [1]: https://webrtc.googlesource.com/src/+/refs/heads/master/sdk/objc/components/capturer/RTCCameraVideoCapturer.m

1 Answer

Answered by Satoshi Nakajima:

You just need to create an instance of RTCVideoCapturer (which is just a holder for its delegate, in this case localVideoTrack.source) and call the delegate method capturer(_:didCapture:) with a frame whenever you have a pixel buffer you want to push.

Here is some sample code:

    var capturer: RTCVideoCapturer?
    let rtcQueue = DispatchQueue(label: "WebRTC")

    func appClient(_ client: ARDAppClient!, didReceiveLocalVideoTrack localVideoTrack: RTCVideoTrack!) {
        // RTCVideoCapturer is just a holder for its delegate; the track's
        // source (an RTCVideoSource) is what actually consumes the frames.
        capturer = RTCVideoCapturer(delegate: localVideoTrack.source)
    }

    func render(pixelBuffer: CVPixelBuffer, timesample: CMTime) {
        // Wrap the CVPixelBuffer in WebRTC's pixel-buffer type.
        let buffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
        rtcQueue.async { [weak self] in
            guard let capturer = self?.capturer else { return }
            // WebRTC expects the presentation timestamp in nanoseconds.
            let timeStampNs = Int64(CMTimeGetSeconds(timesample) * Double(NSEC_PER_SEC))
            let frame = RTCVideoFrame(buffer: buffer, rotation: ._0, timeStampNs: timeStampNs)
            // This delegate call is what feeds the frame into WebRTC.
            capturer.delegate?.capturer(capturer, didCapture: frame)
        }
    }