I learned how to use AVAudioEngine to process audio offline into a file, and my code works as long as I don't alter the playback rate with AVAudioUnitTimePitch.

When the rate is altered, the rendered audio has the same length as the original audio (as if the rate were unchanged). Consequently, if the audio is slowed down (rate < 1), part of it is trimmed off, and if it is sped up (rate > 1), the last portion of the rendered file is silent.
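
For reference, the nodes are connected roughly like this (a minimal sketch of the setup; the timePitch variable name and the example rate are illustrative, and audioFile is the AVAudioFile being processed, as in the code below):

import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 0.5 // rate < 1 slows playback down, rate > 1 speeds it up

engine.attach(playerNode)
engine.attach(timePitch)

// playerNode -> timePitch -> mainMixerNode
engine.connect(playerNode, to: timePitch, format: audioFile.processingFormat)
engine.connect(timePitch, to: engine.mainMixerNode, format: audioFile.processingFormat)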

Here's the code:

import AVFoundation

// Instance properties of the enclosing class:
// engine: AVAudioEngine
// playerNode: AVAudioPlayerNode
// audioFile: AVAudioFile

open func render(to destinationFile: AVAudioFile) throws {
    
    playerNode.scheduleFile(audioFile, at: nil)
    
    do {
        let buffCapacity: AVAudioFrameCount = 4096
        try engine.enableManualRenderingMode(.offline, format: audioFile.processingFormat, maximumFrameCount: buffCapacity)
    }
    catch {
        print("Failed to enable manual rendering mode: \(error)")
        throw error
    }
    
    do {
        try engine.start()
    }
    catch {
        print("Failed to start the engine: \(error)")
        throw error
    }
    
    playerNode.play()
    
    let outputBuff = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                      frameCapacity: engine.manualRenderingMaximumFrameCount)!
    
    // Render until the output reaches the length of the source file.
    while engine.manualRenderingSampleTime < audioFile.length {
        let remainingSamples = audioFile.length - engine.manualRenderingSampleTime
        let framesToRender = min(outputBuff.frameCapacity, AVAudioFrameCount(remainingSamples))
        
        do {
            let renderingStatus = try engine.renderOffline(framesToRender, to: outputBuff)
            
            switch renderingStatus {
            
            case .success:
                do {
                    try destinationFile.write(from: outputBuff)
                }
                catch {
                    print("Failed to write from file to buffer: \(error)")
                    throw error
                }
                
            case .insufficientDataFromInputNode:
                // Only applies when the engine is rendering from an input node.
                break

            case .cannotDoInCurrentContext:
                // The engine can't render right now; try again on the next iteration.
                break
            
            case .error:
                print("An error occured during rendering.")
                throw AudioPlayer.ExportError.renderingError
            
            @unknown default:
                fatalError("engine.renderOffline() returned an unknown value.")
            }
        }
        catch {
            print("Failed to render offline manually: \(error)")
            throw error
        }
    }
    
    playerNode.stop()
    engine.stop()
    engine.disableManualRenderingMode()
}

I tried to solve the issue by rendering a number of frames inversely proportional to the playback rate, but that only fixed the problem when the rate was greater than 1.

Answer (by Fabio):

Since no one has answered and many days have passed, I'm going to share how I solved the issue.

Rendering a number of frames inversely proportional to the playback rate proved effective. Initially this approach didn't work because I was doing it wrong: with rate r, the time-pitch unit turns audioFile.length input frames into audioFile.length / r output frames (rate 2 halves the duration, rate 0.5 doubles it), so the render loop must run until that output length is reached, not the input length.

Here's how to get the correct number of frames to render:

let framesToRenderCount = AVAudioFramePosition(Float(audioFile.length) / rate)
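
With this value, the loop condition compares the engine's output position against framesToRenderCount instead of audioFile.length. A minimal sketch of the adjusted loop (rate is the value set on the AVAudioUnitTimePitch node; everything else is unchanged from the code in the question):

while engine.manualRenderingSampleTime < framesToRenderCount {
    let remainingFrames = framesToRenderCount - engine.manualRenderingSampleTime
    let framesToRender = min(outputBuff.frameCapacity, AVAudioFrameCount(remainingFrames))
    // ... call engine.renderOffline(framesToRender, to: outputBuff)
    // and write outputBuff to the destination file exactly as before ...
}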