Audio enhancement quality differs for the same speech on Google Colab and a Linux machine using SpeechBrain


I am using the same https://huggingface.co/speechbrain/sepformer-wham-enhancement model to enhance speech audio. The quality of the output (enhanced) speech differs greatly depending on the platform I run it on, i.e. the same model applied to the same speech gives different enhancement quality, even though I used a GPU on both machines. The speech enhanced on the Linux machine is far better than the one enhanced on Colab.

Here's the code I am using:

import torchaudio
from IPython.display import Audio
from speechbrain.pretrained import SepformerSeparation as separator

class AudioProcessing:
    def __init__(self):
        self.separator_model = separator.from_hparams(source="speechbrain/sepformer-whamr-enhancement", savedir='pretrained_models/sepformer-whamr-enhancement')
       
    def enhance_audio(self, input_filename, output_filename):
        est_sources = self.separator_model.separate_file(path=input_filename)
        torchaudio.save(output_filename, est_sources[:, :, 0].detach().cpu(), 8000)
        print(f"Enhanced audio saved as '{output_filename}'")

if __name__ == "__main__":
    audio_processor = AudioProcessing()

    input_audio = "/home/wesee20/Documents/test/Recorded-Audio.wav"
    output_audio = "output_enhanced_audio1.wav"

    audio_processor.enhance_audio(input_audio, output_audio)
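
For reference, I load the model without specifying a device, so I am not sure which device it actually ends up on in each environment. A variant I am considering, which pins the device explicitly (assuming run_opts={"device": ...} in from_hparams is the right way to do that), looks like this:

import torch
from speechbrain.pretrained import SepformerSeparation as separator

# Pick the device explicitly so both platforms run the model
# on the GPU whenever one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"

separator_model = separator.from_hparams(
    source="speechbrain/sepformer-whamr-enhancement",
    savedir="pretrained_models/sepformer-whamr-enhancement",
    run_opts={"device": device},
)
print(f"Model loaded on: {device}")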
    

My expectation is that the quality of the enhanced audio should not differ based on the platform I use to run this code.

I am not sure whether this is because of the hardware or something else. I have experimented with running the code on the Linux machine with and without a GPU, and the enhanced speech quality stays the same there, but on Colab it differs.
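
One thing I have not yet ruled out is a mismatch in package versions or in the input file itself between the two environments. This is the kind of diagnostic check I plan to run on both machines (just a sketch; the path is my local test file from the code above):

import torch
import torchaudio
import speechbrain

# Compare these printouts between Colab and the Linux machine.
print("torch:", torch.__version__)
print("torchaudio:", torchaudio.__version__)
print("speechbrain:", speechbrain.__version__)
print("CUDA available:", torch.cuda.is_available())

# Confirm the input file looks identical on both platforms
# (sample rate, number of frames, channels).
info = torchaudio.info("/home/wesee20/Documents/test/Recorded-Audio.wav")
print("sample_rate:", info.sample_rate)
print("num_frames:", info.num_frames)
print("num_channels:", info.num_channels)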
