How to use the GPU for fine-tuning a HuggingSound custom model


I want to fine-tune my model using this code:

from huggingsound import TrainingArguments, ModelArguments, SpeechRecognitionModel, TokenSet

model = SpeechRecognitionModel("facebook/wav2vec2-large-xlsr-53")
output_dir = "my/finetuned/model/output/dir"

tokens = ["a", "b", ... "y", "z", "'"]
token_set = TokenSet(tokens)

train_data = [
    {"path": "/path/to/sagan.mp3", "transcription": "some text"},
    {"path": "/path/to/asimov.wav", "transcription": "some text"},
]
eval_data = [
    {"path": "/path/to/sagan.mp3", "transcription": "some text"},
    {"path": "/path/to/asimov.wav", "transcription": "some text"},
]


model.finetune(
    output_dir, 
    train_data=train_data, 
    eval_data=eval_data,
    token_set=token_set,
)

It runs on the CPU (system RAM), and I want to use the Colab GPU to train this model.

1 Answer

Answered by miladjurablu:

I found a way to do it:

import torch

# Select the GPU when one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SpeechRecognitionModel("facebook/wav2vec2-large-xlsr-53", device=device)

With that, the model is trained on the GPU.
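Putting it together, here is a minimal sketch of the device-selection pattern applied to the original fine-tuning code. The try/except guards are only there so the snippet stays runnable in environments where torch or huggingsound are not installed; in Colab, with both installed, the guarded branches are taken and the model trains on the GPU. The output directory and data paths are placeholders from the question, not real files.

```python
# Pick "cuda" when torch reports an available GPU, otherwise "cpu".
device = "cpu"
try:
    import torch
    if torch.cuda.is_available():
        device = "cuda"
except ImportError:
    pass  # torch not installed here; stay on CPU

model = None
try:
    from huggingsound import SpeechRecognitionModel, TokenSet

    # Passing device= at construction places the model (and training) there.
    model = SpeechRecognitionModel("facebook/wav2vec2-large-xlsr-53",
                                   device=device)
except ImportError:
    pass  # huggingsound not installed in this environment

print(device)  # "cuda" on a Colab GPU runtime, "cpu" otherwise
```

On a Colab GPU runtime the same `model.finetune(...)` call from the question then runs on the GPU without any further changes, since the training loop uses the device the model was constructed with.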