Inference execution in Hugging Face transformers.js


I found some code that works fine in Python.

# Excerpt from a class; requires `import torch` and `import torch.nn.functional as F`.
def text_classification_inference(self, input_text):
    if not self.model or not self.tokenizer or not self.id2label:
        print('Something went wrong: model, tokenizer, or id2label is missing!')
        return

    pt_batch = self.tokenizer(
        input_text,
        padding=True,
        truncation=True,
        max_length=self.config.max_position_embeddings,
        return_tensors="pt"
    )

    pt_outputs = self.model(**pt_batch)
    pt_predictions = torch.argmax(F.softmax(pt_outputs.logits, dim=1), dim=1)

    output_predictions = []
    for i, sentence in enumerate(input_text):
        output_predictions.append((sentence, self.id2label.get(pt_predictions[i].item())))
    return output_predictions

The code comes from this GitHub repository: https://github.com/Mofid-AI/persian-nlp-benchmark/blob/main/text_classification.py

I want to do the same thing with the Transformers library in JavaScript (transformers.js). I tried the pipeline API in JS successfully, but it throws an error when the input is too long, so I need the equivalent of `truncation=True` for the tokenizer in JS. I'm a beginner in both Python and JavaScript. Thanks in advance.
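For reference, a minimal sketch of what the Python flow might look like with the `@xenova/transformers` package, bypassing the pipeline so the tokenizer options can be set explicitly. This is an assumption-laden sketch, not a verified answer: the tokenizer options (`padding`, `truncation`, `max_length`) mirror the Python kwargs and are assumed to be supported by the JS tokenizer, `modelId` is a placeholder, and the argmax over the raw logits buffer replaces the Python `softmax` + `argmax` (softmax does not change which index is largest, so it can be skipped for classification):

```javascript
// Pure helper: row-wise argmax over a flat logits buffer of shape [rows, cols].
function argmaxRows(data, rows, cols) {
  const out = [];
  for (let i = 0; i < rows; i++) {
    let best = 0;
    for (let j = 1; j < cols; j++) {
      if (data[i * cols + j] > data[i * cols + best]) best = j;
    }
    out.push(best);
  }
  return out;
}

// Sketch of the Python method in transformers.js; modelId is hypothetical.
async function textClassificationInference(modelId, texts) {
  const { AutoTokenizer, AutoModelForSequenceClassification } =
    await import('@xenova/transformers');

  const tokenizer = await AutoTokenizer.from_pretrained(modelId);
  const model = await AutoModelForSequenceClassification.from_pretrained(modelId);

  // Mirrors the Python tokenizer kwargs: padding, truncation, max_length.
  const inputs = tokenizer(texts, {
    padding: true,
    truncation: true,
    max_length: model.config.max_position_embeddings,
  });

  const { logits } = await model(inputs);
  const [rows, cols] = logits.dims;
  const preds = argmaxRows(logits.data, rows, cols);

  // Pair each sentence with its predicted label, like the Python version.
  return texts.map((t, i) => [t, model.config.id2label[preds[i]]]);
}
```

`argmaxRows` is a plain helper so the label-selection logic is testable without downloading a model; the rest depends on the transformers.js API behaving as assumed above.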
