How to improve the output of a fine-tuned Open Llama 7b model for text generation?


I am trying to fine-tune an OpenLLaMA model with Hugging Face's PEFT and LoRA. I fine-tuned the model on a specific dataset. However, the output from model.generate() is very poor for the given input. When I give a whole sentence from the dataset it generates related text, but otherwise it does not. Is there any way to improve it?
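For context, here is a minimal sketch of how such a LoRA-fine-tuned model is typically loaded and prompted for generation, assuming the adapter was saved with PEFT; the adapter path, prompt template, and sampling settings below are illustrative assumptions, not taken from my actual setup:

```python
# Sketch: load a PEFT/LoRA adapter and generate text with explicit sampling
# parameters. "path/to/lora-adapter" and the prompt template are hypothetical.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_path = "path/to/lora-adapter"  # directory produced by trainer.save_model()

tokenizer = AutoTokenizer.from_pretrained(adapter_path)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_path,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Prompt formatted the same way as the training examples; a mismatch between
# the training template and the inference prompt often degrades output quality.
prompt = "### Instruction:\nSummarize the following text.\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```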


There are 0 answers