TechQA Question List
What is the difference between PEFT and RAFT?
21 views
Asked by Krishna
Accuracy at 0 during inference with PEFT and VisionEncoderDecoderModel from huggingface
29 views
Asked by user21970358
PyTorch: AttributeError: 'torch.dtype' object has no attribute 'itemsize'
589 views
Asked by Lidor Eliyahu Shelef
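For the itemsize question above: torch.dtype.itemsize was only added around PyTorch 2.1, so older installs raise this AttributeError when newer library code touches it. Upgrading PyTorch is the direct fix; a version-independent sketch for reading a dtype's byte width:

```python
import torch

dt = torch.float16

# torch.dtype.itemsize only exists in newer PyTorch (>= 2.1); older versions
# raise AttributeError. A version-independent way to get a dtype's byte width:
size_bytes = torch.empty((), dtype=dt).element_size()
print(size_bytes)  # -> 2 for float16
```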
Repo id must use alphanumeric chars: while performing auto-training on an LLM
36 views
Asked by Ankur Kumar
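For the repo-id question above: Hub repo names accept only letters, digits, '-', '_' and '.', so passing a local path or a name containing spaces (a common slip when an output directory is reused as the model id) triggers this error. A small illustration with placeholder names:

```python
from huggingface_hub import create_repo

# Valid: "namespace/name", where name uses only alphanumerics plus '-', '_', '.'
create_repo("my-username/llama2-qlora-finetuned")  # placeholder repo id

# Invalid ids that raise "Repo id must use alphanumeric chars":
#   "./outputs/my model"   (path separators and a space)
#   "my model v2"          (spaces)
```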
Struggling with Hugging Face PEFT
81 views
Asked by countermode
'MistralForCausalLM' object has no attribute 'merge_and_unload'
225 views
Asked by NetForceProduction
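For the merge_and_unload error above: that method lives on peft's PeftModel wrapper, not on the underlying MistralForCausalLM, so it disappears if the checkpoint is loaded with plain transformers. A minimal sketch, assuming a LoRA adapter at a placeholder path:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Plain transformers object: this has no merge_and_unload().
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# Wrap it with the trained adapter; the result is a PeftModel.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder path

# merge_and_unload() is defined on PeftModel: it folds the LoRA weights into
# the base model and returns the plain transformers model.
merged = model.merge_and_unload()
```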
convert a PeftModel back to the original model but with updated weights
77 views
Asked by afsara_ben
finetune a model with LoRA, then load it in its vanilla architecture
79 views
Asked by afsara_ben
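The previous two questions point the same way: merge the adapter into the base weights, save the result as an ordinary checkpoint, and it then loads in the vanilla architecture with no peft dependency. A sketch with placeholder ids and paths:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model-id")          # placeholder
peft_model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # placeholder

# Fold the LoRA deltas into the base weights; the return value has the
# original architecture, just with updated weights.
merged = peft_model.merge_and_unload()
merged.save_pretrained("merged-model")

# Later: load with the vanilla class, no peft import needed.
vanilla = AutoModelForCausalLM.from_pretrained("merged-model")
```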
how to save adapter.bin model as .pt model
34 views
Asked by afsara_ben
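For the adapter.bin question, assuming it refers to the adapter_model.bin that older peft versions write: that file is already a torch-serialized state dict, so re-saving it (or the merged model's full state dict) under a .pt name is a one-liner:

```python
import torch

# adapter_model.bin is already a torch-serialized state dict, so it can be
# loaded and re-saved under a .pt name directly:
state = torch.load("adapter_model.bin", map_location="cpu")
torch.save(state, "adapter_model.pt")

# For the *whole* model (base weights + merged adapter), save the merged
# model's state dict instead, e.g.: torch.save(merged.state_dict(), "model.pt")
```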
Resume training from a checkpoint with different hyperparameters when training with PEFT and transformers
104 views
Asked by Jon Flynn
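For the resume question above: trainer.train(resume_from_checkpoint=...) restores the saved optimizer and LR-scheduler state, so new hyperparameters in TrainingArguments are largely ignored. One workaround, sketched here with placeholder paths and assuming the checkpoint directory contains the adapter files (as recent transformers versions write for PEFT models), is to reload only the weights and start a fresh run:

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
from peft import PeftModel

# Reload just the weights: base model plus the adapter from the checkpoint.
base = AutoModelForCausalLM.from_pretrained("base-model-id")   # placeholder
model = PeftModel.from_pretrained(base, "out/checkpoint-500",  # placeholder
                                  is_trainable=True)

train_dataset = ...  # your tokenized dataset (omitted in this sketch)

# Fresh arguments carry the *new* hyperparameters; because we do not pass
# resume_from_checkpoint, the old optimizer/scheduler state is discarded.
args = TrainingArguments(output_dir="out-continued", learning_rate=5e-5,
                         per_device_train_batch_size=4, num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```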
Huggingface transformers train function throwing "Device() received an invalid combination of arguments"
259 views
Asked by Syed Mohammad Fahim Abrar
Why no log for training model, and key_error for 'eval_loss'?
60 views
Asked by Therrief
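For the logging/eval_loss question above: "no log" usually means logging never fired, and KeyError: 'eval_loss' usually means evaluation never ran or the Trainer could not find the labels, a common pitfall with PEFT-wrapped models. A hedged TrainingArguments sketch (the argument is named evaluation_strategy up to transformers 4.40, eval_strategy in later releases):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    logging_steps=10,             # log training loss every 10 steps
    evaluation_strategy="steps",  # actually run evaluation...
    eval_steps=50,                # ...every 50 steps
    label_names=["labels"],       # PEFT-wrapped models often need this set
                                  # explicitly, or eval_loss is never computed
)
```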
How do I save a huggingface LLM model into shards?
138 views
Asked by Muhammad Omar Farooq
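For the sharding question above, save_pretrained handles it directly; a sketch with a placeholder model id:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("model-id")  # placeholder id

# save_pretrained splits any checkpoint larger than max_shard_size into
# numbered shard files plus an index that maps each weight to its shard.
model.save_pretrained("sharded-model", max_shard_size="2GB")
```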
Running out of memory during PEFT LoRA fine-tuning of LLMs with 7B parameters
193 views
Asked by Sebastian Simon
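For the out-of-memory question above, the usual remedy is the QLoRA-style recipe: load the frozen base model in 4-bit and train only the LoRA adapters. A sketch with a placeholder model id (target_modules assumes Llama/Mistral-style attention names):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen base model in 4-bit to cut weight memory roughly 4x.
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_compute_dtype=torch.bfloat16,
                         bnb_4bit_quant_type="nf4")
model = AutoModelForCausalLM.from_pretrained("model-id",  # placeholder id
                                             quantization_config=bnb,
                                             device_map="auto")

# Enables gradient checkpointing and prepares norms for stable k-bit training.
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],  # assumption: Llama-style names
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # should report well under 1% trainable
```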
using PEFT after bitsandbytes seems to have no effect on LLM
47 views
Asked by user3476463
OSError: ./peft-dialogue-summary-checkpoint-local does not appear to have a file named config.json
37 views
Asked by ZandaJr
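For the config.json OSError above: a PEFT checkpoint directory contains adapter_config.json plus adapter weights, not the config.json that AutoModel.from_pretrained() looks for. Load the base model first, then attach the adapter (sketch; the base id is a placeholder, and the Auto class should match the base architecture, e.g. AutoModelForSeq2SeqLM for T5-style models):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# The adapter directory has no config.json, so it cannot be loaded directly
# with an Auto class; wrap the base model with the adapter instead.
base = AutoModelForCausalLM.from_pretrained("base-model-id")  # placeholder
model = PeftModel.from_pretrained(base, "./peft-dialogue-summary-checkpoint-local")
```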
padding_idx or self.num_embeddings gets changed into a string while finetuning Llama 2
29 views
Asked by sanminchui
Is it possible to fine-tune the model nllb200_1.3B in Google Colab?
114 views
Asked by Maximiliano Ramirez
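For the NLLB question above: 1.3B parameters in fp32 is roughly 5 GB of weights alone, so a full fine-tune will not fit a free-tier Colab GPU once optimizer state is added, but quantized loading plus LoRA usually does. A sketch (target_modules assumes NLLB's M2M100-style attention names):

```python
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 8-bit to fit Colab-class memory.
bnb = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-1.3B",
                                              quantization_config=bnb,
                                              device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"])  # assumption: M2M100-style names
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```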