How to merge LoRA weights to an LLM at initialization time on-device (Gemma)


I'm experimenting with something on the web with MediaPipe that requires having multiple LoRA files, each trained for a different task. I want to select a LoRA file and merge it into Gemma at initialization time, locally in the browser. I went through the code and saw some .proto files with lora_path and lora_rank fields, but I haven't seen any exposed parameter on the LlmInference class or its options that lets me specify a LoRA file.

One option could (maybe) be to use LlmGPUCalculatorOptions.lora_path. However, the current API doesn't expose anything that makes this possible, and I don't even know whether it could work, i.e. whether that field is meant for this purpose. Will this option work? If not, how can I achieve this?
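For reference, the kind of API I'm hoping for would look roughly like the sketch below. Note that the `loraRanks` option, the `loadLoraModel` method, and the extra `generateResponse` argument are guesses on my part, extrapolated from the proto fields I found; I have not confirmed that any of them exist in the published web package:

```typescript
// Hypothetical sketch only: `loraRanks`, `loadLoraModel`, and passing a
// LoRA model to `generateResponse` are assumptions, not confirmed API.
import {FilesetResolver, LlmInference} from '@mediapipe/tasks-genai';

async function initGemmaWithLora(loraUrl: string): Promise<string> {
  const genai = await FilesetResolver.forGenAiTasks(
      'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm');

  const llm = await LlmInference.createFromOptions(genai, {
    baseOptions: {modelAssetPath: '/models/gemma-2b-it-gpu-int4.bin'},
    // Ranks the runtime should be prepared to apply (assumed option,
    // mirroring the lora_rank proto field).
    loraRanks: [4, 8],
  });

  // Load the task-specific LoRA weights at init time (assumed method,
  // mirroring the lora_path proto field).
  const loraModel = await llm.loadLoraModel(loraUrl);

  // Run inference with the selected LoRA applied (assumed signature).
  return llm.generateResponse('Hello', loraModel);
}
```

In other words, I'd like to pick one of several task-specific LoRA files at startup and have it applied to the base Gemma weights before any generation happens.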


There are 0 answers