Is it possible to fine-tune or use RAG with the Core ML version of Llama 2?

I recently came across the Core ML version of Llama 2 and I'm trying to see if I can fine-tune it or use RAG with it. Specifically, for the RAG component, I'm trying to build an iOS Swift application that initializes an embedding database with data the user enters, so that Llama 2 has context from large amounts of (non-sensitive) user data when answering questions. There isn't much documentation around this, so I was hoping to find out whether it's possible and, if so, how I can get started.
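
To make the retrieval step concrete, here is a minimal sketch of the kind of embedding store I have in mind (not tested against the Core ML build of Llama 2). It uses Apple's built-in `NLEmbedding` sentence embeddings purely for illustration, and the names (`EmbeddingStore`, `topMatches`) are made up; a real app would presumably persist the vectors and use an embedding model better matched to Llama 2.

```swift
import Foundation
import NaturalLanguage

// Minimal in-memory store for the retrieval step of RAG.
struct EmbeddingStore {
    var entries: [(text: String, vector: [Double])] = []
    let embedding = NLEmbedding.sentenceEmbedding(for: .english)

    // Embed a chunk of user-entered text and keep it for later lookup.
    mutating func add(_ text: String) {
        guard let vector = embedding?.vector(for: text) else { return }
        entries.append((text, vector))
    }

    // Return the `k` stored chunks most similar to the query,
    // ranked by cosine similarity.
    func topMatches(for query: String, k: Int = 3) -> [String] {
        guard let queryVector = embedding?.vector(for: query) else { return [] }
        return entries
            .map { (text: $0.text, score: cosineSimilarity(queryVector, $0.vector)) }
            .sorted { $0.score > $1.score }
            .prefix(k)
            .map { $0.text }
    }

    private func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
        let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
        let magA = sqrt(a.reduce(0) { $0 + $1 * $1 })
        let magB = sqrt(b.reduce(0) { $0 + $1 * $1 })
        guard magA > 0, magB > 0 else { return 0 }
        return dot / (magA * magB)
    }
}

// The retrieved chunks would then be prepended to the prompt sent to Llama 2.
var store = EmbeddingStore()
store.add("The user's favorite hiking trail is in the hills north of town.")
store.add("The user works as a pastry chef.")
print(store.topMatches(for: "Where does the user like to hike?"))
```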

There is 1 answer:

Answered by Jeshua Lacock (best answer):

Core ML 3 added the ability to fine-tune models on device, but it comes with a number of limitations (a minimal usage sketch follows the list):

  1. Only convolution and fully-connected layers can be trained.
  2. There are only two loss functions: cross-entropy and mean squared error (MSE).
  3. There are only two optimizers: SGD and Adam.
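
To give a sense of what the training call looks like, here is a minimal sketch of an on-device update with `MLUpdateTask`. The model URL, training data, and file names are placeholders, and given limitation 1 this realistically applies to a small updatable head (convolution or fully-connected layers) rather than the full Llama 2 transformer.

```swift
import CoreML

// Minimal sketch of a Core ML 3 on-device update with MLUpdateTask.
// The model must have been exported as *updatable*; "UpdatedModel.mlmodelc"
// is just a placeholder name for where the result is saved.
func fineTuneOnDevice(modelURL: URL, trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,          // compiled, updatable .mlmodelc
        trainingData: trainingData,    // MLBatchProvider of labeled examples
        configuration: MLModelConfiguration(),
        completionHandler: { context in
            // Save the updated parameters so the next launch loads them.
            let savedURL = modelURL
                .deletingLastPathComponent()
                .appendingPathComponent("UpdatedModel.mlmodelc")
            try? context.model.write(to: savedURL)
        }
    )
    task.resume()
}
```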

For an in-depth tutorial about on-device fine-tuning, please see:

https://machinethink.net/blog/coreml-training-part1/