I'm running supervised fine-tuning (SFT) of the Mistral 7B model. The input data is a JSON file containing a list of dictionaries, each formatted like this: `{"prefix": [...], "system": null}`

I'm running the SFT commands in an EC2 environment. First, I ran the SFT with ~500 examples, and it went fine, though the model apparently didn't learn anything. Then I added 10k more training examples in the same format. Training started, got through about 10%, and then failed with this error:

```
. . .
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Utterance
text
   str type expected (type=type_error.str)
. . .
```
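From what I understand, this error comes from pydantic validating each record: somewhere an `Utterance` object is being built whose `text` field is not a string. A minimal sketch that reproduces the same message under that assumption (the `Utterance` model here is my guess at what the library defines internally; pydantic v1, as `error_wrappers` in the traceback suggests):

```python
from pydantic import BaseModel, ValidationError

# Hypothetical stand-in for the library's Utterance model, guessed from
# the traceback (pydantic v1 style).
class Utterance(BaseModel):
    text: str

Utterance(text="hello")  # fine

try:
    Utterance(text=["hello"])  # non-string value
except ValidationError as e:
    print(e)  # 1 validation error for Utterance ... str type expected
```

If I read pydantic v1's behavior right, it silently coerces numbers to strings for a `str` field, so the offending value is more likely a list or dict than a stray float.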

I tried data files of different lengths (1k-16k examples), but I get the same error every time. Can anyone please help?

Things I've already tried:

  1. I prepare the data as a CSV (with two columns, 'question' and 'answer') in pandas and then convert the CSV to JSON. I ensured that the values in each column are strings, both by running `df['pref_answer'] = df['pref_answer'].astype("string")` and by passing `dtype='string'` to `pd.read_csv` (a sketch of the conversion is below this list).

  2. I checked that my CSV file has no NaN values, empty strings, or floats.

  3. Since there are special characters in the data file, I saved the CSV in UTF-8.
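For context, a minimal sketch of my conversion step (file paths are placeholders, and building `prefix` as a `[question, answer]` pair is my assumption about the expected layout; my answer column is actually called `pref_answer`):

```python
import json
import pandas as pd

# Read everything as pandas string dtype up front.
df = pd.read_csv("data.csv", dtype="string", encoding="utf-8")
df["pref_answer"] = df["pref_answer"].astype("string")

# Sanity checks: no missing values and no empty strings.
assert not df.isna().any().any(), "NaN/NA values found"
assert not (df == "").any().any(), "empty strings found"

# Build the record format shown above.
records = [
    {"prefix": [row["question"], row["pref_answer"]], "system": None}
    for _, row in df.iterrows()
]
with open("train.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False)
```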

I'm still getting the above error.

I ran another experiment:

  1. I ran the training on the first 500 examples. It ran successfully.
  2. I ran another training on a different set of 500 examples (indices 500-1000). It also ran successfully.
  3. Then I combined the two sets (500 + 500) and ran training on the resulting 1k examples. This training failed with the same pydantic validation error:

```
pydantic.error_wrappers.ValidationError: 1 validation error for Utterance
text
   str type expected (type=type_error.str)
```

I find it bizarre that this would be a size issue. Any insight?
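In case it's useful, here is a sketch of the check I'd run next to hunt for the offending record: it walks every value in the JSON file and reports any leaf that is neither a string nor null (the field layout is assumed from my example format above; `train.json` is a placeholder path):

```python
import json

with open("train.json", encoding="utf-8") as f:
    records = json.load(f)

def find_non_strings(obj, path=""):
    """Yield (path, value) for every leaf that is neither str nor None."""
    if isinstance(obj, dict):
        for key, val in obj.items():
            yield from find_non_strings(val, f"{path}.{key}")
    elif isinstance(obj, list):
        for i, val in enumerate(obj):
            yield from find_non_strings(val, f"{path}[{i}]")
    elif obj is not None and not isinstance(obj, str):
        yield path, obj

for i, record in enumerate(records):
    for path, val in find_non_strings(record, f"records[{i}]"):
        print(f"{path}: {type(val).__name__} -> {val!r}")
```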
