I want to train my Keras model on GCP.
My code:
This is how I load the dataset:
dataset = pandas.read_csv('USDJPY.fx5.csv', usecols=[2, 3, 4, 5], engine='python')
This is how I trigger the cloud training:
import tensorflow_cloud as tfc

job_labels = {"job": "forex-usdjpy", "team": "xxx", "user": "xxx"}
tfc.run(
    requirements_txt="./requirements.txt",
    job_labels=job_labels,
    stream_logs=True,
)
This runs right before my model definition, which shouldn't make much of a difference:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(4, input_shape=(1, 4)))
model.add(Dropout(0.2))
model.add(Dense(4))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=1, batch_size=1, verbose=2)
Everything works: the Docker image for my model is being created, but the USDJPY.fx5.csv file is not uploaded along with it, so I get a file-not-found error.
What is the proper way of loading custom files into the training job? I uploaded the training data to a Cloud Storage bucket, but I wasn't able to tell Google to look there.
It turns out it was a problem with my GCP configuration. Here are the steps I took to make it work:
1. Create a Cloud Storage (GCS) bucket and make the files inside it public so the training job can access them.
2. Add fsspec and gcsfs to requirements.txt (see the snippet after these steps).
3. Point pandas.read_csv at the file's gs:// path and remove the 'engine' parameter, like so (with <your-bucket> replaced by the bucket name):

dataset = pandas.read_csv('gs://<your-bucket>/USDJPY.fx5.csv', usecols=[2, 3, 4, 5])
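For step 2, the requirements file only needs those two extra entries on top of whatever the script already imports, e.g.:

# requirements.txt - installed into the training image
# (keep whatever else the script already needs, e.g. pandas)
fsspec
gcsfs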
Since you are uploading the Python file itself to GCP, a good way to organize your code is to put all of the training logic into a function and call it conditionally based on the cloud-train flag (see the sketch below):
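For example, a minimal sketch of that layout (the train() name is arbitrary; tfc.remote() is used here as the flag and should return True once the script is executing on GCP):

import tensorflow_cloud as tfc

job_labels = {"job": "forex-usdjpy", "team": "xxx", "user": "xxx"}


def train():
    # data loading, model definition and model.fit(...) all go in here
    ...


if tfc.remote():
    # running inside the GCP training job
    train()
else:
    # running locally: just package the script and submit the job
    tfc.run(requirements_txt="./requirements.txt",
            job_labels=job_labels,
            stream_logs=True)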
Here is the whole working code, in case anyone is interested.
NOTE: This is probably not an optimal LSTM configuration, so take it with a grain of salt.
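With that caveat, a sketch of how everything fits together, reassembled from the snippets above. The <your-bucket> name and the way trainX/trainY are built from the dataset (predicting the next row from the current one) are illustrative assumptions rather than the exact original code:

import pandas
import tensorflow_cloud as tfc
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.models import Sequential

job_labels = {"job": "forex-usdjpy", "team": "xxx", "user": "xxx"}


def train():
    # Load the OHLC columns straight from the GCS bucket (needs fsspec + gcsfs)
    dataset = pandas.read_csv('gs://<your-bucket>/USDJPY.fx5.csv',
                              usecols=[2, 3, 4, 5])
    values = dataset.values.astype('float32')

    # Illustrative framing: predict the next bar from the current one.
    # The LSTM expects input shaped (samples, timesteps, features) = (n, 1, 4).
    trainX = values[:-1].reshape(-1, 1, 4)
    trainY = values[1:]

    model = Sequential()
    model.add(LSTM(4, input_shape=(1, 4)))
    model.add(Dropout(0.2))
    model.add(Dense(4))
    model.compile(loss='mean_squared_error', optimizer='adam')
    model.fit(trainX, trainY, epochs=1, batch_size=1, verbose=2)


if tfc.remote():
    # running inside the GCP training job
    train()
else:
    # running locally: just submit the training job to GCP
    tfc.run(requirements_txt="./requirements.txt",
            job_labels=job_labels,
            stream_logs=True)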