kerastuner INFO:tensorflow:Oracle triggered exit


When using keras tuner to optimize my UNET AI-model, I get the following message in the terminal:

{'conv_blocks1': 2, 'filters1_0': 240, 'conv_blocks2': 3, 'filters2_0': 136, 'bottle': 4, 'filtersbot_0': 184, 'filtersbot_1': 32, 'filters2_1': 200, 'filters1_1': 208, 'filtersbot_2': 8, 'filtersbot_3': 8, 'filters2_2': 8}

<IPython.core.display.HTML object>
(the line above repeated for each epoch of the trial)
INFO:tensorflow:Oracle triggered exit 

It completes the first iteration, and then "INFO:tensorflow:Oracle triggered exit" appears. How do I get Keras Tuner to loop through all iterations instead of exiting after the first one? Here is my code:

import pickle
import time
from tensorflow import keras
from kerastuner.tuners import RandomSearch

with open("C:\\Users\\joko9\\Documents\\Python\\AI brus\\pickled_mnist.pkl", "br") as fh:
    data = pickle.load(fh)

image_size=28
train_imgs=data[0]
test_imgs= data[1]
train_noise_imgs= data[2]
test_noise_imgs= data[3]

#(xtrain,ytrain),(xtest,ytest)=fashion_mnist.load_data()
#xtrain=xtrain.reshape(60000,28,28,1)

LOG_DIR=f"{int(time.time())}"

def UNet(hp):
    inputs=keras.layers.Input((image_size,image_size,1))
    xd1=inputs
    
    for i in range(hp.Int('conv_blocks1',2,4,1)):
        filters1=hp.Int('filters1_'+str(i),8,256,8)
        xd1=keras.layers.Conv2D(filters1,kernel_size=(3,3),padding='same',strides=1,activation= 'relu')(xd1)
        xd2=keras.layers.MaxPool2D((2, 2),(2, 2))(xd1)
        
        for j in range(hp.Int('conv_blocks2',2,4,1)):
             filters2=hp.Int('filters2_'+str(j),8,256,8)
             xd2=keras.layers.Conv2D(filters2,kernel_size=(3,3),padding='same',strides=1,activation= 'relu')(xd2)
             xb=keras.layers.MaxPool2D((2, 2),(2, 2))(xd2)
             
             for k in range(hp.Int('bottle',2,4,1)):
                 filtersbot=hp.Int('filtersbot_'+str(k),8,256,8)
                 xb=keras.layers.Conv2D(filtersbot,kernel_size=(3,3),padding='same',strides=1,activation= 'relu')(xb)
                 
             xu2=keras.layers.UpSampling2D((2, 2))(xb)
             concat=keras.layers.Concatenate()([xu2,xd2])
             xu2=keras.layers.Conv2D(filters2,kernel_size=(3,3),padding='same',strides=1,activation='relu')(concat)
        
        xu1=keras.layers.UpSampling2D((2, 2))(xu2)
        concat=keras.layers.Concatenate()([xu1,xd1])
        xu1=keras.layers.Conv2D(filters1,kernel_size=(3,3),padding='same',strides=1,activation='relu')(concat)
        
    outputs=keras.layers.Conv2D(1,(1,1),padding='same',activation='sigmoid')(xu1)
    model=keras.models.Model(inputs,outputs)
    model.compile(optimizer='adam',loss='binary_crossentropy',metrics=["acc"])
        
    return model
        
    
    
tuner = RandomSearch(UNet, objective="acc", max_trials=1, executions_per_trial=1, directory=LOG_DIR)
tuner.search(x=train_noise_imgs, y=train_imgs, epochs=1)

1 Answer

Answered by ttt:

I've run into a similar problem. The tuner needs a location on disk to store the files containing all the trial parameters, and I point it at a local directory on my computer. If I re-run the tuner without changing that path, the oracle finds the completed results from the previous run and I get "INFO:tensorflow:Oracle triggered exit" right away. Sample code that I'm using:

    tuner = Hyperband(
        build_model,
        objective='val_accuracy',
        max_epochs=15,
        directory='/Users/.../test2',
        hyperparameters=hp,
        project_name='CPT_recognition',
    )

To solve it, I change the directory from "/Users/.../test1" to "/Users/.../test2" and the search starts running again.