I am fairly new to deep learning, but I managed to build a multi-branch image classification architecture that yields quite satisfactory results.

Background (not essential): I am working on KKBox customer churn prediction (https://kaggle.com/c/kkbox-churn-prediction-challenge/data), where I transform customer behavior, transaction and static data into heatmaps and try to classify churners based on them.

The classification itself works just fine. My issue arises when I try to apply LIME to see where the results come from. I follow the code here: https://marcotcr.github.io/lime/tutorials/Tutorial%20-%20images.html, with the exception that I pass a list of inputs, [members[0],transactions[0],user_logs[0]], and I get the following error: AttributeError: 'list' object has no attribute 'shape'

What springs to mind is that LIME is probably not made for multi-input architectures such as mine. On the other hand, Microsoft Azure has a multi-branch architecture as well (http://www.freepatentsonline.com/20180253637.pdf?fbclid=IwAR1j30etyDGPCmG-QGfb8qaGRysvnS_f5wLnKz-KdwEbp2Gk0_-OBsSepVc), and they allegedly use LIME to interpret their results (https://www.slideshare.net/FengZhu18/predicting-azure-churn-with-deep-learning-and-explaining-predictions-with-lime).

I have tried concatenating the images into a single input, but this approach yields far worse results than the multi-input one. LIME does work for it, though (even if not as comprehensibly as for usual image recognition).
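For concreteness, a minimal sketch of that concatenation (my assumption here: joining along the width axis, which works because all three heatmaps share height 61; the function name is just a placeholder):

```python
import numpy as np

def concat_heatmaps(members, transactions, user_logs):
    """Join the three (61, W, 3) heatmaps side by side into one
    (61, 4+39+7, 3) = (61, 50, 3) image for a single-input model."""
    return np.concatenate([members, transactions, user_logs], axis=1)
```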

The DNN architecture:

# Imports (Keras functional API)
import keras
from keras.layers import Input, Dropout, Conv2D, GlobalMaxPooling2D, Dense, Activation
from keras.models import Model

# Members
members_input = Input(shape=(61,4,3), name='members_input')
x1 = Dropout(0.2)(members_input)
x1 = Conv2D(32, kernel_size = (61,4), padding='valid', activation='relu', strides=1)(x1)
x1 = GlobalMaxPooling2D()(x1)

# Transactions
transactions_input = Input(shape=(61,39,3), name='transactions_input')
x2 = Dropout(0.2)(transactions_input)
x2 = Conv2D(32, kernel_size=(61,1), padding='valid', activation='relu', strides=1)(x2)
x2 = Conv2D(32, kernel_size=(1,39), padding='valid', activation='relu', strides=1)(x2)
x2 = GlobalMaxPooling2D()(x2)

# User logs
userlogs_input = Input(shape=(61,7,3), name='userlogs_input')
x3 = Dropout(0.2)(userlogs_input)
x3 = Conv2D(32, kernel_size=(61,1), padding='valid', activation='relu', strides=1)(x3)
x3 = Conv2D(32, kernel_size=(1,7), padding='valid', activation='relu', strides=1)(x3)
x3 = GlobalMaxPooling2D()(x3)

# User_logs + Transactions + Members
merged = keras.layers.concatenate([x1,x2,x3]) # Merged layer
out = Dense(2)(merged)
out_2 = Activation('softmax')(out)

model = Model(inputs=[members_input, transactions_input, userlogs_input], outputs=out_2)
model.compile(optimizer="adam", loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

The attempted LIME utilization:

explainer = lime_image.LimeImageExplainer()

explanation = explainer.explain_instance([members_test[0],transactions_test[0],user_logs_test[0]], model.predict, top_labels=2, hide_color=0, num_samples=1000)
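One workaround I am considering (an untested sketch, not something from the LIME docs): hand LimeImageExplainer a single concatenated image and split it back into the three branch inputs inside the classifier function, since lime_image only requires a classifier that accepts one (N, H, W, C) array. The widths below match my heatmaps; `split_batch` and `make_classifier_fn` are names I made up:

```python
import numpy as np

# Widths of the three heatmaps along the width axis
# (members, transactions, user_logs); all share height 61.
WIDTHS = [4, 39, 7]

def split_batch(batch):
    """Split a (N, 61, 50, 3) batch of concatenated images back into
    the three per-branch batches the multi-input model expects."""
    bounds = np.cumsum([0] + WIDTHS)  # [0, 4, 43, 50]
    return [batch[:, :, bounds[i]:bounds[i + 1], :]
            for i in range(len(WIDTHS))]

def make_classifier_fn(model):
    """Wrap the multi-input model so LIME can call it with one array."""
    def classifier_fn(batch):
        return model.predict(split_batch(np.asarray(batch)))
    return classifier_fn
```

The explain_instance call would then take the concatenated image and make_classifier_fn(model) instead of model.predict. I have not verified how meaningful the resulting superpixels are across the seams between the three heatmaps.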

Model summary:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
transactions_input (InputLayer) (None, 61, 39, 3)    0                                            
__________________________________________________________________________________________________
userlogs_input (InputLayer)     (None, 61, 7, 3)     0                                            
__________________________________________________________________________________________________
members_input (InputLayer)      (None, 61, 4, 3)     0                                            
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 61, 39, 3)    0           transactions_input[0][0]         
__________________________________________________________________________________________________
dropout_3 (Dropout)             (None, 61, 7, 3)     0           userlogs_input[0][0]             
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 61, 4, 3)     0           members_input[0][0]              
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 1, 39, 32)    5888        dropout_2[0][0]                  
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 1, 7, 32)     5888        dropout_3[0][0]                  
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 1, 1, 32)     23456       dropout_1[0][0]                  
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 1, 1, 32)     39968       conv2d_2[0][0]                   
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 1, 1, 32)     7200        conv2d_4[0][0]                   
__________________________________________________________________________________________________
global_max_pooling2d_1 (GlobalM (None, 32)           0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
global_max_pooling2d_2 (GlobalM (None, 32)           0           conv2d_3[0][0]                   
__________________________________________________________________________________________________
global_max_pooling2d_3 (GlobalM (None, 32)           0           conv2d_5[0][0]                   
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 96)           0           global_max_pooling2d_1[0][0]     
                                                                 global_max_pooling2d_2[0][0]     
                                                                 global_max_pooling2d_3[0][0]     
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 2)            194         concatenate_1[0][0]              
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 2)            0           dense_1[0][0]                    
==================================================================================================

Hence my question: does anyone have experience with multi-input DNN architectures and LIME? Is there a workaround I am not seeing? Is there another interpretability method I could use?

Thank you.
