While analyzing the accuracy of my federated learning model, I found that the clients' accuracy keeps increasing while the global accuracy does not. Can somebody help me understand why this is happening? I am attaching an image that shows the global accuracy and loss as well as the per-epoch accuracy and loss.
Here is the code I use to compute the global accuracy and loss:
import tensorflow as tf
from sklearn.metrics import accuracy_score

mse = tf.keras.losses.mean_squared_error  # note: this is MSE, not cross-entropy
y_pred = model.predict(X_test, batch_size=32)
loss = mse(Y_test, y_pred).numpy()  # per-sample squared error
y_pred = [0 if val < 0.5 else 1 for val in y_pred.ravel()]  # threshold sigmoid outputs at 0.5
acc = accuracy_score(Y_test, y_pred)
mean_loss = sum(loss) / len(loss)  # average loss over the test set
print('comm_round: {} | global_acc: {:.3%} | global_loss: {}'.format(communication_round, acc, mean_loss))
And here is the code I use to compute each client's accuracy in every round:
local_model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
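One thing worth checking before comparing the two numbers: the client accuracy comes from Keras's string metric 'accuracy' under an MSE loss, while the global accuracy comes from manually thresholding predictions at 0.5, and these two measurements are not guaranteed to be computed the same way. A small framework-free helper (a sketch; the helper name and the 0.5 threshold are my own choices) lets you score clients exactly the way the global code does:

```python
import numpy as np

def thresholded_accuracy(y_true, y_pred, threshold=0.5):
    """Accuracy after thresholding continuous (e.g. sigmoid) outputs,
    mirroring the 0.5 cut-off used in the global-evaluation code."""
    y_true = np.asarray(y_true).ravel()
    y_hat = (np.asarray(y_pred).ravel() >= threshold).astype(int)
    return float(np.mean(y_true == y_hat))

# Toy check: continuous predictions vs. binary labels
print(thresholded_accuracy([0, 1, 1, 0], [0.2, 0.7, 0.4, 0.1]))  # 0.75
```

You could call this on each client's local validation data after local training (e.g. `thresholded_accuracy(Y_val, local_model.predict(X_val))`, with `X_val`/`Y_val` standing in for whatever local split you hold out) instead of relying on the compiled 'accuracy' metric, so the client and global figures are directly comparable.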
Why is there such a big difference between the clients' accuracy and the global accuracy? Even after running my code for 500 rounds, the clients' accuracy is around 94% while the global accuracy stays around 61%.
I think you need to define your own evaluation strategy. Strategies are the classes that determine how the server aggregates the new weights, how it evaluates clients, and so on. The most basic strategy is FedAvg (federated averaging), which I think you are using. After the final round, the server performs one last evaluation step with all available clients to verify the model's performance. That wouldn't be a problem in a real-life scenario, but it can backfire in yours. You need to perform the evaluation only on the server side and remove this functionality from the client side. This is done through the evaluate method of the strategy, which you need to override. You can read more about creating your own FL algorithm here.
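To make the idea concrete, here is a minimal, framework-free sketch of that pattern: a FedAvg-like base class whose evaluate method is overridden so the aggregated global model is scored once, on the server, against a held-out test set. All class and method names here are illustrative, not Flower's exact API, and the "model" is just a thresholded linear score for demonstration:

```python
import numpy as np

class FedAvg:
    """Schematic stand-in for a federated-averaging strategy
    (names are illustrative, not a real framework API)."""

    def aggregate(self, client_weights, client_sizes):
        # Weighted average of client parameter vectors by dataset size.
        sizes = np.asarray(client_sizes, dtype=float)
        stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
        return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()

    def evaluate(self, parameters):
        # Default: no server-side evaluation (clients evaluate themselves).
        return None

class ServerSideEvalFedAvg(FedAvg):
    """Override evaluate() so the global model is scored on the server
    against a held-out test set, not on each client."""

    def __init__(self, X_test, Y_test):
        self.X_test = np.asarray(X_test, dtype=float)
        self.Y_test = np.asarray(Y_test)

    def evaluate(self, parameters):
        # Toy 'model': sigmoid of a linear score, thresholded at 0.5.
        logits = self.X_test @ np.asarray(parameters, dtype=float)
        preds = (1.0 / (1.0 + np.exp(-logits)) >= 0.5).astype(int)
        return float(np.mean(preds == self.Y_test))

strategy = ServerSideEvalFedAvg(X_test=[[1.0], [-1.0]], Y_test=[1, 0])
global_w = strategy.aggregate(client_weights=[[2.0], [4.0]], client_sizes=[10, 30])
print(global_w, strategy.evaluate(global_w))  # [3.5] 1.0
```

The point of the override is that only the server ever computes the reported accuracy, on the aggregated weights, so the number you log per communication round reflects the global model rather than an average of per-client evaluations.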