Federated learning (FL) is a distributed machine learning (ML) approach that trains models on edge devices. It aims to preserve privacy by transmitting only gradient updates and learned parameters to a global server during training, keeping raw data on user-side Internet of Medical Things (IoMT) devices. Rather than using user data directly, the global server operates on the transmitted parameters, while model updates are computed locally on the IoMT devices. However, a major drawback of this federated learning approach is that it does not fully preserve user privacy, since the shared gradients can still leak information. This study therefore first summarizes the federated learning process and then proposes a new approach to federated medical recommender systems that employs homomorphic encryption to better preserve the privacy of user gradients during recommendation. Experimental results show only an insignificant decrease in accuracy while achieving a substantially higher degree of user privacy. Further analysis shows that performing computations on encrypted gradients at the global server has scarcely any impact on the recommendation output, while providing an additional secure channel for transmitting user gradients to and from the global server. The analysis also indicates that the performance of the proposed federated stochastic modification minimized gradient (FSMMG) algorithm improves as the number of users increases, and that good convergence is achieved. Finally, experiments show that the proposed FSMMG outperforms existing techniques, attaining 98.3% encryption accuracy.
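To illustrate the general idea of aggregating encrypted gradients at a server, the sketch below uses the additively homomorphic Paillier cryptosystem via the python-paillier (`phe`) package. This is a minimal, assumed example: the key length, client count, gradient values, and helper names (`encrypt_gradients`, `aggregate_encrypted`) are illustrative choices and do not reflect the paper's actual FSMMG implementation or its encryption scheme.

```python
# Minimal sketch: additively homomorphic aggregation of client gradients.
# Assumes the `phe` (python-paillier) package; not the paper's FSMMG code.
from phe import paillier

# Clients share the public key; only a trusted key holder keeps the private key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def encrypt_gradients(gradients):
    """Client side: encrypt each gradient component before uploading."""
    return [public_key.encrypt(g) for g in gradients]

def aggregate_encrypted(client_updates):
    """Server side: sum ciphertexts without ever seeing plaintext gradients."""
    total = client_updates[0]
    for update in client_updates[1:]:
        total = [a + b for a, b in zip(total, update)]
    return total

# Hypothetical example: three IoMT clients upload encrypted local gradients.
local_grads = [[0.12, -0.05, 0.33], [0.10, -0.02, 0.31], [0.09, -0.04, 0.30]]
encrypted_updates = [encrypt_gradients(g) for g in local_grads]
encrypted_sum = aggregate_encrypted(encrypted_updates)

# Decryption happens only at the key holder; averaging gives the global update.
averaged_update = [private_key.decrypt(c) / len(local_grads) for c in encrypted_sum]
print(averaged_update)
```

Because the server only ever adds ciphertexts, it can combine client contributions into a global update without observing any individual gradient, which is the property the abstract relies on when it states that computing on encrypted gradients barely affects the recommendation output.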