KAT: A Knowledge Adversarial Training Method for Zero-Order Takagi-Sugeno-Kang Fuzzy Classifiers

IEEE Trans Cybern. 2022 Jul;52(7):6857-6871. doi: 10.1109/TCYB.2020.3034792. Epub 2022 Jul 4.

Abstract

While input- or output-perturbation-based adversarial training techniques have been exploited to enhance the generalization capability of a variety of nonfuzzy and fuzzy classifiers by means of dynamic regularization, their performance may be very sensitive to inappropriate adversarial samples. To avoid this weakness while still ensuring enhanced generalization capability, this work explores a novel knowledge adversarial attack model for zero-order Takagi-Sugeno-Kang (TSK) fuzzy classifiers. The proposed model is motivated by the existence of special knowledge adversarial attacks, viewed from the perspective of the human-like thinking process involved in training an interpretable zero-order TSK fuzzy classifier. Unlike input- or output-perturbation-based adversarial attacks, the proposed model makes no direct use of adversarial samples; instead, it considers adversarial perturbations of interpretable zero-order fuzzy rules in a knowledge-oblivion and/or knowledge-bias manner, or their ensemble, to mimic the robust use of knowledge in the human thinking process. Through dynamic regularization, the proposed model is theoretically justified to have strong generalization capability. Accordingly, a novel knowledge adversarial training method called KAT is devised to achieve promising generalization performance, interpretability, and fast training for zero-order TSK fuzzy classifiers. The effectiveness of KAT is demonstrated by experimental results on 15 benchmark UCI and KEEL datasets.
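To make the idea of perturbing knowledge (rule consequents) rather than inputs or outputs more concrete, the following minimal sketch shows a zero-order TSK classifier whose training loss combines a clean term with a term computed under knowledge-oblivion (randomly forgotten rules) and knowledge-bias (randomly biased consequents) perturbations. The Gaussian antecedents, product firing strengths, perturbation forms, and all names below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

class ZeroOrderTSK:
    """Sketch of a zero-order TSK fuzzy classifier (assumed structure)."""

    def __init__(self, n_rules, n_features, n_classes, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Antecedents: Gaussian centers/widths per rule and feature (assumption).
        self.centers = self.rng.normal(size=(n_rules, n_features))
        self.widths = np.ones((n_rules, n_features))
        # Zero-order consequents: one constant vector of class scores per rule.
        self.consequents = self.rng.normal(scale=0.1, size=(n_rules, n_classes))

    def firing_strengths(self, X):
        # Product of Gaussian memberships over features, normalized per sample.
        d = (X[:, None, :] - self.centers[None, :, :]) / self.widths[None, :, :]
        f = np.exp(-0.5 * np.sum(d ** 2, axis=2))           # (n_samples, n_rules)
        return f / (f.sum(axis=1, keepdims=True) + 1e-12)

    def logits(self, X, consequents=None):
        c = self.consequents if consequents is None else consequents
        return self.firing_strengths(X) @ c                 # (n_samples, n_classes)

def knowledge_oblivion(consequents, drop_rate, rng):
    # "Forget" a random subset of rules by zeroing their consequents (assumption).
    keep = rng.random(consequents.shape[0]) > drop_rate
    return consequents * keep[:, None]

def knowledge_bias(consequents, scale, rng):
    # Bias the remaining knowledge with a small random offset (assumption).
    return consequents + rng.normal(scale=scale, size=consequents.shape)

def cross_entropy(logits, y):
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

def kat_style_loss(model, X, y, lam=0.5, drop_rate=0.3, bias_scale=0.1, rng=None):
    """Clean loss plus a dynamic regularizer on knowledge-perturbed consequents."""
    rng = rng or np.random.default_rng(1)
    clean = cross_entropy(model.logits(X), y)
    perturbed = knowledge_bias(
        knowledge_oblivion(model.consequents, drop_rate, rng), bias_scale, rng)
    adversarial = cross_entropy(model.logits(X, perturbed), y)
    return clean + lam * adversarial
```

In this reading, minimizing the combined loss encourages consequents that classify well even when some rules are forgotten or biased, which is one plausible way the knowledge-level perturbations could act as a dynamic regularizer.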

MeSH terms

  • Algorithms
  • Fuzzy Logic*
  • Humans
  • Neural Networks, Computer*