A Regularized Attribute Weighting Framework for Naive Bayes

Research output: Journal Publication › Article › peer-review

13 Citations (Scopus)
60 Downloads (Pure)

Abstract

The Bayesian classification framework has been widely used in many fields, but the covariance matrix is usually difficult to estimate reliably. To alleviate this problem, many naive Bayes (NB) approaches with good performance have been developed. However, the assumption of conditional independence between attributes in NB rarely holds in reality. Various attribute-weighting schemes have been developed to address this limitation. Among them, class-specific attribute weighted naive Bayes (CAWNB) has recently achieved good performance by using classification feedback to optimize the attribute weights of each class. However, the derived model may overfit the training dataset, especially when the dataset is too small to train a model with good generalization performance. This paper proposes a regularization technique that improves the generalization capability of CAWNB by balancing the trade-off between discrimination power and generalization capability. More specifically, by introducing a regularization term, the proposed method, regularized naive Bayes (RNB), captures the data characteristics well when the dataset is large and generalizes well when the dataset is small. RNB is compared with state-of-the-art naive Bayes methods. Experiments on 33 machine-learning benchmark datasets demonstrate that RNB significantly outperforms the compared methods.
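The idea described in the abstract can be illustrated with a small sketch: class-specific attribute weights scale each attribute's log-conditional contribution, and an L2 regularizer pulls the weights back toward 1 (i.e. toward standard naive Bayes). This is a minimal toy illustration of the general technique, not the authors' exact RNB formulation; the weight matrix `W`, the penalty strength `lam`, and the helper names are all hypothetical.

```python
import numpy as np

def fit_nb(X, y, n_vals, alpha=1.0):
    """Laplace-smoothed class priors and per-attribute log-conditionals
    for integer-coded categorical data."""
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    cond = []  # cond[j][c, v] = log P(x_j = v | class c)
    for j in range(X.shape[1]):
        counts = np.array([[np.sum((y == c) & (X[:, j] == v)) + alpha
                            for v in range(n_vals[j])] for c in classes])
        cond.append(np.log(counts / counts.sum(axis=1, keepdims=True)))
    return priors, cond

def weighted_log_posterior(X, priors, cond, W):
    """Unnormalized log P(c | x); W[c, j] is the (class-specific)
    weight of attribute j for class c. W = 1 recovers standard NB."""
    scores = np.tile(np.log(priors), (X.shape[0], 1))
    for j in range(len(cond)):
        scores += (W[:, j][:, None] * cond[j][:, X[:, j]]).T
    return scores

def regularized_objective(X, y, priors, cond, W, lam):
    """Conditional log-likelihood minus an L2 pull toward uniform
    weights -- a stand-in for the paper's regularized trade-off."""
    s = weighted_log_posterior(X, priors, cond, W)
    log_post = s - np.logaddexp.reduce(s, axis=1, keepdims=True)
    return log_post[np.arange(len(y)), y].sum() - lam * np.sum((W - 1.0) ** 2)

# Tiny toy data: 2 binary attributes, 2 classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [1, 1], [0, 0]])
y = np.array([0, 0, 1, 1, 1, 0])
priors, cond = fit_nb(X, y, n_vals=[2, 2])

W = np.ones((2, 2))  # uniform weights: plain naive Bayes, zero penalty
preds = weighted_log_posterior(X, priors, cond, W).argmax(axis=1)
```

A large `lam` keeps `W` near 1, favoring generalization on small datasets; a small `lam` lets the weights adapt to the training data, mirroring the trade-off the abstract describes.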

Original language: English
Article number: 9294037
Pages (from-to): 225639-225649
Number of pages: 11
Journal: IEEE Access
Volume: 8
DOIs
Publication status: Published - 2020

Keywords

  • Attribute weighting
  • classification
  • naive Bayes
  • regularization

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering
