Abstract
Traditional graph-based semi-supervised learning (GBSSL) algorithms typically scale poorly because of their heavy computational burden: the main bottleneck is the need to invert a huge matrix. To alleviate this problem, this paper proposes a Neumann series approximation (NSA) that explicitly approximates the matrix inversion required by conventional GBSSL methods, making them computationally tractable for relatively large datasets. We prove that the deviation between the approximation and the direct inverse is bounded. On real-world datasets for handwritten digit recognition, speech recognition, and text classification, the experimental results show that NSA yields a significant speedup with only a small loss in accuracy. We also show empirically that NSA outperforms other scalable approaches, such as the Nyström method, the Takahashi equation, Lanczos-based SVD, and AnchorGraph regularization, in terms of both efficiency and accuracy.
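To illustrate the idea, below is a minimal sketch of a truncated Neumann series replacing a direct matrix inverse in GBSSL. It assumes the classic local-and-global-consistency closed form $F^{*} = (I - \alpha S)^{-1} Y$, where $S$ is the symmetrically normalized affinity matrix; the paper's exact formulation may differ, and the function name `neumann_ssl`, the damping parameter `alpha`, and the truncation order `K` are illustrative choices, not taken from the abstract. Since $\|\alpha S\| < 1$, the series $\sum_{k \ge 0} (\alpha S)^k$ converges to the inverse, with the standard truncation bound $\|(I-\alpha S)^{-1} - \sum_{k=0}^{K}(\alpha S)^k\| \le \|\alpha S\|^{K+1} / (1 - \|\alpha S\|)$ (the paper's proven bound may take a different form).

```python
import numpy as np
import scipy.sparse as sp

def neumann_ssl(S, Y, alpha=0.99, K=30):
    """Approximate F* = (I - alpha*S)^{-1} Y with a truncated
    Neumann series: F ~= sum_{k=0}^{K} (alpha*S)^k Y.

    S : (n, n) sparse, symmetrically normalized affinity matrix
        (spectral radius <= 1, so alpha*S is a contraction).
    Y : (n, c) initial label matrix (one-hot rows for labeled
        points, zero rows for unlabeled points).
    Hypothetical interface for illustration only.
    """
    F = Y.copy()
    term = Y.copy()
    for _ in range(K):
        term = alpha * (S @ term)  # next series term: (alpha*S)^{k+1} Y
        F += term                  # accumulate the partial sum
    return F

# Toy usage: 5 points on a chain graph, 2 classes.
W = sp.diags([np.ones(4), np.ones(4)], offsets=[-1, 1], format="csr")
d = np.asarray(W.sum(axis=1)).ravel()
D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
S = D_inv_sqrt @ W @ D_inv_sqrt          # normalized affinity matrix
Y = np.zeros((5, 2))
Y[0, 0] = 1.0                            # point 0 labeled class 0
Y[4, 1] = 1.0                            # point 4 labeled class 1
print(neumann_ssl(S, Y).argmax(axis=1))  # predicted class per point
```

Each iteration costs one sparse matrix-vector product per class instead of the cubic cost of a direct inverse, which is where the speedup on large graphs comes from.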
Original language | English |
---|---|
Pages (from-to) | 187-197 |
Number of pages | 11 |
Journal | Neural Processing Letters |
Volume | 42 |
Issue number | 1 |
DOIs | |
Publication status | Published - 22 Aug 2015 |
Externally published | Yes |
Keywords
- Error bound
- Neumann series
- Scalability
- Semi-supervised learning
ASJC Scopus subject areas
- Software
- General Neuroscience
- Computer Networks and Communications
- Artificial Intelligence