Short-term wind speed prediction with a two-layer attention-based LSTM

Jingcheng Qian, Mingfang Zhu, Yingnan Zhao, Xiangjian He

Research output: Journal Publication › Article › peer-review

18 Citations (Scopus)


Wind speed prediction is of great importance because it affects the efficiency and stability of power systems with a high proportion of wind power. Temporal-spatial wind speed features contain rich information; however, their use to predict wind speed remains one of the most challenging and less studied areas. This paper investigates the problem of predicting wind speeds for multiple sites using temporal and spatial features and proposes a novel two-layer attention-based long short-term memory (LSTM), termed 2Attn-LSTM, a unified framework of encoder and decoder mechanisms to handle temporal-spatial wind speed data. To eliminate the unevenness of the original wind speed, we first decompose the preprocessed data into intrinsic mode function (IMF) components by variational mode decomposition (VMD). The model then encodes the spatial features of the IMF components at the bottom layer and decodes the temporal features on the second layer to obtain each component's predicted value. Finally, we obtain the ultimate prediction after denormalization and superposition. We have performed extensive experiments for short-term predictions on real-world data, demonstrating that 2Attn-LSTM outperforms the four baseline methods. It is worth pointing out that the presented 2Attn-LSTM is a general model suitable for other spatial-temporal features.
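The decompose–predict–superpose pipeline described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: the VMD step is replaced by a simple moving-average mode split, and the per-component 2Attn-LSTM predictor is replaced by a persistence forecast, so only the overall data flow (normalize, decompose into components, predict each component, denormalize and superpose) is shown.

```python
import numpy as np

def normalize(x):
    """Min-max normalize a 1-D series; return the scaled series and its range."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), lo, hi

def denormalize(x, lo, hi):
    """Invert min-max normalization."""
    return x * (hi - lo) + lo

def decompose(x, k=3, window=5):
    """Stand-in for VMD: split the signal into k additive modes by
    repeatedly extracting a moving-average trend. The modes sum back
    to the original signal, mirroring the superposition step."""
    modes = []
    residual = x.copy()
    kernel = np.ones(window) / window
    for _ in range(k - 1):
        trend = np.convolve(residual, kernel, mode="same")
        modes.append(residual - trend)   # high-frequency part of this pass
        residual = trend                 # remaining low-frequency content
    modes.append(residual)
    return modes

def predict_component(comp, horizon=1):
    """Stand-in for the per-component 2Attn-LSTM: persistence forecast
    (repeat the last observed value)."""
    return np.repeat(comp[-1], horizon)

def forecast(series, horizon=1, k=3):
    """Full pipeline: normalize, decompose, predict each component,
    then superpose and denormalize."""
    normed, lo, hi = normalize(series)
    modes = decompose(normed, k=k)
    pred = sum(predict_component(m, horizon) for m in modes)
    return denormalize(pred, lo, hi)
```

The key structural property, shared with the VMD-based pipeline, is that the components are additive: summing the predicted components and denormalizing recovers a forecast on the original scale.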

Original language: English
Pages (from-to): 197-209
Number of pages: 13
Journal: Computer Systems Science and Engineering
Issue number: 2
Publication status: Published - 2021
Externally published: Yes


Keywords

  • Attention mechanism
  • LSTM
  • Temporal-spatial features
  • VMD
  • Wind speed prediction

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • General Computer Science


