Deep Q-learning with feature extraction and prioritized experience replay for edge node overload in edge computing

Lionel Nkenyereye, Boon Giin Lee, Wan Young Chung

Research output: Journal Publication › Article › peer-review

Abstract

Keeping track of edge nodes’ status information, that is, the condition of their available compute capacity as measured by its Age-of-Information, is crucial. In Internet of Things-oriented edge computing systems, the computational and software-defined infrastructure resources are heterogeneous and subject to rapid change. Edge computing systems often face dynamic workloads and limited computational resources, leading to frequent node overload scenarios. These overloads degrade system responsiveness and service availability, especially in latency-sensitive applications. The scheduling of computation tasks should therefore consider both current resource availability and predicted overload risks. An intelligent, adaptive method that learns optimal task allocation under resource constraints has the potential to boost the operational efficiency of Internet of Things-oriented edge computing systems. Addressing edge node overload is critical for the sustainable and scalable deployment of edge-based infrastructures. This research designs a resource-overload detection model specifically for diverse workloads in edge computing systems. The proposed deep reinforcement learning model addresses two significant challenges: the selection of pertinent feature sets from the stored workload resource utilization data, and the classification of overload and detection of fatal failures of edge computing nodes. We propose a Deep Q-Network with prioritized experience replay framework for edge node resource overload. The framework relies on feature learning using Linear Discriminant Analysis and a Deep Q-Network with prioritized experience replay to efficiently indicate the overload status of edge nodes and reward actions that enhance edge resource allocation. The Deep Q-Network is well-suited to sequential decision-making in dynamic environments, while prioritized experience replay improves sample efficiency by focusing updates on high-priority transitions with larger temporal-difference errors. Features are learned automatically from the edge node resource profiling data generated on a real edge-based container infrastructure. Linear Discriminant Analysis reduces the high-dimensional state space by emphasizing the most discriminative features for scheduling decisions. The infrastructure executes intelligent inference for containerized applications considered resource-intensive. When feature extraction is added to the proposed deep reinforcement learning model, the overload classifier's performance improves. Compared with the model without feature selection, the model with it improves total accuracy and F1-score by 1.3% and 1.4%, respectively.
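
The abstract describes a pipeline of Linear Discriminant Analysis feature reduction feeding a Deep Q-Network trained with prioritized experience replay. The sketch below is only a minimal illustration of how such pieces could fit together, assuming a toy two-action setting (keep the task on the node versus offload it), synthetic profiling data, and off-the-shelf scikit-learn and PyTorch components; the dimensions, reward scheme, and hyperparameters are illustrative assumptions and do not come from the paper.

```python
# Hypothetical sketch: LDA feature reduction + DQN with proportional prioritized
# experience replay. All sizes, thresholds, and rewards are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# 1) LDA: compress raw resource-profiling metrics (CPU, memory, I/O, ...)
#    into a low-dimensional, class-discriminative state representation.
rng = np.random.default_rng(0)
raw_profiles = rng.normal(size=(500, 12))             # 500 samples x 12 raw metrics
overload_labels = (raw_profiles[:, 0] + raw_profiles[:, 1] > 1.0).astype(int)
lda = LinearDiscriminantAnalysis(n_components=1).fit(raw_profiles, overload_labels)
state_dim = 1                                          # LDA-projected state size

def project(x):
    """Map a raw profiling vector to the LDA-reduced DQN state."""
    return lda.transform(np.atleast_2d(x)).astype(np.float32)

# 2) Proportional prioritized replay: sample transitions with probability
#    proportional to |TD error|^alpha and correct the bias with importance weights.
class PrioritizedReplay:
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []

    def add(self, transition):
        self.data.append(transition)
        self.prios.append(max(self.prios, default=1.0))  # new samples get max priority
        if len(self.data) > self.capacity:
            self.data.pop(0); self.prios.pop(0)

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.prios) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], torch.tensor(weights, dtype=torch.float32)

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(float(e)) + 1e-6

# 3) DQN with an assumed two-action space: 0 = keep on node, 1 = offload.
q_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, 2))
target_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, 2))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer, gamma = PrioritizedReplay(), 0.99

# Fill the buffer with toy transitions (state, action, reward, next_state);
# the reward simply favors offloading when the node is labeled overloaded.
for i in range(256):
    s, s2 = project(raw_profiles[i]), project(raw_profiles[i + 1])
    a = int(rng.integers(2))
    r = 1.0 if (a == 1) == bool(overload_labels[i]) else -1.0
    buffer.add((s, a, r, s2))

# One prioritized training step: weight TD losses, then refresh priorities.
idx, batch, w = buffer.sample(batch_size=32)
s = torch.tensor(np.concatenate([b[0] for b in batch]))
a = torch.tensor([b[1] for b in batch])
r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
s2 = torch.tensor(np.concatenate([b[3] for b in batch]))
q = q_net(s).gather(1, a.view(-1, 1)).squeeze(1)
with torch.no_grad():
    target = r + gamma * target_net(s2).max(1).values
td = q - target
loss = (w * td.pow(2)).mean()
opt.zero_grad(); loss.backward(); opt.step()
buffer.update(idx, td.detach().numpy())
```

In this sketch the importance weights scale each transition's squared TD error, and priorities are refreshed after every update, which is the standard proportional variant of prioritized experience replay; the paper's actual state, action, and reward design may differ.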

Original language: English
Article number: 112124
Journal: Engineering Applications of Artificial Intelligence
Volume: 162
Publication status: Published - 20 Dec 2025

Keywords

  • Containerized edge applications
  • Deep reinforcement learning
  • Edge computing
  • Edge node overload
  • Feature extraction
  • Internet of Things
  • Prioritized experience replay
  • Resource overload

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
  • Artificial Intelligence
