Mobile edge cache technique under radio access network for the next generation of communication networks

  • Lincan LI

Student thesis: PhD Thesis


Recently, with the explosive growth of mobile devices and the popularity of fifth-generation (5G) wireless networks, user demand for low-latency, high-reliability connectivity services has been increasing. In addition, mobile data traffic has grown exponentially, placing high pressure on network capacity. Although conventional methods, such as deploying additional base stations, can expand network capacity, they cannot satisfy user demand for low latency. To overcome this challenge, the mobile edge caching technique is investigated and considered a promising solution. The basic idea of mobile edge caching is to store popular content at the edge of the network, in close physical proximity to users. In this way, requested content can be retrieved from an edge node rather than the core network, which reduces latency.
Building on the mobile edge caching technique, this thesis investigates the optimisation of cache policies for managing cached content. The main contributions of this thesis are as follows. First, a proactive cache policy is proposed, in which a prediction-by-partial-matching (PPM) algorithm predicts a user's upcoming location and a backpropagation neural network predicts the forthcoming content requests. The predicted content is pre-deployed at the predicted location, so users can retrieve their preferred content as soon as they arrive. The proposed proactive caching policy is then compared with conventional policies to demonstrate its advantage: in particular, the third-order proactive policy improves the cache hit ratio by at least 14% and reduces latency by at least 6% compared to the other policies.
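The location-prediction step can be illustrated with a minimal prediction-by-partial-matching sketch. Everything below (the class shape, the maximum order, the cell-ID trajectory) is an illustrative assumption; the abstract does not specify the exact PPM variant or feature set used in the thesis.

```python
from collections import defaultdict

class PPMPredictor:
    """Minimal prediction-by-partial-matching sketch: count contexts up to
    max_order, then predict the next symbol from the longest matching
    context, falling back to shorter contexts (the PPM escape idea)."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        # counts[context][symbol] = times `symbol` followed `context`
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        for i, symbol in enumerate(sequence):
            for order in range(1, self.max_order + 1):
                if i >= order:
                    context = tuple(sequence[i - order:i])
                    self.counts[context][symbol] += 1

    def predict(self, history):
        # Try the longest context first, then progressively shorter ones.
        for order in range(self.max_order, 0, -1):
            context = tuple(history[-order:])
            if context in self.counts:
                candidates = self.counts[context]
                return max(candidates, key=candidates.get)
        return None  # unseen context at every order

# Hypothetical cell-ID trajectory of one user
trajectory = ["A", "B", "C", "A", "B", "C", "A", "B"]
model = PPMPredictor(max_order=3)
model.train(trajectory)
print(model.predict(["A", "B"]))  # "C": the cell that always followed A, B
```

In the proactive policy, the predicted cell would then be paired with the neural network's predicted requests, and that content pre-deployed at the predicted cell's edge cache before the user arrives.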
Second, the idea of pre-caching popular content before it is requested is extended to cooperative heterogeneous networks. Here, the relationship between user mobility and the characteristics of different base stations (BSs) is considered. In addition, a small-base-station cluster is defined, within which the small base stations can exchange their cached content with each other. Furthermore, a long short-term memory (LSTM) network is used to predict the future popularity of content, and a size-weighted content popularity algorithm is introduced to balance the influence of content size on content popularity. The proposed cache policy reduces the average content access latency by at least 8.9% and increases the offloading ratio by at least 6.8% compared to existing methods.
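One plausible form of such a size weighting divides the predicted request count by a power of the content size, so a large file needs proportionally more requests to justify its cache footprint. The abstract does not give the thesis's exact formula, so the exponent, the greedy filling step, and all the example contents below are assumptions.

```python
def size_weighted_popularity(requests, sizes, alpha=0.5):
    """Hypothetical size-weighted score: predicted requests divided by
    size**alpha. alpha (assumed) tunes how strongly size is penalised."""
    return {c: requests[c] / (sizes[c] ** alpha) for c in requests}

def fill_cache(requests, sizes, capacity, alpha=0.5):
    """Greedily cache the highest-scoring contents that fit the capacity."""
    scores = size_weighted_popularity(requests, sizes, alpha)
    cached, used = [], 0
    for c in sorted(scores, key=scores.get, reverse=True):
        if used + sizes[c] <= capacity:
            cached.append(c)
            used += sizes[c]
    return cached

# Predicted request counts (e.g. from the LSTM) and sizes in MB -- illustrative
requests = {"news": 120, "film": 150, "clip": 80}
sizes = {"news": 5, "film": 400, "clip": 20}
print(fill_cache(requests, sizes, capacity=100))  # ['news', 'clip']
```

Note that the raw counts would rank "film" first, but its 400 MB footprint makes it a poor use of a 100 MB cache; the size weighting captures exactly this trade-off.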
Finally, a deep reinforcement learning (DRL)-based cache policy is described to solve the integrated problem of determining both the cache location and the cache content. The integrated caching problem is modelled as a Markov decision process (MDP). In conventional caching policies, the placement or replacement of cached content depends on known or accurately predicted content popularity, which is hard to achieve in complex, dynamic caching scenarios. In this context, a deep Q-network (DQN) algorithm is applied to derive the optimal cache policy from experience gained by interacting with the caching environment, without any prior information about content popularity. Compared to conventional policies, the proposed DRL-based cache policy achieves better caching performance.
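The learning-without-a-popularity-prior idea can be sketched with tabular Q-learning on a toy cache MDP; this is a small-scale stand-in for the thesis's DQN, and the catalog, popularity distribution, rewards, and hyperparameters below are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy caching MDP: four contents, a cache holding two, requests drawn from
# a fixed popularity distribution hidden from the agent. A hit earns reward
# 1; on a miss the agent picks a slot to evict, or SKIP to leave the cache.
CATALOG = [0, 1, 2, 3]
CACHE_SIZE = 2
POPULARITY = [0.5, 0.3, 0.15, 0.05]  # unknown to the agent
SKIP = CACHE_SIZE                    # action: do not cache the missed item

def actions_for(cache, req):
    # On a hit the only action is "serve" (0); on a miss: evict a slot, or SKIP.
    return [0] if req in cache else list(range(CACHE_SIZE + 1))

def step(cache, req, action):
    cache = list(cache)
    if req in cache:
        return cache, 1.0            # hit
    if action != SKIP:
        cache[action] = req          # evict the chosen slot, insert new item
    return cache, 0.0                # miss

def train(steps=20000, lr=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: eviction values learned purely from interaction,
    with no prior information about content popularity."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    cache = [2, 3]
    req = rng.choices(CATALOG, weights=POPULARITY)[0]
    for _ in range(steps):
        s, acts = (tuple(sorted(cache)), req), actions_for(cache, req)
        if rng.random() < eps:       # epsilon-greedy exploration
            a = rng.choice(acts)
        else:
            a = max(acts, key=lambda x: Q[(s, x)])
        cache, r = step(cache, req, a)
        req = rng.choices(CATALOG, weights=POPULARITY)[0]
        s2 = (tuple(sorted(cache)), req)
        best_next = max(Q[(s2, x)] for x in actions_for(cache, req))
        Q[(s, a)] += lr * (r + gamma * best_next - Q[(s, a)])
    return Q

def evaluate(Q, steps=2000, seed=1):
    """Hit ratio of the greedy policy derived from the learned Q-table."""
    rng = random.Random(seed)
    cache, hits = [2, 3], 0.0
    for _ in range(steps):
        req = rng.choices(CATALOG, weights=POPULARITY)[0]
        s, acts = (tuple(sorted(cache)), req), actions_for(cache, req)
        a = max(acts, key=lambda x: Q[(s, x)])
        cache, r = step(cache, req, a)
        hits += r
    return hits / steps

hit_ratio = evaluate(train())
print(f"greedy hit ratio: {hit_ratio:.2f}")  # should approach 0.8, the optimum
```

A DQN replaces the Q-table with a neural network so the same idea scales to the much larger state spaces of the integrated location-and-content problem, where enumerating every cache configuration is infeasible.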
Date of Award: Jul 2022
Original language: English
Awarding Institution
  • University of Nottingham
Supervisors: C.F. Kwong & Jing Wang


  • mobile edge cache
  • cache hit ratio
  • reinforcement learning
  • energy consumption
  • content popularity
  • PPM
