Caching Policy
Recently Published Documents

Total documents: 77 (last five years: 25)
H-index: 9 (last five years: 4)

2021, Vol. 2021, pp. 1-13
Author(s): Jiliang Yin, Congfeng Jiang, Hidetoshi Mino, Christophe Cérin

The traditional centralized network architecture can lead to a bandwidth bottleneck in the core network. In contrast, in an information-centric network, decentralized in-network caching can alleviate traffic pressure by shifting content delivery from the network core to the edge. In this paper, a popularity-aware in-network caching policy, named Pop, is proposed to achieve optimal caching of network contents in resource-constrained edge networks. Specifically, Pop senses content popularity and distributes content caching without adding extra hardware or traffic overhead. We conduct extensive performance evaluation experiments using ndnSIM. The experiments show that Pop achieves a 54.39% cloud service hit reduction ratio and a 22.76% average hop reduction ratio for user requests, outperforming other policies including Leave Copy Everywhere, Leave Copy Down, Probabilistic Caching, and Random choice caching. In addition, we propose an ideal caching policy (Ideal) as a baseline in which content popularity is known in advance; the gap between Pop and Ideal is 4.36% in cloud service hit reduction ratio and only 1.47% in average hop reduction ratio for user requests. Further simulation results confirm the accuracy with which Pop perceives content popularity and its robustness across different request scenarios.
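To make the idea of popularity-aware in-network caching concrete, the sketch below shows one possible admission and eviction decision at a single caching node: the node counts locally observed requests and admits new content only when it is more popular than the least popular item already cached. The counters, admission rule, and eviction rule are illustrative assumptions, not the Pop algorithm from the paper; in an ndnSIM-style simulation these decisions would be driven per packet by the forwarding pipeline rather than by plain method calls.

```python
# Minimal sketch of a popularity-aware caching decision at one in-network node.
# All names and rules here are assumptions for illustration, not the paper's Pop policy.
from collections import defaultdict

class PopularityAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity                 # number of content items the node can hold
        self.store = {}                          # content name -> data
        self.request_counts = defaultdict(int)   # locally observed request counts (popularity)

    def on_request(self, name):
        """Record an incoming request and report whether it is a cache hit."""
        self.request_counts[name] += 1
        return name in self.store

    def on_data(self, name, data):
        """Decide whether to cache content arriving from upstream."""
        if name in self.store:
            return
        if len(self.store) < self.capacity:
            self.store[name] = data
            return
        # Popularity-based admission/eviction: replace the least popular
        # cached item only if the incoming content is more popular.
        victim = min(self.store, key=lambda n: self.request_counts[n])
        if self.request_counts[name] > self.request_counts[victim]:
            del self.store[victim]
            self.store[name] = data
```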


2021, Vol. 7, pp. e418
Author(s): Stéfani Pires, Artur Ziviani, Leobino N. Sampaio

In recent years, information-centric networks (ICNs) have gained attention from the research and industry communities as an efficient and reliable content distribution paradigm, especially for content-centric and bandwidth-intensive applications and for the heterogeneous requirements of emerging networks such as the Internet of Things (IoT), Vehicular Ad hoc NETworks (VANETs), and Mobile Edge Computing (MEC). In-network caching is an essential part of ICN architecture design, and overall network performance relies on the efficiency of the caching policy. Consequently, a large number of cache replacement strategies have been proposed to suit the needs of different networks. The literature presents extensive studies of the performance of these replacement schemes in different contexts. These evaluations vary in their context characteristics, which leads to different impacts on policy performance and to different conclusions about which policies are most suitable. Yet there has been little research effort to understand how context characteristics influence policy performance. In this direction, we conducted an extensive study of the ICN literature through a Systematic Literature Review (SLR) to map the reported evidence on the aspects of context that affect cache replacement schemes. Our main findings contribute to understanding what constitutes a context from the perspective of cache replacement policies and which context characteristics influence cache behavior. We also provide a classification of policies based on the context dimensions they use to determine the relevance of contents. Further, we identify a set of cache-enabled networks and the respective context characteristics that can enhance the cache eviction process.
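As a point of reference for the kinds of policies such a classification covers, the sketch below contrasts a recency-based eviction rule (LRU) with a frequency-based one (LFU), two classical context dimensions for deciding which content to evict. The survey's own classification and policy set are not reproduced here; this is purely illustrative.

```python
# Illustrative only: one recency-based and one frequency-based replacement policy,
# as examples of context dimensions (recency, frequency) used to rank content relevance.
from collections import OrderedDict, Counter

class LRUCache:
    """Recency-based eviction: discard the least recently used item."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def access(self, name, data=None):
        if name in self.items:
            self.items.move_to_end(name)          # refresh recency on a hit
            return self.items[name]
        if data is not None:                      # miss: insert, evicting if full
            if len(self.items) >= self.capacity:
                self.items.popitem(last=False)    # drop the least recently used entry
            self.items[name] = data
        return data

class LFUCache:
    """Frequency-based eviction: discard the least frequently used item."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}
        self.freq = Counter()

    def access(self, name, data=None):
        self.freq[name] += 1
        if name in self.items:
            return self.items[name]
        if data is not None:                      # miss: insert, evicting if full
            if len(self.items) >= self.capacity:
                victim = min(self.items, key=lambda n: self.freq[n])
                del self.items[victim]
            self.items[name] = data
        return data
```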


2021, Vol. 48 (3), pp. 77-78
Author(s): Guocong Quan, Atilla Eryilmaz, Jian Tan, Ness Shroff

In practice, prefetching data strategically has been used to improve caching performance. The idea is that data items can either be cached upon request (the traditional approach) or prefetched into the cache before the requests actually occur. Caching and prefetching operations compete for the limited cache space, whose size is typically much smaller than the number of data items. A key challenge is to design an optimal prefetching and caching policy, assuming that future requests can be predicted to a certain extent. This is non-trivial even under the idealized assumption that future requests are precisely known.
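A minimal sketch of the setting described above: reactively cached items and proactively prefetched items share one small cache, so every prefetch may displace something that was cached on request. The prediction input and the simple LRU-style eviction rule are assumptions for illustration; this is not the optimal prefetching and caching policy studied by the authors.

```python
# Simplified model of caching and prefetching competing for the same limited space.
# The eviction rule and the prediction input are assumptions, not the paper's policy.
from collections import OrderedDict

class SharedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()          # content name -> "cached" or "prefetched"

    def _make_room(self):
        if len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict the oldest entry (LRU-like rule)

    def request(self, name):
        """Serve a request; on a miss, cache the item reactively."""
        if name in self.items:
            self.items.move_to_end(name)
            return True                     # hit
        self._make_room()
        self.items[name] = "cached"
        return False                        # miss, item is now cached

    def prefetch(self, predicted_names):
        """Proactively load items predicted to be requested soon; prefetched
        items occupy the same limited space as reactively cached ones."""
        for name in predicted_names:
            if name not in self.items:
                self._make_room()
                self.items[name] = "prefetched"
```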


Author(s): Shengqian Han, Fei Xue, Chenyang Yang, Jinyang Liu, Fengxu Lin

Author(s): Sheng-Jie Wang, Po-Ning Chen, Shin-Lin Shieh, Yu-Chih Huang

2020, Vol. 68 (11), pp. 7039-7053
Author(s): Kuan Wu, Lei Zhao, Ming Jiang, Xiaojing Huang, Yi Qian

2020, Vol. 140, pp. 142-152
Author(s): Muddasir Rahim, Muhammad Awais Javed, Ahmad Naseem Alvi, Muhammad Imran

2020, Vol. 19 (8), pp. 5589-5604
Author(s): Ming-Chun Lee, Andreas F. Molisch
