replacement algorithms
Recently Published Documents

TOTAL DOCUMENTS: 104 (five years: 14)
H-INDEX: 15 (five years: 0)
Author(s):  
Gajanan Digambar Gaikwad

Abstract: The operating system offers a service known as memory management, which manages primary memory and moves processes back and forth between main memory and disk during execution. The process of temporarily moving a process from primary memory to the hard disk, so that the memory becomes available for other processes, is known as swapping. Page replacement techniques are the methods by which the operating system decides which memory pages to swap out and write to disk whenever a page of main memory needs to be allocated. The policies that select which page to swap out when a page fault occurs, to create space for a new page, are called page replacement algorithms. In this paper, a strategy for identifying the refresh rate of the 'Aging' page replacement algorithm is presented and evaluated. Keywords: Aging algorithm, page replacement algorithm, refresh rate, virtual memory management.
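The refresh-driven counter update at the heart of the Aging policy can be sketched as follows; the 8-bit counter width and the dictionary-based bookkeeping are illustrative choices, not details taken from the paper:

```python
# Minimal sketch of the Aging page replacement algorithm. On each
# refresh tick every page's counter is shifted right and the referenced
# bit is folded into the most significant position, so recently used
# pages keep high counter values; the page with the lowest counter is
# evicted on a miss.

class AgingCache:
    def __init__(self, capacity, counter_bits=8):
        self.capacity = capacity
        self.counter_bits = counter_bits
        self.counters = {}    # page -> aging counter
        self.referenced = {}  # page -> R bit set since last refresh

    def refresh(self):
        """Called once per refresh interval (the rate the paper tunes)."""
        msb = 1 << (self.counter_bits - 1)
        for page in self.counters:
            self.counters[page] >>= 1
            if self.referenced.get(page):
                self.counters[page] |= msb
            self.referenced[page] = False

    def access(self, page):
        """Returns True on a hit, False on a page fault."""
        if page in self.counters:          # hit: just set the R bit
            self.referenced[page] = True
            return True
        if len(self.counters) >= self.capacity:
            victim = min(self.counters, key=self.counters.get)
            del self.counters[victim]
            del self.referenced[victim]
        self.counters[page] = 0
        self.referenced[page] = True
        return False
```

The choice of refresh interval is exactly the trade-off the paper studies: refresh too often and counters decay before locality is captured; too rarely and the policy degenerates toward FIFO.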


Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2503
Author(s):  
Minseon Cho ◽  
Donghyun Kang

Today, research trends clearly confirm that machine learning technologies open up new opportunities in various computing environments, such as the Internet of Things, mobile, and enterprise. Unfortunately, prior efforts have rarely focused on designing system-level input/output stacks (e.g., page cache, file system, block input/output, and storage devices). In this paper, we propose a new page replacement algorithm, called ML-CLOCK, that embeds single-layer perceptron neural network algorithms to enable an intelligent eviction policy. In addition, ML-CLOCK employs preference rules that consider the features of the underlying storage media (e.g., asymmetric read and write costs and efficient write patterns). For evaluation, we implemented a prototype of ML-CLOCK based on trace-driven simulation and compared it with four traditional replacement algorithms and one flash-friendly algorithm. Our experimental results in the trace-driven environments clearly confirm that ML-CLOCK improves the hit ratio by up to 72% and reduces the elapsed time by up to 2.16x compared with the least frequently used (LFU) replacement algorithm.
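For context, the classic CLOCK policy that ML-CLOCK extends can be sketched as below. The perceptron-based eviction decision from the paper is not reproduced here; this shows only the circular reference-bit scan that such a policy replaces with a learned decision:

```python
# Classic CLOCK: pages sit on a circular list with a reference bit.
# On a miss, the hand advances, clearing reference bits, until it finds
# a page with ref == 0 to evict (each set bit grants a "second chance").

class ClockCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []       # pages in clock order
        self.ref = {}          # page -> reference bit
        self.hand = 0

    def access(self, page):
        """Returns True on a hit, False on a page fault."""
        if page in self.ref:              # hit: grant a second chance
            self.ref[page] = 1
            return True
        if len(self.frames) < self.capacity:
            self.frames.append(page)
        else:
            # advance the hand, clearing bits, until a victim is found
            while self.ref[self.frames[self.hand]] == 1:
                self.ref[self.frames[self.hand]] = 0
                self.hand = (self.hand + 1) % self.capacity
            victim = self.frames[self.hand]
            del self.ref[victim]
            self.frames[self.hand] = page
            self.hand = (self.hand + 1) % self.capacity
        self.ref[page] = 1
        return False
```

ML-CLOCK's contribution, per the abstract, is to replace this fixed ref-bit test with a perceptron score plus flash-aware preference rules (e.g., preferring cheap-to-evict clean pages).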


Over time, an enormous quantity of data is being generated, which requires a shrewd technique for handling such a big database to smooth the processes of data storage and dissemination. Storing and exploiting such large quantities of data requires sufficiently capable systems with a proactive mechanism to meet the accompanying technological challenges. The traditional Distributed File System (DFS) falls short when handling dynamic variations and requires an undefined settling time. Therefore, to address these data handling challenges, a proactive grid-based data management approach is proposed which arranges the data into many tiny chunks, called grids, and places them according to the currently available slots. Data durability and computation speed are aligned by designing data dissemination and data eligibility replacement algorithms. This approach substantially enhances the durability of data access and the writing speed. The performance was tested on numerous grid datasets: chunks were analysed over various iterations by fixing the initial chunk statistics, making a predefined chunk suggestion, and relocating the chunks after substantial iterations. The chunks were found to be on an optimal node from the first iteration of replacement, which is more than 21% of working clusters better than the traditional approach.
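A highly simplified reading of the chunk-placement step described above might look like the following; the helper names and the "emptiest node first" heuristic are illustrative assumptions, not details from the paper:

```python
# Sketch: cut data into fixed-size chunks ("grids") and assign each
# chunk to whichever node currently has the most free slots.

def split_into_chunks(data, chunk_size):
    """Cut a byte string into consecutive chunks of at most chunk_size."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def place_chunks(chunks, free_slots):
    """free_slots: node -> number of available slots.
    Returns a placement map chunk_index -> node, filling the node with
    the most free capacity first."""
    placement = {}
    slots = dict(free_slots)
    for i, _ in enumerate(chunks):
        node = max(slots, key=slots.get)
        if slots[node] == 0:
            raise RuntimeError("no free slots left")
        placement[i] = node
        slots[node] -= 1
    return placement
```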


Author(s):  
Josphat Chege Njuguna ◽  
Emre Alabay ◽  
Anil Celebi ◽  
Aysun Tasyapi Celebi ◽  
Mehmet Kemal Gullu

2021 ◽  
Vol 17 (2) ◽  
pp. 1-45
Author(s):  
Cheng Pan ◽  
Xiaolin Wang ◽  
Yingwei Luo ◽  
Zhenlin Wang

Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.
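The core idea of weighting eviction by miss penalty can be illustrated with the sketch below. This is not the AET-based model from the paper; it is a minimal penalty-aware score (penalty divided by age) over a logical clock, with all names chosen for illustration:

```python
# Penalty-aware eviction sketch: each key carries a miss penalty
# (e.g., the cost of refetching it from the backing database). On
# eviction, the key with the lowest penalty/age score is removed, so
# recently used, expensive-to-refetch keys survive while cheap, stale
# keys are dropped first.

class PenaltyAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0
        self.store = {}   # key -> [value, penalty, last_access_tick]

    def _tick(self):
        self.clock += 1
        return self.clock

    def put(self, key, value, penalty):
        if key not in self.store and len(self.store) >= self.capacity:
            now = self._tick()
            victim = min(
                self.store,
                key=lambda k: self.store[k][1] / (now - self.store[k][2]),
            )
            del self.store[victim]
        self.store[key] = [value, penalty, self._tick()]

    def get(self, key):
        if key not in self.store:
            return None
        self.store[key][2] = self._tick()   # refresh recency
        return self.store[key][0]
```

A pure-recency policy would evict whichever key was touched longest ago regardless of cost; the penalty factor is what makes the cache keep an old-but-expensive key over a fresh-but-cheap one.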


Webology ◽  
2021 ◽  
Vol 18 (1) ◽  
pp. 62-76
Author(s):  
Hitha Paulson ◽  
Dr. R. Rajesh

NAND flash memories have gained wide acceptance in the electronic world thanks to their non-volatility, high density, low power consumption, small size, and fast access speed. Due to their limited life span and the need for wear levelling, these memories require management techniques that differ from the conventional techniques used for hard disks. In this paper, an efficient page replacement algorithm is proposed for NAND flash based memory systems. The proposed algorithm focuses on decision-making policies based on the relative reference ratio of pages in memory. The size-adjustable eviction window and the relative-reference-based shadow list management technique proposed by the algorithm contribute much of the efficiency of the page replacement procedure. Simulation-based experiments show that the proposed algorithm is superior to well-known flash-based page replacement algorithms with regard to page hit ratio and memory read/write operations.
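One of the flash-friendly baselines such algorithms are typically compared against is the clean-first idea (as in CFLRU): since writing a dirty page back to NAND flash costs far more than a read, the policy prefers evicting clean pages within a window at the LRU end. A minimal sketch follows; the fixed window size here is an illustrative parameter, not the paper's size-adjustable eviction window:

```python
from collections import OrderedDict

# Clean-first LRU sketch: OrderedDict keeps pages in LRU order (oldest
# first). On eviction, search the first `window` pages for a clean one;
# only if all are dirty does the true LRU page get written back.

class CleanFirstLRU:
    def __init__(self, capacity, window):
        self.capacity = capacity
        self.window = window
        self.pages = OrderedDict()   # page -> dirty flag

    def access(self, page, write=False):
        """Returns True on a hit, False on a page fault."""
        if page in self.pages:
            self.pages[page] = self.pages[page] or write
            self.pages.move_to_end(page)
            return True
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page] = write
        return False

    def _evict(self):
        # prefer a clean victim inside the window at the LRU end
        for p in list(self.pages)[: self.window]:
            if not self.pages[p]:
                del self.pages[p]
                return
        self.pages.popitem(last=False)   # all dirty: fall back to LRU
```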


2021 ◽  
Vol 8 (1) ◽  
pp. 69
Author(s):  
Tanwir Tanwir ◽  
Parma Hadi Rantelinggi ◽  
Sri Widiastuti

<p class="Abstract">A replacement algorithm is a mechanism for replacing old objects in a cache with new ones; objects are deleted to reduce bandwidth usage and server load. Deletion is performed when the cache is full and space for new entries is needed. The FIFO, LRU, and LFU algorithms are commonly used for object replacement, but a frequently used object may be deleted during cache replacement while it is still in use; as a result, when the client later requests that object, browsing it takes a long time. To overcome this problem, a combined cache replacement scheme, the Multi-Rule Algorithm, is applied in the form of a double combination algorithm, FIFO-LRU, and a triple combination, FIFO-LRU-LFU. The Mural (Multi-Rule Algorithm) produces average response times of 56.33 and 42 ms respectively at a cache size of 200 MB, whereas a single algorithm requires an average response time of 77 ms. The Multi-Rule Algorithm thus improves performance in terms of delay, throughput, and hit rate, and the Mural cache replacement algorithm is highly recommended for improving client access.</p>
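One plausible reading of a double FIFO-LRU combination (the abstract does not spell out the exact composition, so this structure is an assumption) is a two-segment cache: new objects enter a small FIFO probation segment, and objects hit again while there are promoted to an LRU-managed main segment, so one-time objects never displace hot ones:

```python
from collections import OrderedDict, deque

# Two-segment FIFO-LRU sketch: deque = FIFO probation segment,
# OrderedDict = LRU protected segment (oldest first).

class FifoLruCache:
    def __init__(self, fifo_size, lru_size):
        self.fifo = deque()
        self.fifo_size = fifo_size
        self.lru = OrderedDict()
        self.lru_size = lru_size

    def access(self, key):
        """Returns True on a hit, False on a miss."""
        if key in self.lru:                # hit in protected segment
            self.lru.move_to_end(key)
            return True
        if key in self.fifo:               # second hit: promote to LRU
            self.fifo.remove(key)
            if len(self.lru) >= self.lru_size:
                self.lru.popitem(last=False)
            self.lru[key] = True
            return True
        if len(self.fifo) >= self.fifo_size:
            self.fifo.popleft()            # FIFO evicts one-timers
        self.fifo.append(key)
        return False
```

A triple FIFO-LRU-LFU variant could add a frequency counter to decide promotion out of the LRU segment, but that layer is likewise not specified in the abstract.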


Virtual memory plays an important role in the memory management of an operating system. A process, or a set of processes, may require memory space exceeding the capacity of main memory. This situation is addressed by virtual memory, where a certain memory space in secondary memory is treated as primary memory, i.e., main memory is virtually extended into secondary memory. When a process requires a page, it first searches primary memory. If the page is found, the process continues to execute; otherwise a page fault occurs, which is handled by a page replacement algorithm. Such an algorithm swaps a page out of main memory to secondary memory and replaces it with another page from secondary memory; it should also cause a minimum of page faults, so that the considerable amount of I/O required for swapping pages in and out is reduced. Several page replacement algorithms have been formulated to increase the efficiency of the page replacement technique. In this paper, three page replacement algorithms, FIFO, Optimal and LRU, are discussed; their behavioural patterns are analysed with a systematic approach, and a comparative analysis of the algorithms is recorded with proper diagrams.
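The three algorithms can be compared directly by counting page faults on the same reference string, as in this minimal sketch:

```python
# Page-fault counters for FIFO, LRU, and Optimal on a reference string.

def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) >= frames:
                mem.pop(0)             # evict the oldest page
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p in mem:
            mem.remove(p)              # refresh recency
        else:
            faults += 1
            if len(mem) >= frames:
                mem.pop(0)             # evict least recently used
        mem.append(p)
    return faults

def optimal_faults(refs, frames):
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) >= frames:
                # evict the page whose next use is farthest away
                # (never-used-again pages rank past the end)
                future = refs[i + 1:]
                victim = max(mem, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                mem.remove(victim)
            mem.append(p)
    return faults
```

On the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with three frames, FIFO incurs 10 faults, LRU 9, and Optimal 7, illustrating the usual ordering: Optimal is a lower bound, and LRU typically beats FIFO by exploiting recency.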


Author(s):  
A. V. Vishnekov ◽  
E. M. Ivanova

The paper investigates increasing the performance of computing systems by improving the efficiency of cache memory, and analyses the efficiency indicators of replacement algorithms. We show the need for automated or automatic means of cache memory tuning under current conditions of program code execution, namely dynamic control of the cache replacement algorithm: replacing the current replacement algorithm with one that is more effective under the current computation conditions. We develop methods for caching policy control based on the program type: cyclic, sequential, locally-point, or mixed. We suggest a procedure for selecting an effective replacement algorithm using decision-support methods based on current caching statistics. The paper analyses existing cache replacement algorithms. We propose a decision-making procedure for selecting an effective cache replacement algorithm based on methods of ranking alternatives, preferences, and hierarchy analysis. The critical number of cache hits, the average time of data query execution, and the average cache latency are selected as the indicators that initiate the swapping procedure for the current replacement algorithm. The main advantage of the proposed approach is its universality: it assumes an adaptive decision-making procedure for selecting the effective replacement algorithm. The procedure allows variability in the criteria for evaluating the replacement algorithms, their efficiency, and their preference for different types of program code. Dynamically swapping the replacement algorithm for a more efficient one during program execution improves the performance of the computer system.
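The dynamic-switching idea can be sketched as a small controller that monitors the hit ratio over a sliding window and rotates to an alternative policy when it underperforms. The window size, threshold, and round-robin rotation are illustrative simplifications; the paper's actual procedure ranks alternatives with multi-criteria decision methods:

```python
# Adaptive policy controller sketch: feed it one access result at a
# time; every `window` accesses it checks the hit ratio and, if below
# `threshold`, switches to the next candidate policy.

class AdaptiveController:
    def __init__(self, policies, window=100, threshold=0.5):
        self.policies = policies        # candidate policy names/objects
        self.active = 0                 # index of the current policy
        self.window = window
        self.threshold = threshold
        self.hits = 0
        self.accesses = 0

    def record(self, hit):
        """Record one access result; returns the policy to use next."""
        self.hits += bool(hit)
        self.accesses += 1
        if self.accesses >= self.window:
            if self.hits / self.accesses < self.threshold:
                # current policy underperforms: rotate to the next one
                self.active = (self.active + 1) % len(self.policies)
            self.hits = 0
            self.accesses = 0
        return self.policies[self.active]
```

A real deployment would also have to migrate the cache's metadata (recency lists, frequency counters) when swapping policies, which this sketch omits.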

