Data Lifetime: Recently Published Documents

Total documents: 29 (five years: 13) · H-index: 5 (five years: 0)

2022 · Vol 21 (1) · pp. 1-24
Author(s): Katherine Missimer, Manos Athanassoulis, Richard West

Modern solid-state disks achieve high data transfer rates due to their massive internal parallelism. However, out-of-place updates for flash memory incur garbage collection costs when valid data must be copied during space reclamation. The root cause of this extra cost is that solid-state disks cannot always accurately determine data lifetime and group together data that expires before its space needs to be reclaimed. Real-time systems found in autonomous vehicles, industrial control systems, and assembly-line robots store data from hundreds of sensors and often have predictable data lifetimes. These systems require guaranteed high storage bandwidth for read and write operations by mission-critical real-time tasks. In this article, we depart from the traditional block device interface to guarantee the high throughput needed to process large volumes of data. Using data lifetime information from the application layer, our proposed real-time design, called Telomere, is able to lay out data intelligently in NAND flash memory and eliminate valid-page copies during garbage collection. Telomere's real-time admission control guarantees tasks their required read and write operations within their periods. Under randomly generated tasksets containing 500 tasks, Telomere achieves 30% higher throughput with a 5% storage cost compared to pre-existing techniques.
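The core idea, packing pages that expire together into the same flash block so that reclamation never has to copy live pages, can be sketched as follows. This is a minimal illustration under assumed names and parameters (PAGES_PER_BLOCK, expiry-based bucketing), not Telomere's actual implementation:

```python
# Minimal sketch of lifetime-aware block grouping, assuming a simplified
# flash model (fixed PAGES_PER_BLOCK, one write per page). Illustrative
# only, not Telomere's design: pages whose data expires together go into
# the same block, so the block can be erased once its expiry passes, with
# no valid pages left to copy.
import heapq
import itertools

PAGES_PER_BLOCK = 64  # assumed block geometry

class LifetimeAllocator:
    def __init__(self):
        self.open_blocks = {}          # expiry bucket -> pages written so far
        self.sealed = []               # min-heap of (expiry, tie, pages)
        self._tie = itertools.count()  # tie-breaker so the heap never compares page lists

    def write(self, data, expiry, now):
        # Group writes by application-declared expiry time; a real design
        # would coarsen buckets (e.g. expiry // WINDOW) to fill blocks faster.
        pages = self.open_blocks.setdefault(expiry, [])
        pages.append(data)
        if len(pages) == PAGES_PER_BLOCK:
            heapq.heappush(self.sealed, (expiry, next(self._tie),
                                         self.open_blocks.pop(expiry)))
        self.reclaim(now)

    def reclaim(self, now):
        # Every page in a sealed block shares one expiry, so reclamation is
        # a bare erase: no valid-page copies, hence no GC copy cost.
        while self.sealed and self.sealed[0][0] <= now:
            heapq.heappop(self.sealed)
```

The sketch covers only the placement side; the admission control described in the article additionally budgets each task's read and write operations per period.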


2021 · Vol 251 · pp. 02035
Author(s): Adrian Eduard Negru, Latchezar Betev, Mihai Carabaș, Costin Grigoraș, Nicolae Țăpuş, ...

CERN uses the world’s largest scientific computing grid, WLCG, for distributed data storage and processing. Monitoring of CPU and storage resources is an essential element in detecting operational issues in its systems, for example in the storage elements, and in ensuring their proper and efficient function. The processing of experiment data depends strongly on the quality of data access as well as on data integrity, and both of these key parameters must be assured for the data lifetime. Given the substantial amount of data, O(200 PB), already collected by ALICE and kept at various storage elements around the globe, scanning every single data chunk would be very expensive, both in computing resource usage and in execution time. In this paper, we describe a distributed file crawler that addresses these natural limits by periodically extracting and analyzing statistically significant samples of files from storage elements, evaluating the results, and integrating with the existing monitoring solution, MonALISA.
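A sampling-based integrity check of this kind might look like the sketch below. It is a hypothetical outline, not the ALICE crawler itself: `catalog` and `fetch` stand in for the real file catalogue and storage-element access, the checksum algorithm is assumed to be MD5, and the sample size comes from the standard proportion-estimate formula with finite-population correction.

```python
# Hypothetical sketch of sampling-based integrity checking (not the actual
# ALICE/MonALISA crawler). Rather than scanning O(200 PB), verify a
# statistically significant random sample of files per storage element and
# report the observed corruption rate to the monitoring system.
import hashlib
import math
import random

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    # Sample size for estimating a proportion at 95% confidence with a 5%
    # margin of error, including the finite-population correction.
    n0 = z * z * p * (1 - p) / (margin * margin)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def crawl(se_name, catalog, fetch):
    files = catalog(se_name)                # assumed: [(path, expected_md5), ...]
    sample = random.sample(files, sample_size(len(files)))
    corrupt = 0
    for path, expected_md5 in sample:
        data = fetch(se_name, path)         # assumed: file bytes, or None on failure
        if data is None or hashlib.md5(data).hexdigest() != expected_md5:
            corrupt += 1
    return {"se": se_name, "checked": len(sample), "corrupt": corrupt}
```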


2020 · Vol 2020 · pp. 1-12
Author(s): Pengwei Wang, Caihui Zhao, Yi Wei, Dong Wang, Zhaohui Zhang

Cloud service providers (CSPs) can offer infinite storage space with cheaper maintenance cost compared to the traditional storage mode. Users tend to store their data in geographical and diverse CSPs so as to avoid vendor lock-in. Static data placement has been widely studied in recent works. However, the data access pattern is often time-varying and users may pay more cost if static placement is adopted during the data lifetime. Therefore, it is a pending problem and challenge of how to dynamically store users’ data under time-varying data access pattern. To this end, we propose ADPA, an adaptive data placement architecture that can adjust the data placement scheme based on the time-varying data access pattern and subject for minimizing the total cost and maximizing the data availability. The proposed architecture includes two main components: data retrieval frequency prediction module based on LSTM and data placement optimization module based on Q-learning. The performance of ADPA is evaluated through several experimental scenarios using NASA-HTTP workload and cloud providers information.
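The interaction between the two modules can be sketched as a simple control loop. The sketch below is an assumption-laden stand-in, not ADPA itself: a tabular Q-learning agent chooses among three invented placement schemes, the state is a coarse discretization of the predicted retrieval frequency, and the LSTM predictor is abstracted away (its output is just a number fed to `discretize`).

```python
# Hypothetical sketch of prediction-driven placement with tabular Q-learning
# (ADPA's actual state, action, and reward definitions are not reproduced here).
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning parameters
PLACEMENTS = ["single_cheap", "geo_replicated", "erasure_coded"]  # invented actions

Q = defaultdict(float)  # (state, action) -> estimated value

def discretize(freq):
    # State: coarse level of the predicted access frequency (e.g. LSTM output).
    return "low" if freq < 10 else "mid" if freq < 100 else "high"

def choose(state):
    # Epsilon-greedy selection over placement schemes.
    if random.random() < EPSILON:
        return random.choice(PLACEMENTS)
    return max(PLACEMENTS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning update; the reward would combine (negative) storage
    # and access cost with an availability term, per the paper's objectives.
    best_next = max(Q[(next_state, a)] for a in PLACEMENTS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```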


Author(s): Shun Suzuki, Kyoji Mizoguchi, Hikaru Watanabe, Toshiki Nakamura, Yoshiaki Deguchi, ...

2019 · Vol 56 (2) · pp. 358-383
Author(s): Magdalena Szymkowiak

A family of generalized ageing intensity functions of univariate absolutely continuous lifetime random variables is introduced and studied. They allow the analysis and measurement of the ageing tendency from various points of view. Some of these generalized ageing intensities characterize families of distributions dependent on a single parameter, while others determine distributions uniquely. In particular, it is shown that the elasticity functions of various transformations of distributions that appear in lifetime analysis and reliability theory uniquely characterize the parent distribution. Moreover, the recognition of the shape of a properly chosen generalized ageing intensity estimate admits a simple identification of the data lifetime distribution.
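For orientation, the classical ageing intensity underlying this family is the ratio of the hazard rate to its running average. Writing f, F, and r = f/(1-F) for the density, distribution function, and hazard rate, it reads as below; the generalized versions studied in the paper replace the hazard rate by other distribution transforms.

```latex
% Classical ageing intensity of an absolutely continuous lifetime X:
% the hazard rate divided by its average over (0, x].
L(x) = \frac{r(x)}{\frac{1}{x}\int_0^x r(u)\,\mathrm{d}u}
     = \frac{-x\, f(x)}{\bigl(1 - F(x)\bigr)\,\ln\bigl(1 - F(x)\bigr)},
\qquad x > 0.
```

For example, a constant ageing intensity characterizes the Weibull family, with L identically equal to the shape parameter, which illustrates the single-parameter characterizations mentioned above.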

