hard disk drive
Recently Published Documents


TOTAL DOCUMENTS

922
(FIVE YEARS 57)

H-INDEX

30
(FIVE YEARS 2)

2022 ◽  
Vol 178 ◽  
pp. 106102
Author(s):  
Julien Walzberg ◽  
Robin Burton ◽  
Fu Zhao ◽  
Kali Frost ◽  
Stéphanie Muller ◽  
...  

2022 ◽  
Vol 115 ◽  
pp. 103657
Author(s):  
S. Chumpen ◽  
S. Pimpakun ◽  
B. Charoen ◽  
S. Pornnimitra ◽  
S. Plong-ngooluam ◽  
...  

2021 ◽  
Vol 173 ◽  
pp. 105694
Author(s):  
Kali Frost ◽  
Ines Sousa ◽  
Joanne Larson ◽  
Hongyue Jin ◽  
Inez Hua

Author(s):  
Vaibhav Umesh Mokal

Abstract: Data is among the most valuable assets in the modern world of information technology. Data volume grows day by day as more and more people use the World Wide Web, and system-generated, personal, and informative data are produced in enormous quantities. That data is stored in data centers or in the cloud, ultimately on hard disk drives (HDDs), so if an HDD crashes, the data on it can be lost. This work develops failure prediction for hard disk drives. We chose accuracy and recall as the metrics most relevant to the problem and evaluated several learning methods: AdaBoost, Naive Bayes, Logistic Regression, and Voting. Our investigation shows that while we cannot achieve close to 100% prediction accuracy using ML with the data currently available for HDDs, we can improve prediction accuracy over the standard approach.
Keywords: Machine learning, AdaBoost, Naive Bayes, Voting, Logistic Regression
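The abstract above names a voting ensemble over AdaBoost, Naive Bayes, and Logistic Regression. A minimal sketch of that setup, assuming scikit-learn and synthetic stand-in features (real work would use drive SMART telemetry, which the paper does not detail here):

```python
# Hypothetical sketch, not the paper's actual pipeline: a hard-voting
# ensemble over the three learners named in the abstract, trained on
# synthetic SMART-like attributes.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))                     # stand-in for SMART attributes
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = "drive will fail" (synthetic)

ensemble = VotingClassifier(
    estimators=[
        ("ada", AdaBoostClassifier(n_estimators=50, random_state=0)),
        ("nb", GaussianNB()),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",  # majority vote across the three classifiers
)
ensemble.fit(X[:300], y[:300])
accuracy = ensemble.score(X[300:], y[300:])
```

Hard voting takes the majority class label across the fitted estimators; the paper's reported gain over a baseline would come from tuning these learners on real drive telemetry rather than this toy data.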


2021 ◽  
Author(s):  
Pierre-Olivier Jubert ◽  
Yuri Obukhov ◽  
Cristian Papusoi ◽  
Paul Dorsey
Keyword(s):  

2021 ◽  
Vol 8 (2) ◽  
pp. 205395172110154
Author(s):  
Zane Griffin Talley Cooper

How did the 3.5-inch Winchester hard disk drive become the fundamental building block of the modern data center? In attempting to answer this question, I theorize the concept of "data peripheries" to attend to the awkward, uneven, and unintended outsides of data infrastructures. I explore the concept of data peripheries by first situating Big Data in one of its many unintended outsides—an unassuming dog kennel in Indiana housed in a former permanent magnet manufacturing plant. From the perspective of this dog kennel, I then build a history of the 3.5-inch Winchester hard disk drive, and weave this hard drive history through the industrial histories of rare earth mining and permanent magnet manufacturing, focusing principally on Magnequench, a former General Motors subsidiary, and its sale and movement of operations from Indiana to China in the mid-1990s and early 2000s. I then discuss how mobilities of rare earths, both as materials and political discourse, shape Big Data futures, and conclude by speculating on how using the situated lenses of data peripheries (such as this Indiana dog kennel) can open up new methods for studying the material entanglements of Big Data writ large.


Author(s):  
Karthik Venkatesh ◽  
Abhishek Srivastava ◽  
Rahul Rai ◽  
Bernhard Knigge

Abstract: Accurately detecting irregularities in the media (thermal asperities and delamination) and mapping them out from further use is critical to prevent data loss and minimize head-disk interaction (HDI). Defect growth is a common concern in hard disk drives (HDDs), so the immediate vicinity of media defects is also mapped out to provide sufficient protection against it. One class of media defect that is more complex to protect against growth is scratches on the media. Margining a media scratch involves filling in the gaps between the components of a scratch and margining the vicinity of the scratch in the defect-growth direction. While Hough-transform-based techniques and deep-learning models have been developed to identify media patterns, they cannot be implemented in hard disk drive firmware due to memory and computational limitations. Here, we present a computationally simple and efficient alternative that identifies scratches on the media by combining clustering with iterative parameter estimation to fit a line to the scratch under noisy conditions. The result is a method capable of modeling linear, spiral, and parabolic scratches on the media, filling gaps in the scratch, and extending the margining at either end of the scratch.
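The core idea described above, fitting a line to noisy defect coordinates by iterative parameter estimation, can be sketched as a trimmed least-squares fit. This is an illustrative assumption of how such an iteration might look, not the authors' firmware algorithm; the function name and trimming schedule are hypothetical:

```python
# Hypothetical sketch: fit y = slope*x + intercept to noisy 2-D defect
# coordinates, repeatedly discarding the points farthest from the current
# fit so spurious defects do not pull the line off the scratch.
import numpy as np

def fit_scratch_line(points, n_iters=5, keep_frac=0.8):
    """Iterative least-squares line fit with outlier trimming (illustrative)."""
    pts = np.asarray(points, dtype=float)
    for _ in range(n_iters):
        slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
        resid = np.abs(pts[:, 1] - (slope * pts[:, 0] + intercept))
        # Keep the best-fitting fraction of points for the next iteration.
        keep = resid.argsort()[: max(2, int(keep_frac * len(pts)))]
        pts = pts[keep]
    return slope, intercept

# Synthetic scratch along y = 2x + 1, plus two spurious off-scratch defects.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=x.size)
pts = np.vstack([np.column_stack([x, y]), [[5.0, 30.0], [2.0, -15.0]]])
slope, intercept = fit_scratch_line(pts)
```

Extending the same iteration to spiral or parabolic scratches, as the paper claims, would mean swapping the line model for a higher-order parameterization while keeping the trim-and-refit loop.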


Author(s):  
Sladjana M. Djurasevic ◽  
Uros M. Pesovic ◽  
Borislav S. Djordjevic

Author(s):  
Anna Rita Bennato ◽  
Stephen Davies ◽  
Franco Mariuzzo ◽  
Peter Ormosi
Keyword(s):  
