Persistent Fault Analysis on Block Ciphers

Author(s):  
Fan Zhang ◽  
Xiaoxuan Lou ◽  
Xinjie Zhao ◽  
Shivam Bhasin ◽  
Wei He ◽  
...  

Persistence is an intrinsic property of many faults, yet it has not attracted enough attention over the years. In this paper, the feature of persistence is applied to fault attacks, and the persistent fault attack is proposed. Unlike traditional fault attacks, adversaries can complete the fault injection stage before the encryption stage, which relaxes the constraint of tightly coupled time synchronization. Persistent fault analysis (PFA) is elaborated on different implementations of AES-128, especially fault-hardened implementations based on Dual Modular Redundancy (DMR). Our experimental results show that PFA is quite simple and efficient in breaking these typical implementations. To show the feasibility and practicality of our attack, a case study is illustrated on the shared library Libgcrypt with the Rowhammer technique. Approximately 8200 ciphertexts are enough to extract the master key of AES-128 when PFA is applied to Libgcrypt 1.6.3 with redundant-encryption-based DMR. This work puts forward a new direction for fault attacks and can be extended to attack other implementations under more interesting scenarios.
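The core key-recovery idea behind PFA can be sketched on a toy model of one byte of the last AES round. The sketch below is illustrative, not the paper's implementation: the S-box, the injected fault, and the key byte are all simulated, and the adversary is assumed to know which S-box output value the fault suppressed.

```python
import random

# Toy model of one byte of the last AES round: c = SBOX[x] ^ k.
# A persistent fault overwrites one S-box entry, so one output value
# (y_star) can never be produced, and the ciphertext value y_star ^ k
# never appears. Finding the missing ciphertext value reveals k.

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)

y_star = SBOX[0]     # this output value disappears after the fault
SBOX[0] = SBOX[1]    # inject the persistent fault (duplicated entry)

k = 0x3A             # secret round-key byte (hypothetical)

seen = set()
for _ in range(8000):            # on the order of the ~8200 ciphertexts above
    x = random.randrange(256)    # unknown intermediate state byte
    seen.add(SBOX[x] ^ k)        # observed ciphertext byte

missing = (set(range(256)) - seen).pop()  # the one value that never occurs
recovered = missing ^ y_star
print(hex(recovered))            # recovers the secret key byte k
```

Note that duplicate-and-compare redundancy does not help here: both redundant computations read the same faulty table, so their outputs still agree, which is why the fault-hardened DMR implementations above remain vulnerable.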

10.29007/fmzl ◽  
2018 ◽  
Author(s):  
Sayandeep Saha ◽  
Ujjawal Kumar ◽  
Debdeep Mukhopadhyay ◽  
Pallab Dasgupta

Characterization of all possible faults in a cryptosystem exploitable for fault attacks is a problem of both theoretical and practical interest for the cryptographic community. Complete knowledge of the exploitable fault space is desirable when designing optimal countermeasures for any given crypto-implementation. In this paper, we address the exploitable fault characterization problem in the context of Differential Fault Analysis (DFA) attacks on block ciphers. The formidable size of the fault spaces demands an automated and fast mechanism for verifying each individual fault instance, and neither the traditional, cipher-specific, manual DFA techniques nor the generic and automated Algebraic Fault Attacks (AFA) [10] fulfill these criteria. Further, the diversified structures of different block ciphers suggest that such automation should be equally applicable to any block cipher. This work presents an automated framework for DFA identification, fulfilling all the aforementioned criteria, which, instead of performing the attack, just estimates the attack complexity for each individual fault instance. A generic and extendable data-mining-assisted dynamic analysis framework capable of capturing a large class of DFA distinguishers is devised, along with a graph-based complexity analysis scheme. The framework significantly outperforms another recently proposed one [6] in terms of attack class coverage and automation effort. Experimental evaluation on AES and PRESENT establishes the effectiveness of the proposed framework in detecting most of the known DFAs, which eventually enables the characterization of the exploitable fault space.


2007 ◽  
Vol 2 (1) ◽  
pp. 14-21
Author(s):  
Carlos R. Moratelli ◽  
Érika Cota ◽  
Marcelo S. Lubaszewski

This work describes a hardware approach for concurrent fault detection and error correction in a cryptographic core. It has been shown in the literature that transient faults injected into a cryptographic core can lead to the revelation of the encryption key using quite inexpensive equipment. This kind of attack is a real threat to tamper-resistant devices like Smart Cards. To tackle such attacks, the cryptographic core must be immune to transient faults. In this work the DES algorithm is taken as a vulnerable cryptosystem case study. We show how an attack against DES is performed through a fault injection campaign. Then, a countermeasure based on partial hardware replication is proposed and applied to DES. Experimental results show the efficiency of the proposed scheme in protecting DES against DFA attacks. Furthermore, the proposed solution is implementation-independent and can be applied to other cryptographic algorithms, such as AES.
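The replication idea behind such countermeasures can be illustrated with a minimal duplicate-and-compare sketch. This is a generic software illustration of the principle, not the paper's DES hardware: encrypt() is a toy placeholder cipher.

```python
# Minimal duplicate-and-compare sketch of hardware replication (DMR style):
# run the computation twice and release the result only if both copies
# agree, so a transient fault hitting one copy is detected.

def encrypt(block: int, key: int) -> int:
    return (block ^ key) & 0xFFFFFFFF   # toy placeholder, not DES

def dmr_encrypt(block: int, key: int, inject_fault: bool = False):
    c1 = encrypt(block, key)
    c2 = encrypt(block, key)            # redundant copy
    if inject_fault:
        c2 ^= 0x80                      # simulate a transient bit flip
    if c1 != c2:
        return None                     # suppress output instead of leaking it
    return c1

print(dmr_encrypt(0x1234, 0xBEEF))                     # fault-free: ciphertext
print(dmr_encrypt(0x1234, 0xBEEF, inject_fault=True))  # faulty run: None
```

Suppressing the output on mismatch is what denies the attacker the faulty ciphertext that DFA needs.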


Author(s):  
Keerthi K ◽  
Indrani Roy ◽  
Chester Rebeiro ◽  
Aritra Hazra ◽  
Swarup Bhunia

Fault injection attacks are one of the most powerful forms of cryptanalytic attacks on ciphers. A single, precisely injected fault during the execution of a cipher like AES can completely reveal the key within a few milliseconds. Software implementations of ciphers therefore need to be thoroughly evaluated for such attacks. In recent years, automated tools have been developed to perform these evaluations. These tools work either on the cipher algorithm or on its implementations. Tools that work at the algorithm level can provide a comprehensive assessment of fault attack vulnerability for different fault attacks and different fault models. Their application is, however, restricted because every realization of the cipher has unique vulnerabilities. On the other hand, tools that work on cipher implementations have a much wider application but are often restricted by the range of fault attacks and the number of fault models they can evaluate. In this paper, we propose a framework, called FEDS, that uses a combination of compiler techniques and model checking to merge the advantages of both algorithm-level and implementation-level tools. Like algorithm-level tools, FEDS can provide a comprehensive assessment of fault attack exploitability considering a wide range of fault attacks and fault models. Like implementation-level tools, FEDS works with implementations and therefore has wide application. We demonstrate the versatility of FEDS by evaluating seven different implementations of AES (including a bitsliced implementation) and implementations of CLEFIA and CAMELLIA for Differential Fault Attacks. The framework automatically identifies exploitable instructions in all implementations. Further, we present an application of FEDS in a Fault Attack Aware Compiler, which can automatically identify and protect exploitable regions of the code.
We demonstrate that the compiler can generate significantly more efficient code than a naïvely protected equivalent, while maintaining the same level of protection.


Author(s):  
Hadi Soleimany ◽  
Nasour Bagheri ◽  
Hosein Hadipour ◽  
Prasanna Ravi ◽  
Shivam Bhasin ◽  
...  

We focus on multiple persistent faults analysis in this paper to fill existing gaps in its application across a variety of scenarios. Our major contributions are twofold. First, we propose a novel technique to apply persistent fault analysis in the multiple persistent faults setting that decreases the number of surviving keys and the required data. We demonstrate that by utilizing 1509 and 1448 ciphertexts, the number of surviving keys after performing persistent fault analysis on AES in the presence of eight and sixteen faults can be reduced to only 2^9 candidates, whereas the best known attacks need 2008 and 1643 ciphertexts, respectively, with a time complexity of 2^50. Second, we develop generalized frameworks for retrieving the key in the ciphertext-only model. Our methods for both performing persistent fault attacks and the key-recovery process are highly flexible and provide a general trade-off between the number of required ciphertexts and the time complexity. To break AES with 16 persistent faults in the S-box, our experiments show that the number of required ciphertexts can be decreased to 477 while the attack remains practical with respect to time complexity. To confirm the accuracy of our methods, we performed several simulations as well as experimental validations on an ARM Cortex-M4 microcontroller, using electromagnetic fault injection on AES and LED, two well-known block ciphers, to validate the types of faults and the distribution of the number of faults in practice.
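The multiple-fault filtering idea can be illustrated with a small simulation. This sketch reflects the general principle only, under the assumption that the attacker knows which S-box outputs the faults suppressed; it is not the paper's algorithm and does not reproduce its complexities.

```python
import random

# With t persistent faults, t S-box outputs (the set Y) are never
# produced, so for the correct key byte k no ciphertext byte ever
# equals y ^ k with y in Y. Any candidate key whose forbidden values
# do appear among the observed ciphertexts can be discarded.

random.seed(7)
SBOX = list(range(256))
random.shuffle(SBOX)

t = 16
suppressed = set()
for i in range(t):               # overwrite t entries with duplicates
    suppressed.add(SBOX[i])
    SBOX[i] = SBOX[200 + i]

k = 0x5C                         # secret key byte (hypothetical)

observed = set()
for _ in range(3000):
    observed.add(SBOX[random.randrange(256)] ^ k)

candidates = [kk for kk in range(256)
              if all((y ^ kk) not in observed for y in suppressed)]
print(len(candidates))           # the correct key always survives the filter
```

Collecting more ciphertexts shrinks the candidate list faster, which is the ciphertext/time trade-off the abstract describes.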


2020 ◽  
Vol 39 (3) ◽  
pp. 407-437
Author(s):  
Markus Bader

Abstract In German, a verb selected by another verb normally precedes the selecting verb. Modal verbs in the perfect tense provide an exception to this generalization because, according to prescriptive grammars, they require the perfective auxiliary to occur in cluster-initial position. Bader and Schmid (2009b) have shown, however, that native speakers accept the auxiliary in all positions except the cluster-final one. Experimental results as well as corpus data indicate that verb cluster serialization is a case of free variation. I discuss how this variation can be accounted for, focusing on two mismatches between acceptability and frequency: First, slight acceptability advantages can turn into strong frequency advantages. Second, syntactic variants with basically zero frequency can still vary substantially in acceptability. These mismatches remain unaccounted for if acceptability is related to frequency on the level of whole sentence structures, as in Stochastic OT (Boersma and Hayes 2001). However, when the acceptability-frequency relationship is modeled on the level of individual weighted constraints, using harmony as the link (see Pater 2009 for different harmony-based frameworks), the two mismatches follow given appropriate linking assumptions.
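The harmony-based link mentioned above can be made concrete with the standard Harmonic Grammar formulation (generic notation, not the paper's own symbols): each candidate's harmony is a weighted sum of its constraint violations, and a Maximum Entropy link maps harmony to predicted frequency while acceptability can track harmony directly.

```latex
% Harmony of candidate x: constraint violation counts f_i(x) (negative),
% weighted by non-negative constraint weights w_i.
H(x) = \sum_i w_i \, f_i(x)

% MaxEnt link from harmony to predicted frequency of x among
% competing variants x'.
P(x) = \frac{e^{H(x)}}{\sum_{x'} e^{H(x')}}
```

Because the link is exponential, a small harmony advantage can yield a large frequency advantage, and two candidates with near-zero frequency can still differ substantially in harmony, which is how the two mismatches above can both follow from one set of weights.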


2021 ◽  
Vol 11 (15) ◽  
pp. 7169
Author(s):  
Mohamed Allouche ◽  
Tarek Frikha ◽  
Mihai Mitrea ◽  
Gérard Memmi ◽  
Faten Chaabane

To bridge the current gap between Blockchain expectations and their intensive computation constraints, the present paper advances a lightweight processing solution, based on a load-balancing architecture, compatible with lightweight/embedded processing paradigms. In this way, the execution of complex operations is securely delegated to an off-chain general-purpose computing machine, while the intimate Blockchain operations are kept on-chain. The illustrations correspond to an on-chain Tezos configuration and to a multiprocessor ARM embedded platform (integrated into a Raspberry Pi). The performance is assessed in terms of security, execution time, and CPU consumption when achieving a visual document fingerprinting task. It is thus demonstrated that the advanced solution makes it possible for a computation-intensive application to be deployed under severely constrained computation and memory resources, as set by a Raspberry Pi 3. The experimental results show that up to nine Tezos nodes can be deployed on a single Raspberry Pi 3 and that the limitation derives not from memory but from computation resources. The execution time with a limited number of fingerprints is 40% higher than with a classical PC solution (value computed with a relative error lower than 5% at a 95% confidence level).


2021 ◽  
Author(s):  
Emma Michie ◽  
Mark Mulrooney ◽  
Alvar Braathen

<p>Significant uncertainties occur through varying methodologies when interpreting faults using seismic data.  These uncertainties are carried through to the interpretation of how faults may act as baffles/barriers or increase fluid flow.  The seismic line spacing chosen by the interpreter when picking fault segments, as well as the chosen surface generation algorithm, will dictate how detailed or smoothed the surface is, and hence will impact any further interpretation such as fault seal, fault stability and fault growth analyses.</p><p>This contribution is a case study showing how picking strategies influence analysis of a bounding fault in terms of CO<sub>2</sub> storage assessment.  This example utilizes data from the Smeaheia potential storage site within the Horda Platform, 20 km East of Troll East.  This is a fault bound prospect, known as the Alpha prospect, and hence the bounding fault is required to have a high seal potential and low chance of reactivation upon CO<sub>2</sub> injection.</p><p>We can observe that an optimum spacing for fault interpretation for this case study is approximately 100 m.  It appears that any additional detail from interpretation with a line spacing of ≤50 m simply adds further complexities, associated with the sensitivities of the individual interpreter.  Hence, interpreting at a finer scale may not necessarily improve the subsurface model and any related analysis, but may in fact produce highly irregular surfaces, which impacts any further fault analysis.  
Interpreting with a line spacing greater than 100 m often leads to overly smoothed fault surfaces that miss details that could be crucial, both for fault seal / stability as well as for fault growth models.</p><p>Uncertainty associated with the chosen seismic interpretation methodology will follow through to subsequent fault seal analysis, such as analysis of whether in situ stresses, combined with increased pore pressure through CO<sub>2</sub> injection, will act to reactivate the faults, leading to up-fault fluid flow / seep.  We have shown that changing picking strategies significantly alters the interpreted stability of the fault: picking with an increased line spacing has been shown to increase the overall fault stability, whereas picking using every line leads to the interpretation of a critically stressed fault.  However, it is important to note that differences in picking strategy show little influence on the overall predicted fault membrane seal (i.e. shale gouge ratio) of the fault, used when interpreting the fault seal capacity for a fault bound CO<sub>2</sub> storage site.</p>


2009 ◽  
Vol 289-292 ◽  
pp. 385-395 ◽  
Author(s):  
Jerzy Jedlinski

This paper briefly reviews the relationship between the growth mechanism and matter transport, using as an example the best currently applied metallic materials, namely alumina formers. Attention is paid to the experimental approach as well as to the procedure for interpreting experimental results. The scale structure, microstructure, morphology and phase composition are indicated as factors strongly affecting its growth mechanism. An attempt is made to elucidate the possible relationships between the obtained experimental results and the actual scale growth mechanisms operating during oxidation exposures.

