Counterexample- and Simulation-Guided Floating-Point Loop Invariant Synthesis

Author(s):
Anastasiia Izycheva
Eva Darulova
Helmut Seidl

Abstract: We present an automated procedure for synthesizing sound inductive invariants for floating-point numerical loops. Our procedure generates invariants in the form of a convex polynomial inequality that tightly bounds the values of the loop variables. Such invariants are a prerequisite for reasoning about the safety and roundoff errors of floating-point programs. Unlike previous approaches that rely on policy iteration, linear algebra, or semidefinite programming, we propose a heuristic procedure based on simulation and counterexample-guided refinement. We observe that this combination is remarkably effective and general: it can handle linear and nonlinear loop bodies, nondeterministic values, and conditional statements. Our evaluation shows that our approach efficiently synthesizes loop invariants for existing benchmarks from the literature, and that it also finds invariants for nonlinear loops that today's tools cannot handle.
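The essence of combining simulation with counterexample-guided refinement can be conveyed with a minimal Python sketch (our illustration, with a made-up loop and parameters, not the authors' implementation; in particular, the refinement check below is itself simulation-based, whereas a sound procedure would discharge the inductiveness check with a verifier):

```python
import random

def loop_body(x):
    # Made-up nonlinear loop body: x' = 0.5*x + 0.1*x^2
    return 0.5 * x + 0.1 * x * x

def synthesize_bound(x_lo, x_hi, steps=100, samples=1000, slack=1.1):
    """Guess a candidate invariant x^2 <= c from simulation, then refine
    it with (simulation-based) counterexamples until no violation is found."""
    # Phase 1: simulation -- run the loop from sampled initial values.
    c = 0.0
    for _ in range(samples):
        x = random.uniform(x_lo, x_hi)
        for _ in range(steps):
            c = max(c, x * x)
            x = loop_body(x)
    c *= slack  # loosen the guess so it has a chance of being inductive

    # Phase 2: counterexample-guided refinement -- look for a state that
    # satisfies the candidate but whose successor violates it.
    while True:
        cex = None
        bound = c ** 0.5
        for _ in range(samples):
            x = random.uniform(-bound, bound)
            if loop_body(x) ** 2 > c:
                cex = x
                break
        if cex is None:
            return c  # no counterexample found: accept the candidate
        c = max(c, loop_body(cex) ** 2) * slack  # enlarge, guided by the cex

print(synthesize_bound(-1.0, 1.0))
```

For a contracting loop such as the one above, the candidate bound is accepted quickly; for a divergent loop this refinement would never terminate, which is one reason the actual procedure bounds its search and verifies candidates soundly.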

1977, Vol. 24 (3), pp. 132-143
Author(s):
Tran Thong
B. Liu

1973, Vol. 20 (3), pp. 391-398
Author(s):
Toyohisa Kaneko
Bede Liu

2017, Vol. 27 (2), pp. 261-272
Author(s):
Alexey Zhirabok
Alexey Shumsky
Sergey Solyanik
Alexey Suvorov

Abstract: The problem of robust linear and nonlinear diagnostic observer design is considered. A method is suggested for constructing observers that are disturbance-decoupled or have minimal sensitivity to disturbances. The method is based on a logic-dynamic approach, which allows systems with non-differentiable nonlinearities in the state equations to be handled by methods of linear algebra.
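As a point of reference for what a diagnostic observer computes, here is a minimal sketch of a discrete-time linear residual generator (our illustration, with made-up matrices, not the paper's logic-dynamic construction): the residual y − Cx̂ stays near zero in the fault-free case and reacts when a fault enters the dynamics.

```python
import numpy as np

# Hypothetical discrete-time plant x' = A x + B u + f (fault), y = C x.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.3]])  # observer gain, chosen so (A - L C) is stable

def run(fault_step=50, steps=100):
    x = np.array([1.0, 0.0])  # plant state
    xh = np.zeros(2)          # observer estimate
    residuals = []
    for k in range(steps):
        u = np.array([0.1])
        y = C @ x
        r = y - C @ xh        # residual: a persistent nonzero value signals a fault
        residuals.append(float(r[0]))
        fault = np.array([0.0, 0.5]) if k >= fault_step else np.zeros(2)
        x = A @ x + B @ u + fault
        xh = A @ xh + B @ u + L @ r
    return residuals

res = run()
print(max(abs(v) for v in res[:50]), max(abs(v) for v in res[50:]))
```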


2017, Vol. 27 (03n04), pp. 1750006
Author(s):
Farhad Merchant
Anupam Chattopadhyay
Soumyendu Raha
S. K. Nandy
Ranjani Narayan

Basic Linear Algebra Subprograms (BLAS) and Linear Algebra Package (LAPACK) form the basic building blocks of several High Performance Computing (HPC) applications and hence dictate their performance. Performance in such tuned packages is attained by tuning several algorithmic and architectural parameters, such as the number of parallel operations in the Directed Acyclic Graph of the BLAS/LAPACK routines, the sizes of the memories in the memory hierarchy of the underlying platform, the memory bandwidth, and the structure of the compute resources in the underlying platform. In this paper, we closely investigate the impact of the Floating Point Unit (FPU) micro-architecture on the performance tuning of BLAS and LAPACK. We present a theoretical analysis of the pipeline depth of different floating-point operations, such as the multiplier, adder, square root, and divider, followed by a characterization of BLAS and LAPACK to determine the parameters the theoretical framework requires for deciding the optimum pipeline depth of the floating-point operations. A simple design of a Processing Element (PE) is presented, and the PE is shown to outperform the most recent custom realizations of BLAS and LAPACK by 1.1x to 1.5x in GFlops/W and 1.9x to 2.1x in GFlops/mm2. Compared to multicore, General Purpose Graphics Processing Unit (GPGPU), Field Programmable Gate Array (FPGA), and ClearSpeed CSX700 platforms, a performance improvement of 1.8-80x is reported for the PE.
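The trade-off behind an optimum pipeline depth can be sketched with a toy model (ours, with illustrative parameters, not the paper's framework): splitting an operator's combinational delay across more stages shortens the cycle time but adds register overhead and fill latency, so short and long operation streams favor different depths.

```python
# Toy model (illustrative parameters, not from the paper): an FPU op has
# t_logic of combinational delay that a d-stage pipeline splits evenly,
# plus t_reg of register overhead per stage. Executing n independent
# operations takes (d + n - 1) cycles (pipeline fill, then streaming).
def exec_time(d, n, t_logic=10.0, t_reg=0.5):
    cycle = t_logic / d + t_reg    # ns per cycle at pipeline depth d
    return (d + n - 1) * cycle     # total ns for n operations

for n in (4, 64, 1024):            # short vs long operation streams
    best = min(range(1, 33), key=lambda d: exec_time(d, n))
    print(f"n={n:5d}: optimum pipeline depth = {best}")
```

Under this model, longer streams of independent operations push the optimum toward deeper pipelines, which is why characterizing the BLAS/LAPACK workload matters for choosing the depth.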


2021, Vol. 47 (3), pp. 1-23
Author(s):
Ahmad Abdelfattah
Timothy Costa
Jack Dongarra
Mark Gates
Azzam Haidar
...

This article describes a standard API for a set of Batched Basic Linear Algebra Subprograms (Batched BLAS or BBLAS). The focus is on many independent BLAS operations on small matrices that are grouped together and processed by a single routine, called a Batched BLAS routine. The matrices are grouped in uniformly sized groups, with just one group if all the matrices are of equal size. The aim is to provide more efficient, yet portable, implementations of algorithms on high-performance many-core platforms. These include multicore and many-core CPU processors, GPUs and coprocessors, and other hardware accelerators with floating-point compute facilities. In addition to the standard single and double precision types, the standard also includes half and quadruple precision. Half precision, in particular, is used in many very large scale applications, such as those associated with machine learning.
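The grouped-batch idea can be illustrated with a plain-Python sketch (our illustration only; the BBLAS standard specifies C interfaces, and the function and parameter names below are hypothetical): one call processes many small GEMMs, with per-group dimensions and scalars, so a uniform batch is simply the single-group case.

```python
import numpy as np

def gemm_batch(group_sizes, alphas, a_groups, b_groups, betas, c_groups):
    """Sketch of a grouped batched GEMM: within each group all matrices
    share dimensions and scalars; a uniform batch has a single group."""
    for g, size in enumerate(group_sizes):
        for i in range(size):
            # C_i <- alpha * A_i @ B_i + beta * C_i, per BLAS convention
            c_groups[g][i][:] = (alphas[g] * a_groups[g][i] @ b_groups[g][i]
                                 + betas[g] * c_groups[g][i])

# Uniform batch of 100 independent 4x4 multiplications: one group.
rng = np.random.default_rng(0)
A = [rng.standard_normal((4, 4)) for _ in range(100)]
B = [rng.standard_normal((4, 4)) for _ in range(100)]
C = [np.zeros((4, 4)) for _ in range(100)]
gemm_batch([100], [1.0], [A], [B], [0.0], [C])
print(np.allclose(C[0], A[0] @ B[0]))  # True
```

A real implementation would dispatch the whole batch to the hardware at once; the point of the grouped interface is that this dispatch decision is made once per call rather than once per small matrix.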


Author(s):
George Constantinides
Fredrik Dahlqvist
Zvonimir Rakamarić
Rocco Salvia

Abstract: We present a detailed study of roundoff errors in probabilistic floating-point computations. We derive closed-form expressions for the distribution of roundoff errors associated with a random variable, and we prove that roundoff errors are generally close to being uncorrelated with their generating distribution. Based on these theoretical advances, we propose a model of IEEE floating-point arithmetic for numerical expressions with probabilistic inputs and an algorithm for evaluating this model. Our algorithm provides rigorous bounds on the output and error distributions of arithmetic expressions over random variables, evaluated in the presence of roundoff errors. It keeps track of complex dependencies between random variables using an SMT solver, and it is capable of providing sound yet tight probabilistic bounds on roundoff errors using symbolic affine arithmetic. We implemented the algorithm in the PAF tool and evaluated it on FPBench, a standard benchmark suite for the analysis of roundoff errors. Our evaluation shows that PAF computes tighter bounds than the current state of the art on almost all benchmarks.
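A non-rigorous Monte Carlo version of the question PAF answers soundly can be sketched in a few lines (our illustration; PAF itself uses symbolic affine arithmetic and an SMT solver, not sampling): sample the probabilistic inputs, evaluate the expression in low and high precision, and inspect the empirical distribution of the relative roundoff error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Probabilistic inputs, as a made-up example: x ~ U(0, 1), y ~ N(5, 1).
x = rng.uniform(0.0, 1.0, n)
y = rng.normal(5.0, 1.0, n)

def expr(x, y):
    return (x + y) * (x - y)

# Evaluate in binary32 and, as a proxy for the exact value, in binary64.
lo = expr(x.astype(np.float32), y.astype(np.float32)).astype(np.float64)
hi = expr(x, y)
rel_err = np.abs((lo - hi) / hi)

# Empirical quantiles of the roundoff error distribution. These are not
# sound bounds: a sound tool must also account for the unsampled tail.
for q in (0.5, 0.99, 1.0):
    print(f"quantile {q:4.2f}: {np.quantile(rel_err, q):.3e}")
```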


Author(s):
Debasmita Lohar
Clothilde Jeangoudoux
Joshua Sobel
Eva Darulova
Maria Christakis

Abstract: Tools that automatically prove the absence, or detect the presence, of large floating-point roundoff errors or of the special values NaN and Infinity greatly help developers to reason about the unintuitive nature of floating-point arithmetic. We show, however, that state-of-the-art tools support, or provide non-trivial results for, only relatively short programs. We propose a framework for combining different static and dynamic analyses that increases their reach beyond what they can achieve individually. Furthermore, we show how adaptations of existing dynamic and static techniques effectively trade some soundness guarantees for increased scalability, providing conditional verification of floating-point kernels in realistic programs.
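One dynamic ingredient such a framework can build on is easy to sketch (a generic illustration of shadow execution, not the paper's framework): run a floating-point kernel alongside a higher-precision shadow and flag special values or large divergence, trading soundness (only the executed inputs are checked) for scalability.

```python
import math
from fractions import Fraction

def kernel(xs):
    # Example floating-point kernel: running mean of squares.
    acc = 0.0
    for i, v in enumerate(xs, start=1):
        acc += (v * v - acc) / i
    return acc

def shadow_check(xs, rel_tol=1e-6):
    """Dynamic check: compare the kernel against an exact rational shadow
    execution and flag NaN/Infinity or large roundoff on this input."""
    lo = kernel(xs)
    if math.isnan(lo) or math.isinf(lo):
        return "special value (NaN/Inf)"
    acc = Fraction(0)  # exact shadow of the same computation
    for i, v in enumerate(xs, start=1):
        fv = Fraction(v)
        acc += (fv * fv - acc) / i
    hi = float(acc)
    if hi != 0.0 and abs(lo - hi) / abs(hi) > rel_tol:
        return f"large roundoff: rel err {abs(lo - hi) / abs(hi):.2e}"
    return "ok on this input"

print(shadow_check([0.1 * k for k in range(1000)]))
```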


Author(s):
Ilzina Dmitrieva
Gennadiy Ivanov
Alexey Mineev

The need to improve the level of mathematical, and in particular geometric, training of students at technical universities is driven by modern computer-aided design technologies, which are based on mathematical models of the designed products, technological processes, and so on, taking into account a large variety of source data. Therefore, from the first years at a technical university, when studying the cycle of mathematical disciplines, it is advisable to present a number of topics in the terms and concepts of multidimensional geometry. At the same time, combining constructive (graphical) algorithms for solving problems in descriptive geometry with analytical algorithms from linear algebra and mathematical analysis unites their advantages: the constructive approach provides the imagery inherent in engineering thinking, while the analytical approach delivers the final result. The article demonstrates, using specific examples, the effectiveness of combining constructive and analytical algorithms for solving problems involving linear and nonlinear forms in many variables.

