Using transformations in the implementation of higher-order functions

1991 ◽  
Vol 1 (4) ◽  
pp. 459-494 ◽  
Author(s):  
Hanne Riis Nielson ◽  
Flemming Nielson

Abstract
Traditional functional languages do not have an explicit distinction between binding times. It arises implicitly, however, as one typically instantiates a higher-order function with the arguments that are known, whereas the unknown arguments remain to be taken as parameters. The distinction between ‘known’ and ‘unknown’ is closely related to the distinction between binding times like ‘compile-time’ and ‘run-time’. This leads to the use of a combination of polymorphic type inference and binding time analysis for obtaining the required information about which arguments are known.

Following the current trend in the implementation of functional languages we then transform the run-time level of the program (not the compile-time level) into categorical combinators. At this stage we have a natural distinction between two kinds of program transformations: partial evaluation, which involves the compile-time level of our notation, and algebraic transformations (i.e., the application of algebraic laws), which involves the run-time level of our notation.

By reinterpreting the combinators in suitable ways we obtain specifications of abstract interpretations (or data flow analyses). In particular, the use of combinators makes it possible to use a general framework for specifying both forward and backward analyses. The results of these analyses will be used to enable program transformations that are not applicable under all circumstances.

Finally, the combinators can also be reinterpreted to specify code generation for various (abstract) machines. To improve the efficiency of the code generated, one may apply abstract interpretations to collect extra information for guiding the code generation. Peephole optimizations may be used to improve the code at the lowest level.
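
The binding-time separation described above can be made concrete with the classic power example. The sketch below is our own illustration in Haskell, not the authors' two-level notation: the exponent is compile-time data, the base is run-time data, and the run-time level is represented by a small residual-expression type rather than by categorical combinators.

```haskell
-- Residual run-time code: a tiny expression language.
data Expr = Var String | Lit Int | Mul Expr Expr
  deriving Show

-- 'power' runs at compile time: the recursion over the known exponent
-- is unfolded, emitting run-time code in which the base stays a parameter.
power :: Int -> Expr -> Expr
power 0 _ = Lit 1
power n x = Mul x (power (n - 1) x)

main :: IO ()
main = print (power 3 (Var "x"))
-- Mul (Var "x") (Mul (Var "x") (Mul (Var "x") (Lit 1)))
```

Because the exponent is known, the loop disappears entirely at specialization time; only the emitted Expr term, with the base as a free variable, survives to run time.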

1990 ◽  
Vol 19 (337) ◽  
Author(s):  
Hanne Riis Nielson ◽  
Flemming Nielson

Traditional functional languages do not have an explicit distinction between binding times. It arises implicitly, however, as typically one instantiates a higher-order function with the arguments that are known, whereas the unknown arguments remain to be taken as parameters. The distinction between ‘known’ and ‘unknown’ is closely related to the distinction between binding times like ‘compile-time’ and ‘run-time’. This leads to using a combination of polymorphic type inference and binding time analysis for obtaining the required information about which arguments are known.

Following the current trend in the implementation of functional languages we then transform the run-time level of the program (not the compile-time level) into categorical combinators. At this stage we have a natural distinction between two kinds of program transformations: partial evaluation, which involves the compile-time level of our notation, and algebraic transformations (i.e., the application of algebraic laws), which involves the run-time level of our notation.

By reinterpreting the combinators in suitable ways we obtain specifications of abstract interpretations (or data flow analyses). In particular, the use of combinators makes it possible to use a general framework for specifying both forward and backward analyses. The results of these analyses will be used to enable program transformations that are not applicable under all circumstances.

Finally, the combinators can also be reinterpreted to specify code generation for various (abstract) machines. To improve the efficiency of the code generated, one may apply abstract interpretations to collect extra information for guiding the code generation. Peephole optimizations may be used to improve the code at the lowest level.


1997 ◽  
Vol 4 (43) ◽  
Author(s):  
Vincent Balat ◽  
Olivier Danvy

We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long beta-eta-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long beta-eta-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time.

We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.
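
The core of type-directed partial evaluation can be suggested with a small normalization-by-evaluation sketch. The Haskell code below is our own illustration, not the authors' Caml implementation, and it produces an expression tree rather than loadable byte code: a pair of functions, reify and reflect, indexed by a type descriptor, maps values to their long beta-eta-normal forms.

```haskell
-- Residual terms in long beta-eta-normal form.
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- Simple types, and values that are either residual code or functions.
data Ty  = Base | Arr Ty Ty
data Val = Code (Int -> Term)   -- code, parameterised by a fresh-name counter
         | F (Val -> Val)

-- reify: map a value of the given type to its eta-long normal form.
reify :: Ty -> Val -> Int -> Term
reify Base      (Code c) n = c n
reify (Arr a b) (F f)    n =
  let x = 'x' : show n
  in Lam x (reify b (f (reflect a (\_ -> Var x))) (n + 1))
reify _ _ _ = error "ill-typed value"

-- reflect: embed residual code as a value, eta-expanding at arrows.
reflect :: Ty -> (Int -> Term) -> Val
reflect Base      c = Code c
reflect (Arr a b) c =
  F (\v -> reflect b (\n -> App (c n) (reify a v n)))

-- Example: the Haskell value 'id', used at type (o -> o) -> (o -> o),
-- reifies to its eta-long normal form \x0. \x1. x0 x1.
main :: IO ()
main = print (reify (Arr (Arr Base Base) (Arr Base Base)) (F id) 0)
```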


1997 ◽  
Vol 4 (46) ◽  
Author(s):  
Olivier Danvy ◽  
Kristoffer H. Rose

We demonstrate the usefulness of higher-order rewriting techniques for specializing programs, i.e., for partial evaluation. More precisely, we demonstrate how casting program specializers as combinatory reduction systems (CRSs) makes it possible to formalize the corresponding program transformations as meta-reductions, i.e., reductions in the internal "substitution calculus." For partial-evaluation problems, this means that instead of having to prove on a case-by-case basis that one's "two-level functions" operate properly, one can concisely formalize them as a combinatory reduction system and obtain as a corollary that static reduction does not go wrong and yields a well-formed residual program.

We have found that the CRS substitution calculus provides adequate expressive power to formalize partial evaluation: it provides sufficient termination strength while avoiding the need for additional restrictions, such as types, that would complicate the description unnecessarily (for our purpose). We also review the benefits and penalties entailed by more expressive higher-order formalisms. In addition, partial evaluation provides a number of examples of higher-order rewriting where being higher order is a central (rather than an occasional or merely exotic) property. We illustrate this by demonstrating how standard but non-trivial partial-evaluation examples are handled with higher-order rewriting.
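
To suggest what static reduction of a "two-level function" looks like, the Haskell sketch below (our own loose analogy, not the paper's CRS formalism) represents static lambdas as meta-level functions, so static beta-reduction is carried out by the host language's own substitution, the role the CRS substitution calculus plays in the paper.

```haskell
-- Two-level terms: static constructs are reduced at specialization
-- time; dynamic constructs are emitted as residual code.
data Two
  = SLam (Two -> Two)   -- static lambda, as a meta-level function
  | SApp Two Two        -- static application: a static redex
  | DVar String         -- residual variable
  | DLam String Two     -- residual lambda
  | DApp Two Two        -- residual application

-- Static reduction: contract every static redex; the result should
-- contain dynamic constructs only, i.e., a well-formed residual program.
specialize :: Two -> Two
specialize (SApp f a) =
  case specialize f of
    SLam g -> specialize (g a)
    _      -> error "ill-formed two-level term"
specialize (DLam x b) = DLam x (specialize b)
specialize (DApp f a) = DApp (specialize f) (specialize a)
specialize t          = t

-- Render residual programs.
render :: Two -> String
render (DVar x)   = x
render (DLam x b) = "\\" ++ x ++ ". " ++ render b
render (DApp f a) = "(" ++ render f ++ " " ++ render a ++ ")"
render _          = error "static residue in residual program"

-- Example: applying the static function \y. \x. y x to the
-- dynamic variable f specializes to the residual program \x. f x.
main :: IO ()
main = putStrLn (render (specialize
         (SApp (SLam (\y -> DLam "x" (DApp y (DVar "x")))) (DVar "f"))))
```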


1999 ◽  
Vol 6 (45) ◽  
Author(s):  
Torben Amtoft

We report on a case study in the application of partial evaluation, initiated by the desire to speed up a constraint-based algorithm for control-flow analysis. We designed and implemented a dedicated partial evaluator, able to specialize the analysis with respect to a given constraint graph and thus remove the interpretive overhead, and measured it with Feeley's Scheme benchmarks. Even though the gain turned out to be rather limited, our investigation yielded valuable feedback in that it provided a better understanding of the analysis, leading us to (re)invent an incremental version. We believe this phenomenon to be a quite frequent spinoff from using partial evaluation, since the removal of interpretive overhead makes the flow of control more explicit and hence pinpoints sources of inefficiency. Finally, we observed that partial evaluation in our case yields such regular, low-level specialized programs that it begs for run-time code generation.
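
The interpretive overhead mentioned above can be made concrete with a generic solver for the set-inclusion constraints typical of control-flow analysis. The Haskell sketch below is our own illustration, not Amtoft's partial evaluator: the solver re-traverses the constraint list on every pass, and specializing it with respect to a fixed constraint graph amounts to unrolling that traversal into straight-line updates.

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

type Var = String

-- Constraints of a control-flow analysis, schematically:
-- Elem n x asserts {n} <= x; Incl x y asserts x >= y.
data Constraint = Elem Int Var | Incl Var Var

-- Generic solver: iterate over the constraint list until a fixed
-- point is reached.  The repeated traversal of 'cs' on every pass
-- is the interpretive overhead that specialization removes.
solve :: [Constraint] -> M.Map Var (S.Set Int)
solve cs = go M.empty
  where
    go env
      | env' == env = env
      | otherwise   = go env'
      where env' = foldl step env cs
    look v env = M.findWithDefault S.empty v env
    step env (Elem n x) = M.insertWith S.union x (S.singleton n) env
    step env (Incl x y) = M.insertWith S.union x (look y env) env

main :: IO ()
main = print (solve [Elem 1 "a", Incl "b" "a", Incl "c" "b"])
```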


Author(s):  
Jesús Aransay ◽  
Jose Divasón

Abstract
In this paper, we present a formalisation in a proof assistant, Isabelle/HOL, of a naive version of the Gauss-Jordan algorithm, with explicit proofs of some of its applications; and, additionally, a process to obtain versions of this algorithm in two different functional languages (SML and Haskell) by means of code generation techniques from the verified algorithm. The aim of this research is not to compete with specialised numerical implementations of Gauss-like algorithms, but to show that formal proofs in this area can be used to generate usable functional programs. The obtained programs show compelling performance in comparison to some other verified and functional versions, and accomplish some challenging tasks, such as the computation of determinants of matrices of big integers and the computation of the homology of matrices representing digital images.
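
To give the flavour of the extracted programs, here is a naive, purely functional Gauss-Jordan reduction over exact rationals, written directly on lists of lists. This is our own Haskell sketch, not the code generated by Isabelle/HOL.

```haskell
import Data.List (find)

type Row = [Rational]

-- Naive Gauss-Jordan elimination to reduced row echelon form,
-- using exact rational arithmetic on a list-of-lists matrix.
gaussJordan :: [Row] -> [Row]
gaussJordan []   = []
gaussJordan rows = snd (foldl step (0, rows) [0 .. length (head rows) - 1])
  where
    -- Process one column; r counts the pivot rows already fixed.
    step (r, m) col =
      case find (\i -> m !! i !! col /= 0) [r .. length m - 1] of
        Nothing -> (r, m)                     -- no pivot in this column
        Just i  ->
          let m'    = swap r i m
              pivot = map (/ (m' !! r !! col)) (m' !! r)  -- scale pivot to 1
              elim j row
                | j == r    = pivot
                | otherwise = zipWith (\a b -> a - (row !! col) * b) row pivot
          in (r + 1, zipWith elim [0 ..] m')

    swap i j xs
      | i == j    = xs
      | otherwise = [ pick k x | (k, x) <- zip [0 ..] xs ]
      where pick k x | k == i    = xs !! j
                     | k == j    = xs !! i
                     | otherwise = x

main :: IO ()
main = mapM_ print (gaussJordan [[2, 4, 6], [1, 3, 5]])
-- [1 % 1, 0 % 1, (-1) % 1]
-- [0 % 1, 1 % 1, 2 % 1]
```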


2003 ◽  
Vol 38 (12) ◽  
pp. 44-56 ◽  
Author(s):  
Sam Kamin
