The ability to compose code in a modular fashion is important to the construction of large programs. In the logic programming setting, it is desirable that such capabilities be realized through logic-based devices. We present here an approach to doing this that is supported in the Teyjus implementation of the Lambda Prolog language. Within this scheme, a module corresponds to a block of code whose external view is mediated by a signature. Thus, signatures impose a form of hiding that is explained logically via existential quantifications over predicate, function and constant names. Modules interact through the mechanism of accumulation that translates into conjoining the clauses in them while respecting the scopes of existential quantifiers introduced by signatures. We show that this simple device for statically structuring name spaces suffices for realizing features related to code scoping for which the dynamic control of predicate definitions was earlier considered necessary. While a compile-time inlining of accumulated modules that respects quantifier scoping can be used to realize the module capabilities we present in a transparently correct way, such an approach has the drawback of not supporting separate compilation. We show how this approach can be refined into a scheme that allows each distinct module to be compiled separately, with the effect of inlining being realized by a subsequent linking process.
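The accumulation semantics described above can be made concrete with a small sketch. The following Python model is illustrative only (it is not from the paper): a module is represented as a signature of exported names together with clauses over names, and accumulation renames each module's hidden names apart, mirroring the existential quantifiers a signature introduces, before conjoining the clauses.

```python
from itertools import count

_fresh = count()

def rename_hidden(module):
    """Rename every name not in the module's signature to a fresh name,
    mirroring existential quantification over the hidden names."""
    sig, clauses = module["sig"], module["clauses"]
    mapping = {}
    def rn(name):
        if name in sig:
            return name
        if name not in mapping:
            mapping[name] = f"{name}#{next(_fresh)}"
        return mapping[name]
    return [[rn(n) for n in clause] for clause in clauses]

def accumulate(*modules):
    """Conjoin the clause sets of several modules, keeping each
    module's hidden names distinct from those of the others."""
    program = []
    for m in modules:
        program.extend(rename_hidden(m))
    return program

# Two modules that both use a private helper named "aux"; after
# accumulation the two uses of "aux" remain distinct predicates.
m1 = {"sig": {"sort"},    "clauses": [["sort", "aux"], ["aux"]]}
m2 = {"sig": {"reverse"}, "clauses": [["reverse", "aux"], ["aux"]]}
program = accumulate(m1, m2)
```

A separate-compilation scheme would, in this picture, compile each module against placeholder names and let a linker perform the renaming and conjunction.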

The approach to reasoning about structural operational semantics style specifications supported by the Abella system is discussed. This approach uses λ-tree syntax to treat object language binding and encodes binding-related properties in generic judgments. Further, object language specifications are embedded directly into the reasoning framework through recursive definitions. The treatment of binding via generic judgments implicitly enforces distinctness and atomicity in the names used for bound variables. These properties must, however, be made explicit in reasoning tasks. This objective can be achieved by allowing recursive definitions to also specify generic properties of atomic predicates. The utility of these various logical features in the Abella system is demonstrated through actual reasoning tasks. Brief comparisons with a few other logic-based approaches are also made.

Many semantical aspects of programming languages, such as their
operational semantics and their type assignment calculi, are specified
by describing appropriate proof systems.
Recent research has identified two proof-theoretic features that
allow direct, logic-based reasoning about such descriptions:
the treatment of atomic judgments as fixed points
(recursive definitions) and an encoding of
binding constructs via generic judgments.
However, the logics encompassing these two features have thus far
treated them orthogonally: that is,
they do not provide the ability to define
object-logic properties that themselves depend on an intrinsic
treatment of binding.
We propose a new and simple integration of these features within an
intuitionistic logic enhanced with induction over natural numbers and
we show that the resulting logic is consistent.
The pivotal benefit of the integration is that it allows recursive
definitions to not just encode simple, traditional forms of *atomic
judgments* but also to capture generic properties pertaining
to such judgments.
The usefulness of this logic is illustrated by showing how it can
provide elegant
treatments of object-logic contexts that
appear in proofs involving typing calculi and of arbitrarily cascading
substitutions that play a role in reducibility arguments.

Bedwyr is a generalization of logic programming that allows model
checking directly on syntactic expressions possibly containing
bindings. This system, written in OCaml, is a direct implementation of
two recent advances in the theory of proof search. The first is
centered on the fact that both finite success and finite failure can
be captured in the sequent calculus by incorporating inference rules
for *definitions* that allow *fixed points* to be explored.
As a result, proof search in such a sequent calculus can capture
simple model checking problems as well as may and must behavior in
operational semantics. The second is that higher-order abstract
syntax is directly supported using term-level λ-binders and
the ∇ quantifier. These features allow reasoning
directly on expressions
containing bound variables.

This paper concerns the explicit treatment of substitutions in the lambda calculus. One of its contributions is the simplification and rationalization of the suspension calculus that embodies such a treatment. The earlier version of this calculus provides a cumbersome encoding of substitution composition, an operation that is important to the efficient realization of reduction. This encoding is simplified here, resulting in a treatment that is easy to use directly in applications. The rationalization consists of the elimination of a practically inconsequential flexibility in the unravelling of substitutions that has the inadvertent side effect of losing contextual information in terms; the modified calculus now has a structure that naturally supports logical analyses, such as ones related to the assignment of types, over lambda terms. The overall calculus is shown to have pleasing theoretical properties such as a strongly terminating sub-calculus for substitution and confluence even in the presence of term meta variables that are accorded a grafting interpretation. Another contribution of the paper is the identification of a broad set of properties that are desirable for explicit substitution calculi to support and a classification of a variety of proposed systems based on these. The suspension calculus is used as a tool in this study. In particular, mappings are described between it and the other calculi towards understanding the characteristics of the latter.
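To convey the flavor of such treatments, here is a much-simplified explicit-substitution calculus in Python. It is a λσ-style sketch, not the suspension calculus itself: terms are de Bruijn-indexed tuples, substitutions are shift/cons structures, and a composition operation lets beta-redexes be contracted without a separate, naive substitution pass.

```python
# A simplified explicit-substitution calculus over de Bruijn terms.
# Substitutions: ("shift", k) or ("cons", term, substitution).

def lookup(i, s):
    if s[0] == "shift":
        return ("var", i + s[1])
    t, rest = s[1], s[2]
    return t if i == 0 else lookup(i - 1, rest)

def compose(s1, s2):
    """Applying compose(s1, s2) equals applying s1 and then s2."""
    if s1[0] == "shift":
        k = s1[1]
        if s2[0] == "shift":
            return ("shift", k + s2[1])
        return s2 if k == 0 else compose(("shift", k - 1), s2[2])
    return ("cons", subst(s1[1], s2), compose(s1[2], s2))

def subst(t, s):
    if t[0] == "var":
        return lookup(t[1], s)
    if t[0] == "app":
        return ("app", subst(t[1], s), subst(t[2], s))
    # lam: protect the bound variable and shift the rest of the substitution
    return ("lam", subst(t[1], ("cons", ("var", 0), compose(s, ("shift", 1)))))

def nf(t):
    """Normal-order normalization with beta via explicit substitution."""
    if t[0] == "var":
        return t
    if t[0] == "lam":
        return ("lam", nf(t[1]))
    f = nf(t[1])
    if f[0] == "lam":
        return nf(subst(f[1], ("cons", t[2], ("shift", 0))))
    return ("app", f, nf(t[2]))

# (lam lam 1) a b reduces to a: a K-combinator check with free variables.
K = ("lam", ("lam", ("var", 1)))
term = ("app", ("app", K, ("var", 5)), ("var", 7))
```

The suspension calculus refines this picture by, among other things, recording how far under abstractions a term has been moved, which is what makes the composition and type-assignment issues discussed above non-trivial.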

We present a natural confluence of higher-order hereditary
Harrop formulas (HH formulas),
Constraint Logic Programming, and Concurrent Constraint Programming as
a fragment of intuitionistic, higher-order logic. This
combination is motivated by the need for a simple executable,
logical presentation for static and dynamic semantics of modern
programming languages. The power of HH formulas is needed for
higher-order abstract syntax, and the power of constraints is
needed to naturally abstract the underlying domain of computation.
Underpinning the combination is a sound and complete operational
interpretation of a two-sided sequent presentation of (a large
fragment of) intuitionistic logic in terms of *behavioral testing of
concurrent systems*. Formulas on the left hand side of a sequent style
presentation are viewed as a system of concurrent agents, and formulas
on the right hand side as *tests* against this evolving
system. The language permits recursive definitions of agents and
tests, allows tests to augment the system being tested and allows
agents to be contingent on the success of a test.
We present a condition on proofs, *operational derivability* (OD),
and show that the operational semantics generates only operationally
derivable proofs. We show that a sequent in this logic has a proof iff
it has an operationally derivable proof.

Appears in the Proceedings of the *Twelfth International Conference on
Logic for Programming, Artificial Intelligence and Reasoning*,
Springer LNAI 3835, pages 110-125, 2005.

The traditional purpose of types in programming languages, that of providing correctness assurances at compile time, is increasingly being supplemented by a direct role for them in the computational process. In the specific context of typed logic programming, this is manifest in their effect on the unification operation. Their influence takes two different forms. First, in a situation where polymorphism is permitted, type information is needed to determine if different occurrences of the same name in fact denote an identical constant. Second, type information may determine the specific form of a binding for a variable. When types are needed for the second purpose as in the case of higher-order unification, these have to be available with every variable and constant. However, in many situations such as first-order and higher-order pattern unification it turns out that types have no impact on the variable binding process. As a consequence, type examination is needed in these situations only for the first of the two purposes described and even here a careful preprocessing can considerably reduce their runtime footprint. We develop a scheme for treating types in these contexts that exploits this observation. Under this scheme, type information is elided in most cases and is embedded into term structure when this is not entirely possible. Our approach obviates types when properties known as definitional genericity and type preservation are satisfied and has the advantage of working even when these conditions are violated.
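The first role of types mentioned above can be illustrated with a small sketch (illustrative only, not the paper's scheme): under polymorphism, two occurrences of the same constant name denote the same constant only when their type instances are compatible, so comparison during unification must consult types.

```python
# Types: variables are lowercase strings, constructors are tuples
# such as ("int",) or ("list", ("int",)).

def resolve(t, subst):
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify_types(t1, t2, subst):
    """First-order unification on simple type expressions.
    Occurs check omitted for brevity."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        return {**subst, t1: t2}
    if isinstance(t2, str):
        return {**subst, t2: t1}
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify_types(a, b, subst)
        if subst is None:
            return None
    return subst

def same_constant(c1, c2):
    """A constant occurrence is a (name, type-instance) pair."""
    (n1, ty1), (n2, ty2) = c1, c2
    return n1 == n2 and unify_types(ty1, ty2, {}) is not None
```

For instance, `nil` at type `list int` and `nil` at type `list string` are distinct constants despite sharing a name, while an occurrence at `list a` is compatible with both.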

Appears in the Proceedings of the *Workshop on Empirically
Successful Automated Reasoning in Higher-Order Logic (ESHOL'05)*

The operational semantics and typing judgements of modern programming and specification languages are often defined using relations and proof systems. In simple settings, logic programming languages can be used to provide rather direct and natural interpreters for such operational semantics. More complex features of specifications such as names and their bindings, proof rules with negative premises, and the exhaustive enumeration of state spaces, all pose significant challenges to conventional logic programming systems. In this paper, we describe a simple architecture for the implementation of deduction systems that allows a specification to interleave finite success and finite failure. The implementation techniques for this prover are largely common ones from higher-order logic programming, i.e., logic variables, (higher-order pattern) unification, backtracking (using stream-based computation), and abstract syntax based on simply typed λ-terms. We present a particular instance of this prover's architecture and its prototype implementation, Level 0/1, based on the dual interpretation of (finite) success and finite failure in proof search. We show how Level 0/1 provides a high-level and declarative implementation of model checking and bisimulation checking for the (finite) π-calculus.
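The kind of bisimulation checking mentioned above can be conveyed in miniature. The following sketch (in Python rather than the logic-programming setting of the paper) computes strong bisimilarity on a finite labelled transition system as a naive greatest fixed point; finite failure corresponds to a pair of states being removed from the relation.

```python
def bisimilarity(states, trans):
    """Naive greatest-fixpoint computation of strong bisimilarity on a
    finite labelled transition system; trans is a set of
    (source, action, target) triples."""
    def succs(p, a):
        return {q for (s, act, q) in trans if s == p and act == a}
    actions = {a for (_, a, _) in trans}
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # p and q stay related only if each move of one can be
            # matched by a move of the other into a related pair.
            ok = all(
                all(any((p1, q1) in rel for q1 in succs(q, a))
                    for p1 in succs(p, a))
                and
                all(any((p1, q1) in rel for p1 in succs(p, a))
                    for q1 in succs(q, a))
                for a in actions
            )
            if not ok:
                rel.discard((p, q))
                changed = True
    return rel
```

For example, a state that loops on action `a` is bisimilar to a pair of states that alternate on `a`, while a state that can follow `a` with `b` is distinguished from one that cannot.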

Revised version appears in the proceedings of the *Twenty-First
International Conference on Logic Programming*, Lecture Notes in
Computer Science Vol 3668, pages 371-387, Springer,
2005.

Higher-order pattern unification problems arise often in computations carried out within systems such as Twelf, Lambda Prolog and Isabelle. An important characteristic of such problems is that they are given by equations appearing under a prefix of alternating universal and existential quantifiers. Existing algorithms for solving these problems assume that such prefixes are simplified to a "forall-some-forall" form by an a priori application of a transformation known as raising. There are drawbacks to this approach. Mixed quantifier prefixes typically manifest themselves in the course of computation, thereby requiring a dynamic form of preprocessing that is difficult to support in low-level implementations. Moreover, raising may be redundant in many cases and its effect may have to be undone by a subsequent pruning transformation. We propose a method to overcome these difficulties here. In particular, a unification algorithm is described that proceeds by recursively descending through the structures of terms, performing raising and other transformations on-the-fly and only as needed. This algorithm also exploits an explicit substitution notation for lambda terms.
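The raising transformation that this paper avoids performing eagerly can itself be sketched. In the illustrative Python below (a hypothetical representation, not the paper's algorithm), a mixed prefix is converted to exists-forall form: each existential variable that crosses universal quantifiers is replaced in the body by its application to those universals, so it acquires a functional type.

```python
def raise_prefix(prefix, body):
    """Convert a mixed quantifier prefix to exists*-forall* form by
    raising: an existential variable that crosses universals u1..uk
    is replaced in the body by its application to u1..uk."""
    universals, raised, over = [], [], {}
    for (q, x) in prefix:
        if q == "forall":
            universals.append(x)
        else:
            raised.append(x)
            over[x] = list(universals)   # universals this variable crosses
    def repl(t):
        if t[0] == "var":
            x = t[1]
            if x in over and over[x]:
                return ("app", ("var", x), [("var", u) for u in over[x]])
            return t
        if t[0] == "app":
            return ("app", repl(t[1]), [repl(a) for a in t[2]])
        return t
    return ([("exists", x) for x in raised]
            + [("forall", u) for u in universals]), repl(body)
```

Applied eagerly to a prefix such as "forall x, exists F, forall y", this turns F into a function of x everywhere it occurs; the paper's point is that doing this up front is often redundant work that a subsequent pruning step must undo.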

Revised version appears in *Journal of Automated Reasoning 33:89-132, 2005*.

Higher-order representations of objects such as programs, proofs, formulas and types have become important to many symbolic computation tasks. Systems that support such representations usually depend on the implementation of an intensional view of the terms of some variant of the typed lambda calculus. New notations have been proposed for the lambda calculus that provide an excellent basis for realizing such implementations. There are, however, several choices in the actual deployment of these notations the practical consequences of which are not currently well understood. We attempt to develop such an understanding here by examining the impact on performance of different combinations of the features afforded by such notations. Amongst the facets examined are the treatment of bound variables, eagerness and laziness in substitution and reduction, the ability to merge different structure traversals into one and the virtues of annotations on terms that indicate their dependence on variables bound by external abstractions. We complement qualitative assessments with experiments conducted by executing programs in a language that supports an intensional view of lambda terms while varying relevant aspects of the implementation of the language. Our study provides insights into the preferred approaches to representing and reducing lambda terms and also exposes characteristics of computations that have a somewhat unanticipated effect on performance.

Appears in *Fifth ACM-SIGPLAN International Conference on Principles
and Practice of Declarative Programming*, pages 195-206, 2003.

Substitution in the lambda calculus is a complex operation that traditional presentations of beta contraction naively treat as a unitary operation. Actual implementations are more careful. Within them, substitutions are realized incrementally through the use of environments. However, environments are usually not accorded a first-class status within such systems in that they are not reflected into term structure. This approach does not allow the smaller substitution steps to be intermingled with other operations of interest on lambda terms. Various new notations for lambda terms remedy this situation by proposing an explicit treatment of substitutions. Unfortunately, a naive implementation of beta reduction based on such notations has the potential of being costly: each use of the substitution propagation rules causes the creation of a new structure on the heap that is often discarded in the immediately following step. There is, thus, a tradeoff between these two approaches. This paper discusses these tradeoffs and offers an amalgamated approach that utilizes recursion in rewrite rule application but also suspends substitution operations where profitable.

Revised version appears in *Theory and Practice of Logic
Programming 5(3):305-354, 2005*.

The logic programming paradigm provides the basis for a new
intensional view of higher-order notions. This view is realized
primarily by employing the terms of a typed lambda calculus as
representational devices and by using a richer form of unification
for probing their structures. These additions have important
meta-programming applications but they also pose non-trivial
implementation problems. One issue concerns the machine representation
of lambda terms suitable to their intended use: an adequate encoding
must facilitate comparison operations over terms in addition to
supporting the usual reduction computation. Another aspect relates to
the treatment of a unification operation that has a branching
character and that sometimes calls for the delaying of the solution of
unification problems. A final issue concerns the execution of goals
whose structures become apparent only in the course of computation.
These various problems are exposed in this paper and solutions to them
are described. A satisfactory representation for lambda terms is
developed by exploiting the nameless notation of de Bruijn as well as
explicit encodings of substitutions. Special mechanisms are
molded into the structure of traditional Prolog implementations to
support branching in unification and carrying of unification
problems over other computation steps; a premium is placed in this
context on exploiting determinism and on emulating usual first-order
behaviour. An extended compilation model is presented that
treats higher-order unification and also handles dynamically emergent
goals. The ideas described here have been employed in the *Teyjus*
implementation of the Lambda Prolog language, a fact that is used to
obtain a preliminary assessment of their efficacy.

Appears in Ninth Workshop on Logic, Language, Information and Computation (WoLLIC'02), Electronic Notes in Theoretical Computer Science, Elsevier, 2002.

Many metalanguages and logical frameworks have emerged in recent years that use the terms of the lambda calculus as data structures. A common set of questions govern the suitability of a representation for lambda terms in the implementation of such systems: alpha-convertibility must be easily recognizable, sharing in reduction steps, term traversal and term structure must be possible, comparison and unification operations should be efficiently supported and it should be possible to examine terms embedded inside abstractions. Explicit substitution notations for lambda calculi provide a basis for realizing such requirements. We discuss here the issues related to using one such notation---the suspension notation of Nadathur and Wilson---in this capacity. This notation has been used in two significant practical systems: the Standard ML of New Jersey compiler and the Teyjus implementation of Lambda Prolog. We expose the theoretical properties of this notation, highlight pragmatic considerations in its use in implementing operations such as reduction and unification and discuss its relationship to other explicit substitution notations.


*Proceedings of the
Thirteenth International Conference on Rewriting Techniques and
Applications*, Lecture Notes in Computer Science,
Volume 2378, (S. Tison, ed.), pages 192-206, Springer-Verlag, 2002.

Higher-order representations of objects such as programs, specifications and proofs are important to many metaprogramming and symbolic computation tasks. Systems that support such representations often depend on the implementation of an intensional view of the terms of suitable typed lambda calculi. Refined lambda calculus notations have been proposed that can be used in realizing such implementations. There are, however, choices in the actual deployment of such notations whose practical consequences are not well understood. Towards addressing this lacuna, the impact of three specific ideas is examined: the de Bruijn representation of bound variables, the explicit encoding of substitutions in terms and the annotation of terms to indicate their independence of external abstractions. Qualitative assessments are complemented by experiments over actual computations. The empirical study is based on Lambda Prolog programs executed using suitable variants of a low level, abstract machine based implementation of this language.

Appears in *Proceedings of the Fifth International
Symposium on Functional and Logic Programming*, Lecture Notes in
Computer Science, Volume 2024, (H. Kuchen and K. Ueda, eds.), pages 1--20,
Springer-Verlag, 2001.

Stimulated by concerns of software certification especially as it
relates to mobile code, formal structures such as specifications and
proofs are beginning to play an explicit role in
computing. In representing and manipulating such structures, an
approach is needed that pays attention to the binding operation that
is present in them. The language Lambda Prolog provides programming
support for a higher-order treatment of abstract syntax that is
especially suited to this task. This support is realized by enhancing
the traditional strength of logic programming in the metalanguage
realm with an ability for dealing directly with binding
structure. This paper identifies the features of Lambda Prolog that
endow it with such a capability, illustrates their use, and describes
methods for their implementation. Also discussed is a new
realization of Lambda Prolog called *Teyjus* that incorporates
the implementation ideas presented.

Appears in *Proceedings of the Sixteenth Conference on Automated
Deduction*,
Lecture Notes in Artificial Intelligence, Volume 1632 (H. Ganzinger,
ed.), pages 287--291, Springer-Verlag, July 1999.

The logic programming language Lambda Prolog is based on the
intuitionistic theory of *higher-order hereditary Harrop
formulas*, a logic that significantly extends the theory of Horn
clauses. A systematic exploitation of features in the richer logic
endows Lambda Prolog with capabilities at the programming level that
are not present in traditional logic programming languages. Several
studies have established the value of Lambda Prolog as a language
for implementing systems that manipulate formal objects such as
formulas, programs, proofs and types. Towards harnessing these
benefits, methods have been developed for realizing this language
efficiently. This work has culminated in the description of an
abstract machine and compiler based implementation scheme. An actual
implementation of Lambda Prolog based on these ideas has recently
been completed. The planned presentation will exhibit this
system---called *Teyjus*---and will also illuminate the
metalanguage capabilities of Lambda Prolog.

*Proceedings of the First International Workshop on Explicit
Substitutions. Also available as Technical Report Number TR-98-01,
Department of Computer Science, University of Chicago, January,
1998*.

This extended abstract has a pragmatic intent: it explains the use of an
explicit substitution notation in an implementation of the
higher-order logic programming language lambdaProlog.
In particular, it describes a calculus for lambda terms that
is called the *annotated suspension notation*, then presents a
stack-based procedure for head-normalizing terms in this notation and,
finally, explains how these various aspects fit into an
implementation of higher-order unification. All the devices sketched
in this paper have been used in a C-based implementation of an
abstract machine for lambdaProlog.

Appears in *Theoretical Computer Science 232(1-2):273-298, 2000*.

Based on an analysis of the inference rules used, we provide a complete characterization of the situations in which classical and intuitionistic provability coincide. We then examine the relationship of these derivability notions to uniform provability, a restriction of intuitionistic provability that embodies a special form of goal-directedness. We determine, first, the circumstances in which these relations coincide. Using this result, we identify the richest versions of the so-called abstract logic programming languages in classical and intuitionistic logic. We then study the reduction of classical and, derivatively, intuitionistic provability to uniform provability via the addition to the assumption set of the negation of the formula to be proved. Our focus here is on understanding the situations in which this reduction is achieved. However, our discussions indicate the structure of a proof procedure based on the reduction, a matter also considered explicitly elsewhere.

Appears in *Journal of Functional and Logic Programming 1999(9), MIT Press,
April 1999*.

The language lambdaProlog incorporates a module notion that permits
the space of names and procedure declarations to be decomposed into
smaller units. Interactions between these units can take place through
either an accumulation or an importation process. There are both
static and dynamic effects to such interactions: the parsing of
expressions may require names declared in another module and
executable code may utilize procedures defined elsewhere. We describe
a method for implementing this feature for modular programming that is
based on the *separate* compilation of each module into an
appropriate fragment of code. The dynamic semantics of module
importation involves enhancing existing program contexts with the
procedures defined in other modules. This effect is achieved through
a run-time process for including the compiler generated code for such
procedures. Our implementation method partitions the code space into
distinct chunks corresponding to the module structure, with the
management of the sub-parts being realized through a module table.
Efficiency of execution requires the use of uniform structures, such
as a common symbol table, for the code in all sub-components. To
realize this requirement, we describe a suitable communication between
the compilation and loading processes. The scheme presented here has
been implemented and tested in conjunction with an abstract machine
for lambdaProlog.

Appears in *Journal of Logic and Computation 8(2):209-230, 1998*.

Uniform proofs are sequent calculus proofs with the following characteristic: the last step in the derivation of a complex formula at any stage in the proof is always the introduction of the top-level logical symbol of that formula. We investigate the relevance of this uniform proof notion to structuring proof search in classical logic. A logical language in whose context provability is equivalent to uniform provability admits of a goal-directed proof procedure that interprets logical symbols as search directives whose meanings are given by the corresponding inference rules. While this uniform provability property does not hold directly of classical logic, we show that it holds of a fragment of it that only excludes essentially positive occurrences of universal quantifiers under a modest, sound, modification to the set of assumptions: the addition to them of the negation of the formula being proved. We further note that all uses of the added formula can be factored into certain derived rules. The resulting proof system and the uniform provability property that holds of it are used to outline a proof procedure for classical logic. An interesting aspect of this proof procedure is that it incorporates within it previously proposed mechanisms for dealing with disjunctive information in assumptions and for handling hypotheticals. Our analysis sheds light on the relationship between these mechanisms and the notion of uniform proofs.

*Appears in the Tenth Annual Symposium on Logic in Computer
Science, pages 148-155, July 1995*

One formulation of the concept of logic programming is the
notion of *Abstract Logic Programming Language*, introduced
in [MNPS91].
Central to that definition is *uniform proof*,
which enforces the requirements of inference direction, including
goal-directedness, and the duality of readings, declarative and
procedural.
We use this technology to investigate Disjunctive Logic Programming
(DLP), an extension of traditional logic programming that permits
disjunctive program clauses.
This extension has been considered by some to be
inappropriately identified with logic programming because the
indefinite reasoning introduced by disjunction violates the
goal-oriented search directionality central to logic programming.
We overcome this criticism by showing that the requirement of uniform
provability can be realized in a logic more general than that of
DLP under a modest, sound, modification of programs.
We use this observation to derive inference rules that capture the
essential proof structure of InH-Prolog, a known proof procedure for
DLP.

*Revised version appears in Volume 5 of the Handbook of Logic in
AI and Logic Programming, Oxford University Press, pp 499-590.*

This paper, which is to be a chapter in a handbook, provides an exposition of the notion of higher-order logic programming. It begins with an informal consideration of the nature of a higher-order extension to usual logic programming languages. Such an extension is then presented formally by outlining a suitable higher-order logic --- Church's simple theory of types --- and by describing a version of Horn clauses within this logic. Various proof-theoretic and computational properties of these higher-order Horn clauses that are relevant to their use in programming are observed. These observations eventually permit the description of an actual higher-order logic programming language. The applications of this language are examined. It is shown, first of all, that the language supports the forms of higher-order programming that are familiar from other programming paradigms. The true novelty of the language, however, is that it permits typed lambda terms to be used as data structures. This feature leads to several important applications for the language in the realm of meta-programming. Examples that bring out this facet of the language are presented. A complete exploitation of this meta-programming capability requires certain additional abstraction mechanisms to be present in the language. This issue is discussed, the logic of hereditary Harrop formulas is presented as a means for realizing the needed strengthening of the language, and the virtues of the features arising out of the extension to the logic are illustrated.

- Introduction
- Motivating a Higher-Order Extension to Horn Clauses
- A Higher-Order Logic
    - The Language
    - Equality between Terms
    - The Notion of Derivation
    - A Notion of Models
    - Predicate Variables and the Subformula Property
- Higher-Order Horn Clauses
    - The Meaning of Computations
    - Restriction to Positive Terms
    - Provability and Operational Semantics
- Towards a Practical Realization
    - The Higher-Order Unification Problem
    - P-Derivations
    - Designing an Actual Interpreter
- Examples of Higher-Order Programming
    - A Concrete Syntax for Programs
    - Some Simple Higher-Order Programs
    - Implementing Tactics and Tacticals
    - Comparison with Higher-Order Functional Programming
- Using lambda-Terms as Data Structures
    - Implementing an Interpreter for Horn Clauses
    - Dealing with Functional Programs as Data
    - A Shortcoming of Horn Clauses for Meta-Programming
- Hereditary Harrop Formulas
    - Permitting Universal Quantifiers and Implications in Goals
    - Recursion over Structures that Incorporate Binding
- Conclusion
- Acknowledgements

*Appears in Journal of Logic Programming 25(2):119-161, November 1995*

The inclusion of universal quantification and a form of implication in goals in logic programming is considered. These additions provide a logical basis for scoping but they also raise new implementation problems. When universal and existential quantifiers are permitted to appear in mixed order in goals, the devices of logic variables and unification that are employed in solving existential goals must be modified to ensure that constraints arising out of the order of quantification are respected. Suitable modifications that are based on attaching numerical tags to constants and variables and on using these tags in unification are described. The resulting devices are amenable to an efficient implementation and can, in fact, be assimilated easily into the usual machinery of the Warren Abstract Machine (WAM). The provision of implications in goals results in the possibility of program clauses being added to the program for the purpose of solving specific subgoals. A naive scheme based on asserting and retracting program clauses does not suffice for implementing such additions for two reasons. First, it is necessary to also support the resurrection of an earlier existing program in the face of backtracking. Second, the possibility for implication goals to be surrounded by quantifiers requires a consideration of the parameterization of program clauses by bindings for their free variables. Devices for supporting these additional requirements are described, as is the integration of these devices into the WAM. Further extensions to the machine are outlined for handling higher-order additions to the language. The ideas presented here are relevant to the implementation of the higher-order logic programming language lambda Prolog.
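The numerical-tag idea can be sketched at a high level (this is an illustrative Python model, not the WAM-level realization): each constant and variable carries the level of the quantifier that introduced it, a variable may not be bound to a term containing a constant with a higher tag, and binding a variable lowers the tags of free variables in the bound term so that later bindings also respect the scope constraint.

```python
class Var:
    """A logic variable tagged with the numeric level of the
    quantifier occurrence that introduced it."""
    def __init__(self, name, level):
        self.name, self.level, self.ref = name, level, None

def walk(t):
    while isinstance(t, Var) and t.ref is not None:
        t = t.ref
    return t

def adjust(v, t):
    """Occurs check combined with the tag check: v cannot capture a
    constant quantified inside v's scope (a higher tag); variables in
    t have their tags lowered to v's."""
    t = walk(t)
    if isinstance(t, Var):
        if t is v:
            return False
        t.level = min(t.level, v.level)
        return True
    if t[0] == "c":                          # constant: ("c", name, level)
        return t[2] <= v.level
    return all(adjust(v, a) for a in t[2])   # compound: ("f", name, args)

def unify(s, t):
    s, t = walk(s), walk(t)
    if s is t:
        return True
    if isinstance(s, Var):
        if not adjust(s, t):
            return False
        s.ref = t
        return True
    if isinstance(t, Var):
        return unify(t, s)
    if s[0] == "c" and t[0] == "c":
        return s[1:] == t[1:]
    if s[0] == "f" and t[0] == "f" and s[1] == t[1] and len(s[2]) == len(t[2]):
        return all(unify(a, b) for a, b in zip(s[2], t[2]))
    return False
```

For example, with a goal of the shape "exists X, forall c, ...", X carries a lower tag than c and so cannot be bound to c; with "forall c, exists Y, ..." the binding is permitted.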

*Revised version appears in Volume 5 of Handbook of Logic in AI and Logic Programming, Oxford University Press, pp. 163-234.*

Proof procedures are an essential part of logic applied to artificial intelligence tasks, and form the basis for logic programming languages. As such, many of the chapters throughout this handbook utilize, or study, proof procedures. The study of proof procedures that are useful in artificial intelligence would require a large book, so we focus on proof procedures that relate to logic programming. We begin with the resolution procedures that influenced the definition of SLD-resolution, the procedure upon which Prolog is built. Starting with the general resolution procedure, we move through linear resolution to a very restricted linear resolution, SL-resolution, which actually is not a resolution restriction, but a variant using an augmented logical form. (SL-resolution is actually a derivative of the Model Elimination procedure, which was developed independently of resolution.) We then consider logic programming itself, reviewing SLD-resolution and then describing a general criterion for identifying inference systems that can serve as bases for logic programming. This criterion requires, in essence, that logical symbols exhibit a duality between a declarative and a search-related meaning, and we present the notion of a uniform proof as a means for formalizing such a duality. We describe some first-order systems that extend the Horn clause logic while satisfying this criterion. The usefulness of these extended systems from the perspective of programming and the construction of proof procedures for them is then examined. The third section discusses two proposed logic programming languages that extend Prolog in a manner that overcomes some of its limitations. Specifically, these proposals involve the addition of hypothetical implication and positive disjunctions. These extensions are viewed from the perspective of constructing uniform proofs.
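The SLD-resolution procedure reviewed in this chapter can be sketched in a few lines for the propositional Horn clause fragment, with naive depth-first search standing in for a real implementation; the program representation below is our own:

```python
# Illustrative sketch of SLD-resolution over propositional Horn clauses:
# a goal is a list of atoms, and each resolution step replaces the selected
# (leftmost) atom by the body of a program clause whose head matches it,
# backtracking over the choice of clause.

def sld(program, goal):
    """program: list of (head, [body atoms]); goal: list of atoms."""
    if not goal:
        return True                        # empty goal: success
    selected, rest = goal[0], goal[1:]
    for head, body in program:             # backtracking via this loop
        if head == selected and sld(program, body + rest):
            return True
    return False

prog = [
    ("parent_tom_bob", []),                # facts have empty bodies
    ("parent_bob_ann", []),
    ("ancestor_tom_ann", ["parent_tom_bob", "parent_bob_ann"]),
]
assert sld(prog, ["ancestor_tom_ann"])
assert not sld(prog, ["parent_tom_ann"])
```

With first-order terms, the equality test on heads becomes unification and the recursion threads a substitution; the leftmost selection and clause-order backtracking shown here are exactly Prolog's strategy.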

*Journal of Functional and Logic Programming 1999(2), MIT Press, March 1999. Previously available as Duke Technical Report CS-1994-01.*

We discuss issues relevant to the practical use of a previously proposed notation for lambda terms in contexts where the intensions of such terms have to be manipulated. This notation uses the `nameless' scheme of de Bruijn, includes expressions for encoding terms together with substitutions to be performed on them and contains a mechanism for combining such substitutions so that they can be effected in a common structure traversal. The combination mechanism is a general one and consequently difficult to implement. We propose a simplification to it that retains its functionality in situations that occur commonly in beta reduction. We then describe a system for annotating terms to determine if they can be affected by substitutions generated by external beta contractions. These annotations can lead to a conservation of space and time in implementations of reduction by permitting substitutions to be performed trivially in certain situations. The use of the resulting notation in the reduction and comparison of terms is examined. Notions of head normal forms and head reduction sequences are defined in its context and shown to be useful in equality computations. Our head reduction sequences generalize the usual ones for lambda terms so that they subsume the sequences of terms produced by a variety of graph- and environment-based reduction procedures for the lambda calculus. They can therefore be used in correctness arguments for such procedures. This fact and the efficacy of our notation are illustrated in the context of a particular reduction procedure that we present. The relevance of the present discussions to the unification of lambda terms is also outlined.


*Technical Report TR-97-01, Dept. of Computer Science, University of Chicago*

A notation for lambda terms is described that is useful in contexts where the intensions of these terms need to be manipulated. The scheme of de Bruijn is used for eliminating variable names, thus obviating alpha conversion in comparing terms. A category of terms is provided that can encode other terms together with substitutions to be performed on them. The notion of an environment is used to realize this `delaying' of substitutions. However, the precise environment mechanism employed here is more complex than the usual one because the ability to examine subterms embedded under abstractions has to be supported. The representation presented permits a beta contraction to be realized via an atomic step that generates a substitution and associated steps that percolate this substitution over the structure of a term. Operations on terms are provided that allow for the combination and hence the simultaneous performance of substitutions. Our notation eventually provides a basis for efficient realizations of beta reduction and also serves as a means for interleaving steps inherent in this operation with steps in other operations such as higher-order unification. Manipulations on our terms are described through a system of rewrite rules whose correspondence to the usual notion of beta reduction is exhibited and exploited in establishing confluence and other similar properties. Our notation is similar in spirit to recent proposals deriving from the Categorical Combinators of Curien, and the relationship to these is discussed. Refinements to our notation and their use in describing manipulations on lambda terms are considered in a companion paper.
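The shape of such a notation, in which a beta contraction is an atomic step that generates a substitution and separate rewrite steps percolate that substitution over the term, can be conveyed with a sketch. The rules below are in the spirit of simple explicit-substitution calculi (close to the lambda-upsilon calculus of Benaissa, Briaud, Lescanne and Rouyer-Degli) and are much simpler than the paper's notation, which additionally supports combining substitutions; the encoding is our own.

```python
# Terms: ("var", n) de Bruijn index, ("app", f, x), ("lam", b),
#        ("clo", t, s) = term t under a pending substitution s.
# Substitutions: ("slash", t) replace index 0 by t, ("lift", s) push s
#        under a binder, ("shift",) increment all free indices.

def step(t):
    """One leftmost-outermost rewrite step; None if t is in normal form."""
    tag = t[0]
    if tag == "app" and t[1][0] == "lam":
        return ("clo", t[1][1], ("slash", t[2]))      # atomic beta step
    if tag == "clo":
        body, s = t[1], t[2]
        if body[0] == "app":                          # percolate over application
            return ("app", ("clo", body[1], s), ("clo", body[2], s))
        if body[0] == "lam":                          # percolate under a binder
            return ("lam", ("clo", body[1], ("lift", s)))
        if body[0] == "var":
            n = body[1]
            if s[0] == "slash":
                return s[1] if n == 0 else ("var", n - 1)
            if s[0] == "lift":
                if n == 0:
                    return ("var", 0)
                return ("clo", ("clo", ("var", n - 1), s[1]), ("shift",))
            return ("var", n + 1)                     # s is ("shift",)
        inner = step(body)                            # body is itself a closure
        return ("clo", inner, s) if inner is not None else None
    if tag == "app":
        f = step(t[1])
        if f is not None:
            return ("app", f, t[2])
        x = step(t[2])
        return ("app", t[1], x) if x is not None else None
    if tag == "lam":
        b = step(t[1])
        return ("lam", b) if b is not None else None
    return None                                       # bare variable

def normalize(t, limit=10000):
    for _ in range(limit):
        t2 = step(t)
        if t2 is None:
            return t
        t = t2
    raise RuntimeError("no normal form within step limit")

K = ("lam", ("lam", ("var", 1)))                      # \x.\y.x in de Bruijn form
assert normalize(("app", ("app", K, ("var", 7)), ("var", 3))) == ("var", 7)
```

Because substitutions are ordinary term constructors, percolation steps can be interleaved with other traversals; the paper's richer notation adds a mechanism for merging several pending substitutions into one walk, which is what makes its meta-theory harder.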


*Appears in Computer Languages 20(1): 25-42, 1994*

Abstract: Introducing types into a logic programming language leads to
the need for

*Appears in Journal of Automated Reasoning 11: 115-145, 1993*

A proof procedure is presented for a class of formulas in
intuitionistic logic. These formulas are the so-called

*Appears in Proceedings of the Third International Workshop on Extensions of Logic Programming, Lecture Notes in Artificial Intelligence, Volume 660, pages 359--393, Springer Verlag, 1993*

Issues concerning the implementation of a notion of modules in the higher-order logic programming language lambda Prolog are examined. A program in this language is a composite of type declarations and procedure definitions. The module construct that is considered permits large collections of such declarations and definitions to be decomposed into smaller units. Mechanisms are provided for controlling the interaction of these units and for restricting the visibility of names used within any unit. The typical interaction between modules has both a static and a dynamic nature. The parsing of expressions in a module might require declarations in a module that it interacts with, and this information must be available during compilation. Procedure definitions within a module might utilize procedures defined in other modules, and support must be provided for making the appropriate invocations during execution. Our concern here is largely with the dynamic aspects of module interaction. We describe a method for compiling each module into an independent fragment of code. Static interactions prevent the compilation of interacting modules from being completely decoupled. However, using the idea of an interface definition presented here, a fair degree of independence can be achieved even at this level. The dynamic semantics of the module construct involve enhancing existing program contexts with the procedures defined in particular modules. A method is presented for achieving this effect through a linking process applied to the compiled code generated for each module. A direct implementation of the dynamic semantics leads to considerable redundancy in search. We present a way in which this redundancy can be controlled, prove the correctness of our approach and describe run-time structures for incorporating this idea into the overall implementation.
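The effect of combining modules while hiding non-exported names can be caricatured in a few lines. This sketch is our own and ignores compilation and linking entirely: a module is a signature (its exported names) paired with clauses, and accumulating a module renames its non-exported names apart, mimicking the existential quantification over hidden names, before conjoining the clause lists.

```python
# Illustrative sketch of name hiding under module accumulation: names used
# in a module but absent from its signature are hidden, so they are renamed
# with fresh suffixes before the clause lists are merged. This is the naive
# "inlining" semantics that a separate-compilation scheme must reproduce.

import itertools

_fresh = itertools.count()

def accumulate(outer, inner):
    """Each module is (exports, clauses); a clause is (head, [body names])."""
    exports, clauses = inner
    used = {name for head, body in clauses for name in [head] + body}
    rename = {n: f"{n}#{next(_fresh)}" for n in used - set(exports)}
    r = lambda n: rename.get(n, n)
    renamed = [(r(h), [r(b) for b in body]) for h, body in clauses]
    ex_out, cl_out = outer
    return (ex_out, cl_out + renamed)

lists = (["rev"], [("rev", ["rev_aux"]), ("rev_aux", [])])  # rev_aux hidden
main  = (["main"], [("main", ["rev"])])
combined = accumulate(main, lists)
heads = [h for h, _ in combined[1]]
assert "rev" in heads and "rev_aux" not in heads  # hidden name renamed apart
```

The paper is concerned with obtaining this behaviour without textual inlining: each module is compiled once, and a run-time linking process splices its compiled code into the current program context while avoiding the redundant search a direct implementation would cause.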

Last updated on May 19, 2009 by gopalan atsign cs.umn.edu