Publications of Dr Stephen Fitzpatrick

2006

In this work, we address the problem of coordinating the distributed execution of plans and schedules by multiple agents subject to a number of different execution uncertainties. The coordination of multi-agent teams in uncertain, dynamic domains is a challenging problem requiring the fusion of techniques from many disciplines. We describe an approach based on the dynamic and selective use of a family of different problem-solving strategies that combine stochastic state estimation with repair-based and heuristic-guided planning and scheduling techniques. This approach is implemented as a cognitive problem-solving architecture that combines (i) a deliberative scheduler, which performs partially-centralized solution repair, (ii) an opportunistic scheduler, which locally optimizes resource utilization for plan enhancement, and (iii) a downgrader, which proactively guides the execution into regions of higher likelihood of success. This paper characterizes the complexity of the problem through examples and experiments, and discusses the advantages and effectiveness of the implemented solution.

Our Criticality-Sensitive Coordination (CSC) agents are designed to enhance the performance of a human team working together in uncertain and dynamic settings by monitoring and adapting their plans as dictated by the evolution of the environment. Such situations model military scenarios, such as coordinated joint operations, and enterprise settings, such as multiple-project management. Among the many challenges in these situations are the large space of possible states due to uncertainty, the distributed, partial knowledge of the current state and plan among the agents, and the need to react in a timely manner to events that may not be in the original model. In fact, reaction alone is often insufficient: in environments where success depends on completing sequences of coupled actions, one must anticipate future difficulties and enable contingencies to alleviate potential hazards.

2005

We investigate the problem of keeping the plans of multiple agents synchronized during execution. We assume that agents only have a partial view of the overall plan. They know the tasks they must perform, and know the tasks of other agents with whom they have direct dependencies. Initially, agents are given a schedule of tasks to perform together with a collection of contingency plans that they can engage during execution in case execution deviates from the plan. During execution, agents monitor the status of their tasks, adjusting their local schedules as necessary and informing dependent agents about the changes. When agents determine that their schedule is broken or that a contingency schedule may be better, they engage in coordinating plan changes with other agents. We present a "dynamic partial centralization" approach to coordination. When an agent detects a problem (task delay, inability to perform a task), it dynamically forms a cluster of the critically affected agents (a subset of all potentially affected agents). The cluster elects a leader, who retrieves all task and contingency plan information from the cluster members and computes a solution depending on the situation.
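The clustering-and-election step described above can be sketched as follows. This is an illustrative sketch only: the agent ids, the dependency map, the `critically_affected` predicate, and the lowest-id election rule are assumptions, not details from the paper.

```python
# Illustrative sketch of dynamic partial centralization (assumed details):
# the agent that detects a problem forms a cluster of critically affected
# agents reachable through its plan dependencies, and the cluster elects
# a leader (lowest id here) to gather schedules and compute a repair.

def form_cluster(detector, dependencies, critically_affected):
    """Collect the detector plus the critically affected agents reachable
    from it through the dependency graph."""
    cluster = {detector}
    frontier = [detector]
    while frontier:
        agent = frontier.pop()
        for neighbour in dependencies.get(agent, ()):
            if critically_affected(neighbour) and neighbour not in cluster:
                cluster.add(neighbour)
                frontier.append(neighbour)
    return cluster

def elect_leader(cluster):
    """A trivial deterministic election: the lowest agent id wins."""
    return min(cluster)

# Hypothetical scenario: agent 3 detects a task delay; agents 1, 5 and 7
# are critically affected, agent 2 is only potentially affected.
deps = {3: [1, 5], 1: [2], 5: [7], 2: [], 7: []}
cluster = form_cluster(3, deps, critically_affected=lambda a: a in {1, 5, 7})
leader = elect_leader(cluster)
```

The leader would then retrieve task and contingency information from the cluster members; that repair computation is situation-dependent and is not sketched here.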

2004

This report details an approach to the real-time coordination of large networks of short-range sensors that communicate over short-range, low-bandwidth, high-latency radio channels.

2003

In this chapter, a peer-to-peer algorithm is described for approximately solving distributed, real-time, constraint optimization problems. The ANTS challenge problem is formulated as a distributed constraint optimization problem; an approximation version of the classical problem of graph k-coloring is formulated as a distributed constraint optimization problem to enable simple experimental assessment of the algorithm's performance.

2002

This paper reports on a simple, stochastic, scalable, peer-to-peer algorithm for approximately solving distributed constraint problems in soft real time. The performance of the algorithm is assessed using soft k-coloring of random graphs having known chromatic number and edge probability ranging from moderate to high.
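The family of colouring algorithms reported in these papers can be sketched as a synchronous, stochastic local repair loop; the activation probability, tie-breaking rule, and round count below are illustrative assumptions, not the papers' exact parameters.

```python
import random

def soft_k_color(adj, k, p=0.9, rounds=100, seed=0):
    """Sketch of stochastic soft k-colouring: in each synchronous round,
    every node independently activates with probability p and moves to a
    colour minimising conflicts with its neighbours' current colours.
    Anytime behaviour: conflicts tend to shrink but need not reach zero."""
    rng = random.Random(seed)
    colour = {v: rng.randrange(k) for v in adj}
    for _ in range(rounds):
        snapshot = dict(colour)  # all nodes see the same previous round
        for v in adj:
            if rng.random() < p:
                counts = [0] * k
                for u in adj[v]:
                    counts[snapshot[u]] += 1
                colour[v] = min(range(k), key=lambda c: counts[c])
    return colour

def conflicts(adj, colour):
    """Number of edges whose endpoints share a colour (adj is symmetric)."""
    return sum(colour[u] == colour[v] for v in adj for u in adj[v] if u < v)

# Hypothetical example: soft 2-colouring of a 4-cycle.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
result = soft_k_color(ring, k=2)
```

The activation probability p below 1 is what breaks the symmetry that would otherwise make neighbouring nodes oscillate between the same colours under fully synchronous updating.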

Previous papers have reported on a simple, distributed, synchronous algorithm for approximately k-colouring large graphs in soft real time. In this paper, the effects of asynchronous execution and communication latency are investigated. The main conclusions are that strict synchrony is not required and that considerable communication latency can be tolerated. These results are important for practical applications of the algorithm involving large networks of low-performance hardware equipped with wireless communication.

This paper reports on an algorithm for anytime, stochastic, distributed constraint optimization that uses iterated, peer-to-peer interaction to try to achieve rapid, approximate solutions to large constraint problems in which the constraint variables are naturally distributed. Two examples are given - graph coloring and coordination of distributed sensors - together with experimental results on performance.

2001

This report details an approach to the real-time coordination of large networks of short-range sensors that communicate over short-range, low-bandwidth, high-latency radio channels.

Each sensor is limited in the distance over which it can scan and in the type of data that it can acquire, so nearby sensors must collaborate to acquire complementary data for accomplishing such tasks as multiple-target detection and tracking.

Operational limitations on sensors and real-time requirements on tasks restrict the number of tasks in which a given sensor can collaborate. The quality with which a given task is accomplished and the cost incurred are affected by which particular sensors collaborate in achieving the task. Consequently, a coordination mechanism is required to optimize collaboration.

The coordination mechanism reported is fully distributed - each sensor decides for itself which measurements it should take to achieve optimal collaboration with nearby sensors, based on what it knows about their intended measurements, and informs those sensors of its own intentions so that they can likewise optimize their own measurements.

In order to determine which measurements are optimal, a sensor must have knowledge about the targets that are the intended subjects of the measurements. To this end, each sensor exchanges measurement data with nearby sensors, and operates a data fusion process to maintain local target estimates.

The reported coordination mechanism is claimed to be scalable, low-cost, adaptive and robust.
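The local decision rule can be sketched as follows; the sector/target representation and the "cover the least-covered targets" objective are assumptions made for illustration, not the report's actual utility model.

```python
# Illustrative local decision rule (assumed, not from the report): each
# sensor picks the scan sector whose locally-estimated targets are least
# covered by the measurements its neighbours have announced, and then
# announces that choice so the neighbours can re-optimize in turn.

def choose_sector(sectors, target_estimate, neighbour_intentions):
    """Pick the sector with the most locally-estimated targets that no
    neighbour currently intends to cover."""
    covered = set()
    for intent in neighbour_intentions.values():
        covered.update(intent)

    def uncovered(sector):
        return len(target_estimate.get(sector, set()) - covered)

    return max(sectors, key=uncovered)

# Hypothetical scenario: the local data-fusion process estimates one
# target to the north and two to the east; neighbour s2 has announced
# that it will cover target "t1".
estimate = {"north": {"t1"}, "east": {"t2", "t3"}}
choice = choose_sector(["north", "east"], estimate, {"s2": {"t1"}})
```

Iterating this announce-and-reoptimize cycle across neighbouring sensors is what the report's mechanism does at scale; each step uses only local knowledge, which is why the mechanism remains fully distributed.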

This paper reports on a simple, decentralized, anytime, stochastic, soft graph-colouring algorithm. The algorithm is designed to quickly reduce the number of colour conflicts in large, sparse graphs in a scalable, robust, low-cost manner. The algorithm is experimentally evaluated in a framework motivated by its application to resource coordination in large, distributed networks.

Soft, real-time distributed graph coloring is presented as a simplification of the problem of distributed resource management in an environment where communication is expensive and subject to relatively high latency. The resources are subject to dynamic task sets that may produce critical or super-critical loading - the objective is to quickly compute reasonable schedules for the resources that accomplish a satisfactory fraction of the task set. A class of simple, decentralized, anytime, approximate colorers is presented, together with a framework for assessing performance, and representative results from performance experiments using a simulator.

1998

The issue of correctness in the context of a certain style of program transformation is investigated. This style is characterised by the fully automated application of large numbers of simple transformation rules to a representation of a functional program (serving as a specification) to produce an equivalent efficient imperative program. The simplicity of the transformation rules ensures that the proofs of their correctness are straightforward.

A selection of transformations appropriate for use in a particular context are shown to preserve program meaning. The transformations convert array operations expressed as the application of a small number of general-purpose functions into applications of a large number of functions which are amenable to efficient implementation on an array processor.
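A generic example of such a meaning-preserving array transformation is map fusion, which rewrites two traversals into one; this is an illustrative rule in the same spirit, not one of the DAP-specific rules proved in the chapter.

```python
# Generic illustration (not the chapter's DAP-specific rule set):
# map fusion, a meaning-preserving rewrite that merges two traversals:
#     map f (map g a)  ==>  map (f . g) a

def map_array(f, a):
    return [f(x) for x in a]

def fused(f, g, a):
    # the rewritten form: a single traversal applying the composition
    return [f(g(x)) for x in a]

a = [1, 2, 3]
f = lambda x: x + 1
g = lambda x: x * 2

# Both forms compute the same result, so the rewrite preserves meaning;
# the fused form avoids building the intermediate array.
lhs = map_array(f, map_array(g, a))
rhs = fused(f, g, a)
```

Proving such a rule correct amounts to showing the two sides denote the same function for all `f`, `g` and `a`, which is straightforward precisely because each rule is small and local.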

1997

We present a set of program transformations which are applied automatically to convert abstract functional specifications of numerical algorithms into efficient implementations tailored to the AMT DAP array processor. The transformations are based upon a formal algebra of a functional array form, which provides a functional model of the array operations supported by the DAP programming language. The transformations are shown to be complete.

We present specifications and derivations of two example algorithms: an algorithm for computing eigensystems and an algorithm for solving systems of linear equations. For the former, we compare the execution performance of the implementation derived by transformation with the performance of an independent, manually constructed implementation; the efficiency of the derived implementation matches that of the manually constructed implementation.

1996

The automated application of program transformations is used to derive, from abstract functional specifications of numerical mathematical algorithms, highly efficient imperative implementations tailored for execution on sequential, vector and array processors. Emphasis is placed on transformations which tailor implementations to use special programming techniques optimized for sparse matrices. We demonstrate that derived implementations attain execution performance superior to that of manual implementations for two significant algorithms.

Function unfolding is a well-known program transformation technique: it is often used to reduce execution overheads incurred by an implementation's function-calling mechanism and to localise information available in global definitions (localization simplifies further optimizations such as constant propagation and the static evaluation of expressions).

Unfolding function definitions that exhibit either self- or mutual-recursion presents a problem for automated transformation systems: sophisticated control mechanisms may need to be incorporated into the unfolding process to ensure termination. Consequently, many automated transformation systems do not attempt to unfold recursive function definitions. Many of the optimizations that follow from unfolding non-recursive functions can still be performed using techniques such as function cloning and specialization but, because of the separation of function definitions and function uses, these techniques are more complex for an automated system to perform than are the corresponding techniques used for non-recursive functions.

In this paper, the use of the paradoxical combinator, Y, to unfold all function definitions, including recursive definitions, is discussed. Full unfolding with the Y combinator is simple for an automated system to perform (requiring only an exhaustive application of a straightforward substitution process). In particular, termination is assured without necessitating a separate control mechanism. After unfolding has been performed, many optimizations can be applied in the same manner as they are applied to unfolded, non-recursive definitions.
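The fixed-point idea can be shown in a few lines. In a strict language the classical Y combinator diverges, so the sketch below uses its eta-expanded variant (often called Z); the factorial definition is an illustrative stand-in for the recursive definitions unfolded by the transformation system.

```python
# Unfolding recursion with a fixed-point combinator. In a strict language
# the classical Y diverges, so we use the eta-expanded variant Z:
#     Z = \f. (\x. f (\v. x x v)) (\x. f (\v. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A recursive definition rewritten as a non-recursive functional: the
# recursive call is abstracted out as the parameter `rec`.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

# Taking the fixed point recovers the recursive behaviour from a
# definition that contains no self-reference, so exhaustive unfolding of
# the definition body terminates without any separate control mechanism.
fact = Z(fact_step)
```

Because `fact_step` is non-recursive, it can be substituted into its use sites exhaustively, just like any other non-recursive definition; only the application of the combinator remains at run time.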

1995

We present a set of program transformations which are applied automatically to convert abstract functional specifications of numerical algorithms into efficient implementations tailored to the AMT DAP array processor. The transformations are based upon a formal algebra of a functional array form, which provides a functional model of the array operations supported by the DAP programming language. The transformations are shown to be complete.

We present specifications and derivations of two example algorithms: an algorithm for computing eigensystems and an algorithm for solving systems of linear equations. For the former, we compare the execution performance of the implementation derived by transformation with the performance of an independent, manually constructed implementation; the efficiency of the derived implementation matches that of the manually constructed implementation.

A number of program transformations are proved to preserve the meaning of programs. The transformations convert array operations expressed using a small number of general-purpose functions into applications of a large number of functions suited to efficient implementation on an array processor.

The automated application of program transformations is used to derive, from abstract functional specifications of numerical mathematical algorithms, highly efficient imperative implementations tailored for execution on sequential, vector and array processors. Emphasis is placed on additional transformations which tailor implementations to use special programming techniques optimized for sparse matrices.

1994

We show how efficient implementations can be derived from high-level functional specifications of numerical algorithms using automatic program transformation. We emphasize the automatic tailoring of implementations for manipulation of sparse data sets. Execution times are reported for a conjugate gradient algorithm.

It is generally a difficult task to construct efficient implementations of numerical mathematical algorithms for execution on high-performance computer systems. The difficulty arises from the need to express an implementation in a form that reflects the nature of the computer system, rather than a form that reflects the computations performed by the algorithm.

This thesis develops the method of program transformation to derive automatically efficient implementations of algorithms from high-level, machine-independent specifications.

The primary system considered is the AMT DAP array processor, but sequential and vector systems are also considered.

The transformational method is further extended to automatically tailor implementations to use sparse programming techniques.

The problem that we address in this paper is how to enable a programmer to obtain high performance from an implementation without having to write a low-level, machine-specific implementation. The approach that we use is to write high-level, machine independent specifications of algorithms in a pure, functional programming language (a subset of the SML programming language).

Such functional specifications can be executed, and indeed often are executed in the initial stages of development to give confidence that an algorithm has been correctly described. However, we intend our functional specifications to be a clear statement of the algorithm that captures its essence without consideration for execution efficiency. Such a specification executes prohibitively slowly using available functional language compilers (if such a compiler is available for the architecture at all), much more slowly than could be expected of a hand-crafted, imperative implementation of the algorithm.

So, rather than compile a functional specification, we derive imperative implementations, normally expressed in some dialect of Fortran, from the specification. Many imperative implementations can be derived from a single specification; each implementation is tailored for the hardware to be used and the data to be manipulated.

The derivation of an implementation from a specification is performed by automatically applying program transformations.

We show how efficient implementations can be derived from high-level functional specifications of numerical algorithms using automatic program transformation. We emphasize the automatic tailoring of implementations for manipulation of sparse data sets. Execution times are reported for matrix-vector multiplication and a conjugate gradient algorithm.

1993

The problem that we address in this paper is how to enable a programmer to obtain high performance from an implementation without having to write a low-level, machine-specific implementation. The approach that we use is to write high-level, machine-independent specifications of algorithms in a pure, functional programming language.

We derive imperative implementations of the specification, normally expressed in some dialect of Fortran. Many implementations may be derived from a single specification; each implementation is tailored for the hardware to be used and the data that are to be manipulated.

The derivation of an implementation from a specification is performed by automatically applying program transformations. Each transformation is a simple rewrite rule that produces a change in a program; normally, a single application of a transformation produces a minor, local change; a derivation effects the implementation of a functional specification by applying many transformations, each applied many times. We have demonstrated that it is possible to derive highly efficient implementations from our functional specifications; indeed, these implementations are comparable in performance to implementations written by a programmer using a conventional approach.
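The rewrite-rule mechanism can be sketched in miniature. The tuple representation and the two algebraic rules below are assumptions chosen for illustration; the actual system applies many such rules to functional array programs.

```python
# Minimal sketch (assumed representation) of transformation as exhaustive
# rewriting: expressions are nested tuples, each rule is a simple local
# rewrite returning the reduced expression or None, and a derivation
# applies the rules bottom-up until no rule fires.

RULES = [
    lambda e: e[1] if len(e) == 3 and e[0] == "*" and e[2] == 1 else None,  # x * 1 => x
    lambda e: e[1] if len(e) == 3 and e[0] == "+" and e[2] == 0 else None,  # x + 0 => x
]

def rewrite(expr):
    """Apply the rules bottom-up, repeating until a fixed point is reached."""
    if not isinstance(expr, tuple):
        return expr
    expr = (expr[0],) + tuple(rewrite(sub) for sub in expr[1:])
    for rule in RULES:
        reduced = rule(expr)
        if reduced is not None:
            return rewrite(reduced)
    return expr

# ("+", ("*", "x", 1), 0), i.e. x * 1 + 0, rewrites to "x": each rule
# application is a minor local change, and the derivation is their sum.
simplified = rewrite(("+", ("*", "x", 1), 0))
```

Each individual rule makes only a minor, local change, exactly as described above; the power of the method comes from applying many rules, each many times, until the program reaches the desired form.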

The construction of reliable and efficient scientific software for execution on novel computer architectures is a demanding and error-prone task. By automatically generating efficient parallel implementations of functional specifications using program transformation, many of these problems may be overcome. The basic method is discussed, with particular reference to its extension to generate implementations for execution on distributed-memory MIMD machines.

1992

The general advantages of functional programming - naturalness of expression (in some problem areas), ease of proof, and clarity - are well known. In the field of numerical mathematical computation, however, these advantages are normally outweighed by functional programming's main disadvantage - inefficient execution. In this paper, we show how a functional style can be used for the elegant specification of numerical algorithms while still obtaining highly efficient implementations through the application of program transformations.

We have been investigating the automated derivation of efficient programs from functional specifications for a range of numerical algorithms. In this paper we describe recent work on deriving implementations of an eigenvalue algorithm tailored for execution on the AMT DAP computer.

1990