@string{brics =	"{BRICS}"}
@string{daimi =	"Department of Computer Science, University of Aarhus"}
@string{iesd  =	"Department of Computer Science, Institute
		  of Electronic Systems, Aalborg University"}
@string{rs    =	"Research Series"}
@string{ns    =	"Notes Series"}
@string{ls    =	"Lecture Series"}
@string{ds    =	"Dissertation Series"}

@TechReport{BRICS-DS-00-7,
  author =	 "Jurdzi{\'n}ski, Marcin",
  title =	 "Games for Verification: Algorithmic Issues",
  institution =	 brics,
  year =	 2000,
  type =	 ds,
  number =	 "DS-00-7",
  address =	 daimi,
  month =	 dec,
  note =	 "PhD thesis. ii+112~pp",
  abstract =	 "This dissertation deals with a number of algorithmic problems
motivated by computer aided formal verification of finite state
systems.  The goal of formal verification is to enhance the design and
development of complex systems by providing methods and tools for
specifying and verifying correctness of designs.  The success of
formal methods in practice depends heavily on the degree of automation
of the development and verification process.  This motivates the development
of efficient algorithms for problems underlying many verification
tasks",
  abstract1 =	 "\bibpar
Two paradigmatic algorithmic problems motivated by formal verification
that are the focus of this thesis are model checking and
bisimilarity checking.  In the thesis, game-theoretic formulations of
the problems are used to abstract away from syntactic and semantic
peculiarities of the formal models and specification formalisms studied.
This facilitates a detailed algorithmic analysis, leading to two novel
model checking algorithms with better theoretical or practical
performance, and to an undecidability result for a notion of
bisimilarity.\bibpar

The original technical contributions of this thesis are collected in
three research articles whose revised and extended versions are
included in the dissertation.\bibpar

In the first two papers the computational complexity of deciding the
winner in parity games is studied.  The problem of solving parity
games is polynomial time equivalent to the modal mu-calculus model
checking.  The modal mu-calculus plays a central role in the study of
logics for specification and verification of programs.  The model
checking problem is extensively studied in the literature on computer
aided verification.  The question whether there is a polynomial time
algorithm for the modal mu-calculus model checking is one of the most
challenging and fascinating open questions in the area.\bibpar

In the first paper a new algorithm is developed for solving parity
games, and hence for the modal mu-calculus model checking.  The design
and analysis of the algorithm are based on a semantic notion of a
progress measure.  The worst-case running time of the resulting
algorithm matches the best worst-case running time bounds known so far
for the problem, achieved by the algorithms due to Browne et al., and
Seidl.  Our algorithm has better space complexity: it works in small
polynomial space; the other two algorithms have exponential worst-case
space complexity.\bibpar

In the second paper a novel approach to model checking is pursued,
based on a link between parity games and discounted payoff and
stochastic games, established and advocated by Puri.  A discrete
strategy improvement algorithm is given for solving parity games,
thereby providing a new procedure for the modal mu-calculus model
checking.  Known strategy improvement algorithms, as proposed for
stochastic games by Hoffman and Karp, and for discounted payoff games
and parity games by Puri, work with real numbers and require solving
linear programming instances involving high precision arithmetic.  The
present algorithm for parity games avoids these difficulties by
efficient manipulation of carefully designed discrete valuations.  A
fast implementation is given for a strategy improvement step.  Another
advantage of the approach is that it provides a better conceptual
understanding of the underlying discrete structure and gives hope for
easier analysis of strategy improvement algorithms for parity games.
However, so far it is not known whether the algorithm works in
polynomial time.  The long-standing problem whether parity games can
be solved in polynomial time remains open.\bibpar

In the study of concurrent systems it is common to model concurrency
by non-determinism.  There are, however, some models of computation in
which concurrency is represented explicitly; elementary net systems
and asynchronous transition systems are well-known examples.  History
preserving and hereditary history preserving bisimilarities are
behavioural equivalence notions taking into account causal
relationships between events of concurrent systems.  Checking history
preserving bisimilarity is known to be decidable for finite labelled
elementary net systems and asynchronous transition systems.  Its
hereditary version appears to be only a slight strengthening and it
was conjectured to be decidable too.  In the third paper it is proved
that checking hereditary history preserving bisimilarity is
undecidable for finite labelled asynchronous transition systems and
elementary net systems.  This solves a problem open for several years.
The proof is done in two steps.  First an intermediate problem of
deciding the winner in domino bisimulation games for origin
constrained tiling systems is introduced and its undecidability is
shown by a reduction from the halting problem for 2-counter machines.
Then undecidability of hereditary history preserving bisimilarity is
shown by a reduction from the problem of domino bisimulation games",
  linkhtmlabs =	 "",
  linkps =	 "",
  linkpdf =	 ""
}

@TechReport{BRICS-DS-00-6,
  author =	 "Henriksen, Jesper G.",
  title =	 "Logics and Automata for Verification: Expressiveness
                  and Decidability Issues",
  institution =	 brics,
  year =	 2000,
  type =	 ds,
  number =	 "DS-00-6",
  address =	 daimi,
  month =	 may,
  note =	 "PhD thesis. xiv+229~pp",
  abstract =	 "This dissertation investigates and extends the mathematical
foundations of logics and automata for the interleaving and
synchronous noninterleaving view of system computations with an
emphasis on decision procedures and relative expressive powers, and
introduces extensions of these foundations to the emerging domain of
noninterleaving asynchronous computations.  System computations are
described as occurrences of system actions, and tractable collections
of such computations can be naturally represented by finite automata
upon which one can do formal analysis. Specifications of system
properties are usually described in formal logics, and the question
whether the system at hand satisfies its specification is then solved
by means of automata-theoretic constructions",
  abstract1 ="\bibpar
Our focus here is on the linear time behaviour of systems, where
executions are modelled as sequence-like objects, reflecting neither
nondeterminism nor branching choices.  We consider a variety of linear
time paradigms, such as the classical interleaving view and the
synchronous noninterleaving view, and conclude by considering an
emerging paradigm of asynchronous noninterleaving computation. Our
contributions are mainly theoretical though there is one piece of
practical implementation work involving a verification tool. The
theoretical work is concerned with a range of logics and automata and
the results involve the various associated decision procedures
motivated by verification problems, as well as the relative expressive
powers of many of the logics that we consider.\bibpar

Our research contributions, as presented in this dissertation, are as
follows. We describe the practical implementation of the verification
tool \textsf{Mona}. This tool is basically driven by an engine which
translates formulas of monadic second-order logic for finite strings
to deterministic finite automata. This translation is known to have a
daunting complexity-theoretic lower bound, but surprisingly enough, it
turns out to be possible to implement a translation algorithm which
often works efficiently in practice; one of the major reasons being
that the internal representation of the constituent automata can be
maintained symbolically in terms of binary decision diagrams.  In
effect, our implementation can be used to verify the so-called safety
properties because collections of finite strings suffice to capture
such properties.",
  abstract2 ="\bibpar
For reactive systems, one must resort to infinite computations to
capture the so-called liveness properties.  In this setting, the
predominant specification mechanism is Pnueli's LTL which turns out to
be computationally tractable and, moreover, equal in expressive power
to the first-order fragment of monadic second-order logic. We define
an extension of LTL based on the regular programs of PDL to obtain a
temporal logic, DLTL, which remains computationally feasible and is
yet expressively equivalent to the full monadic second-order
logic.\bibpar

An important class of distributed systems consists of networks of
sequential agents that synchronize by performing common actions
together.  We exhibit a distributed version of DLTL and show that it
captures exactly all linear time properties of such systems, while the
verification problem, once again, remains tractable.\bibpar

These systems constitute a subclass of a more general class of systems
with a static notion of independence. For such systems the set of
computations constitutes interleavings of occurrences of causally
independent actions. These can be grouped in a natural manner into
equivalence classes corresponding to the same partially-ordered
behaviour. The equivalence classes of computations of such systems can
be canonically represented by restricted labelled partial orders known
as (Mazurkiewicz) traces.  It has been noted that many properties
expressed as LTL-formulas have the ``all-or-none'' flavour, i.e.
either all computations of an equivalence class satisfy the formula or
none do.  For such properties (e.g. ``reaching a deadlocked state'')
it is possible to take advantage of the noninterleaving nature of
computations and apply the so-called partial-order methods for
verification to substantially reduce the computational resources
needed for the verification task.  This leads to the study of linear
time temporal logics interpreted directly over traces, as
specifications in such logics are guaranteed to have the
``all-or-none'' property. We provide an elaborate survey of the
various distributed linear time temporal logics interpreted over
traces.\bibpar

One such logic is TLC, through which one can directly formulate
causality properties of concurrent systems. We strengthen TLC to
obtain a natural extended logic TLC$^*$ and show that the extension
adds nontrivially to the expressive power. In fact, very little is
known about the relative expressive power of the various logics for
traces. The game-theoretic proof technique that we introduce may lead
to new separation results concerning such logics.\bibpar

In application domains such as telecommunication software, the
synchronous communication mechanism that traces are based on is not
appropriate. Rather, one would like message-passing to be the basic
underlying communication mechanism.  Message Sequence Charts (MSCs)
are ideally suited for the description of such scenarios, where system
executions constitute partially ordered exchanges of messages. This
raises the question of what constitutes reasonable collections of MSCs
upon which one can hope to do formal analysis. We propose a notion of
regularity for collections of MSCs and tie it up with a class of
finite-state devices characterizing such regular
collections. Furthermore, we identify a monadic second-order logic
which also defines exactly these regular collections.\bibpar

The standard method for the description of multiple scenarios in this
setting has been to employ MSC-labelled finite graphs, which might or
might not describe collections of MSCs regular in our sense.
One common feature is that these collections are finitely generated,
and we conclude by exhibiting a subclass of such graphs which
describes precisely the collections of MSCs that are both finitely
generated and regular",
  linkhtmlabs =	 "",
  linkps =	 "",
  linkpdf =	 ""
}

@TechReport{BRICS-DS-00-5,
  author =	 "Lyngs{\o}, Rune B.",
  title =	 "Computational Biology",
  institution =	 brics,
  year =	 2000,
  type =	 ds,
  number =	 "DS-00-5",
  address =	 daimi,
  month =	 mar,
  note =	 "PhD thesis. xii+173~pp",
  abstract =	 "
All living organisms are based on genetic material, genetic material
that is specific for the organism. This material -- which conveniently
can be regarded as sequences -- can thus be a valuable source of
information, both concerning the basic processes of life and the
relationships among different species. During the last twenty years
the ability to `read' this genetic material has increased
tremendously. This development has led to an explosion in available
data.\bibpar

But all this data is of little use if methods are not available for
deducing information from it. Due to the sheer amount of data alone, it
is of vital importance that these methods are automated to a very
large degree. This demand has spawned the field of \emph{computational
biology}, a field that from a computer science point of view offers
novel problems as well as variants over known problems.\bibpar

In this dissertation we focus on problems directly related to the
biological sequences. That is, problems concerned with \begin{itemize}
\item detecting patterns in one sequence, \item comparing two or more
sequences, \item inferring structural information about the
biomolecule represented by a given sequence. \end{itemize} We discuss
the modelling aspects involved when solving real world problems, and
look at several models from various areas of \emph{computational
biology}. This includes examining the complexity of and developing
efficient algorithms for some selected problems in these models.\bibpar

The dissertation includes five articles, four of which have been
previously published, on \begin{itemize} \item finding repetitions in
a sequence with restrictions on the distance between the repetitions
in the sequence; \item comparing two coding DNA sequences, that is, a
comparison of DNA sequences where the encoded protein is taken into
account; \item comparing two hidden Markov models. Hidden Markov
models are often used as representatives of families of evolutionary
or functionally related sequences; \item inferring the secondary
structure of an RNA sequence, that is, the set of base pairs in the
three dimensional structure, based on a widely accepted energy model;
\item an approximation algorithm for predicting protein structure in a
simple lattice model. \end{itemize} These articles contain the
technical details of the algorithms discussed in the first part of the
dissertation",
  linkhtmlabs =	 "",
  linkps =	 "",
  linkpdf =	 ""
}

@TechReport{BRICS-DS-00-4,
  author =	 "Pedersen, Christian N. S.",
  title =	 "Algorithms in Computational Biology",
  institution =	 brics,
  year =	 2000,
  type =	 ds,
  number =	 "DS-00-4",
  address =	 daimi,
  month =	 mar,
  note =	 "PhD thesis. xii+210~pp",
  abstract =	 "
In this thesis we are concerned with constructing algorithms that
address problems with biological relevance. This activity is part of a
broader interdisciplinary area called computational biology, or
bioinformatics, that focuses on utilizing the capacities of computers to
gain knowledge from biological data. The majority of problems in
computational biology relate to molecular or evolutionary biology, and
focus on analyzing and comparing the genetic material of organisms. A
deciding factor in shaping the area of computational biology is that
the biomolecules DNA, RNA and proteins, which are responsible for
storing and utilizing the genetic material in an organism, can be
described as strings over finite alphabets. The string representation
of biomolecules allows for a wide range of algorithmic techniques
concerned with strings to be applied for analyzing and comparing
biological data. We contribute to the field of computational biology
by constructing and analyzing algorithms that address problems with
relevance to biological sequence analysis and structure prediction",
  abstract1 =	 "\bibpar
The genetic material of organisms evolves by discrete mutations, most
prominently substitutions, insertions and deletions of nucleotides.
Since the genetic material is stored in DNA sequences and reflected in
RNA and protein sequences, it makes sense to compare two or more
biological sequences in order to look for similarities and differences
that can be used to infer the relatedness of the sequences. In the
thesis we consider the problem of comparing two sequences of coding
DNA when the relationship between DNA and proteins is taken into
account.  We do this by using a model that penalizes an event on the
DNA by the change it induces on the encoded protein. We analyze the
model in detail and construct an alignment algorithm that improves on
the existing best alignment algorithm in the model by reducing its
running time by a quadratic factor. This makes the running time of
our alignment algorithm equal to the running time of alignment
algorithms based on much simpler models.\bibpar

If a family of related biological sequences is available, it is
natural to derive a compact characterization of the sequence family.
Among other things, such a characterization can be used to search for
unknown members of the sequence family. A widely used way to describe
the characteristics of a sequence family is to construct a hidden
Markov model that generates members of the sequence family with high
probability and non-members with low probability. In this thesis we
consider the general problem of comparing hidden Markov models. We
define novel measures between hidden Markov models and show how to
compute them efficiently using dynamic programming. Since hidden
Markov models are widely used to characterize biological sequence
families, our measures and methods for comparing hidden Markov models
immediately apply to the comparison of entire biological sequence families.\bibpar

Besides comparing sequences and sequence families, we also consider
problems of finding regularities in a single sequence.  Looking for
regularities in a single biological sequence can be used to
reconstruct part of the evolutionary history of the sequence or to
identify the sequence among other sequences. In this thesis we focus on
general string problems motivated by biological applications because
biological sequences are strings. We construct an algorithm that finds
all maximal pairs of equal substrings in a string, where each pair of
equal substrings adheres to restrictions on the number of characters
between the occurrences of the two substrings in the string. This is a
generalization of finding tandem repeats and the running time of the
algorithm is comparable to the running time of existing algorithms for
finding tandem repeats.  The algorithm is based on a general technique
that combines a traversal of a suffix tree with efficient merging of
search trees. We use the same general technique to construct an
algorithm that finds all maximal quasiperiodic substrings in a string.
A quasiperiodic substring is a substring that can be described as
concatenations and superpositions of a shorter substring. Our
algorithm for finding maximal quasiperiodic substrings has a running
time that is a logarithmic factor better than the running time of the
existing best algorithm for the problem.\bibpar

Analyzing and comparing the string representations of biomolecules can
reveal a lot of useful information about the biomolecules, but knowing
the three-dimensional structures of the biomolecules often reveals
additional information that is not immediately visible from their
string representations. Unfortunately it is difficult and time
consuming to determine the three-dimensional structure of a
biomolecule experimentally, so computational methods for structure
prediction are in demand.  Constructing such methods is also difficult
and often results in the formulation of intractable computational
problems. In this thesis we construct an algorithm that improves on
the widely used mfold algorithm for RNA secondary structure prediction
by allowing a less restrictive model of structure formation without an
increase in the running time. We also analyze the protein folding
problem in the two-dimensional hydrophobic-hydrophilic lattice model.
Our analysis shows that several complicated folding algorithms do not
produce better foldings in the worst case, in terms of free energy,
than an existing much simpler folding algorithm",
  linkhtmlabs =	 "",
  OPTcomment =	 "",
  linkps =	 "",
  linkpdf =	 ""
}

@TechReport{BRICS-DS-00-3,
  author =	 "Rauhe, Theis",
  title =	 "Complexity of Data Structures (Unrevised)",
  institution =	 brics,
  year =	 2000,
  type =	 ds,
  number =	 "DS-00-3",
  address =	 daimi,
  month =	 mar,
  note =	 "PhD thesis. xii+115~pp",
  abstract =	 "
This thesis consists of a number of results providing various
complexity bounds for a number of dynamic data structures and
problems. A large part of the thesis is devoted to techniques for proving
lower bounds in Yao's cell probe model. In addition we also provide
upper bound results for a number of data structure problems",
  abstract1 =	 "\bibpar
First we study the signed prefix sum problem: given a string of length
$n$ of $0$s and signed $1$s, compute the sum of its $i$th prefix during
updates. We show a lower bound of $\Omega(\log n/ \log \log n)$ time
per operation, even if the prefix sums are bounded by $\log n/ \log
\log n$ during all updates. We show how these results allow us to
prove lower bounds for a variety of dynamic problems. We give a lower
bound for the dynamic planar point location in monotone subdivisions
of $\Omega(\log n/ \log \log n)$ per operation. We give a lower bound
for dynamic transitive closure on upward planar graphs with one source
and one sink of $\Omega(\log n/(\log \log n)^2)$ per operation. We
give a lower bound of $\Omega(\sqrt{\log n/ \log \log n})$ for the
dynamic membership problem of any Dyck language with two or more
letters.\bibpar

Next we introduce new models for dynamic computation based on the cell
probe model. We give these models access to nondeterministic queries
or the right answer $\pm 1$ as an oracle. We prove that for the
dynamic partial sum problem, these new powers do not help: the problem
retains its lower bound of $\Omega(\log n/ \log \log n)$. From these
results we obtain stronger lower bounds of order $\Omega(\log n/ \log
\log n)$ for the conventional Random Access Machine and cell probe
model of the above problems for upward planar graphs and dynamic
membership for Dyck languages. In addition we also characterise the
complexity of the dynamic problem of maintaining a symmetric function
for prefixes of a string of bits. Common to these lower bounds are
their use of the chronogram method introduced in a seminal paper of
Fredman and Saks.\bibpar

Next we introduce a new lower bound technique that differs from the
mentioned chronogram method. This method enables us to provide lower
bounds for another fundamental dynamic problem we call the marked
ancestor problem; consider a rooted tree whose nodes can be in two
states: marked or unmarked. The marked ancestor problem is to maintain
a data structure with the following operations: $\mathit{mark}(v)$
marks node $v$; $\mathit{unmark}(v)$ removes any marks from node $v$;
$\mathit{firstmarked}(v)$ returns the first marked node on the path
from $v$ to the root. We show tight upper and lower bounds for the
marked ancestor problem. The upper bounds hold on the unit-cost Random
Access Machine, and the lower bounds in cell probe model. As
corollaries to the lower bound we prove (often optimal) lower bounds
on a number of problems. These include planar range searching,
including the existential or emptiness problem, priority search trees,
static tree union-find, and several problems from dynamic
computational geometry, including segment intersection, interval
maintenance, and ray shooting in the plane. Our upper bounds improve
algorithms from various fields, including dynamic dictionary matching,
coloured ancestor problems, and maintenance of balanced
parentheses.\bibpar

We study the fundamental problem of sorting in a sequential model of
computation and in particular consider the time-space trade-off
(product of time and space) for this problem. Beame has shown a lower
bound of $\Omega(n^2)$ for this product, leaving a gap of a logarithmic
factor up to the previously best known upper bound of $O(n^2 \log n)$
due to Frederickson. We present a comparison based sorting algorithm
which closes this gap. The time-space product $O(n^2)$ upper bound
holds for the full range of space bounds between $\log n$ and $n/ \log
n$.\bibpar

The last problem we consider is dynamic pattern matching. Pattern
matching is the problem of finding all occurrences of a pattern in a
text. In a dynamic setting the problem is to support pattern matching
in a text which can be manipulated on-line, \textit{i.e.}, the usual
situation in text editing. Previous solutions support moving a block
from one place to another in the text in time linear in the block
size, while also supporting efficient pattern matching. We present a data
structure that supports insertions and deletions of characters and
movements of arbitrary large blocks within a text in $O(\log^2 n \log
\log n \log^* n)$ time per operation. Furthermore a search for a
pattern $P$ in the text is supported in time $O(\log n \log \log n +
\mathit{occ} + |P|)$, where $\mathit{occ}$ is the number of
occurrences to be reported. An ingredient in our solution to the above
main result is a data structure for the \emph{dynamic string
equality\/} problem due to Mehlhorn, Sundar and Uhrig. As a secondary
result we give almost quadratically better time bounds for this
problem, which, in addition to keeping the polylogarithmic factors low
for our main result, also improve the complexity for several other
problems",
  OPTlinkhtmlabs ="",
  OPTcomment =	 "",
  OPTlinkps =	 "",
  OPTlinkpdf =	 ""
}

@TechReport{BRICS-DS-00-2,
  author =	 "Sandholm, Anders B.",
  title =	 "Programming Languages: Design, Analysis, and Semantics",
  institution =	 brics,
  year =	 2000,
  type =	 ds,
  number =	 "DS-00-2",
  address =	 daimi,
  month =	 feb,
  note =	 "PhD thesis. xiv+233~pp",
  abstract =	 "
This thesis contains three parts. The first part presents
contributions in the fields of domain-specific language design,
runtime system design, and static program analysis, the second part
presents contributions in the field of control synthesis, and finally
the third part presents contributions in the field of denotational
semantics",
  abstract1 =	 "\bibpar
Domain-specific language design is an emerging trend in software
engineering. The idea is to express the analysis of a particular
problem domain through the design of a programming language tailored
to solving problems within that domain. There are many established
examples of domain-specific languages, such as LaTeX, flex, and yacc.
The novelty is to use such techniques for solving traditional software
engineering tasks, where one would previously construct some collection
of libraries within a general-purpose language. In Part I, we start by
presenting the design of a domain-specific language for programming
Web services. The various domain-specific aspects of Web service
programming, such as Web server runtime system design, document
handling, and concurrency control, are treated.\bibpar

Having treated many of the Web related topics in some detail, we turn
to the particular ideas and central design issues involved in the
development of our server-side runtime system. We build a runtime
system that not only shares the advantage of traditional CGI-based
services, namely that of portability, but also overcomes many of the
known disadvantages, such as intricate and tedious programming and
poor performance.\bibpar

We then turn to the use of dynamic Web documents, which in our
language relies heavily on the use of static program analysis. Static
program analysis allows one to offer static compile-time techniques
for predicting safe and computable approximations to the set of values
or behavior arising dynamically at runtime when executing a program on
a computer. Some of the traditional applications of program analysis
are avoidance of redundant and superfluous computations. Another
aspect is that of program validation, where one provides guarantees
that the analyzed code can safely be executed.  We present an example
of such an analysis as explained in the following.  Many interactive
Web services use the CGI interface for communication with clients.
They will dynamically create HTML documents that are presented to the
client who then resumes the interaction by submitting data through
incorporated form fields.  This protocol is difficult to statically
type-check if the dynamic documents are created by arbitrary script
code using ``printf'' statements. Previous proposals have suggested
using static document templates, which trade flexibility for safety.
We propose a notion of typed, higher-order templates that
simultaneously achieve flexibility and safety. Our type system is
based on a flow analysis of which we prove soundness.  We also present
an efficient runtime implementation that respects the semantics of
only well-typed programs",
  abstract2 =	 "\bibpar
Having dealt with dynamic Web documents, we turn to the topic of
control synthesis. In the design of safety critical systems it is of
great importance that certain guarantees can be made about the system.
We describe in Part II of this thesis a method for synthesizing
controllers from logical specifications. First, we consider the
development of the BDD-based tool Mona for translating logical
formulae into automata and give examples of its use in small
applications. Then we consider the use of Mona for control synthesis,
that is, we turn to the use of Mona as a controller generator.  First,
in the context of our domain-specific language for generating Web
services, where concurrency control aspects are treated using our
control synthesis technique. Second, in the context of LEGO robots,
where we show how controllers for autonomous LEGO robots are generated
using our technique and actually implemented on the physical LEGO
brick.\bibpar

Finally, in Part III we consider the problem of sequentiality and full
abstraction in denotational semantics.  The problem of finding an
abstract description of sequential functional computation has been one
of the most enduring problems of semantics. The problem dates from a
seminal paper of Plotkin, who pointed out that certain elements in
Scott models are not definable. In Plotkin's example, the ``parallel
or'' function cannot be programmed in PCF, a purely functional,
sequential, call-by-name language. Moreover, the function causes two
terms to be distinct denotationally even though the terms cannot be
distinguished by any program. The problem of modeling sequentiality is
enduring because it is robust.  For instance, changing the reduction
strategy from call-by-name to call-by-value makes no difference. When
examples like ``parallel or'' do not exist, the denotational model is
said to be fully abstract. More precisely, full abstraction requires
denotational equivalence to match an operational definition of
equivalence (interchangeability in all programs). It has been proved that
there is exactly one fully abstract model of PCF. Until recently, all
descriptions of this fully abstract model have used operational
semantics. New constructions using logical relations have yielded a
more abstract understanding of Milner's model.\bibpar

We consider in Part III how to adapt and extend the logical relations
model for PCF to a call-by-value setting. More precisely, we construct
a fully abstract model for the call-by-value, purely functional,
sequential language FPC",
  linkhtmlabs =	 "",
  OPTcomment =	 "",
  linkps =	 "",
  linkpdf =	 ""
}

@TechReport{BRICS-DS-00-1,
  author =	 "Hildebrandt, Thomas Troels",
  title =	 "Categorical Models for Concurrency: Independence,
                  Fairness and Dataflow",
  institution =	 brics,
  year =	 2000,
  type =	 ds,
  number =	 "DS-00-1",
  address =	 daimi,
  month =	 feb,
  note =	 "PhD thesis. x+141~pp",
  abstract =	 "
This thesis is concerned with formal semantics and models for {\em
concurrent} computational systems, that is, systems consisting of a
number of sequential systems computing in parallel, interacting with each
other and the environment.  A formal semantics gives meaning to
computational systems by describing their {\em behaviour} in a {\em
mathematical model}.  For concurrent systems the interesting aspect of
their computation is often how they interact with the environment {\em
during} a computation and not in which state they terminate; indeed,
they may not be intended to terminate at all. For this reason they are
often referred to as {\em reactive systems}, to distinguish them from
traditional {\em calculational} systems, as e.g.\ a program
calculating your income tax, for which the interesting behaviour is
the answer it gives when (or if) it terminates, in other words the
(possibly partial) function it computes between input and output. {\em
Church's thesis} tells us that regardless of whether we choose the
{\em lambda calculus}, {\em Turing machines}, or almost any modern
programming language such as {\em C} or {\em Java} to describe
calculational systems, we are able to describe exactly the same class
of functions.  However, there is no agreement on observable behaviour
for concurrent reactive systems, and consequently there is no
counterpart to Church's thesis. As a result, an
overwhelming number of different and often competing notions of
observable behaviour, primitive operations, languages and
mathematical models for describing their semantics have been proposed
in the literature on concurrency.\bibpar

The work presented in this thesis contributes to a categorical
approach to semantics for concurrency which has been developed
over the last 15 years, aiming at a more coherent theory.  The
initial stage of this approach is reported on in the chapter on models
for concurrency by Winskel and Nielsen in the Handbook of Logic in
Computer Science, and focuses on relating the already existing models
and techniques for reasoning about them.  This work was followed by a
uniform characterisation of behavioural equivalences from the notion
of {\em bisimulation from open maps} proposed by Joyal, Winskel and
Nielsen, which was applicable to any of the categorical models and
shown to capture a large number of existing variations on
bisimulation. At the same time, a class of abstract models for
concurrency was proposed, the {\em presheaf models for concurrency},
which have been developed over the last 5 years, culminating in the
recent thesis of Cattani.\bibpar

This thesis treats three main topics in concurrency theory: {\em
independence}, {\em fairness} and {\em non-deterministic dataflow}",
  abstract1 =	 "\bibpar
Our work on independence, concerned with explicitly representing that
actions result from independent parts of a system, falls in two
parts. The first contributes to the initial work on describing formal
relationships between existing models. The second contributes to the
study of concrete instances of the abstract notion of bisimulation
from open maps.  In more detail, the first part gives a complete
formal characterisation of the relationship between two models, {\em
transition systems with independence} and {\em labelled asynchronous
transition systems}, which had escaped previous
treatments.  The second part studies the borderline between two
well-known notions of bisimulation for independence models such as 1-safe
Petri nets and the two models mentioned above.  The first is known as
the {\em history-preserving bisimulation} (HPB), the second is the
bisimulation obtained from the open maps approach, originally
introduced under the name {\em hereditary history-preserving
bisimulation} (HHPB) as a strengthening of HPB obtained by adding {\em
backtracking}.  Remarkably, HHPB has recently been shown to be {\em
undecidable} for finite state systems, as opposed to HPB which is
known to be decidable.  We consider history-preserving bisimulation
with {\em bounded backtracking}, and show that it gives rise to an
infinite, {\em strict} hierarchy of {\em decidable} history-preserving
bisimulations separating HPB and HHPB.\bibpar

The work on fairness and non-deterministic dataflow takes its starting
point in the work on presheaf models for concurrency in which these
two aspects have not been treated previously.\bibpar

Fairness is concerned with ruling out some completed, possibly
infinite, computations as {\em unfair}. Our approach is to give a
presheaf model for the observation of {\em infinite} as well as finite
computations. This includes a novel use of a {\em Grothendieck
topology} to capture unique completions of infinite computations. The
presheaf model is given a concrete representation as a category of
{\em generalised synchronisation trees}, which we formally relate to
the general transition systems proposed by Hennessy and Stirling for
the study of fairness. In particular the bisimulation obtained from
open maps is shown to coincide with their notion of {\em extended
bisimulation}.  Benefiting from the general results on presheaf
models we give the first denotational semantics of Milner's SCCS with
finite delay that coincides with his operational semantics up to
extended bisimulation.\bibpar

The work on non-deterministic dataflow, i.e.~systems communicating
asynchronously via buffered channels, recasts dataflow in a modern
categorical light using {\em profunctors} as a generalisation of
relations. The well-known causal anomalies associated with relational
semantics of indeterminate dataflow are avoided, while much of the
intuition of a relational model is preserved. The model extends the
traditional presheaf models usually applied to semantics for {\em
synchronous} communicating processes described by CCS-like calculi,
and thus opens up the possibility of combining these two
paradigms. We give a concrete representation of our model as a
category of (unfolded) {\em monotone port automata}, previously
studied by Panangaden and Stark. We show that the notion of
bisimulation obtained from open maps is a congruence with respect to
the operations of dataflow. Finally, we use the categorical
formulation of feedback using {\em traced monoidal categories}. A
payoff is that we get a semantics of higher-order networks almost for
free, using the {\em geometry of interaction} construction",
  linkhtmlabs =	 "",
  linkps =	 "",
  linkpdf =	 ""
}