MFCS 2001

26th International Symposium on
Mathematical Foundations of Computer Science

August 27 - 31, 2001
Marianske Lazne, Czech Republic


Invited Speakers


Dana S. Scott: A New Category for Semantics

Domain theory for denotational semantics is over thirty years old. There are many variations on the idea, and many interesting constructs have been proposed for realizing a wide variety of types as domains. Generally, the effort has been to create categories of domains that are cartesian closed (that is, have products and function spaces interpreting typed lambda-calculus) and permit solutions to domain equations (that is, interpret recursive domain definitions and perhaps untyped lambda-calculus). What has been missing is a simple connection between domains and the usual set-theoretical structures of mathematics, as well as a comprehensive logic to reason about domains and the functions to be defined upon them. In December of 1996, the author realized that the old idea of partial equivalence relations on types could be applied to produce a large and rich category containing many specific categories of domains and allowing a suitable general logic. The category is called Equ, the category of equilogical spaces. The simplest definition is the category of T-0 spaces and total equivalence relations, with continuous maps that are equivariant (meaning, preserving the equivalence relations). An equivalent definition uses algebraic (or continuous) lattices and partial equivalence relations, together with continuous equivariant maps. This category is not only cartesian closed, but locally cartesian closed (that is, it has dependent sums and products). Moreover, it contains the category of all T-0 spaces (and therefore the category of sets and the category of domains) as a full subcategory. The logic for this category is intuitionistic and can be explained by a form of the realizability interpretation. The project now is to use this idea as a unifying platform for semantics and reasoning.
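
For orientation, the first definition above can be written out as follows (our notation, not from the abstract). An object of Equ is a pair (X, \sim) with X a T-0 space and \sim an equivalence relation on the points of X; a morphism is a continuous map that respects the relations:

    f : (X, \sim_X) \to (Y, \sim_Y), \qquad
    x \sim_X x' \;\Longrightarrow\; f(x) \sim_Y f(x'),

with two such maps identified when they send related points to related points.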

Peter Buergisser: On Implications between P-NP-Hypotheses: Decision versus Computation in Algebraic Complexity

Several models of NP-completeness in an algebraic framework of computation have been proposed in the past, each of them hinging on a fundamental hypothesis of type P \neq NP. We first survey the known implications between such hypotheses and then describe attempts to establish further connections. This leads us to the problem of relating the complexity of computational and decisional tasks, and it naturally raises the question of how the complexity of a polynomial is connected with that of its factors. After reviewing what is known in this respect, we discuss a new result involving a concept of approximative complexity.
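
The question about factors can be made precise as follows (a common formalization; the exact variant treated in the talk may differ). Writing L(f) for the arithmetic circuit complexity of a polynomial f, one asks whether every factor g of f satisfies

    L(g) \;\leq\; \bigl( L(f) + \deg g \bigr)^{O(1)},

i.e., whether factors of easy polynomials of moderate degree are themselves easy.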

Erik D. Demaine: Playing Games with Algorithms: Algorithmic Combinatorial Game Theory

Combinatorial games lead to several interesting, clean problems in algorithms and complexity theory. A combinatorial game typically involves two players, or in the interesting case of a combinatorial puzzle, one player, or for a cellular automaton such as Conway's Game of Life, zero players. No randomness or privacy is allowed: all players know all information about gameplay. The problem is thus purely strategic: how best to play the game against an ideal opponent. A beautiful mathematical theory has been developed for analyzing such games, primarily in the book Winning Ways by Berlekamp, Conway, and Guy. But many algorithmic problems remain open. Many classic games are known to be computationally intractable: puzzles are often NP-complete (as in Minesweeper), and two-player games are often PSPACE-complete (as in Go) or EXPTIME-complete (as in Chess). Surprisingly, many seemingly simple games are also hard. I will mention several such results I have been involved in: robot pushing-block puzzles such as Sokoban, a block-removal puzzle Clickomania, and Conway's game Philosopher's Football. More exciting are the positive results, which show that some games can be played optimally in polynomial time. I will describe one main result in this area, a wide class of coin-sliding puzzles. Other still-open problems I will discuss include firefighting on graphs, Conway's angel-devil problem, and a cat-and-mouse game whose computational hardness (in a particular sense) would prove that P does not equal NP.
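
As a tiny taste of the theory from Winning Ways, here is a sketch (ours, not from the talk) computing Sprague-Grundy values for an illustrative take-away game; the move set is hypothetical.

    # Sprague-Grundy values for a toy take-away game: remove 1, 2, or 3
    # stones from a pile; the player unable to move loses. A position is
    # a loss for the player to move iff its Grundy value is 0.
    from functools import lru_cache

    MOVES = (1, 2, 3)  # hypothetical move set, for illustration only

    @lru_cache(maxsize=None)
    def grundy(n):
        reachable = {grundy(n - m) for m in MOVES if m <= n}
        g = 0                      # mex: least value not reachable
        while g in reachable:
            g += 1
        return g

    print([grundy(n) for n in range(10)])  # 0 1 2 3 0 1 2 3 0 1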

Amos Fiat: Some Recent Results on Data Mining and Search

In this talk we review and survey some recent work and work in progress on data mining and web search. We discuss Latent Semantic Analysis and give conditions under which it is robust. We also consider the problem of collaborative filtering and show how spectral techniques can give a rigorous and robust justification for it. We consider the problem of web search and show how both Google and Kleinberg's algorithm are robust under a model of web generation, and how this model can be reasonably extended. We then give an algorithm that provably gives the correct result in this extended model. The results surveyed are joint work with Azar, Karlin, McSherry and Saia, and with Achlioptas, Karlin and McSherry.
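
To fix ideas, here is a minimal sketch (ours, not from the talk) of the spectral technique underlying Latent Semantic Analysis, on a toy term-document matrix: project onto the top-k singular directions and compare documents in the reduced space.

    # Latent Semantic Analysis in a nutshell: truncated SVD of a
    # term-document count matrix. Toy data; illustrative only.
    import numpy as np

    # rows = terms, columns = documents (toy counts)
    A = np.array([[2, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 2, 0, 1],
                  [0, 0, 1, 2]], dtype=float)

    k = 2                                   # number of latent dimensions
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    docs = (np.diag(s[:k]) @ Vt[:k]).T      # documents in latent space

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    print(cosine(docs[0], docs[2]))  # similarity of documents 0 and 2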

Georg Gottlob: Hypertree Decompositions

One of the best-known methods for decomposing graphs is the method of tree decompositions introduced by Robertson and Seymour. Many NP-hard problems become polynomially solvable if restricted to instances whose underlying graph structure has bounded treewidth. The notion of treewidth can be straightforwardly extended to hypergraphs by simply considering the treewidth of their primal graphs or, alternatively, of their incidence graphs. However, doing so entails a loss of information on the structure of a hypergraph, with the effect that many polynomially solvable problems cannot be recognized as such because the treewidth of the underlying hypergraphs is unbounded. In particular, the treewidth of the class of acyclic hypergraphs is unbounded. In this talk, I will describe and discuss a decomposition method for hypergraphs called hypertree decompositions (recently introduced by Gottlob, Leone, and Scarcello). In particular, I will present, in the form of a survey, the following topics and results:
1) Queries and constraint satisfaction problems of bounded hypertree width can be solved in polynomial time (in fact, the complexity is in LOGCFL).
2) Hypergraphs of bounded hypertree width can be recognized in polynomial time, and a hypertree decomposition of such a hypergraph can be computed in polynomial time.
3) Hypertree decompositions generalize all other notions of hypergraph decomposition that have been used for defining polynomially solvable classes of database queries or constraint satisfaction problems.
4) The hypertree width of a hypergraph has a nice game-theoretic characterization.
5) Queries of bounded hypertree width correspond to an interesting fragment of first-order logic.
6) Hypertree width generalizes clique width (of the incidence graph of a hypergraph).
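
For orientation, the central definition (our paraphrase of Gottlob, Leone, and Scarcello) is as follows. A hypertree decomposition of a hypergraph H = (V, E) is a triple (T, \chi, \lambda), where T is a rooted tree and \chi, \lambda label each node p of T with a vertex set \chi(p) \subseteq V and an edge set \lambda(p) \subseteq E such that:

    1. for each edge e \in E there is a node p with e \subseteq \chi(p);
    2. for each vertex v \in V, \{ p : v \in \chi(p) \} induces a subtree of T;
    3. for each node p, \chi(p) \subseteq \bigcup \lambda(p);
    4. for each node p, \bigl( \bigcup \lambda(p) \bigr) \cap \chi(T_p) \subseteq \chi(p),
       where T_p denotes the subtree of T rooted at p.

The hypertree width of H is the smallest k such that H admits a hypertree decomposition with |\lambda(p)| \leq k for all p.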

Martin Hofmann: The strength of non-size increasing computation

In this talk we describe a system [1,3] for defining recursive functions on inductive data structures such as lists or trees, which has the intrinsic property that all definable functions are hereditarily non-size increasing, so that evaluation in a finite model yields the correct result. Apart from being of foundational interest, the system has applications to space-efficient evaluation of functional programs.
We determine the expressive power of the system and some of its subsystems; in particular, we show that a function is definable with general recursion and higher-order linear functions iff it is computable in time O(2^p(n)) for some polynomial p. Replacing general recursion with structural recursion reduces the strength to polynomial time.
The latter improves upon an earlier result by Aehlig and Schwichtenberg [2], who showed that all polynomial-time functions are definable using a certain ad-hoc extension.
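
A rough flavour of non-size-increasing computation (our illustration, in imperative style; the system of [1,3] is a typed functional calculus, not Python): list reversal that builds its output entirely out of the input's cells.

    # Illustration (ours): a non-size-increasing computation. Reversal
    # reuses the input's cells instead of allocating new ones, which is
    # the kind of in-place behaviour the type system guarantees.
    class Cell:
        def __init__(self, head, tail=None):
            self.head, self.tail = head, tail

    def reverse(xs):
        prev = None
        while xs is not None:
            xs.tail, prev, xs = prev, xs, xs.tail  # reuse the cell
        return prev

    node = reverse(Cell(1, Cell(2, Cell(3))))
    while node is not None:
        print(node.head)  # prints 3, 2, 1
        node = node.tail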

[1] Martin Hofmann. Linear types and non size-increasing polynomial time computation. To appear in Theoretical Computer Science, 2001. An extended abstract appeared in Proc. Symp. on Logic in Computer Science (LICS '99), Trento, 1999.
[2] Klaus Aehlig and Helmut Schwichtenberg. A syntactical analysis of non-size-increasing polynomial time computation. Proc. Symp. on Logic in Computer Science (LICS '00), Santa Barbara, 2000.
[3] Martin Hofmann. A type system for bounded space and functional in-place update. To appear in Nordic Journal of Computing. An extended abstract appeared in "Programming Languages and Systems", G. Smolka, ed., Springer LNCS, 2000.

Peter Hoyer: Introduction to recent quantum algorithms

We discuss some of the recent progress in quantum algorithmics. We review most of the primary techniques used in proving upper and lower bounds and illustrate how to apply these techniques to a variety of problems, including the threshold function, parity, searching, and sorting. We also give a set of open questions and possible future research directions. Our aim is to give a basic overview, and we include suggestions for further reading.
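
Search is the simplest entry point into the field; below is a small classical simulation (ours, not from the talk) of Grover's search algorithm on an explicit state vector, assuming a single marked item.

    # Classical simulation (ours) of Grover search with a single marked
    # item among N = 2^n candidates. After about (pi/4) * sqrt(N)
    # iterations the probability mass concentrates on the marked index.
    import numpy as np

    n = 6                                 # number of qubits
    N = 2 ** n
    marked = 42                           # hypothetical marked item

    state = np.full(N, 1 / np.sqrt(N))    # uniform superposition
    iterations = round(np.pi / 4 * np.sqrt(N))

    for _ in range(iterations):
        state[marked] *= -1               # oracle: flip marked amplitude
        state = 2 * state.mean() - state  # diffusion: invert about mean

    print(np.argmax(state ** 2))          # 42, with probability near 1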

Dana Randall: Decomposition Methods for Markov Chain Analysis

Decomposition methods have been valuable tools for bounding convergence rates of Markov chains. The "State Decomposition Theorem" allows the state space of a Markov chain to be decomposed into pieces, and it shows that if some Markov chain is rapidly mixing when restricted to each of the pieces, and if there is good mixing between the pieces, then the original Markov chain must be rapidly mixing as well. The "Density Decomposition Theorem" provides a similar relation between the mixing rates of Markov chains with varying stationary distributions and a composite Markov chain which represents their weighted average. These theorems are useful because they allow a top-down approach to analyzing mixing rates by simplifying the chains. Moreover, they allow one to use a hybrid approach to convergence rate analysis, whereby different methods can be used to bound the convergence rates of the pieces. In this talk we will survey these methods and give several recent applications of these techniques, including sampling returning walks in Cayley trees and the Gibbs states of the mean-field Ising model.
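
To make the notion of mixing rate concrete, here is a toy computation (ours, not from the talk): for a reversible ergodic chain, the convergence rate is governed by the spectral gap of its transition matrix, and the decomposition theorems above relate the gaps of the pieces to the gap of the whole chain.

    # Toy illustration (ours): the convergence rate of a reversible,
    # ergodic Markov chain is governed by the spectral gap 1 - |lambda_2|,
    # where lambda_2 is the second-largest eigenvalue in absolute value.
    import numpy as np

    # lazy random walk on a cycle of 5 states (toy example)
    k = 5
    P = np.zeros((k, k))
    for i in range(k):
        P[i, i] = 0.5
        P[i, (i - 1) % k] = 0.25
        P[i, (i + 1) % k] = 0.25

    eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    print("spectral gap:", 1 - eig[1])   # larger gap -> faster mixing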

Uwe Schoening: New Algorithms for k-SAT using the local search principle

In recent years, several new deterministic and probabilistic algorithms have been developed for the k-SAT problem. Their running times beat the previously known algorithms based on several versions of backtracking. The fastest known algorithm for 3-SAT achieves a running time of O(1.33^n).
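
The local search principle is easy to state; below is a sketch (ours) of Schoening's randomized procedure for 3-SAT, the starting point for the bounds mentioned above.

    # Sketch (ours) of Schoening's local-search algorithm for 3-SAT.
    # One trial: pick a random assignment, then up to 3n times flip a
    # random variable of some unsatisfied clause. On a satisfiable
    # formula a trial succeeds with probability about (3/4)^n, so
    # O((4/3)^n) independent trials suffice with high probability.
    import random

    def schoening_trial(clauses, n):
        assign = [random.random() < 0.5 for _ in range(n)]
        for _ in range(3 * n):
            unsat = [c for c in clauses
                     if not any(assign[abs(l) - 1] == (l > 0) for l in c)]
            if not unsat:
                return assign
            lit = random.choice(random.choice(unsat))
            assign[abs(lit) - 1] = not assign[abs(lit) - 1]
        return None

    # literals are nonzero integers; negative means negated variable
    clauses = [(1, 2, 3), (-1, -2, 3), (1, -3, 2)]   # toy formula
    result = None
    while result is None:
        result = schoening_trial(clauses, n=3)
    print(result)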

Thomas Wilke: Linear Temporal Logic and Finite Semigroups

The main building blocks of temporal logic (as it is used to specify properties of reactive systems) are operators for expressing simple temporal relationships such as "something happens until something else happens" or "something happens from now on". Often enough, when one tries to cast a seemingly simple temporal property in temporal logic, one finds that the formulas one comes up with are complicated in one way or another - they might be very long, or deeply nested, or use many binary operators. This raises the question of whether it is possible to determine, for a given temporal property, how simple a formula expressing it can be. This can also be phrased differently, in a more formal way: given a temporal property, find natural fragments of temporal logic in which it can be expressed. "Natural" can, for instance, mean "without using binary operators" or "without referring to the next position". The key to solving these problems is to view temporal properties as sets of finite or infinite words, that is, as formal languages, and to apply the algebraic theory of formal languages. The talk will give an introduction to the field and survey the important results.
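
For concreteness, the "until" relationship mentioned above has the following standard semantics over a word w = w_0 w_1 w_2 ... (our notation):

    w, i \models \varphi \,\mathbf{U}\, \psi
    \iff
    \exists j \geq i \; \bigl( w, j \models \psi \;\wedge\;
    \forall k \, ( i \leq k < j \Rightarrow w, k \models \varphi ) \bigr).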


Last changed June 12, 2001.