## Past programme for 2008-2011

## 2008

- 6th October -
Leszek Kołodziejczyk, "Building models for very weak theories of arithmetic"-I will discuss a new method for constructing nonstandard models of weak fragments of bounded arithmetic, developed primarily in a paper by Boughattas and Ressayre. The method, inspired by algebraic techniques used in the study of open induction, is based on the idea that a model can be built from nonstandard reals rather than just nonstandard integers. It has been applied to obtain new independence results for weakened versions of \Sigma^b_1 induction and for Buss' theory T^0_2.

- 13th October -
Phuong Nguyen, "Separating DAG-like and Tree-like Proof Systems"-I will present my paper "Separating DAG-like and Tree-like Proof Systems" from LICS 2007. In particular, I will show that constant-depth treelike Frege proof systems do not simulate cut-free daglike Frege, and that depth-d treelike Frege does not simulate depth-(2d+5) treelike Frege. The latter implies that the Sigma_0^B consequences of V^0 are not finitely axiomatizable. The proof method is an improvement and simplification of an earlier paper by Maciel and Pitassi, and is applicable to the sequents invented by Statman, whose lower and upper bounds have been established by elegant arguments.

- 20th October -
Emil Jeřábek, "Abelian groups and quadratic residues in weak arithmetic"-We investigate the provability of some properties of abelian groups and quadratic residues in variants of bounded arithmetic. Specifically, we show that the structure theorem for finite abelian groups is provable in T^2_2, and use it to derive Fermat's little theorem and Euler's criterion for the Legendre symbol in T^2_2 extended by the pigeonhole principle. We prove the quadratic reciprocity theorem (including the supplementary laws) in bounded arithmetic extended with modulo-2 counting principles.
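Euler's criterion mentioned in this abstract is easy to sanity-check computationally: for an odd prime p and a not divisible by p, a^((p-1)/2) mod p is 1 or p-1 according to whether a is a quadratic residue. A minimal Python sketch (the helper name `legendre` is ours, not from the talk):

```python
def legendre(a, p):
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) = 1 (mod p) iff a is a quadratic residue mod p."""
    r = pow(a, (p - 1) // 2, p)  # fast modular exponentiation
    return -1 if r == p - 1 else r

# Squares mod 7 are {1, 2, 4}, so 2 is a residue and 3 is not.
assert legendre(2, 7) == 1
assert legendre(3, 7) == -1
```

The same one-liner also lets one spot-check instances of quadratic reciprocity for small odd primes.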

- 3rd November -
Emil Jeřábek, "Abelian groups and quadratic residues in weak arithmetic", Part 2

- 10th November -
Arnold Beckmann, "Polynomial local search in the polynomial hierarchy and witnessing in fragments of bounded arithmetic"-The complexity class of \Pi^p_k-polynomial local search (PLS) problems is introduced and is used to give new witnessing theorems for fragments of bounded arithmetic. These \Pi^p_k-PLS problems can be defined in a weak base theory such as S^1_2, and proved to be total in T^{k+1}_2. Furthermore, the \Pi^p_k-PLS definitions can be skolemized with simple polynomial time functions, and the witnessing theorem itself can be formalized, and skolemized, in a weak base theory. This gives rise to a new \forall\Sigma^b_1(X)-principle that is conjectured to separate T^k_2(X) and T^{k+1}_2(X).

- 14th November -
Barbara Csima, "Computable structure theory"-In computable structure theory, one examines various countably infinite structures (such as linear orderings and graphs) for their computability theoretic properties. For example, the standard theorem that any two countable dense linear orders without endpoints are isomorphic can be carried out computably, in the sense that if the two countable dense linear orders are nicely presented, then there must be a computable isomorphism between them. However, there are many examples of computable structures that are isomorphic but not computably isomorphic. This talk will be an introduction to computable structure theory, explaining some standard examples, and indicating areas of current research.

- 8th December -
Iddo Tzameret, "The Proof Complexity of Polynomial Identities"-Devising an efficient deterministic -- or even a non-deterministic sub-exponential time -- algorithm for testing polynomial identities is a fundamental problem in algebraic complexity and complexity at large. Motivated by this problem, as well as by results from proof complexity, we investigate the complexity of proving polynomial identities. To this end, we study a class of equational proof systems, of varying strength, operating with polynomial identities written as arithmetic formulas over a given ring. A proof in these systems establishes that two arithmetic formulas compute the same polynomial, and consists of a sequence of equations between polynomials, written as arithmetic formulas, where each equation in the sequence is derived from previous equations by means of the polynomial-ring axioms. This work establishes non-trivial upper and lower bounds on the size of equational proofs of polynomial identities. In this talk I will present the basic model of equational proofs, argue for its relevance and importance in both algebraic complexity and proof complexity, and demonstrate several upper and lower bounds in various fragments of equational proofs. Joint work with Pavel Hrubes.

- 15th December -
Iddo Tzameret, "The Proof Complexity of Polynomial Identities", Part 2

## 2009

- 12th January -
Iddo Tzameret, "The Proof Complexity of Polynomial Identities", Part 3

- 26th January -
Jindřich Zapletal, "Borel equivalences and their applications"-The study of Borel equivalence relations is growing at a very fast rate. These equivalences are compared according to a natural notion of complexity, resulting in a rich map. It turns out that such branches of mathematics as algebra, ergodic theory, or functional analysis contain key equivalences that can be placed on this map, leading to a precise quantification of their difficulty. The lecture is an invitation to a course I will teach this term at the Department of Mathematical Analysis at Charles University.

- 9th February -
Olaf Beyersdorff, "The complexity of reasoning for fragments of default logic"-Default logic was introduced by Reiter in 1980. In 1992, Gottlob classified the complexity of the extension existence problem for propositional default logic as $\Sigma_2^p$-complete, and the complexity of the credulous and skeptical reasoning problem as $\Sigma_2^p$-complete, resp. $\Pi_2^p$-complete. Additionally, he investigated restrictions on the default rules, e.g., semi-normal default rules. In 1992, Selman took a similar approach, considering disjunction-free and unary default rules. In this talk we systematically restrict the set of allowed propositional connectives. We give a complete complexity classification for all sets of Boolean functions in the sense of Post's lattice for all three common decision problems for propositional default logic. We show that the complexity forms a trichotomy ($\Sigma_2^p$-complete, NP-complete, or trivial) for the extension existence problem, whereas for the credulous and skeptical reasoning problem we get a finer classification down to NL-complete cases. The results described in the talk are joint work with Arne Meier, Michael Thomas, and Heribert Vollmer.

- 16th February -
Pavel Pudlák, "On platonism, intuitionism and Hilbert"-I will present a part of a chapter of the book I am writing. I plan to say more about the philosophy of mathematics later this semester.

- 2nd March -
Stefano Cavagnetto, "Euclid’s Elements"-Euclid’s Elements is one of the most beautiful and influential works of mathematics in the history of science. Its beauty lies in its logical development of geometry and other branches of mathematics. The Elements have influenced all branches of science but none as much as mathematics. Entire generations of mathematicians have been shaped by this masterpiece and its influence is still vivid in modern mathematics. Although many of the results contained in this work originated earlier, one of Euclid’s achievements was to present them in a single and logically coherent framework. This made its contents easy for other mathematicians to use and to reference. The system of rigorous mathematical proofs still remains the basis of modern mathematics. In this talk I will give a quick overview of the contents of the Elements in the order they appear in the thirteen books. I’ll focus in some detail on the first two books and on their logical structure with the aim of giving a good grasp of Euclid’s modernity. Finally I’ll discuss a small fragment of papyrus containing Proposition 5 of Book II. This fragment was found among the remarkable rubbish piles of Oxyrhynchus in 1896-97 and, like other results in Book II, it can be interpreted in modern terms as a geometric formulation of an algebraic problem.

- 9th March -
Nicola Galesi, "Lower bounds in Polynomial Calculus and its extensions to k-DNF Resolution"-We consider an algebraic proof system PCR[k], which combines Polynomial Calculus (PC) and k-DNF Resolution (RES[k]). We study the complexity of proofs in this family of systems, extending to PCR[k] some of the results known for RES[k]. We prove that random 3-CNF formulas with a linear number of clauses are hard to refute in PCR[k]. We also prove a strict hierarchy result showing that PCR[k+1] is exponentially stronger than PCR[k]. The results rely on proving degree lower bounds for PCR refutations of a Graph Ordering Principle, which gives the first example of a family of contradictions having short PCR refutations but requiring high degree, and thus also proves the optimality of the size-degree tradeoff for PCR. Finally we consider the question of whether PC/PCR are automatizable and we prove, exactly as for Resolution, that the answer is no, conditional on a complexity assumption. The talk is based on joint work with Massimo Lauria.

- 16th March -
Jan Krajíček, "A form of feasible interpolation for constant depth Frege systems"-Let L be a first-order language and Phi and Psi two Sigma^1_1 L-sentences that cannot be satisfied simultaneously in any finite L-structure. Then obviously the following principle Chain_{L,Phi,Psi}(n,m) holds: For any chain of finite L-structures C_1, ..., C_m with the universe [n] one of the following conditions must fail: (1.) C_1 satisfies Phi, (2.) C_i is isomorphic to C_{i+1}, for i = 1, ..., m-1, (3.) C_m satisfies Psi. For each fixed L and parameters n, m the principle can be encoded into a propositional DNF formula of size polynomial in n,m. For any language L containing only constants and unary predicates we show that if a constant depth Frege system in DeMorgan language proves Chain_{L,Phi,Psi}(n,n) by a size s proof then the class of finite L-structures with universe [n] satisfying Phi can be separated from those L-structures on [n] satisfying Psi by a depth 3 formula of size exponential in polylog(s) and with bottom fan-in polylog(s).

- 23rd March -
Neil Thapen, "Search problems for stronger bounded arithmetic theories"-We define the "local improvement principle" LI, an NP search problem about labellings of bounded-degree acyclic graphs. This can be seen as a generalization of the game induction principle, or, in some sense, as an exponentially larger version of PLS. We show that versions of LI characterize over PV the \forall \Sigma^b_1 sentences provable in T^i_2, U^1_2 and V^1_2, and characterize over ID_0 the \forall \Sigma^B_0 sentences provable in V^1_1. By RSUV isomorphism, this last result can be interpreted as a characterization of the \forall \Pi^b_1 sentences provable in S^1_2. Joint work with Leszek Kolodziejczyk and Phuong Nguyen.

- 30th March -
Neil Thapen, "Search problems for stronger bounded arithmetic theories", Part 2

- 6th April -
Stefan Dantchev, "Parameterized Proof Complexity"-We propose a proof-complexity approach for separating certain parameterized complexity classes. We consider proofs showing that a given propositional formula cannot be satisfied by a truth assignment that sets at most k variables to true, considering k as the parameter (we call such a formula a parameterized contradiction). One could separate the parameterized complexity classes FPT and W[SAT] by showing that there is no fpt-bounded parameterized proof system for parameterized contradictions, i.e., that there is no proof system that admits proofs of size f(k)·poly(n) where f is an arbitrary computable function and n is the size of the propositional formula. By way of a first step, we introduce the system of parameterized tree-like resolution, and show that this system is not fpt-bounded. Indeed, we give a general result on the size of shortest tree-like resolution proofs of parameterized contradictions that uniformly encode first-order principles over a universe of size n. We establish a dichotomy theorem that splits the exponential case of Riis's Complexity Gap Theorem into two sub-cases, one that admits proofs of size exp(k)·poly(n) and another that requires proofs of size at least n^{k^\alpha} for some constant \alpha greater than zero.

- 20th April -
Jindřich Zapletal, "Woodin's work on absoluteness"-Woodin suggests that he has made progress towards resolving the continuum hypothesis. I will outline the work that led him to this conclusion.

- 11th May -
Pavel Pudlák, "Consistency, reflection principles, transfinite iterations, etc."-Assuming that an arithmetical theory T is true we can derive several true arithmetical sentences that are not provable in T. The simplest is Con(T) (the arithmetization of the statement that T is consistent). Then there is a hierarchy of reflection principles. One can make T even stronger by adding a truth predicate to it. One can further iterate these extensions along recursive ordinals. This also naturally leads to subsystems of Second Order Arithmetic. The main question I want to address is: Does it mean that there is no limit on the theories that we can recognize as true?

- 12th October 2009 -
Lou van den Dries, "Model theory of reals and o-minimality"

- 19th October -
Lou van den Dries, "Logarithmic-exponential series"

- 2nd November -
Stefan Dantchev, "Proof Complexity lower bounds via reductions to the Pigeon-Hole Principle"-I will show how a number of known lower bounds in Proof Complexity, plus a few new ones, can be obtained in a uniform way, by a kind of reduction to proving a lower bound for the Pigeon-Hole Principle (PHP). This approach makes somewhat stronger cases for studying the so-called Structured PHP (introduced by Krajicek) and for trying to find the precise connection between proof systems and the pebble games studied in Finite Model Theory. I will keep all this very informal and, in particular, I will discuss many open questions and possible conjectures.

- 9th November -
Sebastian Müller, "Heuristic Proof Systems"-Heuristic Proof Systems are randomized proof systems that may prove false statements, but only with a negligibly small probability. Hirsch and Itsykson recently introduced a class of randomized proof systems that can be shown to be optimal (in a certain sense) for such heuristic proof systems. We will discuss their approach and talk about the relevant proofs and assumptions.

- 23rd November -
Pavel Pudlák, "Alternating minima and maxima, Nash equilibria and Bounded Arithmetic"-We show that the least number principle for strict \Sigma_k^b formulas can be characterized by the existence of alternating minima and maxima of length k. We show simple prenex forms of these formulas whose herbrandizations (by polynomial time functions) are \forall\Sigma_1^b formulas that characterize \forall\Sigma_1^b theorems of the levels T_2^k of the Bounded Arithmetic Hierarchy, and we derive from this another characterization, in terms of a search problem about finding pure Nash equilibria in k-turn games.

- 30th November -
Pavel Pudlák, "Alternating minima and maxima, Nash equilibria and Bounded Arithmetic", Part 2

- 7th December -
Neil Thapen, "A uniform no-gap theorem"-I will talk about some work in progress on the low-complexity consequences of bounded arithmetic theories. It is known that if the \forall \Sigma^b_1 consequences of T^{k+1}_2 are already provable in T^k_2, then the \forall \Sigma^b_1 consequences of T^j_2 are already provable in T^k_2, for any j>k. I will show that this can be done in a uniform way, and will describe a CNF formula F with the property that a bounded-depth Frege lower bound for F would imply that CNFs exist separating depth-(k+1) from depth-k Frege for all k.

- 14th December -
Pavel Pudlák, "Truth, soundness and what we actually mean when we accept some axioms"-I will present some thoughts from the book I am writing which concern the concepts mentioned above. I will also explain the mistakes in attempts to apply Gödel's Incompleteness Theorem to the human mind.

## 2010

- 4th January -
Petr Hájek, "Mathematical fuzzy logic and axiomatic arithmetic"-Basic notions of mathematical fuzzy predicate logic (t-norm based) will be briefly surveyed; then various results on properties of Peano- or Robinson-like arithmetic over fuzzy predicate logic will be explained, among them classical Peano arithmetic with a fuzzy truth predicate and essential undecidability of some extremely weak variants of Robinson arithmetic over fuzzy logic.

- 11th January -
Pavel Pudlák, "How to find new axioms?" and "Worlds with different arithmetic"-I will continue with more thoughts about the philosophy and foundations of mathematics that will appear in the book I am writing. As the title says, I will discuss the problem of finding new axioms. But rather than just looking for new set-theoretical axioms as is done in set theory, I am interested in looking for axioms of an essentially different nature. The second topic will be speculations about the possibility that the arithmetical truth that we have in our world is not the only possible one.

- 15th March -
Stefan Hetzl, "On the non-confluence of cut-elimination"-This talk is about cut-elimination in classical first-order logic. I will show how to construct a sequence of polynomial-length proofs having a non-elementary number of different cut-free normal forms. These normal forms are different in a strong sense: they not only represent different Herbrand-disjunctions but also have different proof skeletons. This result illustrates that the constructive content of a proof in classical logic can strongly depend on the chosen method for extracting it. Joint work with M. Baaz.

- 22nd March -
Pavel Pudlák, "Visions of future foundations"-This will be a continuation of my talks about the philosophy of mathematics. I will present the last section of my book in which I propose new directions in which one can develop the foundations of mathematics. In particular I will talk about: 1. worlds with different arithmetic than ours and how we can model this by nonstandard models; 2. the relation of the foundations of mathematics to the foundations of physics; I will argue that one should look for a unified foundation of both sciences.

- 29th March -
Pavel Pudlák, "Visions of future foundations", Part 2

- 12th April -
Nicola Galesi, "Hardness of Parameterized Resolution"-Parameterized Resolution and, moreover, a general framework for parameterized proof complexity was introduced by Dantchev, Martin, and Szeider (FOCS'07). In that paper, Dantchev et al. show a complexity gap in tree-like Parameterized Resolution for propositional formulas arising from translations of first-order principles. We broadly investigate Parameterized Resolution, obtaining the following main results: 1) We introduce a purely combinatorial approach to obtaining lower bounds on the proof size in tree-like Parameterized Resolution. For this we devise a new asymmetric Prover-Delayer game which characterizes proofs in (parameterized) tree-like Resolution. By exhibiting good Delayer strategies we then show lower bounds for the pigeonhole principle as well as the order principle. 2) Interpreting a well-known FPT algorithm for vertex cover as a DPLL procedure for Parameterized Resolution, we devise a proof search algorithm for Parameterized Resolution and show that tree-like Parameterized Resolution allows short refutations of all parameterized contradictions given as bounded-width CNFs. 3) We answer a question posed by Dantchev, Martin, and Szeider, showing that dag-like Parameterized Resolution is not fpt-bounded. We obtain this result by proving that the pigeonhole principle requires proofs of size $n^{\Omega(k)}$ in dag-like Parameterized Resolution. For this lower bound we use a different Prover-Delayer game which was developed for Resolution by Pudlak. Joint work with O. Beyersdorff and M. Lauria.
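The pigeonhole principle underlying the dag-like lower bound above has a standard propositional encoding. As a hedged illustration (the variable numbering and helper name are ours, not from the talk), here is how the CNF for PHP^m_n is typically generated:

```python
from itertools import combinations

def php_clauses(m, n):
    """CNF encoding of the pigeonhole principle PHP^m_n (m pigeons, n holes).
    Variable i*n + j + 1 asserts that pigeon i sits in hole j; the formula
    is unsatisfiable exactly when m > n."""
    var = lambda i, j: i * n + j + 1
    # Each pigeon occupies at least one hole.
    clauses = [[var(i, j) for j in range(n)] for i in range(m)]
    # No hole holds two distinct pigeons.
    for j in range(n):
        for i1, i2 in combinations(range(m), 2):
            clauses.append([-var(i1, j), -var(i2, j)])
    return clauses

# PHP^3_2: 3 "pigeon" clauses plus 2 * C(3,2) = 6 "hole" clauses.
assert len(php_clauses(3, 2)) == 9
```

In the parameterized setting one additionally restricts attention to assignments setting at most k variables to true and measures refutation size against the parameter k.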

- 19th April -
Radek Honzík, "Failures of GCH and SCH with large cardinals"-We will study the implications large cardinals have for the behaviour of the function \alpha \to 2^\alpha for regular and singular \alpha's. The talk will be in essence introductory; no expertise in set theory is assumed. We will start by motivating the notion of large cardinals and explain the effects they have on basic cardinal arithmetic (in terms of implications, and in terms of consistency results). In the course of the talk, some original results in this area will be stated.

- 26th April -
Radek Honzík, "Failures of GCH and SCH with large cardinals", Part 2

- 3rd May -
Emil Jeřábek, "Proofs with monotone cuts"-The monotone sequent calculus (MLK) is the subsystem of the usual propositional sequent calculus (LK) obtained by restricting all formulas in the proof to be monotone. Atserias, Galesi and Pudlak showed that MLK quasipolynomially simulates LK-proofs of monotone sequents. We consider a conservative extension of MLK, called MCLK (LK with monotone cuts), which allows arbitrary formulas to appear in the proof, but requires all cut formulas to be monotone. We show that MCLK quasipolynomially simulates LK (even for nonmonotone sequents). We will also mention some connections to monotone decision trees.

- 10th May -
Ján Pich, "Nisan-Wigderson generators in proof systems with forms of interpolation"-Razborov conjectured that the Nisan-Wigderson generators based on suitable functions and matrices should be hard for strong proof systems like Extended Frege. We will discuss the conjecture and prove that the Nisan-Wigderson generators are hard for proof systems that admit feasible interpolation.

- 17th May -
Iddo Tzameret, "Algebraic Proofs over Noncommutative Formulas"-I will discuss algebraic propositional proofs, where polynomials in proof-lines are written as noncommutative formulas. Possible formulations of such proofs lead to systems as strong as Frege. For apparently weaker formulations, we establish non-trivial upper bounds, and discuss possible lower bounds approaches. This work is motivated by the search for lower bound techniques based on rank arguments in propositional proof complexity.

- 11th October -
Neil Thapen, "Approximate counting and search problems"-I will talk about work in progress looking at the relative strengths of some theories of weak induction and some theories connected to approximate counting, as measured by their provable \Sigma^b_1 sentences (also known as total search problems). One motivation is to understand why it is hard to prove separation results for T^2_2 at this level. I will give one proof, that T^1_2 together with the injective weak pigeonhole principle for polynomial time functions does not capture all of the \Sigma^b_1 consequences of T^2_2.

- 18th October -
Pavel Pudlák, "On a theorem of Razborov"-Razborov proved a theorem that characterizes circuit complexity by a communication complexity game. In a sense it is a generalization of the theorem of Karchmer and Wigderson that characterizes circuit depth. This theorem has an important application in proof complexity. It was used by Krajicek to prove his Feasible Interpolation Theorem for Resolution proofs. This seminar will be a sort of introduction to the problem that we will consider at a future seminar: how to generalize Razborov's theorem to higher levels of the hierarchy of Frege systems.

- 25th October -
Vladimir Voevodsky (recorded talk at Princeton), "What if Current Foundations of Mathematics are Inconsistent?"

- 8th November -
Albert Atserias, "Mean-Payoff Games and Propositional Proofs"-We associate a CNF-formula to every instance of the mean-payoff game problem in such a way that if the value of the game is non-negative the formula is satisfiable, and if the value of the game is negative the formula has a polynomial-size refutation in $\Sigma_2$-Frege (i.e.~DNF-resolution). This reduces mean-payoff games to the weak automatizability of $\Sigma_2$-Frege, and to the interpolation problem for $\Sigma_{2,2}$-Frege. Since the interpolation problem for $\Sigma_1$-Frege (i.e.~resolution) is solvable in polynomial time, our result is close to optimal up to the computational complexity of solving mean-payoff games. The proof of the main result requires building low-depth formulas that compute the bits of the sum of a constant number of integers in binary notation, and low-complexity proofs of the required arithmetic properties.

- 15th November -
Pavel Pudlák, "On a theorem of Razborov", Part 2

- 22nd November -
Moritz Müller, "Undefinable forcing"-In set theory one defines forcing semantically, via truth in all generic extensions, and proves it equivalent to a handier, syntactical notion defined by recursion on logical syntax. This Forcing Completeness Theorem is the key to the fundamental forcing lemmas, most importantly the Definability Lemma, which states that the forcing relation is definable in the ground model. In turn, the forcing lemmas yield the Principal Theorem stating that generic extensions are models of ZFC. An independence question is then reduced to the combinatorial task of designing a suitable forcing frame. In this sense forcing in set theory is a method. Some independence results in bounded arithmetic have been obtained via forcing-type arguments. But in contrast to set theory, no general forcing theory is available. These arguments differ substantially from the set-theoretic setting in that they produce expansions instead of extensions and work without a Forcing Completeness Theorem and without a Definability Lemma. It is not completely clear in what sense they are of the forcing type. The talk presents work in progress, an attempt to develop forcing in bounded arithmetic as a method. It proposes a general frame for forcing that can be seen as a common generalization of Cohen forcing in set theory and the mentioned forcing-type arguments in arithmetic. This amounts to developing forcing without a Definability Lemma. For fragments of arithmetic, a Principal Theorem would state that generic expansions satisfy the least number principle for a certain fragment of formulas. Such Principal Theorems can be obtained when using a forcing that is, in a certain weak sense, definable for the fragment in question. In this sense an independence question is again reduced to a combinatorial question about the design of a suitable forcing frame.

- 29th November -
Neil Thapen, "Interpolation and a kind of generalized circuit"-I will talk about some work in progress on how the feasible interpolation theorem for resolution could be extended to higher-depth proof systems.

## 2011

- 28th February -
Emil Jeřábek, "Real closures of models of arithmetic"-This is joint work with Leszek Kołodziejczyk. If R is a countable recursively saturated real-closed field, and T any consistent r.e. extension of IOpen, then R has an integer part M which is a model of T, by resplendence. In a recent paper, D'Aquino, Knight and Starchenko have shown a kind of converse: if R is a real-closed field with an integer part M which is a model of I\Sigma_4, then R is recursively saturated. We will generalize this result to fragments of bounded arithmetic (IE_2, PV, \Sigma^b_1-IND^{|x|_k}). This suggests that recursive saturation of the real closure is a Tennenbaum-like property.

- 7th March -
Pavel Pudlák, "Logic of space-time and relativity theory"-I will present "Logic of space-time and relativity theory" by Andreka, Madarasz and Nemeti. They propose first order axiomatizations of the special and general theories of relativity. The talk should be understandable without a deeper background in physics.

- 28th March -
Pavel Pudlák, "Finitism and physical reality"-This will be another seminar about the philosophy of mathematics. The main question about which it will revolve is: Are the physical natural numbers the same as the mathematical natural numbers?

- 4th April -
Jan Krajíček, "On the proof complexity of TC^0 Frege systems"-We make some observations on the relation of AC^0 and TC^0 Frege proof systems.

- 11th April -
Petr Glivicky, "Quantifier elimination in Linear Arithmetic and Peano Products"-While Presburger arithmetic (Pr) is a descriptively simple theory (it has an elimination set consisting of $\Sigma$-formulas), Peano arithmetic (P) has no elimination at all. I will introduce a theory of Linear Arithmetic (LA) which extends Pr by multiplication by one scalar (with full induction) and prove that it still has an elimination set consisting of $\Sigma$-formulas. I will use this result to show some properties of products in models of Peano arithmetic (Peano products), concerning in particular the mutual (in)dependence of the values of a Peano product at different points.

- 18th April -
Jan Krajíček, "Algorithmic dense model theorem, a survey"-The following is from an abstract of Impagliazzo's talk. "Green and Tao used the existence of a dense subset indistinguishable from the primes under certain tests from a certain class to prove the existence of arbitrarily long prime arithmetic progressions. Reingold, Trevisan, Tulsiani and Vadhan, and independently, Gowers, give a quantitatively improved characterization of when such dense models exist. An equivalent formulation was obtained earlier by Barak, Shaltiel and Wigderson." Impagliazzo found a connection with the hard core theorems about pseudorandom generators. The theorem may also be potentially useful in proof complexity.

- 2nd May -
Jan Krajíček, "Algorithmic dense model theorem, a survey", Part 2

- 9th May -
Marta Bílková, "Moss' coalgebraic modal logic"-Coalgebras provide a uniform framework to deal with various semantic structures for existing modal languages (various types of Kripke frames), as well as various state-based transition systems (automata). [Technically, given a functor T: Set --> Set, a T-coalgebra is a map c: X --> TX, where X is a set of "states", T is the type of "behaviour" or "future", and c is the transition map.] Recently various people have concentrated both on the coalgebraic approach to existing modal logics and on finding expressive modal languages for interesting classes of coalgebras. In this talk I'll concentrate on one of the possible approaches to the latter, proposed by Larry Moss in the 90's. The main idea is that to define a language for T-coalgebras one can use a single modality coming from the functor T (its semantics as well as its syntax is given by the functor). I'll show how to give a uniform axiomatization and to define cut-free sequent-style proof systems for such languages. I'll also try to say how useful such a language can be for the ordinary modal language (it provides a disjunctive normal form which e.g. simplifies some interpolation proofs).

- 16th May -
Massimo Lauria, "The proof complexity of Paris-Harrington tautologies"-We initiate the study of the proof complexity of propositional encodings of (weak cases of) concrete independence results. We study the proof complexity of Paris-Harrington's Large Ramsey Theorem for bicolorings of graphs. We prove a non-trivial conditional lower bound in Resolution and a quasi-polynomial upper bound in bounded-depth Frege. The lower bound is conditional on a hardness assumption for a Weak (quasi-polynomial) Pigeonhole Principle in Res(2). The proof technique of the lower bound extends the idea of using a combinatorial principle to blow up a counterexample for another combinatorial principle beyond the threshold of inconsistency. A strong link with the proof complexity of an unbalanced Ramsey principle for triangles is established. This is obtained by applying some highly non-trivial constructions due to Erdos and Mills. (Joint work with Lorenzo Carlucci and Nicola Galesi.)

- 17th October -
Pavel Pudlák, "Models explaining pseudorandomness by randomness"-We construct a nonstandard model of arithmetic M_0, an element \Delta \in M_0, a system S of extensions of M_0 and a probability measure on S such that: M_0 is a model of PV; every N \in S is an elementary extension of M_0 w.r.t. polynomial time computable predicates; and for every pseudorandom function f with two values -1 and 1, the probability that f(\Delta)=1 is 1/2 for random N.

- 24th October -
Emil Jeřábek, "Root finding in TC^0"-We show that for any constant d, there is a uniform TC^0 algorithm computing approximations of complex zeros of degree d univariate rational polynomials (given by a list of coefficients in binary). The basic idea is to compute the inverse of the polynomial by a power series, using tools from complex analysis, and the result of Hesse, Allender and Barrington that division and iterated products are computable in uniform TC^0. This work is motivated by bounded arithmetic. The complexity class TC^0 contains the basic arithmetical operations (addition, multiplication, ordering), but it is not clear which properties of these operations are provable in the corresponding theory VTC^0 (for binary numbers, i.e., the string sort). It is not even known whether VTC^0 proves integer division to be total. Our result can be equivalently restated as follows: VTC^0 extended with the set of all true \forall\Sigma^B_0 sentences includes IOpen (induction for open formulas in the basic language of arithmetic).- 31st October -
Dai Tri Man Le, "Complexity Classes and Theories for the Comparator Circuit Value Problem"-Subramanian defined the complexity class CC as the set of problems log-space reducible to the comparator circuit value problem. He proved that several other problems are complete for CC, including the stable marriage problem, and finding the lexicographically first maximal matching in a bipartite graph. We suggest alternative definitions of CC based on different reducibilities and introduce a two-sorted theory VCC* based on one of them. We sharpen and simplify Subramanian's completeness proofs for the above two problems and show how to formalize them in VCC*.- 14th November -
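For readers unfamiliar with the problem in the abstract above: a comparator gate on a pair of wires (i, j) replaces their values by (min, max), and the comparator circuit value problem asks for the value of a designated wire after all gates have fired. A minimal sketch of an evaluator (the particular circuit is invented for the example):

```python
def eval_comparator_circuit(inputs, gates, out_wire):
    """Evaluate a comparator circuit.

    inputs: initial 0/1 values on the wires
    gates: sequence of pairs (i, j); each gate puts min(w[i], w[j]) on
           wire i and max(w[i], w[j]) on wire j
    out_wire: index of the wire whose final value is asked for
    """
    w = list(inputs)
    for i, j in gates:
        w[i], w[j] = min(w[i], w[j]), max(w[i], w[j])
    return w[out_wire]

# Three comparator gates sorting three boolean wires:
gates = [(0, 1), (1, 2), (0, 1)]
assert eval_comparator_circuit([1, 0, 1], gates, 2) == 1  # max ends on wire 2
assert eval_comparator_circuit([1, 0, 1], gates, 0) == 0  # min ends on wire 0
```

Note that comparator gates have fan-out one, which is what appears to keep the problem (seemingly) easier than the general monotone circuit value problem.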
Alexander Smal, "Optimal heuristic algorithms for the image of an injective function"-The existence of optimal algorithms is not known for any decision problem in NP - P. We consider the problem of testing the membership in the image of an injective function. We construct optimal heuristic algorithms for this problem in both randomized and deterministic settings (a heuristic algorithm can err on a small fraction 1/d of the inputs; the parameter d is given to it as an additional input). Thus for this problem we improve an earlier construction of an optimal acceptor (that is optimal on the negative instances only) and also give a deterministic version.- 21st November -
Jakob Nordström, "A general paradigm(?) for proving time-space trade-offs in proof complexity"-I will survey a recent line of work on proving time-space trade-offs for resolution (partly joint with Håstad and Ben-Sasson) and describe how these results can be viewed as a general technique of amplifying weak lower bounds to stronger lower bounds in proof complexity. I will then show that this general paradigm generalizes to the k-DNF resolution proof systems, and discuss whether it might be made to work for Polynomial Calculus and Cutting Planes as well. Time permitting, I might briefly mention some new results that can be interpreted as indicating that this should be the case.- 28th November-
Yuval Filmus, "Scales of prefix codes for N"-We consider monotone prefix codes for the natural numbers. These codes come up naturally in software engineering, and "universal" codes were suggested by Elias (1975), who constructed an infinite sequence of codes that get progressively better and diagonalized his construction to produce an ultimate code. We show that Elias's ultimate code -- or any other single code, or even sequence of codes -- is not optimal, by finding a code better than any given sequence of codes. This implies that a scale exists if CH holds, and that in Cohen's model without CH, there is no scale. Given a (uniformly) recursive sequence of codes, the code our construction produces is also recursive. Consequently, we also get a version of the fast-growing hierarchy for codes.- 5th December -
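For context, the simplest member of Elias's sequence is the gamma code: a number n >= 1 is written as (length of the binary expansion of n, minus 1) zeros, followed by the binary expansion itself. A minimal sketch (this is one of the codes that get "progressively better", not the ultimate code from the abstract):

```python
def elias_gamma(n):
    """Elias gamma code: a prefix-free, monotone code for integers n >= 1."""
    assert n >= 1
    b = bin(n)[2:]                 # binary expansion of n, starting with 1
    return "0" * (len(b) - 1) + b  # length-1 zeros, then the expansion

assert elias_gamma(1) == "1"
assert elias_gamma(2) == "010"
assert elias_gamma(5) == "00101"

# Prefix-free: no codeword is a prefix of another. The leading run of
# zeros announces the codeword's length, so decoding is unambiguous.
words = [elias_gamma(n) for n in range(1, 50)]
assert all(not w2.startswith(w1) for w1 in words for w2 in words if w1 != w2)
```

The codeword for n has length 2*floor(log2(n)) + 1, so larger numbers never get shorter codewords, which is the monotonicity the abstract refers to.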
Neil Thapen, "More on approximate counting and search problems"-Jeřábek's theory for approximate counting consists of the bounded arithmetic theory T^1_2 together with the surjective weak pigeonhole principle (sWPHP) for P^NP functions. It is consistent with what is known that every \Sigma^b_1 sentence provable in bounded arithmetic is already provable in this theory, although it is expected that this is not the case. We consider the provable \Sigma^b_1 sentences (also known as total search problems) of natural subtheories of approximate counting. I will talk about some new results in this area: I will give a separation between T^2_2 and PV + sWPHP for P^NP functions in the relativized case; and I will describe a plausible conjecture about narrow resolution refutations that implies a similar separation for T^1_2 + sWPHP for polynomial time functions. Joint work with Sam Buss and Leszek Kołodziejczyk.- 12th December -
Sebastian Müller, "Proof Complexity of Random 3CNF Formulas"-Random 3CNF formulas constitute an important distribution for measuring the average-case behavior of propositional proof systems. Lower bounds for random 3CNF refutations are known in many propositional proof systems. Most notable are the exponential-size resolution refutation lower bounds for random 3CNF formulas with Omega(n^{1.5-epsilon}) clauses (Ben-Sasson and Wigderson). On the other hand, the only known non-trivial upper bound on the size of random 3CNF refutations in a non-abstract propositional proof system is for resolution with Omega(n^2/log n) clauses, shown by Beame et al. Using a technique by Feige, Kim and Ofek, we will see that standard propositional proof systems within the hierarchy of Frege proofs already admit short refutations for random 3CNF formulas of sufficiently large clause-to-variable ratio. Specifically, we will construct polynomial-size propositional refutations whose lines are TC^0 formulas (i.e., TC^0-Frege proofs) for random 3CNF formulas with n variables and Omega(n^{1.4}) clauses. Part of the argument is spectral, and we therefore need to develop an appropriate way to reason about objects such as eigenvalues in a weak arithmetic.