All talks will be held in
18:00 – 21:00 |
Informal welcome gathering
Your Father's Moustache, 5686 Spring Garden Road. |
9:00 – 9:20 |
Registration
|
9:20 – 9:30 |
Opening
|
Session chair: Hansen | |
9:30 – 10:00 |
Tutorial: Dempster-Shafer theory and topological models for evidence
|
10:00 – 11:00 |
Invited lecture: Beliefs based on
conflicting and uncertain evidence: connecting
Dempster-Shafer theory and the topology of evidence
Abstract,
Slides
One problem to solve in the context of information fusion, decision-making, and other artificial intelligence challenges is to compute justified beliefs based on evidence. In real-life examples, this evidence may be inconsistent, incomplete, or uncertain, making the problem of evidence fusion highly non-trivial. In this talk, I will present a new model for measuring degrees of belief based on possibly inconsistent, incomplete, and uncertain evidence, by combining tools from Dempster-Shafer Theory and Topological Models of Evidence. Our belief model is more general than the aforementioned approaches in two important ways: (1) it can reproduce them when appropriate constraints are imposed, and, more notably, (2) it is flexible enough to compute beliefs according to various standards that represent agents’ evidential demands. The latter novelty makes it possible to compute an agent’s (possibly) distinct degrees of belief, based on the same evidence, in situations when, e.g., the agent prioritizes avoiding false negatives and when it prioritizes avoiding false positives. Finally, I will discuss further research directions and, time permitting, report on the computational complexity of computing degrees of belief using the proposed belief model. The main part of the talk is based on joint work with Daira Pinto Prieto and Ronald de Haan. The underlying topological formalism for evidence and belief has been developed in collaboration with Alexandru Baltag, Nick Bezhanishvili, and Sonja Smets. |
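For readers new to the area, the following standard Dempster-Shafer notions may help fix intuitions; they are textbook background, not the new model presented in the talk. A mass function m : 2^Ω → [0,1] with m(∅) = 0 and total mass 1 induces belief and plausibility degrees, and two bodies of evidence are fused by Dempster's rule of combination:
\[ \mathrm{bel}(A) = \sum_{B \subseteq A} m(B), \qquad \mathrm{pl}(A) = \sum_{B \cap A \neq \emptyset} m(B), \]
\[ (m_1 \oplus m_2)(A) = \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C), \]
where K measures the conflict between the two bodies of evidence; the rule is undefined when K = 1, which is one reason fusing highly conflicting evidence is delicate.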
11:00 – 11:30 |
Coffee break
|
Session chair: Ortiz | |
11:30 – 12:00 |
An evidence logic perspective on Schotch-Jennings forcing
Abstract,
Paper,
Slides,
Handout
Traditional epistemic and doxastic logics cannot deal with inconsistent beliefs, nor do they represent the evidence an agent possesses. So-called ‘evidence logics’ have been introduced to deal with both of those issues. The semantics of these logics are based on neighbourhood or hypergraph frames. The neighbourhoods of a world represent the basic evidence available to an agent. On one view, beliefs supported by evidence are propositions derived from all maximally consistent collections of evidence. An alternative concept of belief takes beliefs to be propositions derivable from consistent partitions of one’s inconsistent evidence; this is known as Schotch-Jennings forcing. This paper develops a modal logic based on the hypergraph semantics to represent Schotch-Jennings forcing. The modal language includes an operator U(φ1,...,φn;ψ) which is similar to one introduced in Instantial Neighbourhood Logic. It is of variable arity, and the input formulas enjoy distinct roles. The U operator expresses that all evidence at a particular world that supports ψ can be supported by at least one of the φi. U can then be used to express that all the evidence available can be unified by the finite set of formulas φ1,...,φn if ψ is taken to be ⊤. Future developments will then use that semantics as the basis for a doxastic logic akin to evidence logics. |
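As a hedged illustration of the forcing idea (a toy example in one common presentation of Schotch-Jennings forcing, not taken from the paper): let Γ = {p, ¬p, q}, whose level is 2, i.e. it can be divided into two consistent cells but not one. Its two-cell divisions into consistent sets are {{p, q}, {¬p}} and {{p}, {¬p, q}}, and Γ forces a formula when every such division has a cell entailing it. Thus Γ forces q (each division has a cell containing q), and it forces p and ¬p separately, but it forces neither p ∧ ¬p (no consistent cell entails it) nor p ∧ q (the second division has no cell entailing it): forcing is not closed under adjunction, which is how it tolerates inconsistent evidence.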
12:00 – 12:30 |
Online
Two-layered logics for paraconsistent probabilities
Abstract,
Paper,
Slides
We discuss two two-layered logics formalising reasoning with paraconsistent probabilities that combine the Łukasiewicz [0,1]-valued logic with the Baaz △ operator and the Belnap–Dunn logic. The first logic, PrŁ2△ (introduced in [7]), formalises a ‘two-valued’ approach where each event ϕ has independent positive and negative measures that stand for, respectively, the likelihoods of ϕ and ¬ϕ. The second logic, 4PrŁ△, which we introduce here, corresponds to ‘four-valued’ probabilities. There, ϕ is equipped with four measures standing for the pure belief, pure disbelief, conflict, and uncertainty of an agent in ϕ. We construct faithful embeddings of 4PrŁ△ and PrŁ2△ into one another and axiomatise 4PrŁ△ using a Hilbert-style calculus. We also establish the decidability of both logics and provide complexity evaluations for them using an expansion of the constraint tableaux calculus for Ł. |
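A hedged reading of the two measure families (an assumption about the intended correspondence, to be checked against the paper rather than taken as its exact definitions): if b, d, c, u are the agent's degrees of pure belief, pure disbelief, conflict, and uncertainty in ϕ, then
\[ b + d + c + u = 1, \qquad p^{+}(\phi) = b + c, \qquad p^{-}(\phi) = d + c, \]
so the positive measure p⁺ aggregates all mass supporting ϕ and the negative measure p⁻ all mass supporting ¬ϕ, with conflict counted in both and uncertainty in neither.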
12:30 – 14:30 |
Lunch
|
Session chair: Scedrov | |
14:30 – 15:30 |
Invited lecture: The epsilon calculus in non-classical logics: recent results and open questions
Abstract,
Slides
The epsilon operator [1,3] is mainly studied in the context of classical logic. It is a term-forming operator: if A(x) is a formula, then εxA(x) is a term, intuitively a witness for A(x) if one exists, but arbitrary otherwise. Its dual τxA(x) is a counterexample to A(x) if one exists. Classically, it can be defined as εx ¬A(x). Epsilon and tau terms allow the classical quantifiers to be defined: ∃x A(x) as A(εx A(x)) and ∀x A(x) as A(τx A(x)). Epsilon operators are closely related to Skolem functions, and the fundamental so-called epsilon theorems to Herbrand’s theorem. Recent work with Matthias Baaz [2] investigates the proof theory of ετ-calculi in superintuitionistic logics. In contrast to the classical ε-calculus, the addition of ε- and τ-operators to intuitionistic and intermediate logics is not conservative, and the epsilon theorems hold only in special cases. However, it is conservative as far as the propositional fragment is concerned. Despite these results, the proof theory and semantics of ετ-systems on the basis of non-classical logics remain underexplored.
|
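As a small worked check of the classical definitions quoted in the abstract (routine, not a result of the talk), the defined universal quantifier behaves as expected: with τx A(x) := εx ¬A(x) and ∃x B(x) defined as B(εx B(x)),
\[ \forall x\, A(x) \;\leftrightarrow\; \neg \exists x\, \neg A(x) \;\leftrightarrow\; \neg\neg A(\varepsilon x\, \neg A(x)) \;\leftrightarrow\; A(\tau x\, A(x)), \]
using double-negation elimination in the last step; such classical equivalences are not available intuitionistically, which is related to the non-conservativity mentioned above.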
15:30 – 15:45 |
Short coffee break
|
Session chair: Jacobs | |
15:45 – 16:15 |
Online
Effective Skolemization
|
16:15 – 16:45 |
Online
Structural completeness and superintuitionistic inquisitive logics
Abstract,
Paper,
Slides
In this paper, the notion of structural completeness is explored in the context of a generalized class of superintuitionistic logics that also includes systems not closed under uniform substitution. We require only that each logic be closed under D-substitutions, which assign to atomic formulas only ∨-free formulas. For these systems we introduce four different notions of structural completeness and study how they are related. We focus on superintuitionistic inquisitive logics that validate a schema called Split and have the disjunction property. In these logics disjunction can be interpreted in the sense of inquisitive semantics as a question-forming operator. It is shown that a logic is structurally complete with respect to D-substitutions if and only if it includes the weakest superintuitionistic inquisitive logic. Various consequences of this result are explored. For example, it is shown that every superintuitionistic inquisitive logic can be characterized by a Kripke model built up from D-substitutions. Additionally, we resolve a conjecture concerning superintuitionistic inquisitive logics due to Miglioli et al. |
16:45 – 17:15 |
Walk to museum
|
17:15 |
Museum visit
Maritime Museum of the Atlantic, 1675 Lower Water Street |
Session chair: Hansen | |
9:00 – 10:00 |
Invited lecture: From dynamic epistemic logic to socially intelligent robots
Abstract,
Slides
Dynamic Epistemic Logic (DEL) can be used as a formalism for agents to represent the mental states of other agents: their beliefs and knowledge, and potentially even their plans and goals. Hence, the logic can be used as a formalism to give agents a Theory of Mind, allowing them to take the perspective of other agents. In my research, I have combined DEL with techniques from automated planning in order to describe a theory of what I call Epistemic Planning: planning where agents explicitly reason about the mental states of others. One of the recurring themes is implicit coordination: how to successfully achieve joint goals in decentralised multi-agent systems without prior negotiation or coordination. The talk will first motivate the importance of Theory of Mind reasoning for achieving efficient agent interaction and coordination, then give a brief introduction to epistemic planning based on DEL, address its (computational) complexity and issues of implicit coordination, and, finally, demonstrate applications of epistemic planning in human-robot collaboration. |
10:00 – 10:30 |
Relevant reasoning and implicit beliefs
Abstract,
Paper,
Slides
Combining relevant and classical modal logic is an approach to overcoming the logical omniscience problem and related issues that goes back at least to Levesque’s well-known work in the 1980s. The present authors have recently introduced a variant of Levesque’s framework where explicit beliefs concerning conditional propositions can be formalized. However, our framework did not offer a formalization of implicit belief in addition to explicit belief. In this paper we provide such a formalization. Our main technical result is a modular completeness theorem. |
10:30 – 11:00 |
Coffee break
|
Session chair: Özgün | |
11:00 – 11:30 |
Online
Reasoning about coins
Abstract,
Paper,
Slides
Aleatoric propositions are a generalisation of Boolean propositions that are intrinsically probabilistic, or determined by the toss of a (biased) coin. Rather than letting propositions take a true/false valuation, we assume they act as biased coins that will sometimes land heads (true) and sometimes land tails (false). Complex propositions then correspond to a conditional series of tosses of these coins. We extend the syntax and semantics of Aleatoric Logic to include a novel fixed-point operator that is able to represent a weak form of iteration. We examine the expressivity of the language, showing a correspondence to the rational functions over (0,1). |
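A hedged illustration of the intended reading, using generic notation rather than the paper's official syntax: if atoms p, q, r behave as coins with biases x, y, z ∈ (0,1), and the ternary conditional (p ? q : r) means "toss p; if heads, toss q, otherwise toss r", then
\[ \llbracket (p\,?\,q:r) \rrbracket = x\,y + (1-x)\,z, \]
a polynomial in the biases; nesting such conditionals keeps the values polynomial, and, following the abstract, it is the addition of the fixed-point operator for iterated tossing that yields the correspondence with rational functions over (0,1).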
11:30 – 12:00 |
A principled approach to expectation maximisation and latent Dirichlet allocation using Jeffrey's update rule
Abstract,
Paper,
Slides
Expectation Maximisation (EM) and Latent Dirichlet Allocation (LDA) are two frequently used inference algorithms, for finding an appropriate mixture of latent variables, and for finding an allocation of topics for a collection of documents. A recent insight in probabilistic learning is that Jeffrey’s update rule gives a decrease of Kullback-Leibler divergence. Its logic is error correction. It is shown that this same rule and divergence decrease logic is at the heart of EM and LDA, ensuring that successive iterations are decreasingly wrong. |
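For reference, Jeffrey's update rule in its standard form (background; the abstract's claim is that EM and LDA iterations instantiate its divergence-decreasing behaviour): given a partition {E_i} of the outcome space and new weights q_i for its cells with Σ_i q_i = 1, a prior P is revised to
\[ P'(A) = \sum_i P(A \mid E_i)\; q_i , \]
and ordinary Bayesian conditioning on E_j is the special case q_j = 1.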
12:00 – 12:30 |
Online
Conditional obligations in justification logic
Abstract,
Paper,
Slides
This paper presents a justification counterpart for dyadic deontic logic, which is often argued to be better than Standard Deontic Logic at representing conditional and contrary-to-duty obligations, such as those exemplified by the notorious Chisholm’s puzzle. We consider the alethic-deontic system (E) and present the explicit version of this system (JE) by replacing the alethic Box-modality with proof terms and the dyadic deontic Circ-modality with justification terms. The explicit representation of strong factual detachment (SFD) is given, and, finally, soundness and completeness of the system (JE) with respect to basic models and preference models are established. |
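For readers unfamiliar with it, Chisholm's puzzle in its usual monadic SDL rendering (standard background, not the formalisation used in the paper) consists of the four sentences
\[ \bigcirc g, \qquad \bigcirc (g \rightarrow t), \qquad \neg g \rightarrow \bigcirc \neg t, \qquad \neg g, \]
with g for "Jones goes to help his neighbours" and t for "Jones tells them he is coming". The first two yield ○t and the last two yield ○¬t, so SDL turns an intuitively consistent scenario into a deontic conflict; dyadic conditional obligations are one standard remedy, and the paper supplies justification terms for that dyadic setting.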
12:30 – 14:30 |
Lunch
|
Session chair: Hansen | |
14:30 – 15:15 |
Tutorial: Description logics and other decidable logics for graph-structured data
Abstract,
Slides
In this tutorial we will introduce a few expressive description logics that can be used to describe graph-shaped structures, and see how these logics relate to well-established modal logics such as graded and hybrid modal logics, and variants of propositional dynamic logic (PDL). We will also summarise some computational properties of these logics, particularly the boundaries of decidability and the complexity of basic reasoning services. |
15:15 – 16:15 |
Invited lecture: A short introduction to SHACL for logicians
Abstract,
Paper,
Slides
The SHACL Shapes Constraint Language was recommended in 2017 by the W3C for describing constraints on web data (specifically, on RDF graphs) and validating them. At first glance, it may not seem to be a topic for logicians, but as it turns out, SHACL can be approached as a formal logic, and actually quite an interesting one. In this talk, we give a brief introduction to SHACL tailored towards logicians. We discuss how SHACL relates to some well-known modal and description logics, and frame the common uses of SHACL as familiar logic reasoning tasks. This connection allows us to infer some interesting results about SHACL. Finally, we summarise some of our recent work in the SHACL world, aiming to shed light on how ideas, results, and techniques from well-established areas of logic can advance the state of the art in this emerging field. |
16:15 – 16:45 |
Coffee break
|
Session chair: Bolander | |
16:45 – 17:15 |
Validity in choice logics – a game-theoretic investigation
Abstract,
Paper,
Slides
Qualitative Choice Logic (QCL) is a framework for jointly dealing with truth and preferences. We develop the concept of degree-based validity by lifting a Hintikka-style semantic game [10] to a provability game. Strategies in the provability game are translated into proofs in a novel labeled sequent calculus where proofs come in degrees. Furthermore, we show that preferred models can be extracted from proofs. |
17:15 – 17:45 |
Online
Focus-style proofs for the two-way alternation-free μ-calculus
Abstract,
Paper,
Slides
We introduce a cyclic proof system for the two-way alternation-free modal μ-calculus. The system manipulates one-sided Gentzen sequents and locally deals with the backwards modalities by allowing analytic applications of the cut rule. The global effect of backwards modalities on traces is handled by making the semantics relative to a specific strategy of the opponent in the evaluation game. This allows us to augment sequents by so-called trace atoms, describing traces that the proponent can construct against the opponent’s strategy. The idea for trace atoms comes from Vardi’s reduction of alternating two-way automata to deterministic one-way automata. Using the multi-focus annotations introduced earlier by Marti and Venema, we turn this trace-based system into a path-based system. We prove that our system is sound for all sequents and complete for sequents not containing trace atoms. |
17:45 – 18:00 |
Break
|
Session chair: De Queiroz | |
18:00 – 19:00 |
Movie screening: Journeys of Black Mathematicians
Abstract
We will show clips from a documentary film that explores the groundbreaking achievements of African American mathematicians. "Journeys of Black Mathematicians" (zalafilms.com) is a powerful film that traces the history of these pioneering individuals and their impact on mathematics. This film is not only a tribute to their achievements, but also an inspiration for Black and other minority students to pursue their studies and consider careers in mathematics. The film is currently in production and its filmmaker, George Csicsery, has agreed to show us excerpts from a rough cut of this film. He will also be available via Zoom to hear feedback and answer questions. |
Session chair: Scedrov | |
9:00 – 9:30 |
Tutorial: Compositionality: categorial variations on a theme
Abstract,
Slides
In the line of work initiated by Richard Montague, natural language syntax and semantics are related by a homomorphism, a structure-preserving map that sends the sorts and operations of a syntactic source algebra to their counterparts in an algebra for composing meanings. In categorial grammar, source and target take the form of deductive systems, logics of syntactic and semantic types respectively. Natural language syntax and semantics often pose conflicting demands on compositional interpretation, and different strategies for resolving these conflicts have shaped the development of the field. The tutorial, aimed at researchers with a logic/computer science background, illustrates some of the main design choices: what is the nature of the syntactic calculus – modelling surface form (Lambek) or abstract syntax (Abstract Categorial Grammar); what is the target interpretation – truth-conditional/model-theoretic (formal semantics), or vector spaces/linear maps (distributional semantics); what is the division of labour between lexical and derivational semantics? |
9:30 – 10:30 |
Invited lecture: Lambek calculus and its modal extensions
Abstract,
Slides
In this talk, I review the different uses of modalities in extensions of the Lambek calculus and the ensuing challenges for efficient Natural Language Processing. The Syntactic Calculus, seen as a non-commutative [4] (or also non-associative [5]) precursor of Intuitionistic Linear Logic, is an early representative of substructural logic. With the revival of interest in the Syntactic Calculus came the realization that the original formulation lacked the expressivity required for realistic grammar development. The extended Lambek calculi introduced in the 1990s enrich the type language with modalities for structural control. These categorial modalities have found two distinct uses. On the one hand, they can act as licenses granting modally marked formulas access to structural operations that by default would not be permitted. On the other hand, modalities can be used to block structural rules that otherwise would be available. Examples of modalities as licensors relate to various aspects of grammatical resource management: multiplicity, order and structure. As for multiplicity, under the control of modalities limited forms of copying can be introduced in grammar logics that overall are resource-sensitive systems. As for order and structure, modalities may be used to license changes of word order and/or constituent structure that leave the form-meaning correspondence intact. The complementary use of modalities as blocking devices provides the means to seal off phrases as impenetrable locality domains. Reviewing early results and current work on extensions of the Lambek calculus, one finds two contrasting views on the nature of modalities. One strand of research addresses the licensing type of control taking its inspiration from the ‘!’ exponential of Linear Logic, but introduces sub-exponential refinements providing access to packages of structural rules; see [1] for a recent representative of this approach. Under the alternative view, advocated in [6] and subsequent work, modalities come in residuated pairs (adjoints), unary variants of the residuated triples (product, left and right implications) of the core Syntactic Calculus; the blocking and licensing types of control here share the same logical rules. A reconciliation of these views is suggested by the multi-type approach of [2], who argue on semantic and proof-theoretic grounds that the linear exponential, rather than being treated as a primitive connective, has to be decomposed into a composition of adjoint operations. The fine-grained type theory of modally enhanced Lambek calculus increases the complexity of Natural Language Processing when it comes to supertagging (assigning words the contextually appropriate type in the light of high lexical type ambiguity) and parsing (associating a string of words with a structural representation that can serve as the scaffolding for semantic interpretation). In the final part of the talk I discuss proposals of [3] that aim to tackle these problems with an integrated neurosymbolic approach.
|
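For orientation (standard laws included here as background, not as part of the lecture's new material): the core Syntactic Calculus is governed by the residuation laws linking the product to the two directional implications, and the control modalities advocated in [6] form an analogous unary residuated (adjoint) pair,
\[ A \otimes B \vdash C \;\Longleftrightarrow\; A \vdash C / B \;\Longleftrightarrow\; B \vdash A \backslash C, \qquad \Diamond A \vdash B \;\Longleftrightarrow\; A \vdash \Box B , \]
where the unary □ is often written □↓ in the categorial grammar literature.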
10:30 – 11:00 |
Coffee break
|
Session chair: Zach | |
11:00 – 11:30 |
Online
Factive complements are not always unique entities: a case study with Bangla 'remember'.
Abstract,
Paper,
Slides
There are many approaches in the literature regarding the emergence of factivity. Some of them, proponents of the view that factive inferences are exported from complements, attribute it to the definiteness feature of the complements [35,28,29]. This definiteness feature can be realized covertly via a semantically-sensitive definite determiner ∆ [35], or via an overt marker (e.g., ge in Washo) [28]. Although [14] later revised their claim by calling this ge a marker of familiarity, not that of definiteness, they did not provide any evidence where the D in factive nominalized complements is not definite. This paper provides evidence from Bangla (/Bengali; an Indo-Aryan language), where an attitude verb mone pora ‘remember’ can embed nominalized complements that can be interpreted indefinitely but still remain factive. In this paper, we provide a formal compositional analysis that can account for this. |
11:30 – 12:00 |
Online
An axiom system for hybrid logic with propositional quantifiers
|
12:00 – 12:30 |
Decidability of modal logics of non-k-colorable graphs
Abstract,
Paper,
Slides
We consider the bimodal language, where the first modality is interpreted by a binary relation in the standard way, and the second is interpreted by the relation of inequality. It follows from Hughes (1990) that, in this language, non-k-colorability of a graph is expressible for every finite k. We show that modal logics of classes of non-k-colorable graphs (directed or non-directed), and some of their extensions, are decidable. |
12:30 – 14:30 |
Lunch
|
Session chair: Scedrov | |
14:30 – 15:15 |
Tutorial: Prerequisites for the talk on incompleteness of static theories
and completeness of dynamic beliefs, in people and in bots
Abstract,
Slides
The claim of the main talk is that combining the encodings and self-reference that lead to the incompleteness results in static logics with the belief updates of dynamic logics yields suitable completeness results. But the combined formalism needed to prove this claim may seem unfamiliar. I will use this tutorial to explain how this unfamiliar framework arises from familiar formalisms. (I will also do my best to make it possible to follow both the main talk and the tutorial independently, but the presented research is concerned with self-fulfilling and self-deceiving claims, so it is applicable to itself.) |
15:15 – 16:15 |
Invited lecture: From incompleteness of static theories to completeness of dynamic beliefs, in people and in bots
Abstract,
Paper,
Slides
Self-referential statements, referring to their own truth values, have been studied in logic ever since Epimenides. Self-fulfilling prophecies and self-defeating claims, modifying their truth values as they go, have been studied in tragedies and comedies since Sophocles and Aristophanes. In modern times, the methods for steering truth values in marketing and political campaigns have evolved so rapidly that both the logical and the dramatic traditions have been left behind in the dust. In this talk, I will try to provide a logical reconstruction of some of the methods for constructing self-confirming and self-modifying statements. The reconstruction requires broadening the logical perspective from static deductive theories to dynamic and inductive ones. While the main ideas are familiar from the theory of computation, the technical prerequisites will also be discussed in the introductory tutorial. |
16:15 – 16:45 |
Coffee break
|
Session chair: Moortgat | |
16:45 – 17:15 |
Towards an induction principle for nested data types
Abstract,
Paper,
Slides
A well-known problem in the theory of dependent types is how to handle so-called nested data types. These data types are difficult to program and to reason about in total dependently typed languages such as Agda and Coq. In particular, it is not easy to derive a canonical induction principle for such types. Working towards a solution to this problem, we introduce dependently typed folds for nested data types. Using the nested data type Bush as a guiding example, we show how to derive its dependently typed fold and induction principle. We also discuss the relationship between dependently typed folds and the more traditional higher-order folds. |
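To make the guiding example concrete, here is one standard rendering of the Bush nested data type in Haskell, together with a map and a size function; this is textbook material and need not match the authors' exact formulation, but it shows why such types are awkward: the recursive calls are made at a different type instance, so polymorphic recursion (and hence explicit type signatures) is unavoidable.

-- A nested data type: the element type changes to Bush a at the recursive position.
data Bush a = NilB | ConsB a (Bush (Bush a))

-- Mapping over a Bush: the recursive call is at type Bush (Bush a),
-- so the signature is mandatory (polymorphic recursion).
mapBush :: (a -> b) -> Bush a -> Bush b
mapBush _ NilB         = NilB
mapBush f (ConsB x xs) = ConsB (f x) (mapBush (mapBush f) xs)

-- Summing all entries of a Bush Int, again by polymorphic recursion.
sumBush :: Bush Int -> Int
sumBush NilB         = 0
sumBush (ConsB n ns) = n + sumBush (mapBush sumBush ns)

-- Counting the elements of an arbitrary Bush.
sizeBush :: Bush a -> Int
sizeBush = sumBush . mapBush (const 1)

An induction principle for Bush would likewise have to speak about predicates at all the instances Bush a, Bush (Bush a), and so on, which is why, as the abstract notes, a canonical one is not easy to derive.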
17:15 – 17:45 |
Maximally multi-focused proofs for skew non-commutative MILL
Abstract,
Paper,
Slides
Multi-focusing is a generalization of Andreoli’s focusing procedure which allows the parallel application of synchronous rules to multiple formulae under focus. By restricting to the class of maximally multi-focused proofs, one recovers permutative canonicity directly in the sequent calculus without the need to switch to other formalisms, e.g. proof nets, in order to represent proofs modulo permutative conversions. This characterization of canonical proofs is also amenable to mechanization of the normalization procedure and to further formal proof-theoretic investigations in interactive theorem provers. In this work we present a sequent calculus of maximally multi-focused proofs for skew non-commutative multiplicative linear logic (SkNMILL), a logic recently introduced by Uustalu, Veltri and Wan which enjoys categorical semantics in the skew monoidal closed categories of Street. The peculiarity of the multi-focused system for SkNMILL is the presence of at most two foci in the synchronous phase. This reduced complexity makes it a good starting point for formal investigations of maximally multi-focused calculi for richer substructural logics. |
17:45 – 18:15 |
Quantitative global memory
Abstract,
Paper,
Slides
We show that recent approaches to static analysis based on quantitative typing systems can be extended to programming languages with global state. More precisely, we define a call-by-value language equipped with operations to access a global memory, together with a semantic model based on a (tight) multi-type system that captures exact measures of time and space related to evaluation of programs. We show that the type system is quantitatively sound and complete with respect to the operational semantics of the language. |
18:15 – 20:30 |
Free time
|
20:30 |
Conference dinner
McKelvies Seafood Restaurant, 1680 Lower Water Street. |
Note: later start time due to late dinner.
Session chair: Scedrov | |
10:00 – 11:00 |
Invited lecture: Learning context-free grammars from positive data and membership queries
Abstract,
Paper,
Slides
A key difficulty in learning context-free, as opposed to regular, languages from positive data and membership queries lies in the relationship between the string sets corresponding to the nonterminals of a context-free grammar and the language generated by the grammar. In the case of a regular language, the states of a minimal DFA for the language correspond to the nonempty left quotients of the language. A left quotient of L is a language of the form u \ L = { x | ux ∈ L }. Whether a string x belongs to u \ L can be determined by the membership query “ux ∈ L?”. In the case of a context-free language L generated by a context-free grammar G, there seems to be no general recipe for deciding membership in the string set associated with a nonterminal of G using the membership oracle for L. In this talk, I present some results of recent work (with Ryo Yoshinaka) about learning a special class of context-free grammars whose nonterminals correspond to “relativized extended regular expressions”. These expressions translate into polynomial-time reductions of the membership problem for nonterminals to the membership problem for the language generated by the grammar. There is a successful learner for this class that uses these reductions to test postulated productions for adequacy. It is an interesting problem to determine the scope of this class of context-free grammars. We have not yet found a context-free language that is not inherently ambiguous and has no grammar in this class. Another intriguing open question is whether extended regular expressions can be restricted to star-free expressions without altering the class of context-free languages that are covered. |
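A small illustration of the regular case invoked above (a standard Myhill-Nerode fact, added for orientation, not part of the talk's new results): let L be the set of strings over {a, b} containing an even number of a's. Its nonempty left quotients are
\[ \varepsilon \backslash L = L \qquad\text{and}\qquad a \backslash L = \{\, x \mid x \text{ has an odd number of } a\text{'s} \,\}, \]
and these two quotients are exactly the states of the minimal DFA for L; whether x belongs to u \ L is settled by the single membership query “ux ∈ L?”. It is this tight quotient-state correspondence that has no general analogue for the nonterminals of a context-free grammar.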
11:00 – 11:30 |
Coffee break
|
Session chair: Pavlovic | |
11:30 – 12:00 |
Online
A separation logic with histories of epistemic actions as resources
Abstract,
Paper,
Slides
We propose a separation logic where resources are histories (sequences) of epistemic actions so that resource update means concatenation of histories and resource decomposition means splitting of histories. This separation logic, called AMHSL, allows us to reason about the past: does what is true now depend on what was true in the past, before certain actions were executed? We show that the multiplicative connectives can be eliminated from a logical language with also epistemic and action model modalities, if the horizon of epistemic actions is bounded. |
12:00 – 12:30 |
Online
Bisimulations between Verbrugge models and Veltman models
Abstract,
Paper,
Slides
Veltman semantics is the basic Kripke-like semantics for interpretability logic. Verbrugge semantics is a generalization of Veltman semantics. An appropriate notion of bisimulation between a Verbrugge model and a Veltman model is developed in this paper. We show that a given Verbrugge model can be transformed into a bisimilar Veltman model. |
12:30 – 14:30 |
Lunch
|
Session chair: Kanazawa | |
14:30 – 15:00 |
Online
Parameterized complexity of propositional inclusion and independence logic
|
15:00 – 15:30 |
Parallelism in realizability models
Abstract,
Paper,
Slides
The study of parallel operations such as Plotkin’s parallel-or has promoted the development of the theory of programming languages. In this paper, we consider parallel operations in the framework of categorical realizability. Given a partial combinatory algebra A equipped with an “abstract truth value” Σ (called a predominance), we introduce the notions of Σ-or and Σ-and combinators in A. By choosing a suitable A and Σ, a form of parallel-or may be expressed as a Σ-or combinator. We then investigate the relationship between these combinators and the realizability model Ass(A) (the category of assemblies over A) and show the following: under a natural assumption on Σ, (i) A admits a Σ-and combinator iff for any assembly X ∈ Ass(A) the Σ-subsets (canonical subassemblies) of X form a poset with respect to inclusion; (ii) A admits both Σ-and and Σ-or combinators iff for any X ∈ Ass(A) the Σ-subsets of X form a lattice with respect to intersection and union. |
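For context, Plotkin's parallel-or (standard definition; how such an operation is captured by a Σ-or combinator is the subject of the paper) is the function por on the flat Booleans determined by
\[ \mathrm{por}(\mathit{tt}, \bot) = \mathit{tt}, \qquad \mathrm{por}(\bot, \mathit{tt}) = \mathit{tt}, \qquad \mathrm{por}(\mathit{ff}, \mathit{ff}) = \mathit{ff}, \]
so it returns true as soon as either argument does, even if the other computation diverges (⊥); it is continuous but not sequential, and famously not definable in PCF.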
15:30 – 16:00 |
Subsumption-linear Q-resolution for QBF theorem proving
Abstract,
Paper,
Slides
Subsumption-Linear Q-Resolution (SLQR) is introduced for proving theorems from Quantified Boolean Formulas. It is an adaptation of SL-Resolution, which applies to propositional and first-order logic. In turn, SL-Resolution is closely related to model elimination and tableau methods. A major difference from QDPLL (DPLL adapted for QBF) is that QDPLL guesses variable assignments, while SLQR guesses clauses. In prenex QBF (PCNF; all quantifier operations are outermost), a propositional formula D is called a nontrivial consequence of a QBF Ψ if Ψ is true (has at least one model) and D is true in every model of Ψ. Due to quantifiers, one cannot simply negate D and look for a refutation, as in propositional and first-order logic. Most previous work has addressed only the case that D is the empty clause, which can never be a nontrivial consequence. This paper shows that SLQR with the operations of resolution on both existential and universal variables as well as universal reduction is inferentially complete for closed PCNF that are free of asymmetric tautologies; i.e., if D is logically implied by Ψ, there is an SLQR derivation of D from Ψ. A weaker form, called SLQR-ures, omits resolution on universal variables. It is shown that SLQR-ures is not inferentially complete, but is refutationally complete for closed PCNF. |
16:00 |
Closing
|