
This is a follow-up to a question I had about foundationalism, which seems paradoxical inasmuch as it is a thesis that has been argued for (perhaps it is just the historical argumentation that is paradoxical, not the thesis itself). Here, it seems that coherentism involves rejecting the existence of foundational non-inferred premises; rather, any premise can be viewed as inferred (not necessarily deductively!) from something else, after all.

However, it seems to me that coherentism cannot avoid incorporating some non-inferred claims into itself. For example, we need a definition sentence for talk of "coherence" in the first place. On top of that, we need a sentence stating that entering into the rightly defined coherence relations provides justification for beliefs in the first place. And then we need a method of exhibiting these relations.

Another way to illustrate the issue is in terms of the graph-theoretic account of regress-solution types. Presumably, we have beliefs about graphs, how they are defined and how they work. Wouldn't defining a regress-solution type, graph-theoretically, pre-found (so to speak) all the types, in graph theory?x So that foundationalism would end up being inescapable, in a sense. (This seems to be along the lines of Alessio Moretti's point of view, regarding the philosophical side of his geometrization of logic.) (I would say that this reasoning does apply to infinitism, too: we will need a foundational definition of infinitism, a proposition of infinitary justifiers, methods of infinite regression...)

Does coherentism collapse into a form of foundationalism where the fundamental premises are about coherence relations?

xAnd then, would such a foundation of knowledge types generally turn graph theory into the foundation of mathematical knowledge, too, after all? I am not against this thesis, all things considered, but I am not for it in the way that I was a few years back, either.

    It would be my impulse to say that a coherent formal system relies on a meta-language, and therefore the coherence of the object language is derivative of the axiomatical foundations of the meta-language. Does this appeal to your intuitions? – J D Dec 12 '21 at 16:33
  • Of all the many and varied distinctions that philosophers have found suspicious, I find the distinction between an object language and a metalanguage to be one of the suspicious ones. That being said, put in those terms, the issue just seems to be that the coherentism of the object language collapses into the foundationalism of the metalanguage, "eventually"? – Kristian Berry Dec 12 '21 at 16:39
  • I'll answer below, but do tell about these suspicions? – J D Dec 12 '21 at 16:43
  • Maybe I'm misreading the material (right now I'm looking at "Tarski's Truth Definitions" in the SEP), but it seems the purpose of introducing these language-tiers is to have truth predicates/values on different levels, to avoid generating the liar paradox. However, I have a totally different belief about how to avoid said generation, one which doesn't require different levels of truth. On top of that, the internal content of this belief seems to rule out the formation of Gödel sentences (at least in natural language), leading to compromised (at least) incompleteness theorems. – Kristian Berry Dec 12 '21 at 16:53
  • I.e. in the theory I'm working with, the analogue of the Gödel sentence would be something like, "This sentence is not justifiable," or, "S: j(S) = 0." What then of j(S: j(S) = 0)? But so if "this sentence" is unjustifiable, it doesn't "go anywhere," does not have the traditional incompleteness consequences, it seems to me. – Kristian Berry Dec 12 '21 at 16:55
  • I also have to say that I am suspicious of the semantics-syntax difference, or at least of making "too much" out of it. Being familiar with so-called signiconic literature, for instance, it is not clear to me that syntactic glyphs are not, as such, semantic at the same time, or rather it is not clear to me that there is not so much more to the issue than is indicated by the bare distinction. – Kristian Berry Dec 12 '21 at 17:01
  • My answer is as simple as I can make it, sorry! As for using graph theory as a foundation, you're close. Category theory which is often visualized with graph theory is a perfectly legitimate foundation and alternative to set theory (see WP's 4th paragraph for assurance.) – J D Dec 13 '21 at 00:26
  • No. The coherence relations are purely *formal* requirements on admissible verbal descriptions. They go little further beyond specifying pure syntax of descriptions by adding some global requirements (such as consistency), and do not touch the substance of what is described. Foundationalism, as normally understood, advocates existence of foundational *material* premises on top of formal coherence, be it sense data or some *a priori* posits, about the substance itself, not our descriptions of it. – Conifold Dec 13 '21 at 11:20
  • What is to stop a system defining its own criterion of coherence, without reference to any external or foundational concept of it? Rival epistemological systems might not only differ in what they hold true, but even in what counts as a criterion for determining truth, and even what counts as a criterion for consistency. – Bumble Dec 13 '21 at 14:03
  • @Conifold, I considered that difference (material vs. formal premises as such) and I suppose my only rejoinder would be: but then this makes the difference between material and formal premises itself into a sort of foundation. Or, is the difference between form and matter, only formal, only material, or both (or neither)? Albeit not much can be deduced from the difference, so it would not be the most "satisfying" foundation, I suppose. – Kristian Berry Dec 13 '21 at 15:25
  • That's exactly right. At least in regards to mathematical systems, this is what J.R. Lucas says. "[W]hat Gödel's theorem shows is only that the concept of proof cannot be completely formalised... we recognize that truth out-runs provability... the fact that mathematical truth outruns provability within a formal system argues for the creativity of mathematical inference... given an inference, we can only detect the hitherto unformulated principle it exemplifies." IOW, mathematical syntax must necessarily be grounded in plain language syntax which itself is empty were it not for intuition. – J D Dec 13 '21 at 16:55
  • The fact that mathematical theories are (outside of the Flatland of mathematical academia) grounded in other epistemological theories only goes to show how the regress continues, arguably into psychological state. – J D Dec 13 '21 at 16:57
  • @Bumble Goedel's theorem shows a system's consistency, the coherence of the collection of truths provable within the system, can never be proven by the system itself, subject, of course, to the same restrictions placed on Goedel's theorem. – J D Dec 13 '21 at 18:57
  • "Foundation" is supposed to ground all available knowledge. Conventions, or even some isolated material postulates with meager consequences, are no foundation at all. – Conifold Dec 13 '21 at 19:42
  • @Conifold, I guess at the end of the day, I don't believe that coherentism is really just a peculiar example of foundationalism, after all. However, for some reason, Hamkins told me that there is some sort of bisimulation between well-founded and ill-founded set theories, so IDK. At least, I suspect that foundationalism and coherentism can either be integrated as in Haack's theory, or taken for something like non-overlapping magisteria, so to say. I haven't settled my opinions about these questions yet... – Kristian Berry Dec 14 '21 at 04:19
  • Well, ZFC and Aczel's AST are biinterpretable, as are classical and intuitionistic FOL. Does it tell us anything more than that FOL is incapable of encoding semantics? Turing machines and neural networks can simulate each other, and, more informally, materialists and idealists can "simulate" each other's conceptions in their ontologies too. Sufficiently rich frameworks, mathematical or philosophical, can "simulate" anything under the Sun, that does not tell us anything about what distinguishes them from each other. – Conifold Dec 14 '21 at 04:49
  • There are two theories of coherentism according to the SEP: a coherentism about justification and coherentism about truth. The paradigmatic example of both is Hilbert's notion of formal truth. Here, a formal system is said to be true when its axioms are consistent and hence - and this is a philosophical jump - coherently justifiable - together with another philosophical jump - coherently true. >We need a definition sentence for "coherence" ... We don't and can't require definitions for everything. At bottom certain things are left undefined but that does not mean not understood. These are the – Mozibur Ullah Dec 17 '21 at 23:39

1 Answer


Caveat

I'm not a logician, so this will represent my best effort. Criticism of the claims is encouraged.

Short Answer

Does coherentism collapse into a form of foundationalism where the fundamental premises are about coherence relations?

Yes. A model in mathematical logic is the use of one formal system to ground the truths of a second formal system by translating the truths of the second into the first, in a manner similar to the use-mention distinction in natural language. The inner system is the object language of the outer system, the meta-language, where "language" is taken in a formal sense: a syntactic construction from a formal grammar that ensures well-formedness. The relationship between the object language and the meta-language is that the grammar of the meta-language has to be more expressive than the object grammar. This is the nature of the grounding of truth: the object formal system is used to prove truths deductively, whereas the meta formal system is used to prove, deductively, the consistency of the object system's deductions. Reread that, because it's confusing just to write.

So, in the prime example, naive set theory's basic entities, relations, and operations can be used to prove theorems. What it cannot do is prove theorems consistently, since the system produces contradictions (Russell's paradox). The alternative approach is to provide axioms that exclude the problematic constructions, such as sets containing themselves; ZFC is the historically inspired standard form. This works because set theory is one language and the logic of the axioms is in a second language; set theory and arithmetic are said to be grounded in logic. Thus set theory produces consistent set-theoretic truths (philosophical coherence) when it is translated into the foundational truths of FOPC (philosophical foundationalism).
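
The contradiction that naive comprehension licenses can be sketched computationally. The following toy (my own illustration, not the author's construction; all names are hypothetical) encodes the Grelling/Russell pattern of a predicate applied to itself: asking whether `het` holds of `het` forces `het(het) == not het(het)`, and the evaluation never bottoms out.

```python
# A toy illustration of why unrestricted self-application is dangerous:
# het(p) holds exactly when the predicate p does not hold of itself.

def het(pred):
    # The Grelling/Russell pattern: het is true of a predicate
    # iff that predicate is not true of itself.
    return not pred(pred)

# het(het) demands het(het) == not het(het); in a programming language
# the contradiction shows up as an evaluation that never terminates.
try:
    het(het)
    outcome = "terminated"
except RecursionError:
    outcome = "contradiction manifests as infinite regress"

print(outcome)
```

The axiomatic fix in set theory plays the role of a type discipline here: it simply forbids forming the self-applied question in the first place.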

Long Answer

Formal Systems and Languages

Generally, logicians take a formal system as little more than a collection of sentences from which logic outputs further sentences, a project started by Frege. But the notion of a formal system is itself computable, and this might shed some insight, since you talked about signs. Signs, in the intuitive sense, are best represented for computational purposes by strings of characters, grounding the notion of a sign in the computer-science notion of a string. We can consider this one possible formalism for representing a formal system. (It is possible to formalize the notions of alphabets, formal languages, and automata with far more sophistication than the summary that follows.)

Let's start with the formal notion of a formal system. A formal system can be thought of as a collection of grammar-determined strings (sentences) constructed syntactically from a formal language, which concatenates strings of characters from an alphabet. In computer science, one popular way to express context-free grammars (you have to examine the Chomsky hierarchy to have a better idea of what that means) is Backus-Naur form. Backus-Naur form gives a basic example of how well-formedness can be determined computationally. Once a formal language has logical connectives incorporated into its grammar, it can use something like modus ponens iteratively, reducing strings, or rather sentences, to a final sentence. Thus from antecedents to consequents we go.
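
The two mechanical ideas in that paragraph can be sketched in a few lines. This is a toy of my own (all names illustrative, the grammar is a one-connective fragment, not any standard library): first, well-formedness decided by a BNF-style grammar; second, modus ponens iterated to a fixed point.

```python
# Toy sketch: (1) well-formedness decided mechanically by a grammar,
# (2) modus ponens applied iteratively until no new consequences appear.

# BNF-flavoured grammar:  <wff> ::= <atom> | "(" <wff> "->" <wff> ")"
ATOMS = {"p", "q", "r"}

def is_wff(s):
    """Decide whether a string is a well-formed formula of the toy grammar."""
    s = s.replace(" ", "")
    if s in ATOMS:
        return True
    if s.startswith("(") and s.endswith(")"):
        body, depth = s[1:-1], 0
        for i in range(len(body) - 1):
            if body[i] == "(":
                depth += 1
            elif body[i] == ")":
                depth -= 1
            elif depth == 0 and body[i:i + 2] == "->":
                # split at the top-level arrow and recurse on both halves
                return is_wff(body[:i]) and is_wff(body[i + 2:])
    return False

# Implications are encoded as pairs (antecedent, consequent).
def modus_ponens_closure(premises):
    """From A and (A, B), derive B; repeat until a fixed point is reached."""
    derived, changed = set(premises), True
    while changed:
        changed = False
        for s in list(derived):
            if isinstance(s, tuple) and s[0] in derived and s[1] not in derived:
                derived.add(s[1])
                changed = True
    return derived

print(is_wff("((p->q)->r)"))   # True: grammar accepts it
print(modus_ponens_closure({"p", ("p", "q"), ("q", "r")}))  # derives q, then r
```

The point of the sketch is only that both checks are purely syntactic: neither function knows what "p" means.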

Currently, mathematical logic ensures the rigor of a formal system by relying on a meta-language whose expressivity is greater than that of the object language; the coherence of the object language is thereby established by the axiomatic foundations of the meta-language. The object language is generally characterized as syntactic, uses the syntactic turnstile1, is abstracted, and deals in provability rather than satisfiability, whereas the meta-language is semantic, uses the semantic turnstile, is more specific, and deals in the consistency and decidability of the object language. An object language, therefore, is a deductive tool for examining claims extending from one axiomatic base, built primarily to demonstrate the satisfiability of sentences, which is, philosophically speaking, an instance of truth derived from propositions of the system. The meta-language, by contrast, looks to secure claims about the claims of the object language, i.e., that they are consistent (mathematical coherence), with an eye not only on the validity of the object-level deduction (provability) but on the validity of the entire system over a range of variables in the domain of discourse, showing that the system isn't inconsistent at proving truths (consistency). The bridge between the two languages comes from the Tarskian theory of truth, which uses the T-sentence to show that there is a translation of truth from the object language into the meta-language; this is where the notion of deflationary truth derives from.
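
The semantic side of that contrast can be made concrete for propositional logic. In the sketch below (my own illustration; the function names are hypothetical), "premises ⊨ conclusion" is checked by quantifying over every valuation of the atoms, which is exactly what makes it a meta-level, semantic notion rather than a derivation inside the object language.

```python
# A minimal sketch of the semantic turnstile for propositional logic:
# premises |= conclusion iff every valuation making all premises true
# also makes the conclusion true.
from itertools import product

def entails(premises, conclusion, atoms):
    """Formulas are Python functions of a valuation dict atom -> bool."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # a counter-model: premises true, conclusion false
    return True

# p, p -> q  |=  q   (modus ponens, validated semantically this time)
p = lambda v: v["p"]
p_implies_q = lambda v: (not v["p"]) or v["q"]
q = lambda v: v["q"]

print(entails([p, p_implies_q], q, ["p", "q"]))  # True
```

Soundness and completeness theorems are then precisely the claims that this brute-force semantic check and the syntactic derivation game pick out the same consequences.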

Now, between two languages there are necessarily two distinct grammars, and the important thing to remember is that the meta-language grammar has to be more expressive than the object-language grammar. In the language of formal languages, this simply means that the well-formed strings of the object language must be a subset of the well-formed strings of the meta-language. Remember that a T-sentence uses string delimiters (apostrophes, quotation marks, etc., sometimes called escape sequences or quotifiers); the T-sentence (Tarski's method of grounding truth from one language in another) is an instance of the use-mention distinction, and the delimiters are used to contain a sentence of the object language inside a sentence of the meta-language. Tarski's example, from Logic, Semantics, Metamathematics, p. 156:

(3) 'it is snowing' is a true sentence if and only if it is snowing.

You can see that 'it is snowing' is a proposition being evaluated for veracity using the biconditional logical connective, which needn't be part of the conversation, that is, the language used, when discussing the state of the weather. The challenge of parsing such a sentence is eased by quotification, which is obviously not part of spoken language. (In linguistics, the phenomenon is called center embedding, and without delimiters it can lead to confusion.)
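
The use-mention distinction the T-sentence trades on is exactly the string/value distinction in programming. A hypothetical sketch (the `world` dict and `true_in` predicate are my own illustrative names): the mentioned sentence is a string of the object language, while using it means evaluating it against the facts.

```python
# Use vs. mention, programming-style: the mentioned sentence is a quoted
# string; the meta-level truth predicate evaluates it against the world.

world = {"it is snowing": True}

def true_in(world, sentence):
    """Meta-level truth predicate: True('s') iff s."""
    return world.get(sentence, False)

mentioned = "it is snowing"   # mentioned: a string, an object-language item
used = world[mentioned]       # used: an actual truth value about the weather

# The T-sentence pattern: "'it is snowing' is true iff it is snowing."
print(true_in(world, mentioned) == used)  # True -- the biconditional holds
```

The quotation marks here do the same work as Tarski's delimiters: they let a sentence of the object language appear as an inert object inside a sentence of the meta-language.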

Now the advantage of using model theory is obvious. It allows paradoxes of one set of axioms to be resolved by adding further axioms instead of modifying the original axioms of the formal system, and at the same time it allows one to speak to the range of results of the formal system while fully exploring the notions of recursion, decidability, computability, and so on. The origin of this increased complexity was a response to Russell's paradox (the set-theoretic cousin of the liar paradox) and the attempt to ground set theory in the logic of its axioms, resulting in ZF and later, by extension, ZFC. From there, other set theories like NBG flourished.

So, it doesn't matter whether you pull an example from set theory or graph theory, or even geometry. When you have one language, for example FOPC, and you begin to examine whether or not the conclusions arrived at in that language are consistent, you need to introduce new ideas to prove the consistency, ideas that necessarily lie outside of FOPC. And the moment you start formalizing this process, you wind up tapping into ideas like meta-mathematics, meta-logic, and meta-languages, because of the recursive nature of using the propositions of the first language inside a more expressive second language used to evaluate it. So, kudos to you for recognizing that epistemological coherentism "collapses" into a form of foundationalism where the fundamental premises are about coherence relations. That's the very essence of using models to evaluate the semantics of a system.

1 The single/double turnstile convention is the current norm in mathematical logic, but the same ideas might be conveyed in natural language, with single/double arrows, or, according to WP, with a single-single turnstile convention.

  • I wish I could confirm this answer twice. As an exposition of the concept/role of metalanguages, it is also a solid defense of the same concept. – Kristian Berry Dec 13 '21 at 09:02
  • I expect that as I reflect on this, I will have some better understanding of Hamkins' response to my MathOverflow post about "the justifiable universe." He said something about bisimulation facts undermining the apparent point of *V = J*, but I was at a loss to respond to that counterproposal... – Kristian Berry Dec 13 '21 at 09:15
  • Regarding your above "The object language is generally characterized as syntactic and uses the syntactic turnstile...whereas the metalanguage is semantic and uses the semantic turnstile...", generally the syntactic turnstile is also at the meta level not object language level. See [reference](https://en.wikipedia.org/wiki/Turnstile_(symbol)): *In metalogic, the study of formal languages; the turnstile represents syntactic consequence (or "derivability").* – Double Knot Dec 13 '21 at 22:45
  • @DoubleKnot I read the entry and text. The metalogic article claimed that mathematical logical and the model-theoretic has largely subsumed metalogic which would suggest that the single tee isn't used any more. I checked Tarski, and he uses a cup, and I have three other works, Chang's text on Model Theory (double turnstile), Boolos et Al on Computability (English, double turnstile). And Ono's text on Proof Theory and Sequent Calculus (double arrow, double turnstile) but there was some use of a subscript to show provability within a system. I could see a system of notation that uses a script... – J D Dec 15 '21 at 09:03
  • of course, there's no reason you couldn't just determine from context, but that would be quite the cognitive burden. Anyway, thanks for sharing, but I don't know that a digression into variations of notations to express syntactic and semantic have much value. I will put a footnote in, however. Thx! – J D Dec 15 '21 at 09:04
  • Done! (Uh, oh. Now I know how to do foot notes.) – J D Dec 15 '21 at 09:10