6

Chomsky's notion of a universal grammar is his way of accounting for the observations that human languages appear to share a deep grammar, and that children appear to learn language as though they are primed for it.

It was Kant who isolated the question of whether synthetic a priori knowledge is possible, and he suggested that this question hadn't been asked before; he answered that such knowledge is possible, and placed under this rubric mathematics and our understanding of space & time.

Now, does language fall under this rubric? Certainly it appears that each specific & particular language cannot, as languages are contingent on how language has developed in the social world a human being is born into. But can one say the same about Chomsky's notion of universal grammar, which he claimed is innate?

Mozibur Ullah
  • 1
  • 14
  • 88
  • 234
  • 1
    Synthetic a priori is (according to Kant) a "type" of knowledge. But is language a type of knowledge? According to Chomsky, there is an innate capability of "generating" linguistic "structures". This capability can be connected to Kantian space and time as "forms of intuition" (if I remember well), which we can say are "innate". But math, as synthetic a priori, is the product of those forms... – Mauro ALLEGRANZA May 10 '14 at 14:58
  • I have come to believe @MauroALLEGRANZA that language is how we articulate knowledge, and that grammar (whether Kantian or Euclidean) is how we build it up. Technically, they contain classes or axioms, which provide ground rules... – sourcepov Dec 20 '15 at 01:56

4 Answers

6

Innate does not imply a priori. Suppose we humans are so configured as to be innately afraid of heights (or, as ducklings do, to attach strongly to the first creature we see). And suppose this manifests to us as beliefs, so that all humans innately believe that heights are dangerous and mothers are wonderful. It should be clear that neither of these beliefs counts as a priori simply because of its innateness, despite needing no personal experience to come to believe it.

Universal grammar, if it exists, would be an innate feature of humans. But it wouldn't be an innate feature of all possible rational creatures (whereas logic presumably would be). Sufficiently advanced space aliens wouldn't necessarily share that "universal grammar". Therefore, it isn't a priori.

Jonathon Jones
  • 221
  • 2
  • 5
  • I take your point; somehow the intuition of space & time seems more basic; but I think it might be a bit more subtle than that. What right do we have to suppose that sufficiently advanced aliens see the world Euclideanly - to coin an adverb? To make this thought a little more concrete, suppose that we simulate the world by a computer using Euclidean geometry, then represent that graphically (or pictorially) by some 1-1 mapping. Then the simulation is still Euclidean but the representation is not. Might one say these advanced aliens could have a universal grammar that is different from ours? – Mozibur Ullah May 10 '14 at 18:29
  • I suppose that the difference is that the spatial & temporal sense represents the objective world, in some sense; but language is not objective. – Mozibur Ullah May 10 '14 at 18:31
  • I'm not satisfied by this answer. The question isn't whether universal grammar is "innate" (as you say) -- it's whether universal grammar can be formally derived through logic. If, as you say, logic is a feature of all possible rational creatures, and if a universal grammar can be derived from logic alone, then wouldn't it also be a priori knowledge? So innateness is really a red herring here... – senderle May 12 '14 at 22:51
4

There is a difference between "ability" and "synthetic knowledge." For example, there is no recorded instance of a human being running 100m in less than 9 seconds. That doesn't imply that "all human beings have a synthetic a priori knowledge of sports biomechanics."

Similarly, there is an empirical discovery that human short-term memory is capable of holding seven (plus or minus two) "chunks" of information. The knowledge that the limit exists, and that the limit is between five and nine chunks, is synthetic knowledge. The limit itself is just a limitation of human brains (an ability to remember more than four chunks and an inability to remember more than nine).

Chomsky's universal grammar is an empirical statement about the biophysical limits of the machinery that human beings seem to use to process language. Most humans seem to have the ability to differentiate between grammatical and ungrammatical sentences in their native language, with a complexity that is approximately that of the mathematical class of context-free languages. This tells us something about the complexity of the machinery in the human brain that is required to process language. It needs more state storage than a finite automaton, approximately the storage ability of a push-down automaton, and probably less state storage than a Turing machine. (That's just determination of grammaticality, not meaning/understanding.)
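As a rough illustration of that storage hierarchy (a sketch of mine, not something from the original argument), consider the language a^n b^n, the formal skeleton of centre-embedded sentences such as "the rat the cat the dog chased bit died". Recognizing it requires an unbounded counter -- the degenerate case of a pushdown stack -- which no finite automaton has, yet nothing close to the power of a Turing machine:

```python
def accepts_anbn(s: str) -> bool:
    """Recognize a^n b^n (n >= 1) with a single unbounded counter -- the
    degenerate case of a pushdown automaton's stack. A finite automaton
    cannot do this: it would need a distinct state for every possible
    number of unmatched a's."""
    depth = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:            # an 'a' after a 'b' breaks the pattern
                return False
            depth += 1            # push
        elif ch == "b":
            seen_b = True
            depth -= 1            # pop
            if depth < 0:         # more b's than a's
                return False
        else:
            return False
    return seen_b and depth == 0  # every 'a' matched by a 'b'

print(accepts_anbn("aaabbb"))  # True
print(accepts_anbn("aabbb"))   # False
```

The natural-language analogue is that each embedded subject must be matched to its verb in last-in, first-out order, which is exactly what a stack provides.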

Another commonality among the grammars of human languages is that they all seem to have the same kinds of classes: "nouns", "verbs", "adjectives", and they usually have conjugation of verbs and/or declension of nouns and adjectives based on grammatical categories. The grammars and the contents of the classes differ in every language, but there is significant structure that all languages share. This, again, tells us something about the complexity and storage capacity of the human machinery that processes language. The fact that there are commonalities in the structure of English, Mandarin, Urdu and Arabic does not mean that we are all born with synthetic a priori knowledge of what that structure is.

Wandering Logic
  • 301
  • 1
  • 8
3

I think this is a challenging question, but one that can be thought through in a detailed way. The conclusion I'm going to defend is that a universal grammar that looks anything like what Chomskyans expect will be analytic a priori knowledge -- assuming those terms are indeed well-defined. I'll do my best to select fairly robust definitions of those terms, but keep in mind that anyone who rejects the existence of a priori knowledge, or who rejects the analytic-synthetic distinction, will reject my conclusion as meaningless or ill-formed.

I'll also discuss the lingering possibility that knowledge of a universal grammar might indeed be synthetic a priori knowledge, and what one would have to demonstrate to persuade me of that claim.

Space does not permit a full development of the argument I want to make, so take what I'm offering here only as a rough sketch -- in two parts. I'll begin by talking about aprioricity; then I'll talk about analyticity.

Knowledge of Universal Grammar is A Priori

First, I want to defend the position that if anything is a priori knowledge, then any well-formed universal grammar is a priori knowledge -- regardless of whether it is "innate." The argument is very simple and goes like this:

  1. If a priori knowledge exists at all, then any knowledge that we can mathematically formalize is a priori knowledge.
  2. A "well-formed universal grammar" is a syntactic structure that we can mathematically formalize.

The desired conclusion immediately follows. Recall that "a priori" knowledge is not necessarily innate knowledge -- it's simply knowledge that can be verified as true without having to turn to experience. (That might always count as "innate" depending on what your definition of "innate" is; but let's not get into that!)

Now, premise one seems indispensable if we're using these terms in ways that are even close to standard. Premise two is defensible because the whole point of Chomskyan grammars is that they can be formalized; for example, transformational grammars can be formalized as tree automata. So if the Chomskyan program is on the right track, then the particular universal grammar inside the heads of all humans is mathematically formalizable, and is therefore a priori knowledge.
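To give a flavour of what "formalized as tree automata" means, here is a toy deterministic bottom-up tree automaton (my own sketch; the state names and the tiny ranked alphabet are illustrative assumptions, not anything from the Chomskyan literature) that accepts exactly the parse-tree shapes generated by the familiar S → NP VP pattern:

```python
# Bottom-up transitions: (node label, states of its children) -> new state.
TRANSITIONS = {
    ("D", ()): "qD",              # determiner leaf
    ("N", ()): "qN",              # noun leaf
    ("V", ()): "qV",              # verb leaf
    ("NP", ("qD", "qN")): "qNP",  # NP = determiner + noun
    ("VP", ("qV", "qNP")): "qVP", # VP = verb + NP
    ("S", ("qNP", "qVP")): "qS",  # a complete sentence
}
ACCEPTING = {"qS"}

def run(tree):
    """Evaluate a tree (label, *children) bottom-up; None means 'stuck'."""
    label, *children = tree
    states = tuple(run(child) for child in children)
    return TRANSITIONS.get((label, states))

# S(NP(D, N), VP(V, NP(D, N))) is accepted; a VP missing its object is not.
good = ("S", ("NP", ("D",), ("N",)), ("VP", ("V",), ("NP", ("D",), ("N",))))
bad = ("S", ("NP", ("D",), ("N",)), ("VP", ("V",)))
print(run(good) in ACCEPTING)  # True
print(run(bad) in ACCEPTING)   # False
```

The point is only that such a device is completely specified by a finite table, which is what makes it the kind of thing we can verify facts about without turning to experience.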

Now, what if this grammar isn't really universal? What if different people have different grammars in their heads? I don't think that would change anything. If we have multiple differing grammars inside our heads, they all should still count as a priori knowledge if they are mathematically formalizable. But if there are no mathematically formalizable grammars in our heads, then the Chomskyan program is on the wrong track, and the question stops being coherent. (We would still have a priori knowledge of things like context-free grammars, transformational grammars, pushdown automata, and tree automata! They just wouldn't have any particular relation to the grammars of natural human language.)

Knowledge of Universal Grammar is Analytic

The difficult part of this question is whether our knowledge of a well-formed universal grammar would be synthetic or analytic. Here again, we have to accept that the distinction exists; otherwise the question is incoherent. But what might the distinction mean in this case? In particular, we need a precise understanding of the term "analytic." Then we need to understand what it takes for a priori knowledge to be synthetic. This last problem is very difficult, and I think the best approach is to look at what might make mathematical knowledge synthetic rather than analytic from a post-Fregean point of view.

So I'll begin by turning to Frege's account of analyticity, which is usefully summarized by the SEP. In short, Frege tries to clarify the notion of "containment" that Kant uses to define analyticity. According to Kant, an analytic statement is one that states a fact already contained in the definitions of the terms it uses. So the statement "all bachelors are unmarried" is analytic, but the statement "all bachelors are sad" is synthetic. Frege attempted to refine this definition by linking it to the idea of formal or logical equivalence. If, by a process of purely formal substitution, one can derive a statement from a set of given prior terms, then that statement is analytic.

Now, Frege's hope was that he could show that all arithmetical knowledge was analytic. But there's a convincing argument that he failed. This argument has to do with the problem of the actual existence of mathematical entities. Frege's system explicitly commits itself to the existence of mathematical entities, but the justification for that commitment must be synthetic!

Why should we believe that? Because for any given formalization of arithmetic, there exist Diophantine equations that do not have solutions, but that cannot be proven unsolvable within that formalization. Since Diophantine equations are really quite elementary components of mathematics, we would like a commitment to the existence of mathematical entities to include a commitment to the existence of Diophantine equations. And if we are committed to the existence of those equations, then we would like there to be a fact of the matter whether or not any given Diophantine equation is solvable. But if we depend only on analytic knowledge of mathematics -- if we rely only on formalization -- then we have to accept that in some cases, there is not a fact of the matter whether a particular Diophantine equation is solvable. The conclusion that there is a fact of the matter is an inescapably synthetic judgment -- it posits the existence of Something outside of the formal system of definitions and substitutions that describes it. But because that Something is strictly mathematical in nature, it seems unreasonable to describe our knowledge of it as a posteriori -- unless you reject the idea of a priori knowledge altogether.
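To spell out the step this argument leans on (my gloss, combining the MRDP theorem with Gödel's first incompleteness theorem): for any consistent, recursively axiomatized theory T that extends elementary arithmetic,

```latex
\exists\, p \in \mathbb{Z}[x_1,\dots,x_n] \ \text{ such that }\
  \neg\exists\, \vec{x} \in \mathbb{N}^n \,\big(p(\vec{x}) = 0\big)
  \quad\text{yet}\quad
  T \nvdash \neg\exists\, \vec{x}\,\big(p(\vec{x}) = 0\big).
```

That is, some Diophantine equation is in fact unsolvable, but T cannot prove it so; the "fact of the matter" outruns any particular formalization.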

If you don't want to confront this problem, then you don't have to commit yourself to the existence of mathematical entities, but you then give up some kinds of certainty. If you don't want to make that sacrifice, then you have good reason to accept the claim that at least some mathematical knowledge is synthetic a priori knowledge.

So to sum up, it seems we need to say yes to at least three questions to make a convincing claim that some knowledge of X is synthetic a priori knowledge.

  1. Is there a truth about X that the formal definition of X doesn't already "contain"?
  2. Do we feel a strong motivation to accept that truth rather than remaining agnostic?
  3. Is that truth indeed a priori?

Applying these three questions to a hypothetical Chomskyan universal grammar, I think the answer is probably no in all three cases. Now this is where my argument breaks down a bit, because of course there is no established universal grammar yet. It may turn out that linguists discover the actual universal grammar, and find that 1, 2, and 3 are all true of it. But I see no particular reason to accept that conclusion yet!

Furthermore, there has been at least some speculation that universal grammar is itself the very paradigm of analyticity. On this account, it is precisely the structure of the universal grammar that gives us our understanding of analytic truth. In that case, it would seem strange for our knowledge of universal grammar to itself be synthetic. On the other hand, there doesn't seem to be a strong reason to assume that it is not. Perhaps the best route is to remain agnostic on the matter. But if I had to place a bet, I'd bet that our knowledge of universal grammar, such as it is, is analytic.

senderle
  • 528
  • 3
  • 8
  • 1
    Excellent answer. Could your first possibility be put in reverse - that is, if mathematics is like a grammar or language, then assuming that a universal grammar is synthetic a priori, so is mathematics? The historical instance that springs to mind here is that when Euclid was axiomatising geometry in Greece, Panini was axiomatising Sanskrit grammar in India. – Mozibur Ullah May 13 '14 at 23:26
  • Yes, there's some complexity in my reasoning that I didn't fully express above. If universal grammar (UG) is mathematical at heart -- is maybe even the _source_ of mathematical knowledge -- then why wouldn't knowledge of it be just as synthetic as mathematical knowledge? What I didn't get to is that I think we should accept that not _all_ mathematical knowledge is really synthetic. Lots of mathematical knowledge can be expressed analytically. My feeling is that UG is necessarily linked to the analytic part of our mathematical knowledge, but not necessarily linked to the synthetic part. – senderle May 14 '14 at 12:29
  • @MoziburUllah, see above. Also, the point about Panini axiomatizing Sanskrit is interesting, and I think there's something to it. But note Tarski's proof that [geometry is complete](http://math.stackexchange.com/a/90396/13723). So geometry doesn't seem to require us to make anything other than analytic claims. (To be perfectly explicit, I think that analyticity is closely related to completeness -- not the same thing, of course, but there's some kind of deep connection there.) The upshot of all this is that I don't know whether you can define the natural numbers using UG! – senderle May 14 '14 at 12:48
  • I don't want to get into a priority argument as to where & who invented axiomatisation; the two types of axiomatisation are different but there is a kind of family resemblance between them; programming languages are, for example, axiomatised using [BNF](http://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form), which the article explains is particular to context-free languages, and Panini invented something of equivalent power. Now the interesting point here - to drag in some category theory - is that toposes, which are a generalisation of set theories, come equipped – Mozibur Ullah May 15 '14 at 13:43
  • with an [internal logic](http://ncatlab.org/nlab/show/internal+logic) - a language, if you will - that is equivalent to higher-order intuitionistic logic, and thus, following Hilbert's formalist programme, we can develop mathematics within a topos; and thus can define a [Natural Numbers Object](http://ncatlab.org/nlab/show/natural+numbers+object) - i.e. the Peano Axioms for the natural numbers in categorical form. In plain language, given a grammar for a language, say UG, we can define the Peano Axioms. But obviously interpretation here is an issue. – Mozibur Ullah May 15 '14 at 13:48
  • @Mozibur Ullah, no worries about priority; I don't especially care about that TBH! I don't grok category theory well enough to have an intuition about what you're saying -- I've looked at it before and come away mostly empty-handed. What you're saying now makes me want to give it another shot -- _really_ interesting, and the motivation of CT is suddenly clearer to me. – senderle May 16 '14 at 00:04
  • @MoziburUllah, however -- there's something about your line of reasoning that troubles me. What you're saying suggests that a grammar formalizable by a pushdown automaton can be used to state the Peano Axioms. But once you have the Peano Axioms, don't you have the formal equivalent of a Turing machine? (Isn't that why Goedel numbering works?) And so that suggests that you could use a pushdown automaton to emulate a Turing machine. But that sounds wrong to me... – senderle May 16 '14 at 00:06
  • OK, sometimes it's difficult to judge, especially on online forums; I can't say I've come across a formal parallel between the Peano Axioms & Turing Machines - do you have a ref for that? I think the reasoning falls down at the first step: 'a grammar formalisable by pushdown automaton'. First, a topos has what is called an internal language, which is in fact higher-order intuitionistic logic. The question, if I want to formalise my argument - which isn't what I was looking to do; I was indicating a possible avenue to think about this - is whether a pushdown automaton can be simulated by this logic. – Mozibur Ullah May 16 '14 at 14:57
  • One piece of evidence for this is that, by the [Church-Turing Thesis](http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis), the notions of computability given by Turing Machines, the lambda calculus & primitive recursive functions are all equivalent. Now, the internal language of a [CCC (Cartesian Closed Category)](http://en.wikipedia.org/wiki/Cartesian_closed_category) is that of the simply typed lambda calculus, and this equivalence is the subject of the [Howard-Lambek-Curry correspondence](http://en.wikipedia.org/wiki/Curry-Howard_correspondence#Curry.E2.80.93Howard.E2.80.93Lambek_correspondence). – Mozibur Ullah May 16 '14 at 15:04
  • Finally, a topos is a CCC with a sub-object classifier. Putting the three together indicates that your suggestion is possible; and more, since by the properties of the Turing Machine one can then model a Pushdown Automaton. Why do you say this 'sounds wrong'? Looking back on what I've written, I should have said that I couldn't find direct evidence of the simulation, but that there is direct evidence of simulating a Turing Machine. – Mozibur Ullah May 16 '14 at 15:12
  • I should add that in no way do I think the mind is identical with a grammar, Turing machine or pushdown automaton. – Mozibur Ullah May 17 '14 at 05:42
  • @MoziburUllah, Looking over what you've said (I've been out of town!) I think your argument is valid; but you appear to be reasoning from a stronger assumption (we have the equivalent of Turing Machines in our heads) to a weaker (we have the equivalent of PDA in our heads). But what I'm saying is that to make the argument that the UG would be synthetic a priori, you'd have to argue in the other direction. – senderle May 20 '14 at 12:21
  • No proposal that I've seen suggests a UG that formalizes a non-recursive but [recursively enumerable language](https://en.wikipedia.org/wiki/Recursively_enumerable_language). They all appear to suggest "weaker" kinds of languages as the basis for UG. (But note there could be a proposal that I'm unaware of!) But those weaker languages can all be _completely_ formalized, which I take (for this argument) to mean that they are analytic. – senderle May 20 '14 at 12:26
  • @MoziburUllah, also, you asked for a reference regarding formal parallels between Peano Arithmetic and Turing Machines. My reasoning goes like this: you can use Goedel numbering to create formulas in Peano Arithmetic that are equivalent to sentences in a formal language. So then you could recreate (say) the lambda calculus, which has the computational power of a Turing Machine. I don't have a direct reference, but the sketch of [this proof](https://en.wikipedia.org/wiki/Lambda_calculus#Undecidability_of_equivalence) strongly suggests to me that this line of reasoning is sound. – senderle May 20 '14 at 13:34
  • Yes, you're right:)! Where are you placing [UG](http://en.wikipedia.org/wiki/Universal_grammar) on the [Chomsky Hierarchy](https://en.wikipedia.org/wiki/Chomsky_hierarchy)? It's only context-free languages that are equivalent to PDAs. The type-0, unrestricted or recursively enumerable languages are equivalent to Turing Machines; and type-1, or context-sensitive grammars, which according to this [article](https://en.wikipedia.org/wiki/Context-sensitive_grammar) were introduced by Chomsky to treat natural languages, and notably this class of grammar is equivalent to a – Mozibur Ullah May 20 '14 at 15:40
  • [linear bounded Turing Machine](https://en.wikipedia.org/wiki/Linear_bounded_automaton), which as the article notes, in one sense models a computer more effectively, as one cannot say a *real* computer has an infinite 'tape'. – Mozibur Ullah May 20 '14 at 15:41
  • I'm not quite convinced about the equivalence of Turing Machines & the Peano Axioms - but I can't quite pin down what points are troubling me. I'll have to ponder it. – Mozibur Ullah May 20 '14 at 16:00
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/14626/discussion-between-senderle-and-mozibur-ullah). – senderle May 20 '14 at 16:47
  • @senderle I realize this thread is 18 months old, and you are engaging with Mozibur at a level I can't follow, but if you are interested and able to take it up a level, I am imagining questions that relate Chomsky's grammar to Kant's class structure (c.f.) to explore similarities. I have heard math described as both a language and a grammar, depending on context. Relating Kant's analytic and synthetic judgements .. and the role of grammar .. to me, provides fascinating linkages. Curious if you still have energy on this. If so, perhaps a new question or two could be framed .. – sourcepov Dec 20 '15 at 02:04
  • @MoziburUllah same question for you. Want to see if you are interested in helping to reframe this topic re: epistemic factors (i.e., up a level !) .. to see if Kant and Chomsky are working on grammars in parallel .. ? – sourcepov Dec 20 '15 at 02:06
  • @senderle providing a link forward to a more recent conversation, which led me to this one .. aka "more digging" !! .. http://philosophy.stackexchange.com/questions/16045/are-we-born-with-kantian-categories .. – sourcepov Dec 20 '15 at 02:13
  • @sourcepov I'm happy to think through some of this further, but I'm a bit busy right now, alas. If you go to the "continue this discussion in chat" link, you'll see that my email appears towards the bottom. Feel free to email me and I'll let you know when I have time to return to this line of thinking. – senderle Dec 20 '15 at 19:01
0

UG (not language as a whole) would be synthetic a priori, since it is informative (in Kant's sense of synthesis) and it does not depend on experience. "Not depending on experience" should be taken with a grain of salt. It obviously doesn't mean independent of every experience of any human who has ever been, and it obviously doesn't mean analytic, since that is not what Kant meant by apriority. UG is supposed to be innate in the sense that every human should develop language if exposed to language; it does not develop on its own, and depriving a child of exposure to language will compromise his ability to make use of language, and cause other problems besides. The point is that UG is a priori in the sense that it is an experience-independent potentiality common to all humans.

A finite grammar which produces infinitely many sentences is an inherently syntactic notion. Chomsky, in Syntactic Structures, makes it clear that he intends to leave semantics as it stood in structuralist semantics; by that I mean that his proposal does not involve concepts or meaning in the sense of the meaning of words. UG involves syntax solely.

The guiding question of the whole project is: "Given a set of grammatical English sentences, we may now ask ourselves which type of mechanism can produce this set" (CHOMSKY, Syntactic Structures, ch. 3, §1). The production of a sentence is given through a series of syntactic rules by which a grammatical English sentence can be derived.

Language is socially constructed in terms of the meanings of particular words and some syntactic possibilities. A subsequent model of linguistics posits an internal structure of principles and parameters, where principles are invariant across human languages and parameters are local aspects of a given language (for example, the syntactic ordering SVO or SOV). So what is innate is an abstract, structured syntax that sets out possibilities which can be instantiated with particular words to form sentences. Consider the following:

[Image: a syntactic tree, with a sentence S branching into a nominal phrase (determinant + name) and a verb phrase (verb + nominal phrase)]

The structure illustrated by the tree is what exists as an innate possibility within universal grammar; the particular words are learned. In the early days of generative linguistics, the derivation of sentences was analogous to a derivation in an axiomatic system. The rule of inference that takes one line to the next was called a rewriting rule.

The tree above can be constructed from the following rules and a set of words. In the syntax, we would have the rules (R1) "S → Nominal phrase + Verb phrase", (R2) "Verb phrase → Verb + Nominal phrase" and (R3) "Nominal phrase → Determinant + Name". Given a sentence S, we may apply R1, then R2 to the verb phrase, then R3 to the nominal phrase inside the verb phrase. That gives us the tree illustrated above.

Given the terminal nodes in the tree, like V, N or D, we may have a set of words to choose from, such as V = {hit}, N = {John, ball} and D = {the}. We may then form the sentences "John hit the ball" and "ball hit the John". The second may sound strange, but at the syntactic level the two are on a par.
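As a minimal sketch of how such a derivation can be run mechanically (the encoding is mine; I have also allowed "Nominal phrase → Name" alongside R3, so that a bare proper name like "John" is derivable without a determinant), consider:

```python
from itertools import product

# The rewriting rules R1-R3 from above, plus a bare-name option for NP.
RULES = {
    "S":  [["NP", "VP"]],       # R1: S -> Nominal phrase + Verb phrase
    "VP": [["V", "NP"]],        # R2: Verb phrase -> Verb + Nominal phrase
    "NP": [["D", "N"], ["N"]],  # R3: Nominal phrase -> Determinant + Name
                                #     (plus NP -> N, my addition)
}

# The terminal word sets given above.
LEXICON = {"V": ["hit"], "N": ["John", "ball"], "D": ["the"]}

def expand(symbol):
    """Return every sequence of words derivable from `symbol`."""
    if symbol in LEXICON:
        return [[word] for word in LEXICON[symbol]]
    sentences = []
    for expansion in RULES[symbol]:
        # Combine, in order, every way of expanding each part of the rule.
        for combo in product(*(expand(part) for part in expansion)):
            sentences.append([w for seq in combo for w in seq])
    return sentences

for words in expand("S"):
    print(" ".join(words))
# Output includes "John hit the ball" and "ball hit the John".
```

Every line this prints is grammatical by these rules, semantically odd ones included; that is precisely the sense in which "ball hit the John" is syntactically on a par with "John hit the ball".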

An observation: syntactic structures do heavily influence meaning, given how the components of a sentence compose. Worth noting is the case in which quantifiers are ambiguous relative to the scope of different syntactic structures, e.g. "Every man loves a woman".
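The two readings can be made explicit in standard first-order notation (a textbook rendering, not taken from the original answer):

```latex
% Surface scope: each man loves some woman or other.
\forall x\,\big(\mathrm{Man}(x) \rightarrow \exists y\,(\mathrm{Woman}(y) \wedge \mathrm{Loves}(x,y))\big)

% Inverse scope: one particular woman is loved by every man.
\exists y\,\big(\mathrm{Woman}(y) \wedge \forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Loves}(x,y))\big)
```

A single surface string thus corresponds to two distinct logical forms, which is the kind of fact the logical-form level mentioned below is meant to capture.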

Another observation: in linguistics, language is commonly understood to be organized in a T-scheme (not Tarski's truth conception), as in the diagram below.

[Image: the T-model of generative grammar, with deep structure feeding surface structure, which branches into phonetic form and logical form]

Deep and surface structures are syntactic, while the phonetic form has to do with the sounds the language uses, and the logical form deals heavily with quantifier ambiguity in terms of very general semantic structures (not words).

A third observation: "a priori" should be considered in Kant's context, so I disagree with the answer that rests on a distinction between innateness and apriority. Clearly apriority and analyticity are distinct for Kant, since otherwise "synthetic a priori" would be a contradiction in terms. A priori in this context should be understood as: independent of experience.