ATEG Archives

February 2009

ATEG@LISTSERV.MIAMIOH.EDU

Subject: Re: Quick note on SFL and Cog. Grammar
From: Bruce Despain <[log in to unmask]>
Reply To: Assembly for the Teaching of English Grammar <[log in to unmask]>
Date: Tue, 10 Feb 2009 07:12:37 -0700

Bill,
Thank you for fleshing out my comments with your helpful insights. I believe they really get to the bottom of the issues involved. I suppose teachers' objection is to formalization itself, let alone to the specific framework in which it is presented.

________________________________
From: Assembly for the Teaching of English Grammar [[log in to unmask]] On Behalf Of Spruiell, William C [[log in to unmask]]
Sent: Monday, February 09, 2009 5:44 PM
To: [log in to unmask]
Subject: Re: Quick note on SFL and Cog. Grammar

Bruce,

There is a sense in which cognitive approaches involve string-level phenomena. To the extent to which a cognitive linguistic theory imports concepts from psychology, it has to acknowledge “chunking” in some way. “Chunking” here can be seen as a not-specific-to-language equivalent of “constituency.” Research in the 50s, for example, showed that when subjects who didn’t play chess were presented with a chess board in mid-game, and then distracted for a moment, they could later recall where only a few of the pieces were; chess Grand Masters, on the other hand, could position every piece given the same amount of exposure. It wasn’t because the Grand Masters had better memories, really – it was because they didn’t see a whole bunch of pieces; they saw four or five patterns they were familiar with. In general, we appear to recognize that particular sequences or visual arrangements constitute “chunks,” and handle large amounts of information by reducing it via hierarchically-organized chunking schemes. The same thing happens with telephone numbers – we remember “area code + prefix + number” as basically three chunks, rather than as ten numerals, and there’s a syntax to those chunks. As someone who watches a lot of very bad science-fiction movies, I know the major chunking sequences of the genre quite well, and can use that knowledge to make predictions (“Next, the threat posed by <insert monster HERE> should cause the estranged couple to agree to temporarily lay aside their differences, clearing the way for their re-engagement during the last five minutes”).
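A minimal sketch in Python of the phone-number case, assuming a simple area-code/prefix/line grouping (the number and the grouping are invented for illustration):

# Hierarchical chunking sketch: a ten-digit number stored flat vs. as three chunks.
digits = "5551234567"  # flat representation: ten units to hold at the top level

chunked = {
    "area":   digits[0:3],   # "555"
    "prefix": digits[3:6],   # "123"
    "line":   digits[6:10],  # "4567"
}

print(len(digits))   # 10 top-level units when stored flat
print(len(chunked))  # 3 top-level units when stored as chunks

# The "syntax" of the chunks is the fixed order area + prefix + line:
assert chunked["area"] + chunked["prefix"] + chunked["line"] == digits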

You’ll notice, though, that I started with the term “cognitive approaches” rather than “Cognitive Grammar.” The capitalized version is a specific theory founded by Langacker, and I honestly don’t know exactly how it instantiates chunking (although I should, and I’m sure it does). I do know that Stratificational Grammar, another cognitive approach, has no trouble with constituents/chunking at all, and has moved toward a neural-network interpretation of it.

A relevant point of contrast here is that Generative approaches tend to approach constituency as a kind of abstract, static phenomenon (“movement” rules don’t really entail actual movement, so the effects of movement on constituency don’t involve actual changes during production, etc.). Constituents exist as a kind of set of potentials in an abstract space; the grammar determines what the potentials are. Cognitive approaches tend to treat it as an issue involving real-time processing and storage, and thus motivate constituency partly via performance. That breaks down the performance/competence distinction, and thus creates a fundamental mismatch between the two platforms. I think functionalists in general don’t mind claiming that performance (including comprehension in a social context, rather than just production) partly creates competence as an epiphenomenon, but Generativists take the opposite view. It’s Aristotle vs. Plato all over again (but then, that’s the history of social science for the past two millennia).

Bill Spruiell
Dept. of English
Central Michigan University




From: Assembly for the Teaching of English Grammar [mailto:[log in to unmask]] On Behalf Of Bruce Despain
Sent: Monday, February 09, 2009 3:09 PM
To: [log in to unmask]
Subject: Re: Quick note on SFL and Cog. Grammar

As long as we’re commenting on linguistic approaches, allow me to add a few words about “generative grammar.”

I believe that generative grammar was originally concerned (as many linguists still are) with the structures (in the brain, like the CG linguists) that enable language. For example, Chomsky, in his 1965 Aspects of the Theory of Syntax, pointed out five primitive structures, as it were, that characterize human language syntax: nesting, right-branching, left-branching, multiple branching, and self-embedding. These devices of language may be imitated by generative algorithms, but I have yet to see any other approach characterize them in any other way. In that sense, perhaps all approaches have a generative base.

1. I called the man who wrote that interesting book up. [nesting]

2. That man who the boy who the students recognized pointed out is here. [self-embedding]

3. Tom, Bill, John, and several of their friends are coming. [multiple branching]

4. John’s brother’s father’s uncle died. [left-branching]

5. The uncle of the father of the brother of John died. [right-branching]

There must be structures or switches in the brain that help us keep track of these relationships. Is that simply part of our general cognitive ability? Perhaps. I need to see some examples of other behavior that uses these structures. That might settle the argument for me.
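A minimal sketch in Python of what “imitated by generative algorithms” can look like for three of these devices; the vocabulary and the recursion depths are invented for illustration and are not Chomsky’s formulation:

# Toy recursive rules imitating three of the structural devices above.
def right_branching(depth):
    # "the relative of the relative of ... of John"
    if depth == 0:
        return "John"
    return "the relative of " + right_branching(depth - 1)

def left_branching(depth):
    # "John's relative's relative's ..."
    if depth == 0:
        return "John"
    return left_branching(depth - 1) + "'s relative"

def self_embedding(depth):
    # "the man who [the man who [the students] saw] saw"
    if depth == 0:
        return "the students"
    return "the man who " + self_embedding(depth - 1) + " saw"

print(right_branching(3) + " died.")   # right-branching, like example 5
print(left_branching(3) + " died.")    # left-branching, like example 4
print(self_embedding(2) + " left.")    # center-embedded, like example 2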

Our general cognitive ability seems much easier to map to the structure of a neural network. The syntax of a string, which is characterized by generative rules, is perhaps part of that network’s structure, but it is mostly quite distinct from it and more specialized. Such a network allows associations and connections of great variety, at varying strengths and “distances.” When we express the ideas and concepts of a neural network, we are obliged to stringify it, i.e., to map parts of it into a one-dimensional string of symbols. Each of these symbols (“clumps of concepts”) has associations that can in turn be mapped to new strings.
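A minimal sketch in Python of “stringifying” in this sense, assuming a tiny invented network of associated concepts that a depth-first traversal flattens into a one-dimensional, bracketed string of symbols:

# A small associative network ("clumps of concepts"); contents are invented.
concepts = {
    "dog":    ["animal", "bark"],
    "animal": ["alive"],
    "bark":   ["sound"],
    "alive":  [],
    "sound":  [],
}

def stringify(node, graph, seen=None):
    # Depth-first traversal mapping a region of the network onto a string.
    if seen is None:
        seen = set()
    seen.add(node)
    children = [c for c in graph[node] if c not in seen]
    if not children:
        return node
    inner = " ".join(stringify(c, graph, seen) for c in children)
    return node + " ( " + inner + " )"

print(stringify("dog", concepts))
# dog ( animal ( alive ) bark ( sound ) )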

My impression is that Grammar is involved principally with strings of symbols and must relate to syntax in the above sense. On the other hand, the linguistic approaches of SFL and CL take a few steps back (higher?) and create other kinds of structures that do not have the syntax of a string. These structures move away from the constraints that language syntax places on the form of our expression and concentrate on the effects of other aspects of the neural network.


From: Assembly for the Teaching of English Grammar [mailto:[log in to unmask]] On Behalf Of Spruiell, William C
Sent: Monday, February 09, 2009 11:48 AM
To: [log in to unmask]
Subject: Quick note on SFL and Cog. Grammar

Dear All:

I’d like to address one point in the recent debate about developmental phases of grammar – but I want to be careful to emphasize that it’s a very focused one (in other words, it doesn’t have a large impact on the debate as a whole, but hey, it came up). And I think I may be able to address it noninflammatorily (Word just red-lined that, but I have a deriving license).

Halliday is quite clear about his grammatical model being a statement about social practice, rather than about cognition. In a sense, he’s recapitulating an old trend in linguistics: we’re much more confident with statements about what we observe going on than we are with statements about what we think might be going on in people’s heads, unless we have some way to measure the latter directly. He’s also from the “hocus-pocus” approach to linguistics rather than the “god’s truth” approach, for the same kinds of reasons. In other words, if the grammar describes what’s going on well, and acts as an explanation insofar as it lets you predict the kinds of things you’ll encounter, why go out on a limb and claim Full Truthiness?

Cognitive grammar, a la Langacker and others, is a “god’s truth” model, and does make claims about what’s going on in people’s heads. It would thus seem at first to stand in opposition to Halliday’s – and it does, if the only dimension we’re organizing along is the internal/external-phenomena one.

There are, however, other dimensions along which CG and SFL tend to “cluster” together. Both acknowledge that social context directly affects what is produced, and more importantly, both consider the social environment to have a *direct* effect on the basic structure of language. In most Generative approaches I’m familiar with, any kind of selection effect due to social context is outside the scope of “grammar” – the grammar defines the set of what is within the realm of possibility, and that set has nothing to do with social interaction. In a particular social situation, a speaker might choose a subset of that set, but that’s not an issue for the grammar. There’s a sense in which sociolinguists, to generativists, are looking at something fundamentally different from what “core linguists” look at.

There’s an additional reason for CG/SFL clustering, but it’s one that exists “outside” either theory.  CG, by its nature, must also acknowledge that cognitive processing constraints – short term memory limitations, etc. – have a direct effect on the structure (and structures) of language. SFL theorists have spent a fair amount of time describing what Halliday terms the “textual metafunction,” which among other things is concerned with maintaining new vs. old information contrasts, and cohesion. The kinds of constructs one needs for the textual metafunction happen to dovetail fairly well with notions of processing constraints – in other words, part of CG may be quite useful as a kind of backdrop to SFL and vice versa. In generative models, the *basic* structure of language has nothing to do with more general nonlinguistic processing constraints, so there isn’t that kind of “contact point” between the theories.

Sincerely,

Bill Spruiell

To join or leave this LISTSERV list, please visit the list's web interface at: http://listserv.muohio.edu/archives/ateg.html and select "Join or leave the list"

Visit ATEG's web site at http://ateg.org/

