ATEG Archives

January 2015

ATEG@LISTSERV.MIAMIOH.EDU

Subject:
From:
Karl Hagen <[log in to unmask]>
Reply To:
Assembly for the Teaching of English Grammar <[log in to unmask]>
Date:
Mon, 5 Jan 2015 19:59:01 -0800
Content-Type:
text/plain
Parts/Attachments:
text/plain (231 lines)
The matrix clause is not necessarily the main clause; it's the immediate 
clause that contains the constituent at issue. Technically, you're 
right that "model" isn't the subject of this matrix, but it's a 
distinction without a difference for my point. The matrix for the 
participle is the relative clause "that flashes the time..." I analyze 
the subject here as a gap (if you're traditional, you might call the 
subject "that"), but in either case, that subject refers anaphorically 
to "model." The implied subject of "pressing" doesn't.

Given the widespread occurrence of such modifiers, a descriptive 
approach would certainly want to account for why we produce such 
sentences all the time. Descriptively, in other words, I'll concede that 
it's probably OK, although I don't think you need to be schooled in the 
arcana of dangling modifiers to feel that there's something about the 
sentence that presents an interpretive obstacle to the reader. It's very 
likely that a reader would mis-parse such a sentence on a first reading. It's also 
true that such constructions are widely condemned in usage books, and 
many editors would correct them. So judged by the admittedly fuzzy 
criterion of standard written English, I'd argue that it counts as 
something in need of correction.

In the quiz question, the matrix clause is the content clause, "that a 
decade would fly by before...", so I stand by my keyed answer.



On 1/5/2015 7:30 PM, Hancock, Craig G wrote:
> Karl,
>      To me, "model" is not the subject of a matrix clause in your example below, though the sentence is a good example of a dangling participle. I see everything after "phone" as an appositional noun phrase with a relative clause embedded. The implied subject of "pressing" is nowhere to be found in the sentence. (When someone presses a button.)
>      In your quiz example, the subject of the matrix clause IS the implied subject of "never dreaming." It reads OK to me.
>
> Craig
>
> ________________________________________
> From: Assembly for the Teaching of English Grammar <[log in to unmask]> on behalf of Karl Hagen <[log in to unmask]>
> Sent: Monday, January 05, 2015 10:04 PM
> To: [log in to unmask]
> Subject: Re: SAT question
>
> The way I analyze “dangling” participles, the McPhee example is well formed, even in a fairly strict, traditional scheme. I take what traditional grammarians would call the participial phrase (if you want to call it a clause, I’m down with that) to have an implied subject. If that implied subject matches the actual subject of the matrix clause that the participle modifies, there’s no error. In the McPhee example, “we” (the subject of the main clause) matches the implied subject of the participle (we were looking).
>
> Although most textbook examples of dangling modifiers put those modifiers in sentence-initial position, the same constraints apply to trailing modifiers. A good example (from an SAT, but they actually stole the idea from an old Language Log post) is the following:
>
> "Mr. Hanson proudly demonstrated his company’s latest cell phone, a model that flashes the time in color-coded numerals when pressing a button."
>
> Notice that “pressing a button” sounds weird, because the implied subject ought to be Mr. Hanson, but the subject in the matrix clause is “model.”
>
>
>
> Having looked at many, many examples of this construction on SATs and ACTs, I’m confident in saying that both tests would consider your McPhee example to contain no error.
>
>> On Jan 5, 2015, at 6:41 PM, Hancock, Craig G <[log in to unmask]> wrote:
>>
>> Karl,
>>      This is exactly the KIND of quiz I would advocate. I missed one question out of carelessness, one from not understanding what you mean by "modern usage manual" (most are still archaic in their attitudes), one because I was unfamiliar with the range of 'deictic', and only one that I would disagree with (dangling participle--I could show you hundreds of sentences like that in the work of award-winning writers. The participle is often distant from its implied subject without risking ambiguity. Here's an example from John McPhee: "This time, we flew north, low over the river, upstream, looking for the glint of the canoe.")
>>      I suspect that it would be far easier to come to agreement on a test like this than on a test of "error." At any rate, a test like this would be even more effective locally, since students would have practice with this kind of question and become comfortable with the terminology.
>>      You can design an error test for high levels of reliability, but what you get are versions of a test that would give you repeat scores in the same range. It doesn't answer the question of whether the test measures what it is designed for (whether it is VALID as a measure of writing capability.) Your arguments for validity seem based on the premise that other choices are even worse.
>>      Perhaps more important would be whether the existence of the test PUSHES the curriculum in desirable ways. I would vote for your quiz hands down because it would create incentive for a curriculum focused on language.
>>
>> C
>>
>> From: Assembly for the Teaching of English Grammar <[log in to unmask]> on behalf of Karl Hagen <[log in to unmask]>
>> Sent: Monday, January 05, 2015 8:37 PM
>> To: [log in to unmask]
>> Subject: Re: SAT question
>>
>> If I'm understanding you correctly, Craig, you're calling for a test of grammar that requires explicit knowledge of grammatical terms. One practical hurdle to doing it that way would be that we would have to agree on a set of concepts that would form the basis of the test, and we all know how much variation and bickering there is about definitions of terms, and even analysis (as the question of what to call the "who" clause that kicked off this discussion shows).
>>
>> As it happens, I just wrote (for other reasons) a quiz of explicit grammatical knowledge that I put on my website. It's far more demanding than anything I expect we would want to give high school students, but perhaps that's closer to what you are calling for. And yet, I also suspect that there will be plenty of people who violently disagree with my answers to some of the questions, or with the approach itself.
>>
>> Weeding out such issues for a high-stakes test would be extremely fraught. The current format does have the virtue that many more people can agree that a particular part of the sentence needs fixing without necessarily sharing the same analytical framework.
>>
>> And while I don't necessarily disagree that certain categories of students could theoretically be at a disadvantage here, on a practical level it's also true that every question that makes it onto an operational form undergoes very rigorous statistical screening that's pretty good at weeding out questions that perform poorly, for example when lots of high-performing students wind up picking a wrong answer, say because they're too literate for their own good. (That's not true of practice questions, of course, especially not for ones written by most test-prep companies.)
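The statistical screening described above can be sketched concretely. One standard item-analysis flag is the point-biserial correlation between an item outcome (right/wrong) and each examinee's total score; a low or negative value marks an item that strong students tend to miss. The function and data below are an illustrative sketch, not College Board's actual procedure, which relies on more elaborate statistics.

```python
import math

def point_biserial(item_correct, total_scores):
    """Correlation between a 0/1 item outcome and total test scores.

    Illustrative item-analysis sketch; operational screening pipelines
    use more sophisticated (e.g., IRT-based) statistics.
    """
    n = len(total_scores)
    mean = sum(total_scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in total_scores) / n)
    right = [s for s, c in zip(total_scores, item_correct) if c]
    p = len(right) / n                      # proportion answering correctly
    mean_right = sum(right) / len(right)    # mean score of those who got it right
    return (mean_right - mean) / sd * math.sqrt(p / (1 - p))

# Invented data: the top four scorers all missed this item.
scores  = [10, 9, 8, 7, 4, 3, 2, 1]
correct = [0, 0, 0, 0, 1, 1, 1, 1]
print(point_biserial(correct, scores))  # negative: flag the item for review
```

A question like the one Karl describes, where high performers cluster on a wrong answer, would show up as a negative discrimination value and be pulled before reaching an operational form.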
>>
>> If anyone is interested in my quiz, you can find it at http://www.polysyllabic.com/?q=node/295
>>
>> On 1/5/2015 5:08 PM, Hancock, Craig G wrote:
>>>      I think it's a mistake to call this a rule since I know of no instance of its formulation as a rule. It's a pattern of usage, and if it tests the ear, then it is favorable to those whose discourse communities are most similar to the makers of the exam. If Dick is right with his examples, then someone who has spent much time with the Bible would be at a disadvantage. Why did the Elizabethan ear hear it differently?
>>>      Choosing between "those" and "they" is artificial, since they are not normal alternatives for each other. (They versus them or these versus those would make more sense.)
>>>     It is silly to think that we either test error or can't test knowledge about grammar at all. If our goal is to deepen understanding about language, then we should build a curriculum and then test what we teach. If the SAT focuses on error reduction, then much valuable classroom time will be spent preparing for that. If the SAT tested language knowledge, then the curriculum would shift accordingly.
>>>     I agree that essays under timed conditions on topics that the students have no opportunity (or need) to research tell us very little. My own rule of thumb is that a strong sample tells us something, but a weak sample may just mean a bad day. In decades of testing (and placing) incoming first year students, I have come to trust reading comprehension tests as better predictors, perhaps because writing can be improved dramatically just by adjusting what the student is trying to do, but reading seems to take much more time. At any rate, we do a five week summer program that teaches us much more than we can learn from a few tests. The fact that high school teachers give grades that can't be trusted should give us all pause. A teacher who has worked with a student should know much more than we can learn from these tests. Technically, the SAT derives from a need to remove the schools from the equation.
>>>      Meanwhile, students come out of high school year after year without knowing what a "phrase" is. If we thought that was important, we could teach it and test it.
>>>      Since we are the Assembly for the Teaching of Grammar, we should advocate teaching grammar, not avoiding it. Tests that focus on error and not on knowledge about language are part of the problem.
>>>
>>> Craig
>>> From: Assembly for the Teaching of English Grammar <[log in to unmask]> on behalf of Geoffrey Layton <[log in to unmask]>
>>> Sent: Monday, January 05, 2015 6:23 PM
>>> To: [log in to unmask]
>>> Subject: Re: SAT question
>>>
>>> My question (as always) is: why teach "the rules"?
>>>
>>>> Date: Mon, 5 Jan 2015 18:09:56 -0500
>>>> From: [log in to unmask]
>>>> Subject: Re: SAT question
>>>> To: [log in to unmask]
>>>>
>>>> These are interesting perspectives all-around. Regarding tricky questions, my concern is with the "No error" option that seems to be part of every question. I'm wondering: if a student cannot label a particular error as we've been discussing, would that lead to a "No error" answer? If "No error" were not provided, then selecting "by them" as the definitive error seems more likely.
>>>>
>>>> Regarding standardized tests, I am in favor of them and glad to see the reasons some of you provided to support their use. My issue has always been with including questions containing such subtle errors that they lead to more head-scratching than definitive answers for students.
>>>>
>>>> This is true for grammar exercises in textbooks too. I've always been a fan of questions that reinforce the rules being taught rather than veer into confusing exceptions. Agreed?
>>>>
>>>> Linda
>>>>
>>>>
>>>>
>>>> Linda Comerford
>>>> 317.786.6404
>>>> [log in to unmask]
>>>> www.comerfordconsulting.com
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Karl Hagen [mailto:[log in to unmask]]
>>>> Sent: Monday, January 05, 2015 3:57 PM
>>>> To: [log in to unmask]
>>>> Subject: Re: SAT question
>>>>
>>>> I have my own issues with the SAT, but I’m going to be the advocatus diaboli here, because the contrary position is not as flimsy as you perhaps believe.
>>>>
>>>> First, no one who knows anything about standardized testing is claiming that the answer to a single question like this, in isolation, can indicate whether someone is an adequate college-level writer. The real claim (although I suspect you’ll object to this one too) is that, when taken in aggregate, a series of questions sampling a wide variety of usage problems is, when combined with the score from an essay, an adequate stand-in for the thing we’re really interested in: how ready the student is for college-level writing. In other words, the claim is that the ability to edit someone else’s writing to conform to the norms of standard written English can serve as a reasonable proxy for the ability to produce one’s own college-level writing.
>>>>
>>>> That claim is one that is subject to empirical testing, and there is published literature providing evidence that there is a reasonably good correlation. How persuasive those studies are can be the subject for a later discussion, but College Board has assembled a non-trivial body of evidence to support the use of the SAT Writing test, correlating its scores to grades in freshman composition classes (which, I trust, are not graded with multiple-choice tests). That evidence isn’t unassailable, but this isn’t just a test cobbled together by uninformed amateurs, and if you’re going to attack it reasonably, you need to get into the psychometric weeds.
>>>>
>>>> But why should we use a proxy measure at all? Why not just assess students' writing ability directly, as Craig asks? The question is how we can do that in a way that is as fair as possible to as many people as possible and provides useful information to allow test users (i.e., admissions officers) to make informed decisions. What are the alternatives?
>>>>
>>>> (1) No standardized test of writing at all, in which case, we rely on high school grades alone. If we go this route, not only do we lose the ability to compare students, since grading standards vary wildly from school to school, and even from teacher to teacher within a school, but we also lose the ability to determine whether a particular grade average means that the student is college ready or not (because of grade inflation).
>>>>
>>>> (2) Use a portfolio to assess writing ability. This method allows for more authentic writing, but there is no protection against cheating, as a portfolio cannot be created under proctored, secure conditions. In any high-stakes assessment, some students will cheat.
>>>>
>>>> (3) Administer a standardized writing test that is a pure essay test. Intuitively, this seems like the best thing to do, but in fact, the reliability of essay-only tests is very bad. Writers are much less consistent from session to session, and graders have their problems too. There was a study a few decades back that showed AP tests would be significantly more reliable and make better decisions at the score cut points if the essays were removed. Of course essays are still on the APs. This is really a matter of “face” validity, that is, of making the test conform to what test users think it ought to contain, rather than really providing any useful information.
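The grader inconsistency mentioned above is measurable: a crude inter-rater reliability check is the correlation between two readers' scores on the same set of essays. The sketch below uses invented 1-5 holistic scores purely for illustration; operational essay scoring programs use more sophisticated agreement statistics.

```python
import math

def pearson(x, y):
    """Pearson correlation, used here as a rough inter-rater reliability proxy."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Invented 1-5 holistic scores from two readers on the same eight essays.
reader_1 = [4, 3, 5, 2, 4, 3, 1, 5]
reader_2 = [3, 3, 4, 2, 5, 2, 2, 4]
print(round(pearson(reader_1, reader_2), 2))  # 0.78: decent but imperfect agreement
```

A value well below 1.0 between trained readers is exactly the session-to-session and grader-to-grader noise that drags down the reliability of an essay-only test.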
>>>>
>>>> To me, a bigger weakness of the SAT Writing test is the essay, as it’s currently structured. In addition to the essay’s lower reliability relative to the other components, it’s not an authentic task. And College Board hasn’t, to my knowledge, provided very good evidence to suggest that the essay actually adds anything to the overall score. This situation may change with the new SAT, where the essay will be radically different, but I remain skeptical.
>>>>
>>>> You dismiss this particular question as “tricky,” but I’m not sure I agree. The universal sentiment of those who answered the question seemed to be that it felt wrong, even if they couldn’t advance a theory as to why. If a student wrote this, I suspect that most of us would note it as an infelicity (mentally, at least, even if we’re not marking errors explicitly). I’d further venture that a student who had an accumulation of this sort of phrasing in a paper would not receive the highest grade.
>>>>
>>>> A tricky question would be one that depended narrowly on some particular rule that is found only in some usage books, or enforced only by a few editors. For example, if students were expected to mark “which” in restrictive clauses as an error, I would call that a fussy trick. To my knowledge, the SAT steers away from such points of disputed usage.
>>>>
>>>>
>>>>> On Jan 5, 2015, at 10:33 AM, Hancock, Craig G <[log in to unmask]> wrote:
>>>>>
>>>>> I can follow Karl's logic, but what it seems to come down to is that it's an error because people whose use of language matters generally say it that way. My own inclination is to think the pattern of using "those" derives from shortening a noun phrase in which "those" functions as determiner. "By those people who do not approve of it" becomes "by those who do not approve of it." You can't use "them" as determiner (at least in standard English). It's interesting that there are three passives in the sentence, none problematic.
>>>>> Does anyone believe for a moment that someone who sees this as an error is better prepared for college than someone who doesn't? Every time I look at these tests, I wonder whether they are doing much more harm than good. Whoever designs them seems to be looking for tricky little ways to catch people. Is our primary purpose for studying language to avoid error? What exactly makes an error an error, especially if usage differs? Why don't we test knowledge about language directly? Shouldn't we, as proponents of TEACHING grammar, be arguing constantly for that? Catching people on some obscure (and questionable) error diminishes the subject.
>>>>>
>>>>> Craig
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Assembly for the Teaching of English Grammar [mailto:[log in to unmask]] On Behalf Of Karl Hagen
>>>>> Sent: Monday, January 05, 2015 12:51 PM
>>>>> To: [log in to unmask]
>>>>> Subject: Re: SAT question
>>>>>
>>>>> This problem has nothing directly to do with who/whom (a distinction that the SAT does not test).
>>>>>
>>>>> You can’t just look at the single word following “by.” The object of the preposition is “them/those who did not approve it,” and the required word has to do with how it functions in this unit, which is a noun phrase headed by them/those.
>>>>>
>>>>> The relative clause “who did not approve it” modifies them/those. But “them,” as a personal pronoun, virtually always stands alone in the noun phrase. It doesn’t take modifiers like the relative clause. I won’t get into a detailed analysis of “those,” as modern accounts differ from a traditional analysis and the differences aren’t to the point here. Suffice it to say that “those” isn’t a personal pronoun and doesn’t have the same restriction.
>>>>>
>>>>>> On Jan 5, 2015, at 9:34 AM, Jane Saral <[log in to unmask]> wrote:
>>>>>>
>>>>>> A recent SAT "ID the error" question reads:
>>>>>>
>>>>>> Although it is widely regarded as a masterpiece now, when it was built
>>>>>> A B
>>>>>>
>>>>>> the Eiffel Tower was compared to a "ridiculous smokestack" by them who did
>>>>>> C
>>>>>>
>>>>>> not approve of it. No error
>>>>>> D E
>>>>>>
>>>>>>
>>>>>> C just sounds wrong. I would say "by those who did not approve of it." But isn't the "them/those" word the stand-alone O.P. of by, unaffected by the relative clause that follows? This does not seem to be dealing with the who/whom question; "who" is correctly the subject of "did not approve."
>>>>>>
>>>>>> So why is this an error?
>>>>>>
>>>>>> Jane Saral
>>>>>> To join or leave this LISTSERV list, please visit the list's web interface at: http://listserv.muohio.edu/archives/ateg.html and select "Join or leave the list"
>>>>>> Visit ATEG's web site at http://ateg.org/
>>>>>>