Categories
English English language Etymology Expression Grammar Language Linguistics Usage Word origin Writing

Why ‘it’s’ means ‘it is’ or ‘it has’

Q: I can’t stand the use of “it’s” for “it has” in writing. When I see “it’s,” I read “it is” and then have to translate this to “it has.” Am I too picky?

A: There’s nothing wrong with using “it’s” as the contraction of “it is” or “it has,” whether in writing or in speech. One can easily tell from the context which sense is meant, and both uses are long established in standard English.

The American Heritage Dictionary of the English Language, for example, says “it’s” has two meanings: “1. Contraction of it is. 2. Contraction of it has.” And Fowler’s Dictionary of Modern English Usage (4th ed.) says “its is the possessive form of it (The cat licked its paws) and it’s is the shortened form of it is (It’s raining again) or it has (It’s come).”

In fact, “it’s” has been a contraction of both “it is” and “it has” for hundreds of years, though “it’s” was once the usual form of the possessive adjective and “ ’tis” was the usual contraction of “it is.” Confusing, ’tisn’t it? Here’s the story.

In Old English (roughly 450 to 1150) and Middle English (about 1150 to 1450), the usual nominative or subject form of “it” was hit, hyt, etc. The usual genitive or possessive form (“its” or “of it”) was his, hys, etc. The nominative hit was seen only occasionally in Old English, more often in Middle English.

Here’s an early example of the nominative hit in Beowulf, an epic poem that may have been written as early as 725: “hit wearð ealgearo, healærna mæst” (“it stood there ready, the noblest of halls”).

And here’s an example of the genitive his in an Anglo-Saxon herbal remedy: “Gedrinc his þonne on niht nistig þreo full fulle” (“Drink of it, after a night of fasting, three full cups”). From the Old English Herbarium, a 12th-century manuscript at the British Library (Cotton Vitellius C. iii).

(By the way, “he” was he in Old English, “she” was heo or hie, “his” was his or hys, and “her” was hire.)

Both “its” and “it’s” first came into use as possessive adjectives in early Modern English, probably because the older neuter genitive his was being confused with the masculine possessive his.

(We’re using the term “possessive adjective” here to describe a dependent genitive like “her” or “their,” and “possessive pronoun” to describe an independent genitive like “hers” or “theirs.”)

The earliest citation in the Oxford English Dictionary for “its” as a possessive adjective is from a late 16th-century translation of a collection of Latin anecdotes for clerics: “There stands a bedde, its death to tell.” From Certain Selected Histories for Christian Recreations (1577), by Ralph Robinson.

And the first OED citation for the apostrophized “it’s” used as a possessive is from the definition of spontaneamente in an Italian-English dictionary: “willingly, naturally, without compulsion, of himselfe, of his free will, for it’s owne sake.” From A Worlde of Wordes (1611), by John Florio.

Of the two versions of the possessive adjective—with and without the apostrophe—“it’s” was apparently the predominant spelling throughout the 17th and 18th centuries, according to the Merriam-Webster Dictionary of English Usage. (In fact, “her’s,” “our’s,” “their’s,” and “your’s” were also possessives in early Modern English.)

The dictionary cites a half-dozen examples of the possessive “it’s,” including one from a Nov. 8, 1800, letter by Jane Austen to her sister Cassandra. We’ve expanded the citation, which describes the reaction of Austen’s neighbors, the Harwoods, on learning that their son Earle, a marine lieutenant, had accidentally shot himself in the thigh:

One most material comfort however they have; the assurance of it’s being really an accidental wound, which is not only positively declared by Earle himself, but is likewise testified by the particular direction of the bullet. Such a wound could not have been received in a duel.

We’ll add this earlier one from Shakespeare’s Henry IV, Part 2, believed written in the late 1590s and first published in the 1623 Folio: “As milde and gentle as the Cradle‑babe, / Dying with mothers dugge betweene it’s lips.”

As Merriam-Webster explains, “the unapostrophized its was in competition with it’s from the beginning and began to rise to dominance in the mid 18th century.” M-W cites several language authorities to show how the usage evolved.

In A Short Introduction to English Grammar (1762), Robert Lowth gave “its” as the possessive form of “it.” But in The Philosophy of Rhetoric (1776), George Campbell gave “it’s.” In Reflections on the English Language (1770), Robert Baker preferred “it’s,” then switched to “its” in the 1779 edition. And in English Grammar (1794), Lindley Murray endorsed “its.”

As for the “it is” contractions, “ ’tis” appeared about a century before “it’s,” according to citations in the OED.

This is Oxford’s earliest example of “ ’tis,” here written without an apostrophe (for the missing “i” in “it”): “Alas, tys pety yt schwld be þus” (“Alas, ’tis a pity it should be thus”). From Mankind, an anonymous morality play written around 1475.

The dictionary’s earliest example with an apostrophe is from Shakespeare’s Macbeth, first published in the 1623 Folio but believed to have been performed in 1606: “If it were done, when ’tis done, then ’twer well, It were done quickly.”

Meanwhile, “it’s” had emerged as a competing contraction. This is Oxford’s first example: “And ambition is a priuie [private] poison, It’s also a pestilens.” From Rewarde of Wickednesse, a 1574 poem by Richard Robinson.

At first, the competition of “ ’tis” and “it’s” was pretty one-sided. A comparison using Google’s Ngram Viewer, which tracks words and phrases in digitized books, suggests that “ ’tis” was the usual contraction of “it is” from the mid-16th century to the mid-19th.

In fact, the early dominance of “ ’tis” was even greater than the comparison shows, since the Ngram results include the use of “it’s” as a possessive adjective as well as a contraction of “it has” and “it is.”
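The contamination described here can be illustrated with a short Python sketch. The sample passage and the naive counting rule are our own, purely for illustration: a raw string count, like an Ngram query, cannot tell the old possessive spelling “it’s” apart from the contraction, since they are identical on the page.

```python
import re

# An Ngram-style raw count of "'tis" vs. "it's" in a made-up passage.
# Note that the possessive "it's" (standard in 17th- and 18th-century
# writing) is indistinguishable from the contraction by spelling alone,
# so the "it's" tally mixes both uses and overstates the contraction.
sample = (
    "'Tis a pity. It's raining again. "
    "The babe lay with its head on it's mother's arm."
)

tis_count = len(re.findall(r"(?i)'tis\b", sample))    # matches 'Tis and 'tis
its_count = len(re.findall(r"(?i)\bit's\b", sample))  # contraction AND old possessive

print(tis_count, its_count)  # prints: 1 2
```

Here the count of 2 for “it’s” includes one contraction (“It’s raining”) and one archaic possessive (“it’s mother’s arm”), which is exactly the ambiguity that inflates the “it’s” line in an Ngram comparison.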

Language authorities in the late 18th and early 19th centuries indicated a preference for “ ’tis.” Campbell, for instance, complains in The Philosophy of Rhetoric about what he considers the misuse of “it’s, the genitive of the pronoun it, for ’tis, a contraction of it is.”

And both Samuel Johnson’s A Dictionary of the English Language (1755) and Noah Webster’s American Dictionary of the English Language (1828) include entries for “ ’tis” (but not “it’s”) as a contraction of “it is.”

Getting back to your complaint about the use of “it’s” as a contraction of “it has,” the earliest example we’ve seen for the usage is from the 1623 Folio of King Lear.

In addition to the contraction “it’s” for “it has,” Shakespeare used “it” twice by itself as a possessive: “the Hedge-Sparrow fed the Cuckoo so long, that it’s had it head bit off by it young.”

Help support the Grammarphobia Blog with your donation. And check out our books about the English language and more.

Subscribe to the blog by email

Enter your email address to subscribe to the blog by email. If you’re a subscriber and not getting posts, please subscribe again.


Imperatively speaking

Q: A sign in the bathroom of the ladies’ locker room says, “It is imperative that nothing but TP is put in the toilet.” Aside from the fact that a couple of other things also go in the toilet, shouldn’t this read “be put,” not “is put”?

A: A sentence like that is referred to as a mandative construction; it demands something. It includes a mandative adjective (“imperative”) that governs a subordinate clause expressing what’s demanded.

The two usual ways to write such a sentence are (1) “It is imperative that nothing but TP be put in the toilet” and (2) “It is imperative that nothing but TP should be put in the toilet.” A much less common and somewhat iffy version is (3) “It is imperative that nothing but TP is put in the toilet.”

The Cambridge Grammar of the English Language, by Rodney Huddleston and Geoffrey K. Pullum, says a mandative adjective can be followed by (#1) a “subjunctive mandative” clause, (#2) a “should mandative” clause, or (#3) a “covert mandative” clause. The term “covert” here describes a tensed usage with a hidden subjunctive sense.

“Clear cases of the covert construction are fairly rare,” the authors add, “and indeed in AmE are of somewhat marginal acceptability. In AmE the subjunctive is strongly favoured over the should construction, while BrE shows the opposite preference.”

The Cambridge Grammar includes many examples of the three types of mandative construction, including these: (1) “It is essential that everyone attend the meeting”; (2) “It is essential that everyone should attend the meeting”; (3) “It is essential that everyone attends the meeting.”


I can’t believe it’s not margerine!

Q: Why is “margarine” pronounced as if it were spelled “margerine”? The letter “g” is almost always hard when followed by an “a” and soft when followed by an “e.”

A: You’re right in thinking that the letter combination “ga” normally produces a hard “g,” as in the name “Margaret,” while the combination “ge” usually produces a soft “g,” as in “Margery.” In fact, “margarine” was originally pronounced with a hard “g,” as you’d suppose from its spelling.

It’s spelled with “ga” because the word was coined in the early 19th century in French, where margarine has a hard “g.” And when the word first entered English in the mid-19th century, it had the same hard “g” sound that it has in French.

Only later, in the early 20th century, did the original English pronunciation begin to shift. Today the letter is soft, like the “g” in “gin,” a development the Chambers Dictionary of Etymology says was probably influenced by “words like margin and such alterations in pronunciation as those of Margaret and Margie.”

We’ll have more on the pronunciation later. First, a little history of this word, which didn’t originally refer to something you’d put on your pancakes. It got its start in French as a chemical term, margarine. The butter substitute wasn’t invented until many decades later.

The word was coined in 1813 by the French chemist Michel-Eugène Chevreul. In experimenting with animal fats, he synthesized what he believed to be a previously unknown fatty substance, which he’d extracted from soap made of pork lard.

He gave this substance the chemical name margarine, a term soon adopted into English chemistry as “margarin” or “margarine.” And three years later, in 1816, Chevreul gave the name acide margarique (“margaric acid”) to the fatty acid he thought it came from.

Why those names? As the Oxford English Dictionary explains, the substance had “the appearance of mother-of-pearl,” so Chevreul adapted the name from the ancient Greek word for “pearl,” μαργαρίτης (margarites).

Keep in mind that in the first half of the 19th century, the words margarine, “margarin” and “margarine” were French and English chemical terms, not the names of edibles. The butter substitute wasn’t yet invented. The same is true of oléomargarine, a later French chemical term.

What the inventors of oléomargarine—Théophile-Jules Pelouze (a pharmaceutical scientist) and Félix Henri Boudet (a pharmacist)—synthesized in 1838 was a fatty solid derived from olive oil. They believed it to contain the same substances that Chevreul had synthesized from animal fats—margarine and another called oléine. By the late 1830s, these scientific terms were “olein” and “margarin” or “margarine” in English.

Pelouze and Boudet believed their discovery could have applications in the soap and candle industries. In fact, the terms “margarine candles” and “margarine soap” began appearing in English in the 1840s.

Although they discovered it in 1838, the new substance wasn’t given the name oléomargarine until 1854, when the French chemist Pierre Eugène Marcellin Berthelot chose that name because of its supposed constituents, oléine and margarine. (Incidentally, the French oléine and English “olein” are derived from the Latin word for “oil,” oleum.)

Finally we come to the edible, spreadable butter substitute. Its invention in 1869 was inspired by a butter shortage in France and a contest sponsored by Napoleon III, who offered a prize to anyone who could develop an artificial butter.

The winner was yet another French chemist, Hippolyte Mège-Mouriès, who described his invention in the original 1869 patent as “comme le beurre” (“like butter”), according to the Oxford English Dictionary. He said its chemical constituents included the oléine and margarine identified by Chevreul more than half a century earlier.

In a later patent, filed in 1874, Mège-Mouriès added skimmed cow’s milk to the mixture, so it “a la même composition que le beurre” (“has the same composition as butter”), the OED says.

And based on its supposed ingredients, oléine and margarine, he formally gave his invention both a scientific and a general name: “L’oléomargarine, nommé vulgairement margarine” (“Oleomargarine, commonly called margarine”).

So the French word margarine didn’t specifically mean artificial butter until 60 years after the term was coined in chemistry.

Though Mège-Mouriès didn’t officially name his invention until 1874, two English nouns for it, “margarine” and “oleomargarine,” jumped the gun slightly—no doubt borrowed from his formula.

The OED’s earliest citation for “margarine” to mean artificial butter is from an American patent issued in 1873: “When it is cold … it constitutes … a greasy matter of very good taste, and which may replace the butter in the kitchen, where it is employed under the name of ‘margarine.’ ”

The dictionary’s earliest example of “oleomargarine” in the buttery sense is from Scientific American (Oct. 18, 1873): “The manufacture of artificial butter by the ‘Oleomargarine Manufacturing Company.’ ”

The names “margarine” and “oleomargarine” have meant the kitchen product ever since. But we can’t overlook the short forms: “oleo” and “marge.” These are Oxford’s oldest examples:

“There is one firm in London which is able to turn out from ten to twenty tons of this valuable oleo per week” (Daily News, London, Dec. 11, 1884) … “Potatoes and marge, marge and potatoes” (James Joyce’s novel Ulysses, 1922).

Notice that “marge” as a short form developed after the English “margarine” had largely shifted to a soft “g,” a development that was noticed—and condemned as a mispronunciation—at the turn of the century.

The soft “g” pronunciation wasn’t accepted by lexicographers until 1913, when it was included, though as a lesser variant, in the Phonetic Dictionary of the English Language, by Hermann Michaelis and Daniel Jones.

But soon after, the pronunciations switched places in the opinion of phoneticians. In An English Pronouncing Dictionary (1917), Daniel Jones listed /dʒə/ (soft “g”) as the preferred pronunciation, with /ɡə/ (hard “g”) as a less frequent variant.

The older pronunciation, according to the OED, “became rare in the second half of the 20th cent.” Now for a historical footnote:

The French terms oléomargarine and margarine were based on a scientific misunderstanding, according to the OED. “As subsequent research showed that neither the margarine of Chevreul, nor the oléomargarine of Berthelot, were definite chemical compounds,” the dictionary says, “these names are no longer in chemical use.”

But though defunct in scientific use, they live on in the names used today for the butter substitute.

[Note: On Sept. 21, 2022, a reader writes to say, “ ‘Margarine’ has hard ‘g’ in winter and a soft ‘g’ in summer.”]


Phee-phi-pho-phum

Q: As a mathematician, I’m bothered by the inefficiency of transliterating the Greek letter ϕ as “ph.” Since it’s one letter in Greek, and we have “f,” which makes the same sound, why do we use two letters for it?

A: The letter ϕ (phi) in ancient Greek, spelled “ph” in many English words of Greek origin, didn’t originally have an “f” sound.

The ϕ sounded like the aspirated “p” in “pot,” as opposed to the ancient Greek π (pi), which sounded like the unaspirated “p” in “spot.” (An aspirated letter is pronounced with the sound of a breath.)

When the ancient Romans borrowed words from Greek, they transliterated the ϕ with the digraph ph to differentiate it from the unaspirated p in Latin. (A digraph is a pair of letters representing one sound.)

However, the pronunciations of both the Greek ϕ and the Latin ph evolved during the first few centuries AD and came to sound like the English fricative “f.” (A fricative is a consonant produced by the friction of forcing air through a narrow space.)

In Vox Graeca (1968), a guide to ancient Greek pronunciation, the Cambridge philologist W. Sidney Allen says “the first clear evidence for a fricative pronunciation of ϕ comes from 1 c. A.D. in Pompeiian spellings such as Dafne ( = Δάφνη).” He adds that “from the 2 c. A.D. the representation of ϕ by Latin f becomes common.”

In Old English, spoken from roughly the mid-5th century to the late 11th, the “ph” digraph in words of Greek origin that the Anglo-Saxons borrowed from Latin was sometimes transliterated as f and sometimes as ph.

In an anonymous Old English version of a Latin history, for example, “philosopher” is filosofum in one place and philosophe in another:

  • “Gesetton him to ladteowe Demoste[n]on þone filosofum” (“They appointed as their leader Demosthenes the philosopher”).
  • “Philippus … wæs Thebanum to gisle geseald, Paminunde, þæm strongan cyninge & þæm gelæredestan philosophe” (“Philip … was given as a hostage to the Theban Paminunde, that strong king and learned philosopher”).

The passages are from the Old English Orosius, a loose translation in the late 9th or early 10th century of Historiarum Adversum Paganos Libri VII (“Seven Books of History Against the Pagans”), a 5th-century chronicle by Paulus Orosius. Modern scholars doubt an attribution of the translation to King Ælfred.

The linguists Thomas Pyles and John Algeo say Old English had “somewhat more than 500 in all” loanwords from Latin, including those of Greek origin. Some loanwords came directly from Latin and others indirectly from Celtic or Germanic terms. (The Origins and Development of the English Language, 4th ed., 1993.)

However, the majority of Greek words in English appeared after the Norman Conquest of the 11th century and the adoption of Anglo-Norman as the language of the aristocracy in England.

“From the Middle English period on, Latin and French are the immediate sources of most loanwords ultimately Greek,” Pyles and Algeo write.

In Middle English (roughly 1150 to 1450), the “f” sound in words of Greek origin was sometimes represented with an “f” and sometimes with a “ph” digraph. So “philosopher” was spelled variously felesophre, filosofre, filosophre, fylosofre, phelesophre, philesofre, philisofre, and so on. Yes, spelling was a mess in Middle English.

In the late 15th century, as Middle English was giving way to early Modern English, the printing press arrived in England and helped standardize spelling, including the use of “ph” for the “f” sound in words from Greek.

As it turns out, some Romance languages derived from Latin (such as Spanish and Italian) preferred “f” in these words, while others (notably French) chose “ph.”

Getting back to your question, the use of the “ph” digraph here may be less efficient than using “f,” but we find it more interesting. The usage preserves a fascinating chapter in the history of English.


A sticky question

Q: The verb “stick” seems to have uses that don’t allow conjugation. You can say, “We got stuck in the elevator,” but not “The elevator sticks us.” Are there other verbs with one sense applicable only in the past tense?

A: In a clause like “We were stuck in the elevator” or “We got stuck in the elevator,” the word “stuck” is either a past participle or a participial adjective, depending on the meaning. In either case, “stuck” is a nonfinite verb form, one that isn’t inflected for tense.

When a state or condition is meant, “stuck” is usually a participial adjective in an intransitive clause. When an action is meant, “stuck” is usually a past participle in a passive transitive clause.

The “be” version is used for a condition or an action, while the “get” version tends to be used for an action.

You can expand the two elevator clauses above to make clear that the first refers to a condition (“We were stuck in the elevator all night”) and that the second refers to an action (“We got stuck in the elevator when the power failed”).

The Cambridge Grammar of the English Language refers to the past participle in such uses as a “verbal passive” and the participial adjective as an “adjectival passive.” Cambridge calls the two conditions “stative” and “dynamic.” It discusses “be” and “get” passives in more detail on pages 1429-1443.

The grammar’s authors, Rodney Huddleston and Geoffrey K. Pullum, cite several examples of adjectives derived from past participles but with special meanings:

“She’s bound to win” … “We’re engaged (to be married)” … “Aren’t you meant to be working on your assignment?” … “His days are numbered” … “Are you related?” … “I’m supposed to pay for it” … “He isn’t used to hard work.”

For readers who’ve forgotten the terminology, a verb is transitive when it needs a direct object to make sense (“Beverly raises calla lilies”) and intransitive when it makes sense without one (“The yellow ones died”).

A verb is active when the subject performs the action (“Gertrude grows lupins”) and passive when the action is performed on the subject (“The lupins are grown by Gertrude”).

When an active transitive clause becomes passive, as in that latter example, the former direct object (“lupins”) becomes the subject, and the former subject (“Gertrude”) becomes the object of a prepositional phrase, though the prepositional phrase is not always expressed.

As for the etymology here, when “stick” was originally used to mean fix in place it was an intransitive verb spelled sticiað in Old English. The Oxford English Dictionary says transitive uses “are typically recorded later than their intransitive equivalents and chiefly occur in the passive, as to be stuck, to get stuck, etc.”

The earliest intransitive example in the OED is from the Old English Boethius, a translation made in the late ninth or early tenth century of De Consolatione Philosophiae (“The Consolation of Philosophy”), a sixth-century Latin treatise by the Roman philosopher Boethius:

“Gesihst þu nu on hu miclum & on hu diopum & on hu þiostrum horoseaða þara unðeawa ða yfelwillendan sticiað” (“Do you see now in how great and in how deep and in how dark an abyss of sins men of evil vices stick”).

The dictionary’s first citation for “stick” used as a transitive passive is from a letter written on Oct. 4, 1635, by William Laud, Archbishop of Canterbury, to the English statesman Thomas Wentworth:

“When he saw the man and his horse stuck fast in the quagmire.” (Here “stuck” is a participial adjective.)

The OED’s earliest “be stuck” example is figurative: “It is Natural to men in the wrong to persist, and believe they take Wing when they are deepest stuck in the Mire” (from The Portugues Asia, John Stevens’s 1695 translation of a work by the Portuguese historian Manuel de Faria e Sousa).

And the dictionary’s first “get stuck” citation is from the transcript of an 1899 case before the New York State Court of Appeals: “If the logs get stuck we keep men there with pevies and work them through.” A “peavey” (the usual spelling) is a hooked lumberjack tool.

Finally, we should mention that the verb “stick” took on a bloody sense in Middle English when it came to mean “to impale (a thing) on (also upon) something pointed.” The OED’s first citation is from an anonymous medieval romance:

“And Þe bor is heued of smot, / And on a tronsoun of is spere / Þat heued a stikede for to bere” (“And he beheaded the boar and stuck the head on the end of his spear so he could carry it”). From The Romance of Sir Beues of Hamtoun (circa 1300).

Two centuries later, the verb came to mean “to pin (a person) to a wall, the ground, etc., by running a weapon through his or her body.” The first OED citation is from the Coverdale Bible of 1535:

“And Saul had a iauelynge [javelin] in his hande, and cast it, and thoughte: I wyll stycke Dauid fast to the wall” (1 Samuel 18:11).


Not ‘al-’ there

Q: Is the “al-” at the beginning of “although” related to the “al-” of “albeit”? And what about the archaic “un-” of “unto” and the gradually fading “un-” of “until”?

A: The “al-” at the beginning of the conjunctions “although” and “albeit” is a shortening of “all” that’s seen in some words that were originally compounds.

“Although” originated in Middle English as a compound of the adverb “all” plus the conjunction “though,” while “albeit” appeared around the same time as a compound of the conjunction “all,” the verb “be,” and the pronoun “it.”

(“All” is now an adjective, a pronoun, a noun, and an adverb, but it was once a conjunction too.)

Interestingly, a precursor of “although” appeared in Old English as two words (eal and þeah) with the order reversed, according to the Oxford English Dictionary. Here’s an example from Beowulf, an epic poem that may have been written as early as 725:

“Ic hine sweorde swebban nelle, aldre beneotan þeah ic eal mæge” (“With my sword I won’t slay him, deprive him of life, although I could”). The phrase “þeah ic eal mæge” is literally “though I all could.” Beowulf is speaking here about the monster Grendel.

As for the Middle English compound, the earliest OED example, which we’ve expanded, is from a homily written in the first half of the 14th century that warns against the temptations of the world, the flesh, and the devil. This passage refers to worldly pleasure:

“sone is sotel as ich ou sai / þis sake alþah [although] hit seme suete / þat i telle a poure play / þat furst is feir & seþþe vnsete / þis wilde wille went awai” (“Soon it’s clear, I say to you, / this sin, although it seems sweet, / I judge a poor pleasure / That first is fair and afterward foul”). From the Harley Lyrics. The homily is known by its first line, “Middelerd for mon wes mad” (Middle Earth was made for men), or more commonly as “The Three Faces of Men.”

The earliest Oxford example for “albeit,” a vintage way of saying “although,” is from an entry, dated sometime before 1325, in The Statutes of the Realm, a collection of Acts of Parliament in England:

“Also þerase man rauisez womman … mit strenkþe, albehit þat heo assente afterward, he sal habbe þilke iugement þat his iseid bifore” (“Also in that case where man ravishes woman … with violence, albeit that she assents afterward, he shall have such judgment as was said of him before”). From A Middle English Statute-Book (2011), edited by Claire Fennell.

If you’d like to read more about “albeit,” we wrote a post about it in 2017.

The shortening of “all” to “al-” appears at the beginning of other words that originated as compounds, including “almighty,” “almost,” “also,” “altogether,” and “always.” And “al-” is seen at the beginning of some English words of Arabic origin, including “alchemy,” “alcohol,” “alcove,” “algebra,” and “almanac.” (In Arabic, al- is a definite article.)

As for “unto,” we’d describe it as old-fashioned or literary rather than archaic. The term still shows up in contemporary writing, as in this recent example:

“The Skiing Aigners Are a Nation Unto Themselves” (the headline on a New York Times article about the Beijing Paralympics, March 13, 2022).

In the earliest OED citation, which we’ve expanded, “unto” is hyphenated: “Cum nu swiþe un-to him / Þat king is of þis kuneriche / Þu fule man. þu wicke swike” (“Come now unto him, / the king of this country, / thou foul man, thou wicked traitor”). From The Lay of Havelok the Dane, an anonymous tale of chivalry written in the late 13th century.

The dictionary says “unto” was modeled after “until,” with “to” replacing “til.” (The preposition “until” had appeared more than a century earlier.)

And that brings us to your comment about “the gradually fading ‘un’ of ‘until.’ ” As it turns out, “til” appeared by itself hundreds of years before “un-” joined it to form the compound “until.”

In northern Old English, til was a preposition, used as we would now use “to.” The OED’s earliest til citation is from an Anglo-Saxon inscription on the Ruthwell Cross. The stone cross is in the Scottish village of Ruthwell, which used to be in the Anglo-Saxon kingdom of Northumbria.

Here’s the inscription written in Old English runes: ᛣᚱᛁᛋᛏ ᚹᚫᛋ ᚩᚾ ᚱᚩᛞᛁᚻᚹᛖᚦᚱᚨ ᚦᛖᚱ ᚠᚢᛋᚨ ᚠᛠᚱᚱᚪᚾ ᛣᚹᚩᛗᚢ ᚨᚦᚦᛁᛚᚨ ᛏᛁᛚ ᚪᚾᚢᛗ. And here it is, transliterated into Old English script: “krist wæs on rodi hweþræ þer fusæ fearran kwomu æþþilæ til anum ic þæt al bih[eald]” (“Christ was on the cross. Yet the eager came there from afar to the noble one that all beheld”).

When the preposition “until” appeared in Middle English, it meant “to” or “unto,” roughly the same sense as the Old English til. The term is derived from the Old Norse und (under) and the Northumbrian til (to). This is Oxford’s earliest citation:

“Forr whatt teȝȝ fellenn sone dun off heoffne. & inn till helle” (“For what they soon fell down off heaven and unto hell”). From the Ormulum (circa 1175), a collection of homilies written by an Augustinian monk who identifies himself as Orm in one place and Ormin in another. The word “until” is written as “inn till,” “unntill” and “inntill” in various parts of the Ormulum.

Around the same time, the words “til” and “till” showed up as conjunctions meaning up to a certain time, action, event, and so on. (The OED includes Old English and early Middle English examples of “til” among its citations for “till” as a preposition and a conjunction.)

The first OED citation for “til” used in the conjunctive sense is from an 1154 Middle English document in The Anglo-Saxon Chronicle:

“dide ælle in prisun til hi iafen up here castles” (“he put them all in prison until they gave up their castles”). The passage refers to King Stephen’s arrest of several bishops, one of them the Lord Chancellor, in 1137.

The dictionary’s first citation for “till” used this way appeared a few decades later: “Fra þatt he wass full litell. Till þatt he waxenn wass” (“From when he was very little till he was grown”). From the Ormulum (c. 1175).

In the early 13th century, “until” took on a similar sense as a conjunction. The first OED example is from the Middle English Harrowing of Hell, an anonymous manuscript that the dictionary dates at sometime before 1250:

“lucifer, here y þe binde, / schaltow neuer heþen winde / vntil it com domesday” (“Lucifer, here I bind thee. Never shalt thou wend hence until Doomsday comes”). Published in The Middle English Harrowing of Hell, and Gospel of Nicodemus (1907), edited by William Henry Hulme.

In the 14th century all three terms—“until,” “till,” and “til”—appeared as prepositions with the same sense (up to a certain time), according to citations in the OED.

The first “until” example is from Cursor Mundi, an anonymous Middle English poem written sometime before 1325: “Fra adam tim until noe” (“From Adam’s time until Noah’s”).

The earliest “till” citation is from a chronicle written around 1330 by the English monk Robert Mannyng: “Fro Eneas till Brutus tyme.” And the first “til” example is from The Last Age of the Church (1380), by John Wycliffe: “Fro Crist til now.”

The terms are prepositions when followed by a noun or noun phrase (“I’ll be busy from noon till three o’clock”), and conjunctions when followed by a clause (“Can you stay until the office closes?”).

The use of “til” as a preposition or a conjunction died out in Middle English, but “till” and “until” have continued to be used that way, and both are now considered standard English.

However, two questionable variants of “till” appeared in the late 18th and early 19th centuries, the apostrophized “ ’till” and “ ’til.” The apostrophe was apparently added in the mistaken belief that “ ’till” and “ ’til” were contractions of “until.” But as we’ve shown, “until” is an expansion of “til.”

The earliest “ ’till” example that we’ve found is from the announcement of a court-decreed sale of 110 acres of land, four slaves, household furniture, and livestock to satisfy a debt of “seventy two pounds, fourteen shillings and [e]leve[n] pence, with interest from the 8th day of May, 1788, ’till paid, together with the costs and expenses of the said decree.” (The Virginia Argus, Richmond, Feb. 14, 1797.)

And this “ ’til” example appeared a dozen years later in an Indiana newspaper: “The Thebans were indebted for their victories over the ’til then unconquered Spartans, as much to some new manoeuvres which had been introduced into their tactics and which they had practiced with unwearied assiduity” (from The Western Sun, Vincennes, Aug. 11, 1810).

As for today, all ten standard dictionaries that we regularly consult include “until” and “till” as standard English terms meaning up to a certain time, event, etc., though some note that “until” is more common at the beginning of a sentence. None include “ ’till,” though a few recognize “ ’til” as an informal variant.

Help support the Grammarphobia Blog with your donation. And check out our books about the English language and more.



On Ralphs and Rafes

Q: I’ve read that the British don’t pronounce the “l” of Ralph because it was originally silent in Old English. Is that true?

A: No, the “l” was pronounced in the Old English predecessors of the name Ralph, and it’s usually pronounced now in both Britain and the US. However, some Ralphs in the UK, like the actor Ralph Fiennes and the composer Ralph Vaughan Williams, have pronounced their name as if it were spelled “Rafe.”

Words were pronounced as they were spelled in Old English, which was spoken from roughly 450 to 1100. There were no silent letters. So the “l” was vocalized in Radulf, Radolf, Raulf and Raulfus—the Old English predecessors of Ralph.

The Oxford Dictionary of Family Names in Britain and Ireland (2016), by Patrick Hanks, Richard Coates, and Peter McClure, says Radulf and Radolf first appeared in the Domesday Book (1086), a survey of taxpayers in England and Wales that was ordered by William I, known as William the Conqueror.

The authors add that the other two names, Raulf and Raulfus clericus (Latin for Raulf the clerk), showed up soon afterward in the 1095 feudal records of the abbey of Bury Saint Edmunds. The four Old English names are all derived from the Old Norse Raðulfr (“counsel wolf” or “wise wolf”).

The dictionary, now considered the definitive authority on British and Irish family names, is a four-volume, 2,992-page work that was 20 years in the making.

Some other family-name references cite Raedwulf (“red wolf” in Old English) as the original Anglo-Saxon ancestor of Ralph. However, the Oxford authors don’t include it and apparently don’t consider Raedwulf, the name of an obscure king of Northumbria, an early form of Ralph.

The ancestors of the name Ralph in Middle English, which was spoken from roughly 1100 to 1500, include Radulfus (1140), Raulf (1296), Rolf (1308), Ralf (1327), and Rolffe (1410), according to the Oxford authors.

The earliest “l”-less version, Radufus, appeared around 1200 in a Danelaw document from Lincolnshire. Danelaw, or Danish law, held sway in parts of northern and eastern England that had been occupied by the Danes and other Norse invaders.

Additional early “l”-less versions cited in the Oxford reference were Raffe and Rauf, which were recorded in 1273 in the Hundred Rolls, a census in England and part of what is now Wales.

The “Rafe” pronunciation of Rauf and Raulf emerged as the articulation of vowels underwent a vast upheaval in late Middle English and early Modern English (from roughly 1350 to 1550). Linguists refer to this as the Great Vowel Shift.

As the Oxford authors explain, “In late Middle English the diphthong -au- was sometimes simplified to long -a-, later pronounced ‘ay’ as in modern English day, which accounts for Rafe. This pronunciation of the personal name Ralph is still occasionally found in modern times.”

The “Ralph” spelling of Raulf and Rauf became common in the 16th century, according to the family-name dictionary. Printing, which had been introduced into England the century before, helped standardize that spelling, but some Ralphs have continued to pronounce their name without the “l,” as “Rafe.”

One of those Rafes, the British philosopher Ralph Wedgwood, says, “My name has always been pronounced in this way by my family and close friends. (I was named after my great-grandfather Ralph L. Wedgwood (1874–1956), who always pronounced it in this way as well.)”

On a page entitled “Ralph” on his website, Wedgwood says he doesn’t object when strangers pronounce his first name the usual way, but he doesn’t feel this pronunciation “is really my name at all.”

“I love my name,” he writes. “To me, it somehow seems to sum up the quirky historical contingency and poetry of language, all in one sonorous monosyllable.” (His full name is Sir Ralph Nicholas Wedgwood, 4th Baronet, though he doesn’t mention the title on his website.)

We’ll end with a passage from Gilbert and Sullivan’s H. M. S. Pinafore, in which Little Buttercup rhymes the first name of Ralph Rackstraw with “waif”:

In time each little waif
Forsook his foster-mother,
The well-born babe was Ralph––
Your captain was the other!



Who invented the question mark?

Q: It’s Lent Madness time again, which reminds me of a saint contest a few years ago that pitted two deacons against each other: Alcuin of York vs. Ephrem of Edessa. I voted for Alcuin because he was identified as the inventor of the question mark (among more spiritual accomplishments). Are you familiar with him?

A: Alcuin, Charlemagne’s éminence grise, was quite a guy—scholar, poet, teacher, and cleric—but he didn’t invent what we now know as the question mark. More to the point, we’ve seen no evidence that he created its medieval ancestor, the punctus interrogativus, which didn’t look or act much like the modern question mark.

The punctus interrogativus, a squiggle rising diagonally from left to right above a point, appeared in the late eighth century in Carolingian minuscule, the Latin script used when Charlemagne (747-814) ruled much of Europe. Alcuin (735-804) oversaw Charlemagne’s palace school and scriptorium at Aachen in Francia from 782 to 793.

The evidence we’ve found indicates that Godescalc, a poet, scribe, and illuminator, was the first person to use the punctus interrogativus at the scriptorium, or copying room, in Aachen. Godescalc used it in producing an illuminated manuscript commissioned by Charlemagne in 781, the year before Alcuin arrived in Aachen.

The usage appeared in the Godescalc Gospel Lectionary (781-83), the first known manuscript produced at the scriptorium at Aachen. The manuscript is named after Godescalc because he refers to himself as the author in a poem at the end.

We found several examples of the punctus interrogativus in the first dozen or so pages of the manuscript at the Bibliothèque Nationale de France (Ms. NAL 1203), suggesting that Godescalc began using them in 781—before Alcuin’s arrival. In this example from folio 6v, the punctus interrogativus can be seen on the third line.

Sed sic eum volo manere donec venia[m] quid ad te?

(But so I want him to stay until I come, what is it to you?
Gospel of John, 21:22.)

(We’ll have more to say later about Godescalc’s symbolic use of gold letters on purple parchment.)

We’ve seen reports of possible earlier sightings of the punctus interrogativus in manuscripts from the scriptorium at Corbie Abbey to the southwest of Aachen, but we haven’t found any Corbie examples produced before the Godescalc Gospel Lectionary.

As the paleographer Malcolm B. Parkes explains in Pause and Effect: Punctuation in the West (1993), the punctus interrogativus “seems to have spread rapidly from the court of Charlemagne to other centres,” reaching Corbie “at the end of the eighth or beginning of the ninth century.”

The punctus interrogativus was originally used in liturgical writing to tell reciters and singers when to raise their voices inquiringly and pause at the end of a question, according to paleographers, specialists in ancient writing.

Parkes says the punctus interrogativus was one of several symbols developed in the second half of the eighth century to fill the “need for adequate punctuation in liturgical texts.” Such texts were designed to be spoken or sung. A lectionary, like Godescalc’s, is a collection of liturgical readings.

The new system of symbols, called positurae, used a punctus versus to signal a pause at the end of a sententia, a punctus elevatus to signal an interior pause in a sententia, and a punctus interrogativus to signal a rising vocal inflection and pause at the end of an interrogatio.

(The punctus versus looked somewhat like a modern semicolon, while the punctus elevatus looked a bit like an upside-down semicolon.)

“In western manuscripts the positurae fulfilled the need for more accurate indication of the nature of the pauses required to elucidate the sense of the text when it was intoned or sung in the liturgy,” Parkes says in Pause and Effect.

The paleographer Albert Derolez has noted that the punctus interrogativus originated “in a neume or sign of musical notation, which indicated that the voice had to rise at the end of the sentence” (The Palaeography of Gothic Manuscript Books, 2003).

And the musical historian Leo Treitler points out in With Voice and Pen (2007) that the upward stroke of the punctus interrogativus corresponds “to the inflection of the voice in questions.”

Although Alcuin wrote about the need for proper punctuation in copying ancient manuscripts, we haven’t found any writing of his that either mentions or uses the punctus interrogativus. He doesn’t use it, for example, in the interrogative passages we’ve read from Quaestiones in Genesim, a series of questions and answers about Genesis.

Alcuin favored a two-fold system of punctuation with distinctiones to mark pauses at the end of sententiis and subdistinctiones to mark interior pauses. In a letter written to Charlemagne in 799, Alcuin complained that scribes hadn’t been using them:

“punctorum vero distinctiones vel subdistinctiones licet ornatum faciant pulcherrimum in sententiis, tamen usus illorum propter rusticitatem paene recessit a scriptoribus” (“points for distinctions and subdistinctions make the most beautiful sentences, but their use has almost disappeared because of the rusticity of scribes”).

In fact, those “rustic” scribes began adding punctuation marks on their own initiative to Alcuin’s two-part system. Godescalc, as we’ve said, was apparently the first person at the Aachen scriptorium to use the punctus interrogativus.

In a poem at the end of the Godescalc Gospel Lectionary, he says Charlemagne commissioned the manuscript in the fall of 781.

Septenis cum aperit felix bis fascibus annum
Hoc opus eximium franchorum scribere Carlus
Rex pius egregia hildgarda cum conjuge iussit.

(As he happily opened the 14th year of his reign [Oct. 9, 781], King Charles of the Franks, with his wife Hildegard, commissioned the writing of this exceptional work.)

And in this excerpt from the poem, Godescalc mentions himself as the creator of the illuminated manuscript and suggests that he began working on it six months before Charlemagne commissioned it. Godescalc’s name appears at the end of the second line.

Ultimus hoc famulus studuit complere godescalc
Tempore vernali transcensis alpibus ipse.
Urbem romuleam voluit quo visere consul.

(The humblest servant Godescalc was diligently at work on this opus in the springtime, when the Consul himself [Charlemagne], having crossed the Alps, wished to visit the city of Romulus. [Charlemagne visited Rome in April 781.])

In 2011, a Cambridge manuscript specialist, James F. Coakley, reported finding ancient marks of interrogation in fifth-century biblical manuscripts written in Syriac, a Middle Eastern language. But paleographers believe that the punctus interrogativus, not the Syriac symbol, is the ancestor of our question mark.

The modern question mark, a grammatical device indicating the end of an interrogative sentence, evolved over hundreds of years from the Carolingian punctuation mark, which originated, as we’ve said, as a rhetorical device in liturgical writing to signal a rising vocal inflection and pause.

The earliest example we’ve seen for a punctuation mark that looks and acts like the modern question mark is from a book printed in Latin in the late 15th century by Aldus Manutius, an Italian scholar, educator, and publisher.

This image is from Pietro Bembo’s De Ætna, a Latin account of his ascent of Mount Etna with his father, Bernardo. The work, written as a dialogue between Bembus Pater (B. P.) and Bembus Filius (B. F.), was published in February 1496 by Manutius’s Aldine Press. The question mark ends the last sentence.

[B. F.] Ego uero existimabam pater errauisse me sic etiam nimis diu. B. P. Non est ita: sed, ne nunc tandem erremus; perge de ignibus, ut proposuisti: uerum autem, quid tu haeres?

([B. F.] Indeed, father, I was thinking that I had strayed like this too long. B. P. It is not so: but let us not stray from the point; go on [continue telling me] about fire, as you intended, but what is keeping you?)

Getting back to Godescalc, we’ll end with the opening lines of his poem, which describe the symbolism of the colors he uses in producing the manuscript.

Aurea purpureis pinguntur grammata scedis
Regna poli roseo pate sanguine facta tonantis.
Fulgida stelligeri promunt et gaudia caeli
Eloquiumque dei digno fulgore choruscans.
Splendida perpetuae promittit praemia vitae.

(Gold letters painted on purple pages reveal in rose-red blood the celestial kingdom and the joys of heaven. And the eloquence of God, shining brightly, promises the splendid reward of eternal life.)



On mom, pop, and dad

Q: I can see how “mother” gave birth to “mom,” “mommy,” and so on, but how did we get “dad,” “daddy,” “pop,” etc., from “father”?

A: The various “mom,” “pop,” and “dad” words are all probably derived from the “ma,” “pa,” and “da” sounds that babbling infants utter and that parents mistakenly think are references to mother and father. The parents then respond with baby talk that gives reduplicative, or doubled, sounds like “mama,” “papa,” and “dada” a maternal or paternal sense.

The linguist Roman Jakobson has suggested that this process begins while babies are nursing: “Often the sucking activities of a child are accompanied by a slight nasal murmur, the only phonation which can be produced when the lips are pressed to mother’s breast or to the feeding bottle and the mouth full.”

After nursing, he says, “the nasal murmur may be supplied with an oral, particularly labial release; it may also obtain an optional vocalic support.” (The “nasal murmur” is an m-m-m sound; the “labial release” and “vocalic support” produce an a-a-h sound.)

Jakobson’s comments are from “Why ‘Mama’ and ‘Papa,’ ” a paper presented on May 26, 1959, at a linguistics seminar at Stanford University, and published in Perspectives in Psychological Theory (1960), edited by Bernard Kaplan and Seymour Wapner.

Since the mother is the source of a baby’s nourishment, Jakobson writes, “most of the infant’s longings are addressed to her, and children, being prompted and instigated by the extant nursery words, gradually turn the nursery interjection [“mama”] into a parental term, and adapt its expressive make-up to their regular phonemic pattern.” In other words, “mama” comes to mean “mother” to the child.

Jakobson’s baby-talk approach, which is generally accepted by linguists, focuses on the physiological ability of infants to make various vowel and consonant sounds:

“Nursery coinages are accepted for a wider circulation in the child-adult verbal intercourse only if they meet the infant’s linguistic requirements” and “reflect the salient features and tendencies of children’s speech development.”

As it turns out, “a” is the easiest vowel for a babbling baby to produce. All you have to do is open your mouth and make a noise. Two of the easiest consonant sounds are “m” and “p.” All you have to do is put your lips together—no tongue or teeth required. That’s why they’re called labials.

The letter “d” is a bit harder since you have to put the tip of your tongue against the upper gum or upper teeth (the upper teeth don’t arrive until around 8 to 10 months of age).

The “f” and “th” sounds in “father” and “mother” are much harder to make, and even a toddler may have trouble with them. (In Old English, the “th” of “father” and “mother” was a “d,” which may have made things a little easier for Anglo-Saxon children.)

Of the various parental nursery terms in English, babbling infants generally say “mama” first, followed by “papa,” and then “dada,” according to linguists. The reduplicatives “baba,” “nana,” and “tata” (plus “mama” and “papa”) and their variants are infantile parental terms in other languages.

English-speaking parents, as we’ve said, hear familiar speech sounds and assume that “mama,” “papa,” and “dada” are attempts to say “mother” and “father.” By repetition, pointing, smiling, head-shaking, and so on, the parents instill that belief in their babies and it becomes mutually reinforced.

In addition to Jakobson’s paper, we’ve relied on related comments by the linguists Larry Trask, John McWhorter, William Poser, Nancy J. Frishberg, and Robert A. Papen.

The Oxford English Dictionary says “mom” is a shortening of “momma,” a variant of “mama,” which is “probably ultimately [from] a (reduplicated) syllable /ma/ which is characteristic of early infantile vocalization and regarded by some as a development of the sound sometimes made by a baby when breastfeeding.”

Similarly, the OED says “dad” is probably derived from “an imitative or expressive formation” made up of the reduplicated syllable “da” and “characteristic of early infantile vocalization.”

And “papa” is also probably derived from a reduplicated syllable characteristic of early infantile vocalization. Oxford notes that πάππας (“pappas”) was the way a young child in ancient Greece pronounced πατήρ (“patir” or father).

Here, according to Oxford citations, are the dates when various parental terms were first recorded in English writing: “mama” (1555), “momma” (1803), “mom” (1846), and “mommy” (1846); “papa” (1681), “pa” (1773), “pop” (1840); “daddy” (1523), “dad” (1533), “dada” (1672), and “da” (1851). Linguists note that similar nursery words in other languages aren’t etymologically related, but the result of early infant-adult communication.
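Laid out chronologically, those dates tell the story at a glance. Here’s a minimal sketch that sorts the terms by their first-recorded year; the dates are the OED citations quoted above, not the result of a live lookup:

```python
# First-recorded dates for the parental nursery terms, per the
# OED citations quoted in this post.
first_recorded = {
    "mama": 1555, "momma": 1803, "mom": 1846, "mommy": 1846,
    "papa": 1681, "pa": 1773, "pop": 1840,
    "daddy": 1523, "dad": 1533, "dada": 1672, "da": 1851,
}

# Sort the terms by year of first appearance in English writing.
timeline = sorted(first_recorded.items(), key=lambda item: item[1])
for term, year in timeline:
    print(f"{year}: {term}")
```

Run this and the “dad” words come first (1523, 1533), with “da” bringing up the rear in 1851.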

Of course, unrecorded examples of “mama,” “papa,” and “dada” undoubtedly occurred long before those dates. In fact, one scholar has suggested that a version of “mama” may have been one of the first words uttered by humans or their hominin ancestors:

“It does not seem unreasonable to assume that the equivalent of the English word ‘Mama’ may well have been one of the first conventional words developed by early hominins.” From a 2004 article by Dean Falk, a neuroanthropologist who specializes in the evolution of the brain and cognition in higher primates. (“Prelinguistic Evolution in Early Hominins: Whence Motherese?” in the journal Behavioral and Brain Sciences.)



Congregate or congregant care?

Q: Is health-care housing where lots of people live in close proximity “congregant” or “congregate” living? I see both terms used interchangeably, even within the same publication.

A: “Congregate” is overwhelmingly more popular than “congregant” as an adjective to describe group services or facilities for people, especially the elderly, who need supportive care. And it’s the only one of the two usages included in the ten standard dictionaries we regularly consult.

American Heritage, for example, defines “congregate” as a verb meaning “to bring or come together in a group,” and as an adjective meaning “involving a group: congregate living facilities for senior citizens.” It defines “congregant” solely as a noun for “one who congregates, especially a member of a group of people gathered for religious worship.”

Collins, Dictionary.com, Merriam-Webster, Merriam-Webster Unabridged, and Webster’s New World have similar definitions. Lexico has similar definitions in its American English version but doesn’t include “congregate” as an adjective in its British version. Cambridge, Longman, and Macmillan don’t have either the noun “congregant” or the adjective “congregate.”

In the News on the Web corpus, a database from articles in newspapers and magazines on the Internet, the “congregate” usage is significantly more popular than the one with “congregant.”

Here are the results of some recent searches: “congregate living,” 820 examples; “congregant living,” 35; “congregate care,” 579; “congregant care,” 18; “congregate housing,” 95; “congregant housing,” 0.
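A quick back-of-the-envelope calculation makes the imbalance concrete. This minimal sketch uses only the hit counts quoted above (not a live corpus query):

```python
# NOW corpus hit counts quoted above: (congregate, congregant)
# for each noun the adjective modifies.
counts = {
    "living": (820, 35),
    "care": (579, 18),
    "housing": (95, 0),
}

# Compute what share of each pair goes to "congregate."
for noun, (congregate, congregant) in counts.items():
    total = congregate + congregant
    share = 100 * congregate / total
    print(f"congregate {noun}: {share:.0f}% of {total} hits")
```

By these counts, “congregate” accounts for roughly 96 to 100 percent of the usage in each pair.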

In searches with Google’s Ngram viewer, which compares words and phrases in digitized books, “congregant living” barely registered, while “congregant care” and “congregant housing” didn’t show up at all.

As for the etymology, both “congregate” and “congregant” are derived from congregare, classical Latin for to collect together into a flock or company, according to the Oxford English Dictionary.

“Congregate,” the older of the two English words, showed up around 1400 as a verb meaning to collect or gather things together. In the 1500s, it took on the modern sense of people gathering together in a group.

The adjective, which is derived from congregatus, past participle of congregare, appeared soon after the verb in this OED citation: “These men somme tyme congregate schalle goe furthe” (from an early 15th-century translation of Ranulf Higden’s Polychronicon, a 14th-century Latin work of history and theology).

The latecomer, “congregant,” is derived from congregantem, present participle of congregare. It showed up in the late 19th century as a noun that Oxford defines as “one of those who congregate anywhere; a member of a congregation; esp. a member of a Jewish congregation.”

We’ve expanded the dictionary’s first example: “The Bevis Marks synagogue, the only building of genuine historical interest in England which the Jews can boast, is at the present moment threatened with destruction at the hands of a portion of its own governing body, to the dismay of the majority of its congregants and of the community in general” (The Pall Mall Gazette, London, March 24, 1886).

The OED, an etymological dictionary based on historical evidence, doesn’t have an entry for “congregant” used as an adjective. As far as we can tell from a cursory search, the usage showed up in the 20th century, perhaps originally as an eggcorn, a word or phrase substitution like “egg corn” for “acorn.”

Here’s an example from a few decades ago: “Joan is a young woman who does considerable work with older people and serves on the board of a congregant housing facility for the elderly” (from Ministry of the Laity, 1986, by James Desmond Anderson and Ezra Earl Jones).



From A to &, et cetera

Q: What is the story of “&” and why is it replacing “and”?

A: The “&” character, or ampersand, is seen a lot these days in texting, email, and online writing, but the use of a special character for “and” isn’t a new phenomenon. English writers have been doing this since Anglo-Saxon days, a usage borrowed from the ancient Romans.

In his book Shady Characters: The Secret Life of Punctuation, Symbols & Other Typographical Marks (2013), Keith Houston writes that the Romans had two special characters for representing et, the Latin word for “and.” They used either ⁊, a symbol in a shorthand system known as notae Tironianae, or the ancestor of the ampersand, a symbol combining the e and t of et.

The Tironian system is said to have been developed by Tiro, a slave and secretary of the Roman statesman and scholar Cicero in the first century BC. After being freed, Tiro adopted Cicero’s praenomen and nomen, and called himself Marcus Tullius Tiro.

Houston says the earliest known recorded version of the ampersand was an et ligature, or compound character, scrawled on a wall in Pompeii by an unknown graffiti artist and preserved under volcanic ash from the eruption of Mount Vesuvius in AD 79.

He cites the research of Jan Tschichold, author of Formenwandlungen der &-Zeichen (1953), which was translated from German to English in 1957 as The Ampersand: Its Origin and Development. An illustration that Houston based on Tschichold’s work shows the evolution of the ampersand over the years.

(Image #1 is from Pompeii, while the modern-looking #13 is from the Merovingian Latin of the eighth century.)

In Shady Characters, Houston describes how the ampersand competed with the Tironian ⁊ in the Middle Ages. “From its ignoble beginnings a century after Tiro’s scholarly et, the ampersand assumed its now-familiar shape with remarkable speed even as its rival remained immutable,” he writes.

“Whatever its origins, the scrappy ampersand would go on to usurp the Tironian et in a quite definitive manner,” he says, adding, “Tiro’s et showed the way but the ampersand was the real destination.”

Today, Houston writes, the Tironian character “survives in the wild only in Irish Gaelic, where it serves as an ‘and’ sign on old mailboxes and modern road signs,” while the ampersand “ultimately earned a permanent place in type cases and on keyboards.”

Although the ampersand was common in medieval Latin manuscripts, including works written in Latin by Anglo-Saxon scholars, it took quite a while for the character to replace the Tironian et in English. In most of the Old English and Middle English manuscripts we’ve examined, the Tironian symbol is the usual short form for the various early versions of “and” (end, ond, ænd, ande, and so on).

A good example is the original manuscript of Beowulf, an epic poem that may have been written as early as 725. The anonymous author uses ond for “and” only a few times, but the Tironian symbol appears scores of times. However, modern transcriptions of the Old English in Beowulf often replace the “⁊” with ond or “&.” When the Tironian character does appear, it’s often written as the numeral “7.”

Here are the last few lines of the poem with the Tironian characters (or notes) intact: “cwædon þæt he wære wyruldcyning, / manna mildust ⁊ monðwærust / leodum liðost, ⁊ lofgeornost” (“Of all the world’s kings, they said, / he was the kindest and the gentlest of men, / the most gracious to his people and the most worthy of fame”).

Although you can find dozens of ampersands in transcriptions of Old English and Middle English manuscripts, an analysis of the original documents shows that most of the “&” characters were originally Tironian notes.

Dictionaries routinely transcribe the Tironian note as an ampersand in their citations from Old and Middle English. As the Oxford English Dictionary, the most influential and comprehensive etymological dictionary, says in an explanatory note, “In this dictionary the Old and Middle English Tironian note is usually printed as &.”
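Incidentally, both characters live on in modern character encodings: Unicode gives the Tironian sign its own code point, U+204A, distinct from the ampersand. A quick check with Python’s standard library:

```python
import unicodedata

# The Tironian et and the ampersand are separate Unicode characters.
tironian = "\u204a"   # ⁊
ampersand = "&"

print(unicodedata.name(tironian))   # TIRONIAN SIGN ET
print(unicodedata.name(ampersand))  # AMPERSAND
print(hex(ord(tironian)))           # 0x204a
```

This is why a faithful digital transcription can preserve the distinction that, as the OED notes, print editions often flatten into “&.”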

However, the ampersand does show up at times in early English. For example, it’s included in an Anglo-Saxon alphabet dating from the late 10th or early 11th century. A scribe added the alphabet to an early 9th-century copy of a Latin letter by the scholar, cleric, and poet Alcuin of York (British Library, Harley 208, fol. 87v).


The alphabet is in the upper margin of the image. It includes the 23 letters of the classical Latin alphabet (with a backward “b”) followed by the ampersand, the Tironian et, and four Anglo-Saxon runes: the wynn (ᚹ), the thorn (þ), the aesc (ᚫ), and an odd-looking eth (ð) that resembles a “y.” At the end of the alphabet, the scribe added the first words of the Lord’s Prayer in Latin (pater noster). The British Library’s digital viewer lets readers examine the image in more detail.

At the end of Harley 208, which includes copies of 91 letters by Alcuin and one by Charlemagne, the scribe wrote a line in Old English, “hwæt ic eall feala ealde sæge” (“Listen, for I have heard many old sagas”), which is reminiscent of line 869 in Beowulf: “eal fela eald gesegena” (“all the many old sagas”). Is the scribe suggesting that the letters are ancient tales?

A similar alphabet appears in Byrhtferð’s Enchiridion, or handbook (1011), a wide-ranging compilation of information on such subjects as astronomy, mathematics, logic, grammar, and rhetoric. However, the alphabet in the Enchiridion (Ashmole Ms. 328, Bodleian Library, Oxford) differs somewhat from the one above—the æsc rune is replaced by an ae ligature at the end.

We’ve seen several other Old English alphabets arranged in similar order. In most of them, an ampersand follows the letter “z.”  Fred C. Robinson, a Yale philologist and Old English scholar, has said the “earliest of the abecedaria is probably” the one in Harley 208 (“Syntactical Glosses in Latin Manuscripts of Anglo-Saxon Provenance,” published in Speculum, A Journal of Medieval Studies, July 1973). An “abecedarium” (plural “abecedaria”) is an alphabet written in order.

We haven’t seen any examples of the ampersand used in Old English other than in alphabets. The earliest examples we’ve found for the ampersand in actual text are in Middle English. Here’s an example from The Knight’s Tale of the Hengwrt Chaucer, circa 1400, one of the earliest manuscripts of The Canterbury Tales:

The middle line in the image reads: “hir mercy & hir grace” (“her mercy & her grace”). Here’s an expanded version of the passage: “and but i have hir mercy & hir grace, / that i may seen hire atte leeste weye / i nam but deed; ther nis namoore to seye” (“And unless I have her mercy & her grace, / So I can at least see her some way, / I am as good as dead; there is no more to say”).

Middle English writers also used the ampersand in the term “&c,” short for “et cetera.” In a 1418 will, for example, “&c” was used to avoid repeating a name: “quirtayns [curtains] of worsted … in warde of Anneys Elyngton, and … a gowne of grene frese, in ward, &c” (from The Fifty Earliest English Wills in the Court of Probate, edited by Frederick James Furnivall, 1882).

Although literary writers didn’t ordinarily use a symbol for “and” in early Modern English, the ampersand showed up every once in a while. For example, the character slipped into this passage from The Shepheardes Calender (1579), Edmund Spenser’s first major poem: “The blossome, which my braunch of youth did beare, / With breathed sighes is blowne away, & blasted.”

And in the 1603 First Quarto of Hamlet, Shakespeare has Hamlet telling Horatio, “O the King doth wake to night, & takes his rouse [a full cup of wine, beer, etc.].” But “and” replaces the ampersand, and the “O” disappears, in the Second Quarto (1604) and the First Folio (1623).

As for today, we see nothing wrong with using an ampersand in casual writing (we often use “Pat & Stewart” to sign our emails), but we’d recommend “and” for formal writing and noteworthy informal writing.

Nevertheless, formal use of the ampersand is common today in company names, such as AT&T, Marks & Spencer, and Ben & Jerry’s. And some authors, notably H. W. Fowler in A Dictionary of Modern English Usage (1926), have used it regularly in formal writing.

Finally, we should mention that the term “ampersand” is relatively new. Although the “&” character dates back to classical times, the noun “ampersand” didn’t show up in writing until the 18th century.

The earliest OED example for “ampersand” with its modern spelling is from a travel book written in the late 18th century. Here’s an expanded version:

“At length, having tried all the historians from great A, to ampersand, he perceives there is no escaping from the puzzle, but by selecting his own facts, forming his own conclusions, and putting a little trust in his own reason and judgment” (from Gleanings Through Wales, Holland and Westphalia, 1795, by S. J. Pratt).

The expression “from A to ampersand” (meaning from the beginning to the end, or in every particular) is an old way of saying “from A to Z.” It was especially popular in the 19th century.

As we’ve noted, the ampersand followed the letter “z” in some old abecedaria, a practice going back to Anglo-Saxon days. And when children were taught that alphabet in the late Middle Ages, they would recite the letters from “A” to “&.”

In Promptorium Parvulorum (“Storehouse for Children”), a Middle English-to-Latin dictionary written around 1440, English letters that are words by themselves, including the ampersand, are treated specially in reciting the alphabet, according to The Merriam-Webster New Book of Word Histories (1991), edited by Frederick C. Mish.

As Mish explains, when a single letter formed a word or syllable—like “I” (the personal pronoun) or the first “i” in “iris”—it was recited as “I per se, I.”  In other words, “I by itself, I.”

“The per se spellings were used especially for the letters that were themselves words,” Mish writes. “Because the alphabet was augmented by the sign &, which followed z, there were four of these: A per se, A; I per se, I; O per se, O, and & per se, and.”

Since the “&” character was spoken as “and,” children reciting the alphabet would refer to it as “and per se, and.” That expression, Mish says, became “in slightly altered and contracted form, the standard name for the character &.” In other words, “ampersand” originated as a corruption of “and per se, and.”

The two earliest citations for “ampersand” in the OED spell it “ampuse and” (1777) and “appersiand” (1785). Various other spellings continued to appear in the 1800s—“ampus-and” (1859), “Amperzand” (1869)—before the modern version became established.

We’ll end with “The Ampersand Sonnet,” the calligrapher A. J. Fairbank’s take on Shakespeare’s Sonnet 66. In this version of the sonnet, each “and” in Shakespeare’s original is replaced by a different style of ampersand:

Help support the Grammarphobia Blog with your donation. And check out our books about the English language and more.

Subscribe to the blog by email

Enter your email address to subscribe to the blog by email. If you’re a subscriber and not getting posts, please subscribe again.


Try and stop us!

Q: How old is the use of “and” in place of “to” to join infinitives?  For example, “He wants to try and kill her” instead of “He wants to try to kill her.” I heard the usage on British TV, so it’s not just American.

A: The use of “and” to link two infinitives is very old, dating back to Anglo-Saxon days. And as you’ve discovered, it’s not just an American usage. In fact, the use of “try and” plus a bare, or “to”-less, infinitive (as in “He wants to try and kill her”) is more common in the UK than in the US.

As the Oxford English Dictionary explains, “and” is being used here for “connecting two verbs, the second of which is logically dependent on the first, esp. where the first verb is come, go, send, or try.” With the exception of “come” and “go,” the dictionary adds, “the verbs in this construction are normally only in the infinitive or imperative.”

In other words, the “come” and “go” versions of the usage can be inflected—have different verb forms, such as the future (“He’ll come and do it”) or the past (“He went and did it”). But the “try” version is normally restricted to the infinitive (“He intends to try and stop us”) or the imperative (“Try and stop us!”).

The earliest example of the construction in the OED is from Matthew 8:21 in the West Saxon Gospels, dating from the late 900s: “Drihten, alyfe me ærest to farenne & bebyrigean minne fæder” (“Lord, let me first go and bury my father”). The verbs “go” (farenne) and “bury” (bebyrigean) here are infinitives.

The dictionary’s first citation for a “try” version of the construction is from the records of the Burgh of Edinburgh. A 1599 entry reports that the council “ordanis the thesaure [orders the treasurer] to trye and speik with Jhonn Kyle.”

However, we’ve found an earlier “try” example in the Early English Books Online database: “they are also profitable for the faithfull / for they trye and purefye the faith of goddes [God’s] electe.” From A Disputacio[n] of Purgatorye, a 1531 religious treatise by John Frith, a Protestant writer burned at the stake for heresy.

In the 19th century, some language commentators began criticizing the use of “try and” with a bare infinitive. For example, George Washington Moon chided a fellow British language authority, Henry Alford, Dean of Canterbury, for using “try and” in a magazine article based on Alford’s A Plea for the Queen’s English: Stray Notes on Speaking and Spelling (1863):

“Near the end of a paragraph in the first Essay occurs the following sentence, which is omitted in the book:— ‘And I really don’t wish to be dull; so please, dear reader, to try and not think me so.’ Try and think, indeed! Try to think, we can understand. Fancy saying ‘the dear reader tries and thinks me so’; for, mind, a conjunction is used only to connect words, and can govern no case at all” (The Dean’s English: A Criticism on the Dean of Canterbury’s Essays on the Queen’s English, 1865).

Moon was apparently bugged by Alford’s use of “try and think” because the phrase couldn’t be inflected. But as we’ve shown, English writers had been using “try and” phrases this way for hundreds of years before any commentator raised an objection.

Although some later language authorities have echoed Moon’s objection to the usage, others have said it’s acceptable, especially in informal English.

As Henry W. Fowler says in A Dictionary of Modern English Usage (1926), “It is an idiom that should not be discountenanced, but used when it comes natural.” It meets “the proper standard of literary dignity,” he writes, though it is “specially appropriate to actual speech.”

Jeremy Butterfield, editor of the fourth edition of Fowler’s usage guide, expands on those comments from the first edition, adding that the “choice between try to and try and is largely a matter of spontaneity, rhythm, and emphasis, especially in spoken forms.”

In Garner’s Modern English Usage (4th ed.), Bryan A. Garner describes “try and” as a “casualism,” which he defines as “the least formal type of standard English.” And Pat, in her grammar and usage guide Woe Is I (4th ed.), recommends “try to” for formal occasions, but says “try and, which has been around for hundreds of years, is acceptable in casual writing and in conversation.”

Merriam-Webster’s Dictionary of English Usage, which has dozens of examples of “try and” and similar constructions used by respected writers, says, “About the only thing that can be held against any of these combinations is that they seem to be more typical of speech than of high-toned writing—and that is hardly a sin.” Here are a few of the usage guide’s “try and” citations:

“Now I will try and write of something else” (Jane Austen, letter, Jan. 29, 1813).

“ ‘Stand aside, my dear,’ replied Squeers. ‘We’ll try and find out’ ” (Charles Dickens, Nicholas Nickleby, 1839).

“The unfortunate creature has a child still every year, and her constant hypocrisy is to try and make her girls believe that their father is a respectable man” (William Makepeace Thackeray, The Book of Snobs, 1846).

“to try and soften his father’s anger” (George Eliot, Silas Marner, 1861).

“We are getting rather mixed. The situation entangled. Let’s try and comb it out” (W. S. Gilbert, The Gondoliers, 1889).

“If gentlemen sold their things, he was to try and get them to sell to him” (Samuel Butler, The Way of All Flesh, 1903).

Some linguists and grammarians describe the “and” here as a “quasi-auxiliary,” and its use in “try and” constructions as “pseudo-coordination,” since “and” is functioning grammatically as the infinitive marker “to,” not as a conjunction that coordinates (that is, joins) words, phrases, or clauses.

Getting back to your question about the usage in American versus British English, Lynne Murphy, an American linguist at the University of Sussex in Brighton, England, discusses this in The Prodigal Tongue (2018). She cites two studies that indicate “try and” is more popular in the UK than in the US.

One study found that the British “used try and (where they could have said try to) over 71% of the time in speech and 24% of the time in writing,” Murphy writes. “The American figures were 24% for speech and 5% in writing.” (The study she cites is “Try to or Try and? Verb Complementation in British and American English,” by Charlotte Hommerberg and Gunnel Tottie, ICAME Journal, 2007.)

Another study, Murphy says, shows that older, educated people in the UK “prefer try to a bit more, but those under forty-five say try and 85% of the time, regardless of their level of education.” (The study here is “Why Does Canadian English Use Try to but British English Use Try and?” by Marisa Brook and Sali A. Tagliamonte, American Speech, 2016.)

In a Dec. 14, 2016, post on Murphy’s website, Separated by a Common Language, she has more details about the studies. She also notes a theory that some people may choose “try and get to know” over “try to get to know” because of a feeling that the repetition of “to” in the second example sounds awkward. Linguists refer to the avoidance of repetition as horror aequi, Latin for “fear of the same.”

Some grammarians and linguists have suggested that there may be a difference in meaning between the “try and” and “try to” constructions. But the linguist Åge Lind analyzed 50 English novels written from 1960 to 1970 and concluded: “If a subtle semantic distinction exists it does not seem to be observed” (from “The Variant Forms Try and/Try to,” English Studies, December 1983).


Something wicked this way comes

Q: We were reading Shakespeare, and wondered about the pronunciation of the final “-ed” in words like “beloved” and “blessed.” I just assumed that people in Elizabethan England spoke that way, but my partner thought it was merely a poetical device to fill out a metrical line. What do you say?

A: When the “-ed” suffix first appeared in Old English writing, according to scholars, it sounded much like the modern pronunciation of the last syllable in adjectives like “crooked,” “dogged,” and “wicked.”

In Old English, spoken from the mid-5th to the late 11th centuries, the “-ed” suffix was one of several endings used to form the past participle of verbs and to form adjectives from nouns. For example, the past participle of the verb hieran (to hear) was gehiered (heard). And the adjectival form of the noun hring (ring) was hringed.

The “-ed” syllable was still usually pronounced in Middle English, which was spoken from around 1150 to 1450, but writers occasionally dropped the “e” or replaced it with an apostrophe, an indication that the syllable was sometimes lost in speech. The Old English gehiered (heard), for instance, was variously hered, herrd, herd, etc., in Middle English writing.

It’s clear from the meter that Chaucer intended the “-ed” of “perced” (pierced) to be pronounced at the beginning of The Canterbury Tales (1387): “Whan that Aprill with his shoures soote [its showers sweet] / The droghte of March hath perced to the roote.”

As far as we can tell, the word “pierced” was two syllables in common speech as well as poetry when Chaucer was writing, but a one-syllable version showed up in writing (and probably speaking) by the mid-1500s.

Here’s an example, with the past participle written as “perst,” from “The Lover Describeth His Being Stricken With Sight of His Love,” a sonnet by Thomas Wyatt:

“The liuely sparkes, that issue from those eyes, / Against the which there vaileth [avails] no defence, / Haue perst my hart, and done it none offence” (Songes and Sonettes, 1557, a collection of works by Wyatt, Henry Howard, Nicholas Grimald, and various anonymous poets).

In the early Modern English period, when Shakespeare was writing, the “-ed” ending was often contracted in writing to “-d” or “-t,” indicating that this was the usual pronunciation. Here are a few examples from Shakespeare:

“O, never will I trust to speeches penn’d, / Nor to the motion of a schoolboy’s tongue” (Love’s Labour’s Lost, believed written in the mid-1590s).

“I remember the kissing of her batlet [butter paddle] and the cow’s dugs [udders] that her pretty chopt hands had milked” (As You Like It, circa 1599). The adjective “chopped” here meant cracked or chapped.

“This would have seem’d a period / To such as love not sorrow” (King Lear, early 1600s).

However, writers in the early Modern English period tended to keep the full “-ed” ending in many words where the syllable is still heard now, as in these examples from Shakespeare:

“To cipher what is writ in learned books, / Will quote my loathsome trespass in my looks” (The Rape of Lucrece, 1594).

“And the stony-hearted villains know it well enough” (King Henry IV, Part I, late 1500s).

“O heaven, the vanity of wretched fools!” (Measure for Measure, early 1600s).

“Something wicked this way comes” (Macbeth, early 1600s).

Although people began dropping the “e” of “-ed” in writing and apparently pronunciation in early Modern English, the full syllable was still being written and pronounced in the 18th and 19th centuries in some words where it’s now lost.

In A Critical Pronouncing Dictionary and Expositor of the English Language (1791), John Walker says the adjectives “crabbed,” “forked,” “flagged,” “flubbed,” “hooked,” “scabbed,” “snagged,” “tusked,” and others are “pronounced in two syllables.” An 1859 update of the dictionary, edited by Townsend Young, adds “hawked,” “scrubbed,” “tressed,” and a few more.

However, writers continued to drop the final syllable of “-ed” words despite the objections of lexicographers and pronunciation guides. In the early 18th century, one of the sticklers, Jonathan Swift, condemned the loss of the final syllable in verbs written as “drudg’d,” “disturb’d,” “rebuk’d,” and “a thousand others, everywhere to be met with in Prose as well as Verse.”

In a 1712 letter to Robert, Earl of Oxford, Swift argued that “by leaving out a Vowel to save a Syllable, we form so jarring a Sound, and so difficult to utter, that I have often wondred how it could ever obtain.” Yes, “wondred” used to be a past tense of the verb “wonder,” which was originally wondrian in Old English and wondri or woundre in Middle English. Thus language changes.

Today, the “-ed” suffix is used in writing for the past tense and past participle of regular (or weak) verbs, for participial adjectives, and for adjectives derived from nouns. It’s usually not pronounced as a syllable, but there are some notable exceptions.

As the Oxford English Dictionary explains, “this -ed is in most cases still retained in writing, although the pronunciation is now normally vowelless.” The dictionary says “-ed” is usually pronounced as either “d” (as in “robed”) or “t” (“reaped”). The “t” sound follows a voiceless consonant, one produced without the vocal cords.

The OED says the “full pronunciation” of “-ed” as a syllable (pronounced id) “regularly occurs in ordinary speech only in the endings -ted, -ded” (that is, after the letters “t” and “d” as in “hated” and “faded”).

“A few words, such as blessed, cursed, beloved, which are familiar chiefly in religious use, have escaped the general tendency to contraction when used as adjectives,” the OED says, adding that “the adjectival use of learned is distinguished by its pronunciation” as two syllables. Additional exceptions include the adjectives “aged,” “jagged,” “naked,” “ragged,” “wretched,” and others mentioned in this post.
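The OED’s description of the regular pattern can be sketched as a toy rule. Below is a minimal Python sketch of it; the function name, the letter sets, and the use of spelling as a stand-in for sound are all our own simplifications (the real rule operates on the stem’s final sound, and lexical exceptions like “blessed,” “wicked,” and adjectival “learned” are deliberately ignored):

```python
# A toy, spelling-based sketch of the regular "-ed" pronunciation rule:
# the suffix keeps its syllable ("id") after t/d, sounds like "t" after a
# voiceless consonant, and like "d" otherwise. Spelling is only a rough
# proxy for sound, and lexical exceptions are not handled.

VOICELESS_FINALS = set("pkcfsxh")  # rough letter proxies for voiceless sounds

def ed_sound(past_form: str) -> str:
    """Return 'id', 't', or 'd' for the '-ed' suffix of a regular past form."""
    stem = past_form[:-2]          # strip the "-ed" spelling
    if stem and stem[-1] in "td":  # hated, faded: suffix keeps its syllable
        return "id"
    if stem and stem[-1] in VOICELESS_FINALS:
        return "t"                 # reaped, walked
    return "d"                     # robed, played

for word in ("hated", "faded", "reaped", "robed"):
    print(word, "->", ed_sound(word))
```

A real implementation would need phonemes rather than letters—a word like “pleased” ends in a voiced /z/ sound despite its voiceless-looking “s,” so spelling alone misclassifies it.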

As we said at the beginning, the suffix “-ed” was used in Old English to form the past participle of verbs and to turn nouns into adjectives.

The past participle of a weak verb was formed by adding “-ed,” “-ad,” “-od,” or “-ud” to the stem. The past participle of a strong verb (now commonly called an irregular verb) was formed by changing the stressed vowel or by adding the suffix “-en.”

And as we said earlier, the use of “-ed” to turn nouns into adjectives has also been around since Anglo-Saxon times. Nevertheless, some language commentators objected to the usage in the 18th and 19th centuries.

Samuel Johnson, for example, apparently considered the usage new and was surprised to see it in these lines from “Ode on Spring” by Thomas Gray: “The insect youth are on the wing, / Eager to taste the honied spring.” Here’s Johnson’s comment:

“There has of late arisen a practice of giving to adjectives derived from substantives, the termination of participles; such as the cultured plain, the daisied bank, but I was sorry to see, in the lines of a scholar like Gray, the honied spring” (from Johnson’s Lives of the Most Eminent English Poets, 1779-81).

We’ll end with a grumpy comment about the adjective “talented,” written by Samuel Taylor Coleridge on July 8, 1832. This is from Specimens of the Table Talk of Samuel Taylor Coleridge (1836), edited by Henry Nelson Coleridge, a frequent visitor to his uncle’s home:

“I regret to see that vile and barbarous vocable talented, stealing out of the newspapers into the leading reviews and most respectable publications of the day. … The formation of a participle passive from a noun is a licence that nothing but a very peculiar felicity can excuse. If mere convenience is to justify such attempts upon the idiom, you cannot stop till the language becomes, in the proper sense of the word, corrupt. Most of these pieces of slang come from America.” (The OED’s earliest examples for the adjective “talented” used to mean “possessing talent” come from British sources.)


What’s up with ‘below’?

Q: Merriam-Webster describes “below” as an adverb in these two examples: “gazed at the water below” and “voices from the apartment below.” My understanding is that adverbs modify verbs, adjectives, and other adverbs. But “below” here is modifying two nouns, “water” and “apartment.” So what am I missing?

A: You raise a very good question. As it happens, linguists have asked themselves the same thing, and in the last few decades they’ve abandoned the traditional thinking about the status of “below” and similar words that express spatial relationships.

Traditionally, “below” has been classified as either a preposition or an adverb. It’s a preposition if an object follows, as in “the water below the bridge” and “the apartment below ours.” It’s an adverb if it doesn’t have an object, as in “the water below” and “the apartment below.” As far as we can tell, that’s been the thinking among grammarians since the late 18th century.

But as we’ll explain later, linguists now regard “below” solely as a preposition, a view reflected in recent comprehensive grammar books but not yet recognized in popular grammars and standard dictionaries.

Of course, for all practical purposes the word hasn’t changed, either in its meaning or in the way it’s used. In the scholarly comprehensive grammars, the word has merely shifted in some cases from one lexical category (adverb) to another (preposition).

Standard dictionaries haven’t yet caught up to this new way of thinking about “below.” The 10 standard dictionaries we usually consult say it can be either an adverb or a preposition in constructions like those above.

Cambridge, for example, calls it a preposition in “below the picture” but an adverb in “the apartment below.” The dictionary adds: “When the adverb below is used to modify a noun, it follows the noun.” (We know what you’re thinking: An adverb modifying a noun? Stay tuned.)

Despite the differing labels, the adverb and the preposition have virtually the same meaning. By and large, the standard dictionaries that define them say the adverb means “in or to a lower position” or “beneath,” while the preposition means “lower than” or “beneath.”

And in the Oxford English Dictionary, an etymological dictionary based on historical evidence, the broad definitions for the adverb and the preposition are identical: “Expressing position in or movement to a lower place.”

As we mentioned above, this view of “below” and words like it has a long history. Some similar words of this kind, prepositions that have traditionally been called adverbs when used without an object, include these:

“aboard,” “about,” “above,” “across,” “after,” “against,” “ahead,” “along,” “around,” “before,” “behind,” “below,” “beneath,” “besides,” “between,” “beyond,” “by,” “down,” “for,” “in,” “inside,” “near,” “off,” “on,” “opposite,” “out,” “outside,” “over,” “past,” “round,” “since,” “through,” “throughout,” “to,” “under,” “underneath,” “up,” “within,” “without.”

For example, Lindley Murray’s English Grammar, Adapted to the Different Classes of Learners (1795), says that in some instances a preposition “becomes an adverb merely by its application.” The word “since,” he says, is a preposition in “I have not seen him since that time” and an adverb in “Our friendship commenced long since.”

Murray also says, “The prepositions after, before, above, beneath, and several others, sometimes appear to be adverbs, and may be so considered,” giving as an example “He died not long before.” But when a complement follows, he writes, the word is a preposition, as in “He died not long before that time.”

A generation later, the philosopher John Fearn echoed Murray, referring to “the known Principle” that prepositions at the end of a sentence “become Adverbs by Position.”

Fearn also distinguishes between prepositions that require an object (like “with” and “from”) and those that don’t (like “through”). Those in the second group, he says, are “prepositional adverbs” when they’re used without an object (as in “He went through”).

(From Fearn’s Anti-Tooke: Or an Analysis of the Principles and Structure of Language, Vol. II, 1827, an extended argument against the language theories of John Horne Tooke.)

As we said above, the traditional view persists in standard dictionaries but is no longer found in up-to-date comprehensive grammars. Thinking began to change in the late 1960s, when some academic linguists began questioning the “adverb” label and widening the definition of “preposition.”

In the early ’90s, the linguist Ronald W. Langacker gave four examples of “below” as a preposition—“the valley below; the valley below the cliff; A bird flew below; A bird flew below the cliff.” (From “Prepositions as Grammatical(izing) Elements,” published in the journal Leuvense Bijdragen, 1992.)

Note that in those examples “below” is classified as a preposition (1) whether it’s used alone or with a complement, and (2) whether it follows a noun or a verb—thus resembling an adjective in one case (“valley below”) and an adverb in the other (“flew below”).

Most linguists today would agree with that interpretation: “below” and words like it are prepositions. Used with a complement, they’re said to be “transitive prepositions”; used without one, they’re “intransitive prepositions.”

The newer interpretation has only gradually made its way into major books on English grammar.

For example, the old view persisted at least through the publication in 1985 of A Comprehensive Grammar of the English Language, by Randolph Quirk et al. It uses the terms “postmodifying adverb” and “prepositional adverb” for “below” and similar words in constructions like these.

A “postmodifying adverb,” according to the Comprehensive Grammar, is identical to a preposition except that it has no complement and modifies a preceding noun. Examples given include “the sentence below” … “the way ahead” … “the people behind.”

A “prepositional adverb,” the book says, is identical to a preposition but has no complement and modifies a verb. Examples include “She stayed in” … “A car drove past.”

The word is a preposition, according to Quirk, only if a complement is present (and regardless of what it seems to modify). Examples include “below the picture” … “She stayed in the house” … “A car drove past the door.”

The Comprehensive Grammar doesn’t use the words “transitive” and “intransitive” for prepositions, but it comes close: “The relation between prepositional adverbs and prepositional phrases may be compared to that between intransitive and transitive uses of certain verbs.”

The next exhaustive grammar book to come along, The Cambridge Grammar of the English Language (2002), does use those terms. In this book, words that Quirk had previously classified as either postmodifying adverbs or prepositional adverbs are newly categorized as prepositions. The Cambridge Grammar uses “transitive” for prepositions that have a complement, “intransitive” for those that don’t—and it’s the first important English grammar to do so.

The book calls “in” and “since” intransitive prepositions here: “He brought the chair in” … “I haven’t seen her since.” And it calls them transitive prepositions here: “He put it in the box” … “I haven’t seen her since the war.”

The authors of the Cambridge Grammar, Rodney Huddleston and Geoffrey K. Pullum, don’t discuss “below” at length, but they do say that it “belongs only to the preposition category.” It’s also included among a list of prepositions that are used with or without a complement, and these examples show it without one: “the discussion below” … “the room below.”

Huddleston and Pullum essentially redraw the boundary between prepositions and adverbs, defining prepositions more broadly than “traditional grammars of English.” In this, they say, they’re “following much work in modern linguistics.” And they give two chief reasons why they reject the traditional view and reclassify words like “below” as prepositions.

(1) The traditional view “does not allow for a preposition without a complement.” The Cambridge Grammar argues that the presence or absence of a complement has no bearing on the classification. So “the traditional definition of prepositions,” one that says they require a complement, is “unwarranted.”

The book makes an important point about these newly recognized prepositions. Their ability to stand alone, without a complement, “is not a property found just occasionally with one or two prepositions, or only with marginal items,” the book says. “It is a property found systematically throughout a wide range of the most central and typical prepositions in the language.”

(2) The “adverb” label is inappropriate for words like “below” because they don’t behave like adverbs. In “The basket is outside,” for instance, the word “outside” is traditionally defined as an adverb. But as the authors point out, typical adverbs, such as those ending in “-ly,” aren’t normally used to modify forms of the verb “be.”

That role is normally played by adjectives, or by prepositions of the kind we’re discussing—“inside,” “outside,” “above,” “below,” and so on. And such words, the authors write, “no more modify the verb than does young in They are young.”

[Here you might ask, Then why aren’t these words adjectives? “Below” certainly looks like an adjective in uses like “the water below.” The Cambridge Grammar discusses this at length and gives reasons including these: Prepositions can have objects but adjectives can’t. Prepositions are fixed, while adjectives can be inflected for degree (as in “heavy,” “heavier,” “heaviest”) or modified by “very” and “too.” As we wrote on the blog in 2012, the adjectival use of “below” premodifying a noun, as in “Click on the below link,” is not generally accepted.]

In summary, Huddleston and Pullum suggest that if an “-ly” adverb cannot be substituted for the word, then it’s not an adverb. And if a complement could be added (as in “The basket is outside the door”), then it’s not an adverb.

The next influential scholarly grammar to be published, the Oxford Modern English Grammar (2011), written by Bas Aarts, reinforces and builds on this distinction between transitive and intransitive prepositions. And it includes “below” in a list of prepositions that can be used either way—with or without a complement.

Aarts also discusses prepositions that follow a verb and can either stand alone or have a complement: “We might go out” or “We might go out for a meal” … “I shall probably look in” or “I shall probably look in at the College.”

In short, modern developments in linguistics have given “below” a new label—it’s a preposition, and only a preposition. The traditional view lives on in dictionaries, and no doubt it will persist for quite some time. But in our opinion, the new label makes more sense.

Help support the Grammarphobia Blog with your donation. And check out our books about the English language and more.

Subscribe to the blog by email

Enter your email address to subscribe to the blog by email. If you’re a subscriber and not getting posts, please subscribe again.

Categories
English English language Etymology Expression Grammar Language Linguistics Pronunciation Usage Writing

Tawk of the Town

[Pat’s review of a book about New York English, reprinted from the September 2020 issue of the Literary Review, London. We’ve left in the British punctuation and spelling.]

* * * * * * * * * * * *

PATRICIA T O’CONNER

You Talkin’ to Me? The Unruly History of New York English

By E J White. Oxford University Press 296 pp

You know how to have a polite conversation, right? You listen, wait for a pause, say your bit, then shut up so someone else can speak. In other words, you take your turn.

You’re obviously not from New York.

To an outsider, someone from, say, Toronto or Seattle or London, a conversation among New Yorkers may resemble a verbal wrestling match. Everyone seems to talk at once, butting in with questions and comments, being loud, rude and aggressive. Actually, according to the American linguist E J White, they’re just being nice.

When they talk simultaneously, raise the volume and insert commentary (‘I knew he was trouble’, ‘I hate that!’), New Yorkers aren’t trying to hijack the conversation, White says. They’re using ‘cooperative overlap’, ‘contextualization cues’ (like vocal pitch) and ‘cooperative interruption’ to keep the talk perking merrily along. To them, argument is engagement, loudness is enthusiasm and interruption means they’re listening, she writes. Behaviour that would stop a conversation dead in Milwaukee nudges it forward in New York.

Why do New Yorkers talk this way? Perhaps, White says, because it’s the cultural norm among many of the communities that have come to make up the city: eastern European Jews, Italians, and Puerto Ricans and other Spanish speakers. As for the famous New York accent, that’s something else again.

White, who teaches the history of language at Stony Brook University, New York, argues that ‘Americans sound the way they do because New Yorkers sound the way they do’. In You Talkin’ to Me? she makes a convincing case that the sounds of standard American English developed, at least in part, as a backlash against immigration and the accent of New York.

Although the book is aimed at general readers, it’s based on up-to-the-minute research in the relatively new field of historical sociolinguistics. (Here a New Yorker would helpfully interrupt, ‘Yeah, which is what?’) Briefly, it is about how and why language changes. Its central premise is that things like social class, gender, age, group identity and notions of prestige, all working in particular historical settings, are what drive change.

Take one of the sounds typically associated with New York speech: the oi that’s heard when ‘bird’ is pronounced boid, ‘earl’ oil, ‘certainly’ soitanly, and so on. Here’s a surprise. That oi, White says, was ‘a marker of upper-class speech’ in old New York, a prestige pronunciation used by Teddy Roosevelt and the Four Hundred who rubbed elbows in Mrs Astor’s ballroom. Here’s another surprise. The pronunciation is now defunct and exists only as a stereotype. It retired from high society after the First World War and by mid-century it was no longer part of New York speech in general. Yet for decades afterwards it persisted in sitcoms, cartoons and the like. Although extinct ‘in the wild’ (as linguists like to say), it lives on in a mythological ‘New York City of the mind’.

Another feature of New York speech, one that survives today, though it’s weakening, is the dropping of r after a vowel in words like ‘four’ (foah), ‘park’ (pahk) and ‘never’ (nevuh). This was also considered a prestige pronunciation in the early 1900s, White says, not just in New York City but in much of New England and the South as well, where it was valued for its resemblance to cultivated British speech. Until sometime in the 1950s, in fact, it was considered part of what elocutionists used to call ‘General American’. It was taught, the author writes, not only to schoolchildren on the East Coast, but also to aspiring actors, public speakers and social climbers nationwide. But here, too, change lay ahead.

While r-dropping is still heard in New York, Boston and pockets along the Eastern Seaboard, it has all but vanished in the South and was never adopted in the rest of the United States. Here the author deftly unravels an intriguing mystery: why the most important city in the nation, its centre of cultural and economic power, does not provide, as is the case with other countries, the standard model for its speech.

To begin with, White reminds us, the original Americans always pronounced r, as the British did in colonial times. Only in the late 18th century did the British stop pronouncing r after a vowel. Not surprisingly, the colonists who remained in the big East Coast seaports and had regular contact with London adopted the new British pronunciation. But those who settled inland retained the old r and never lost it. (As White says, this means that Shakespeare’s accent was probably more like standard American today than Received Pronunciation.)

Posh eastern universities also helped to turn the nation’s accent westward. Towards the end of the First World War, White says, Ivy League schools fretted that swelling numbers of Jewish students, admitted on merit alone, would discourage enrolment from the Protestant upper class. Admissions practices changed. In the 1920s, elite schools began to recruit students from outside New York’s orbit and to ask searching questions about race, religion, colour and heritage. The result, White says, was that upper-crust institutions ‘shifted their preference for prestige pronunciation toward the “purer” regions of the West and the Midwest, where Protestants of “Nordic” descent were more likely to live’. Thus notions about what constituted ‘educated’ American speech gradually shifted.

Another influence, the author writes, was the Midwestern-sounding radio and television ‘network English’ that was inspired by the Second World War reporting of Edward R Murrow and the ‘Murrow Boys’ he recruited to CBS from the nation’s interior. Murrow’s eloquent, authoritative reports, heard by millions, influenced generations of broadcasters, including Walter Cronkite, Chet Huntley and Dan Rather, who didn’t try to sound like they had grown up on the Eastern Seaboard. The voice of the Midwest became the voice of America.

This book takes in a lot of territory, all solidly researched and footnoted. But dry? Fuhgeddaboutit. White is particularly entertaining when she discusses underworld slang from the city’s ‘sensitive lines of business’ and she’s also good on song lyrics, from Tin Pan Alley days to hip-hop. She dwells lovingly on the ‘sharp, smart, swift, and sure’ lyrics of the golden age of American popular music – roughly, the first half of the 20th century. It was a time when New York lyricists, nearly all of them Jewish, preserved in the American Songbook not only the vernacular of the Lower East Side but also the colloquialisms of Harlem and the snappy patois of advertising.

You Talkin’ to Me? is engrossing and often funny. In dissecting the exaggerated New York accents of Bugs Bunny and Groucho Marx, White observes that ‘Bugs even wielded his carrot like Groucho’s cigar’. And she says that the word ‘fuck’ is so ubiquitous in Gotham that it has lost its edge, so a New Yorker in need of a blistering insult must look elsewhere. ‘There may be some truth to the old joke that in Los Angeles, people say “Have a nice day” and mean “Fuck off,” while in New York, people say “Fuck off” and mean “Have a nice day.”’


Lex education

Q: I recently received a list of the finalists in a wordplay contest for lexophiles. The winner: “Those who get too big for their pants will be totally exposed in the end.” So, what can you say about the term “lexophile”? I love a clever turn of aphasia.

A: That wordplay list, which has been making the rounds online, purports to be from an annual New York Times lexophile contest. As far as we know, the Times has never had such a contest. In fact, we couldn’t find the word “lexophile” in a search of the newspaper’s archive.

We also couldn’t find “lexophile” in the Oxford English Dictionary or any of the 10 standard dictionaries we regularly consult. However, we did find it in two collaborative references, the often helpful Wiktionary and the idiosyncratic Urban Dictionary.

Wiktionary defines “lexophile” as “a lover of words, especially in word games, puzzles, anagrams, palindromes, etc.” The elements are Greek: “lex-” from λέξις (lexis, word), and “-phile” from ϕίλος (philos, loving).

One contributor to Urban Dictionary defines “lexophile” as “a lover of cryptic words,” while another defines “lexiphile” as “a word used to describe those that have a love for words.”

A more common noun for a word lover, “logophile,” is found in eight standard dictionaries as well as the OED, which is an etymological dictionary. The element “log-” is from the Greek λόγος (logos, word); both logos and lexis are derived from λέγειν (legein, to speak).

The earliest OED citation for “logophile” is from the Feb. 1, 1959, issue of the Sunday Times (London): “We are pretty sure that since all Sunday Times readers are natural and inveterate logophiles … he [the lexicographer R. W. Burchfield] will get some invaluable assistance.”

We’ve found an earlier example for “logophile” in a California newspaper, but the term was used to mean someone who loves to talk, not someone who loves words: “One who loves to talk, but does not carry it to the point of mania, is a logophile, pronounced: LOG-uh-file” (San Bernardino Sun, Jan. 17, 1951).

Interestingly, the noun logophile appeared in French in the mid-19th century with a similar voluble sense. Dictionnaire National ou Dictionnaire Universel de la Langue Française (1850), by Louis-Nicolas Bescherelle, defines a logophile as “Qui aime à parler, à faire de longs discours” (“one who likes to talk, to make long speeches”).

Merriam-Webster says the first known use of “logophile” in English was in 1923, but it doesn’t include a citation. We haven’t been able to find any examples earlier than the mid-20th century.

As for your “clever turn of aphasia,” the less said the better.


Whomspun history

Q: I often see the use of “whomever” as an object in a subordinate clause like “whomever he chooses.” I can see the logic of this, but it feels awkward to me. Is it because I grew up surrounded by grammatical laxity that “whomever” seems like a neologism born of pedantry? Was it already established as correct English before my time?

A: If “whomever” seems awkward to you, its stuffier sidekick “whomsoever” must strike you as even more awkward. The roots of both pronouns, as well as of “whom” itself, go back to Anglo-Saxon times, though it looks as if all three may be on the way out.

In Old English, the language was much more inflected than it is now—that is, words changed their forms more often to show their functions. You can see this in some of the forms, or declensions, of hwa, the ancestor of “who,” “whom,” and “what.”

When used for a masculine or feminine noun, as we use “who” and “whom” today, the Old English forms were hwa (subject), hwam or hwæm (indirect object), and hwone or hwæne (direct object). When used for a neuter noun, as we use “what” today, the forms were hwæt (subject), hwam or hwæm (indirect object), and hwæt (direct object).

As for “whoever” and “whomever,” the two terms ultimately come from swa hwa swa, the Old English version of “whoso,” and swa hwam swaswa, the early Middle English rendering of “whomso.”

An Old English speaker would use swa hwa swa (literally “so who so”) much as we would use “whoever” and “whosoever.” And his Middle English-speaking descendants would use swa hwam swaswa (“so whom soso”) as we would use “whomever” and “whomsoever.”

Here’s an early “whoso” example that we’ve found in King Alfred’s Old English translation (circa 888) of De Consolatione Philosophiae, a sixth-century Latin work by the Roman philosopher Boethius: “swa hwa swa wille dioplice spirigan mid inneweardan mode æfter ryhte” (“whoso would deeply search with inner mind after truth”).

And here’s a “whomso” citation in the Oxford English Dictionary from a 12th-century document in the Anglo-Saxon Chronicle: “Þæt hi mosten cesen of clerchades man swa hwam swaswa hi wolden to ercebiscop” (“that they could choose from the secular clerks whomso they wished as archbishop”).

“Whosoever” (hwase eauer) and “whoever” (hwa efre) also first appeared in writing in the 12th century, while “whomever” (wom euer) showed up in the 14th century and “whomsoever” (whom-so-euyr) followed in the 15th.

The first OED citation for “whoever,” which we’ve expanded, is from an Old English sermon in the Lambeth Homilies (circa 1175):

“Hwa efre þenne ilokie wel þene sunne dei. oðer þa oðer halie daʒes þe mon beot in chirche to lokien swa þe sunne dei. beo heo dalneominde of heofene riches blisse” (“Whoever looks well on Sunday and on the other holy days that man must also be in church, then he shall participate in the heavenly kingdom’s bliss”).

The dictionary’s earliest example for “whomever” is from Arthour and Merlin (circa 1330): “Wom euer þat he hitt, Þe heued to þe chinne he slitt” (“Whomever he hit, he beheaded, to the chin he slit”). Arthurian legends can get gory at times.

So as you can see, “whomever” was indeed established in English before your time—quite a few centuries before.

As for the use of these terms today, you can find “whoso” and “whomso” in contemporary dictionaries, but they’re usually labeled “archaic,” while “whosoever” and “whomsoever” are generally described as formal versions of “whoever” and “whomever.”

“Who,” of course, is still one of the most common pronouns in English, but “whom” and company are falling out of favor, and many usage writers now accept the use of “who” and “whoever” for “whom,” “whomever,” and “whomsoever” in speech and informal writing.

As Jeremy Butterfield puts it in Fowler’s Dictionary of Modern English Usage (4th ed.), “In practice, whom is in terminal decline and is increasingly replaced by who (or that), especially in conversational English, in which in most cases it would be inappropriately formal.”

Butterfield’s recommendation: “Despite exceptions, the best general rule is as follows: who will work perfectly well in conversation (except the most elevated kind) and in informal writing.” The main exception he notes is that “who” should not be used for “whom” right after a preposition.

Traditionally, as you know, “who” (like the Old English hwa) is a subject, and “whom” (like hwam) is an object. As Pat explains in Woe Is I, her grammar and usage book, “who does something (it’s a subject, like he), and whom has something done to it (it’s an object, like him).”

Pat recommends the traditional usage in formal writing, but she has a section in the new fourth edition of Woe Is I on how to be whom-less in conversation and informal writing:

A Cure for the Whom-Sick

Now for the good news. In almost all cases, you can use who instead of whom in conversation or in informal writing—personal letters, casual memos, emails, and texts.

Sure, it’s not a hundred percent correct, and I don’t recommend using it on formal occasions, but who is certainly less stuffy, especially at the beginning of a sentence or a clause: Who’s the letter from? Did I tell you who I saw at the movies? Who are you waiting to see? No matter who you invite, someone will be left out.

A note of caution: Who can sound grating if used for whom right after a preposition. You can get around this by putting who in front. From whom? becomes Who from? So when a colleague tells you he’s going on a Caribbean cruise and you ask, “Who with?” he’s more likely to question your discretion than your grammar.

[Note: The reader who sent us this question responded, “Your example involving a Caribbean cruise seems fraught with danger in these pan(dem)icky times. If a colleague were to tell me that, my first instinct would be to ask, ‘Who would dare?’ ”]


When negatives collide

Q: I often encounter a sentence such as “I wouldn’t be surprised if she didn’t steal the necklace,” when it actually means the opposite—the speaker or writer wouldn’t be surprised if she DID steal it. Is there a term for this (a type of double negative, maybe)? And how did it come to be so widespread?

A: We’ve seen several expressions for this kind of construction. Terms used by linguists today include “expletive negation,” in which “expletive” means redundant; “negative concord,” for multiple negatives intended to express a single negative meaning; and, more simply, “overnegation.”

Yes, it’s also been called a “double negative,” the term H. L. Mencken used for it more than 80 years ago. Like linguists today, Mencken didn’t find this particular specimen odd or unusual. As he wrote in The American Language (4th ed., 1936), “ ‘I wouldn’t be surprised if it didn’t rain’ is almost Standard American.”

The linguist Mark Liberman discussed this usage—“wouldn’t be surprised” followed by another negative—on the Language Log in 2009. He called it a “species of apparent overnegation” along the lines of “fail to miss” and “cannot underestimate.” (More on those two later.)

Of course, what appears to be an overnegation may not be so. For instance, if everyone but you is predicting rain, you might very well respond with “I wouldn’t be surprised if it didn’t rain” (i.e., you wouldn’t be surprised if it failed to rain). No overnegation there, just two negatives used literally, nothing redundant.

But the usage we’re discussing is a genuine redundancy with no literal intent. And it’s a type of redundancy that’s very common, especially in spoken English. Yet it seldom causes confusion. People generally interpret those dueling negative clauses just as the writer or speaker intends.

You’re a good example of this. While you noticed the redundancy there (“I wouldn’t be surprised if she didn’t steal the necklace”), you correctly interpreted the writer’s meaning (if she did steal it). And no doubt most people would interpret it that way, whether they encountered the sentence in writing or in speech. Why is this?

In the case of written English, our guess is that readers interpreting the writer’s intent take their cues not only from the surrounding context but also from their own past experience. They’re used to seeing this construction and don’t automatically interpret it literally.

In the case of spoken English, where the usage is more common, listeners have the added advantage of vocal cues. Take these two sentences, which are identical except for the different underlined stresses. A listener would interpret them as having opposite meanings:

(1) “I wouldn’t be surprised if he didn’t win” = I wouldn’t be surprised if he won.

(2) “I wouldn’t be surprised if he didn’t win” = I wouldn’t be surprised if he lost.

In #1, the redundant or overnegated example, the speaker emphasizes the verb and whizzes past the superfluous second negative (“didn’t”). But in #2, the literal example, the speaker emphasizes the second negative, so there’s no doubt that it’s intentional and not redundant.

Language types have been commenting on the overnegated “wouldn’t be surprised” usage since the 19th century.

On the Language Log, Liberman cites this entry from “Some Peculiarities of Speech in Mississippi,” a dissertation written by Hubert Anthony Shands in 1891 and published in 1893: “Wouldn’t be surprised if it didn’t. This expression is frequently used by all classes in the place of wouldn’t be surprised if it did.”

The usage wasn’t peculiar to Mississippi, though. In old newspaper databases, we’ve found 19th-century examples from other parts of the country.

These two sightings, the earliest we’ve seen, appeared in a humorous story, written in dialect, in the May 7, 1859, issue of the Columbia Spy, a Pennsylvania newspaper:

“ ‘There’s been so much hard swearin’ on that book’ (pointing to Logan’s Bible) ‘I wouldn’t be surprised if the truth was not pretty considerably ranshacked outen it.’ ”

“ ‘I wouldn’t be surprised if you wa’nt vain arter this.’ ”

This example is from a newspaper serving the twin cities of Bristol in Virginia and Tennessee: “I wouldn’t be surprised if some of them didn’t run away after all without paying their bills.” (The Bristol News, Feb. 8, 1876.)

And here’s one from the Midwest: “The business interests of Salina feel the weight of their power, and we wouldn’t be surprised if even Nature did not pause for a moment and measure their colossal proportions.” (The Saline County Journal in Salina, Kansas, Jan. 25, 1877.)

As mentioned above, there are other varieties of overnegation besides the “wouldn’t be surprised” variety. Here are some of the more common ones, along with their intended interpretations.

“You can’t fail to miss it” = You can’t miss it

“We can’t underestimate” = We can’t overestimate

“Nothing is too trivial to ignore” = Nothing is too trivial to consider

“I don’t deny that she doesn’t have some good qualities” = I don’t deny that she does have some good qualities

“We don’t doubt that it’s not dangerous” = We don’t doubt that it is dangerous

As we’ve said, even readers or listeners who notice the excess negativity will understand the intended meaning.

The Dutch linguist Wim van der Wurff uses the term “expletive negation” for usages of this kind. As he explains, the first clause “involves a verb or noun with the meaning ‘fear,’ ‘forbid,’ ‘prohibit,’ ‘hinder,’ ‘prevent,’ ‘avoid,’ ‘deny,’ ‘refuse,’ ‘doubt’ or another predicate with some kind of negative meaning.” What follows is a subordinate clause with “a negative marker” that’s “semantically redundant, or expletive.”

He gives an example from a letter written by Charles Darwin: “It never occurred to me to doubt that your work would not advance our common object in the highest degree.” (From Negation in the History of English, edited by Ingrid Tieken-Boon Van Ostade and others.)

Historical linguists have shown that this sort of overnegation exists in a great many languages and in fact was a common usage in Old English and early Middle English.

“Negative concord has been a native grammatical construction since the time of Alfred, at least,” Daniel W. Noland writes, referring to the 9th-century Saxon king (“A Diachronic Survey of English Negative Concord,” American Speech, summer 1991).

But after the Middle Ages, the use of overnegation in English began to fall off, at least in the writings that have been handed down. Little by little, from around the late 15th to the 18th century, multiple negations became less frequent until they finally came to be considered unacceptable. Why?

Don’t point to the grammarians. It seems that this transition happened naturally, not because people started to object on logical or grammatical grounds.

In her monograph A History of English Negation (2004), the Italian linguist Gabriella Mazzon says the claim “that multiple negation was excluded from the standard as a consequence of the grammarians’ attacks is not correct, since the phenomenon had been on its way out of this variety [i.e., standard English] for some time already.”

As for today, Noland says in his American Speech paper, this type of overnegation “still has a marginal status even in standard English.”

We wouldn’t be surprised!


To internet, or not to internet?

Q: I saw this in a New York Times article the other day: “The Virus Changed the Way We Internet.” And this was the tagline of a recent Bayer TV commercial: “This is why we science.” Am I just an old fogy or can any noun be turned into a verb these days?

A: You won’t find the verbs “internet” or “science” in standard dictionaries, but there’s a case to be made for the verbing of the noun “internet.” In fact, the verb showed up in print just a year after the noun, though not in the sense you’re asking about.

When the noun “internet” first appeared in 1975, it referred to “a computer network comprising or connecting a number of smaller networks,” according to the Oxford English Dictionary. When the verb appeared in 1976, it meant “to connect by means of a computer network.”

The usual sense of the noun now—a global computer network that allows users around the world to communicate and share information—evolved over the 1980s and ’90s. The verb took on the sense you’re asking about—to use the internet—in the 1990s.

Here are the two earliest OED citations for the verb used in that way: “A number of providers want you to Internet to their services” (Globe & Mail, Toronto, May 13, 1994) … “I didn’t sleep, I didn’t eat. I just internetted” (Associated Press, Aug. 21, 1994).

Oxford doesn’t include a usage label that would suggest the verb is anything other than standard English. However, none of the 10 standard dictionaries that we regularly consult have an entry for “internet” as a verb. (The collaborative Wiktionary includes it as an “informal” verb meaning to use the internet, and offers this example: “Having no idea what that means, I am internetting like mad.”)

As for the verb “science,” we couldn’t find an entry for it in either the OED or standard dictionaries. However, Oxford and four standard dictionaries include the adjective “scienced” as a rare or archaic usage.

Oxford describes the adjective as “now rare” when used to mean “knowledgeable, learned; skilled or trained in a specified profession or pursuit; (in later use also) adopting a scientific approach.” It says the term is “now somewhat archaic” when used in the sense of “well versed or trained in boxing.”

(Wiktionary includes the “colloquial, humorous” use of the verb “science,” meaning “to use science to solve a problem.” It also includes the adjective “scienced,” meaning “knowledgeable, learned; skilled or trained in a specified profession or pursuit.” It doesn’t cite any examples.)

Speaking for ourselves, we aren’t likely to use “internet” or “science” as a verb, at least not yet. Neither usage is widespread enough. However, we see nothing wrong in principle with the verbing of nouns. In a 2016 post, we defended it as a process that dates back to the early days of English.


Sticking in a knife with a smile

Q: I have recently heard two instances of someone prefacing a criticism by saying, “I am telling you this lovingly.” It sounds to me like sticking in a knife with a smile. It’s similar to prefacing a remark with “clearly,” an indication that things may not be all that clear. Any thoughts about this?

A: We haven’t yet noticed “lovingly” used to criticize with a smile. But like you, we’re bugged by deceptive preludes to faultfinding.

As you know, these introductory remarks are often followed by the word “but” and the critical statement. Some of the more common ones: “I don’t want to criticize, but …,” “I hate to be the one to tell you, but …,” “Don’t take this the wrong way, but …,” and “I don’t want to hurt your feelings, but ….”

These “contrary-to-fact phrases” have been called “false fronts,” “wishwashers,” “but heads,” and “lying qualifiers,” according to the lexicographer Erin McKean, as we noted in a 2012 post.

McKean says the object of these opening remarks is “to preemptively deny a charge that has yet to be made, with a kind of ‘best offense is a good defense’ strategy” (Boston Globe, Nov. 14, 2010).

“This technique,” she notes, “has a distinguished relative in classical rhetoric: the device of procatalepsis, in which the speaker brings up and immediately refutes the anticipated objections of his or her hearer.”

Once you start looking for these deceptive introductions, McKean says, “you see them everywhere, and you see how much they reveal about the speaker. When someone says ‘It’s not about the money, but …,’ it’s almost always about the money. If you hear ‘It really doesn’t matter to me, but …,’ odds are it does matter, and quite a bit.”

“ ‘No offense, but …’ and ‘Don’t take this the wrong way, but …’ are both warning flags, guaranteed to precede statements that are offensive, insulting, or both,” she adds. “ ‘I don’t mean to be rude, but …’ invariably signals the advent of breathtaking, blatant, write-in-to-Miss-Manners-style rudeness. (And when someone starts out by saying ‘Promise me you won’t get mad, but …’ you might as well go ahead and start getting mad.)”

McKean doesn’t mention the use of “clearly” at the beginning of a sentence, but she discusses a few similar sentence adverbs: “Someone who begins a sentence with ‘Confidentially’ is nearly always betraying a confidence; someone who starts out ‘Frankly,’ or ‘Honestly,’ ‘To be (completely) honest with you,’ or ‘Let me give it to you straight’ brings to mind Ralph Waldo Emerson’s quip: ‘The louder he talked of his honor, the faster we counted our spoons.’ ”

We should also mention a 2013 post of ours about “Just sayin’,” an expression that follows a critical comment: “ ‘You might look for a new hair stylist. Just sayin’.”

Why do people use deceptive phrases in criticizing others? McKean suggests that “our real need for these phrases may be rooted in something closer to self-delusion. We’d all like to believe we aren’t being spiteful, nosy or less than forthcoming. To proclaim our innocence in this way is to assert that we are, indeed, innocent.”

However, we think that many of us—including the two of us—use these sneaky expressions simply because we don’t feel comfortable criticizing others, even when criticism may be warranted. Unfortunately, a sneaky criticism often stings more than one that’s plainspoken.

Help support the Grammarphobia Blog with your donation. And check out our books about the English language. For a change of pace, read Chapter 1 of Swan Song, a comic novel.

Subscribe to the blog by email

Enter your email address to subscribe to the blog by email. If you’re a subscriber and not getting posts, please subscribe again.

Categories
English English language Etymology Expression Grammar Language Linguistics Usage Writing

Now I am become Death

Q: I recently read a reference to J. Robert Oppenheimer’s comment about the first test of an atomic bomb: “Now I am become Death, the destroyer of worlds.” I assume that “I am become” is an old usage. How would it be expressed in modern English?

A: That quotation illustrates an archaic English verb construction that’s now found chiefly in literary, poetic, or religious writings. This is the use of forms of “be” in place of “have” as an auxiliary verb in compound tenses: “The prince is [or was] arrived” instead of “The prince has [or had] arrived.”

The passage you ask about, “I am become Death,” is a present-perfect construction equivalent to “I have become Death.” (We’ll have more to say later about Oppenheimer and his quotation from the Bhagavad Gita.)

As we wrote on the blog in 2015, Lincoln’s Gettysburg Address has a well-known example of this usage: “We are met on a great battle-field.” Another familiar use is from the Bible: “He is risen” (King James Version, Matthew 28:6). And Mark Twain uses “I am grown old” in his Autobiography (in a passage first published serially in 1907). All of those are in the present-perfect tense.

Though usages like this were rare in Old English, they became quite frequent during the early Modern English period—roughly from the late 1400s to the mid-1600s, according to The Origins and Development of the English Language (4th ed., 1992), by Thomas Pyles and John Algeo.

The verbs affected were mostly intransitive (that is, without objects) and involved movement and change. The Oxford English Dictionary mentions “verbs of motion such as come, go, rise, set, fall, arrive, depart, grow, etc.”

The dictionary’s citations from the mid-1400s include “So may þat boy be fledde” (“That boy may well be fled”) and “In euell tyme ben oure enmyes entred” (“Our enemies are entered in evil times”).

In Modern English (mid-17th century onward), this auxiliary “be” faded from ordinary English and was largely replaced by “have.” So by Lincoln’s time, the auxiliary “be” was considered poetic or literary. You can see why if you look again at the examples above.

Lincoln used “we are met” to lend his speech a gravity and stateliness that wouldn’t be conveyed by the usual present-perfect (“we have met”). “He is risen” is nobler and more elevated than the usual present perfect (“He has risen”). And Twain’s poetic “I am grown old” is weightier and more solemn than the prosaic version (“I have grown old”).

Apart from matters of tone, the auxiliary “be,” especially in the present perfect, conveys a slightly different meaning than the auxiliary “have.” It emphasizes a state or condition that’s true in the present, not merely an act completed in the past.

As Oxford says, this use of “be” expresses “a condition or state attained at the time of speaking, rather than the action of reaching it, e.g. ‘the sun is set,’ ‘our guests are gone,’ ‘Babylon is fallen,’ ‘the children are all grown up.’ ”

Even today verbs are sometimes conjugated with “be” when they represent states or conditions. A modern speaker might easily say, “The kids were [vs. had] grown long before we retired,” or “By noon the workmen were [vs. had] gone,” or “Is [vs. has] she very much changed?”

In older English, those participles (“grown,” “gone,” “changed”) would have been recognized as verbs (“grow,” “go,” “change”) conjugated in the present perfect with the auxiliary “be,” and the OED interprets many such examples that way. In current English, however, they can also be analyzed as participial adjectives modifying a subject, with “be” as the principal verb.

In its entry for the verb “grow,” for example, Oxford has this explanation: “In early use always conjugated with be, and still so conjugated when a state or result is implied.” And in the case of “gone,” the dictionary says that its adjectival use “developed out of the perfect construction with be as auxiliary, reinterpreted as main verb with participial adjective.”

We can never write enough about the word “be.” As David Crystal says, “If we take its eight elements together—be, am, are, is, was, were, being, been—it turns out to be the most frequent item in English, after the” (The Story of Be, 2017). That is, only the word “the” itself is more frequent.

And a word that’s in constant, heavy use for 1,500 years undergoes a lot of transformations. It’s entitled to be complicated, and no doubt further complications are still to come. To use an expression first recorded in the 1600s, miracles are not ceased.

As for Oppenheimer’s comment, various versions have appeared since he witnessed the atomic test at Alamogordo, NM, on July 16, 1945. You can hear his words in The Decision to Drop the Bomb, a 1965 NBC documentary:

“We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita: Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, ‘Now I am become Death, the destroyer of worlds.’ I suppose we all thought that, one way or another.”


A ‘they’ by any other name

Q: You have defended the singular “they” when it applies to an unknown person of unknown gender. OK. But how about for a known person of unknown gender? A recent news article that said “they were fired” caused me to search back and forth to find who else was fired. A waste of time.

A: We have indeed defended the use of “they” in the singular for an unknown person—an individual usually represented by an indefinite pronoun (“someone,” “everybody,” “no one,” etc.). Some examples: “If anyone calls, they can reach me at home” … “Nobody expects their best friend to betray them” … “Everyone’s looking out for themselves.”

As we’ve said on the blog, this singular use of “they” and its forms (“them,” “their,” “themselves”) for an indefinite, unknown somebody-or-other is more than 700 years old.

You’re asking about a very different usage, one that we’ve also discussed. As we wrote in a later post, this singular “they” refers to a known person who doesn’t identify as either male or female and prefers “they” to “he” or “she.” Some examples: “Robin loves their new job as sales manager” … “Toby says they’ve become a vegetarian.”

This use of “they” for a known person who’s nonbinary and doesn’t conform to the usual gender distinctions is very recent, only about a decade old.

When we wrote about the nonbinary “they” three years ago, we noted that only one standard dictionary had recognized the usage. The American Heritage Dictionary of the English Language included (and still does) this definition within its entry for “they”: “Used as a singular personal pronoun for someone who does not identify as either male or female.”

American Heritage doesn’t label the usage as nonstandard but adds a cautionary note: “The recent use of singular they for a known person who identifies as neither male nor female remains controversial.”

Since then, a couple of other standard dictionaries have accepted the usage without reservation.

Merriam-Webster’s entry for “they” was updated in September 2019 to include this definition: “used to refer to a single person whose gender identity is nonbinary.”

A British dictionary, Macmillan, now has a similar definition: “used as a singular pronoun by and about people who identify as non-binary.” Macmillan’s example: “The singer has come out as non-binary and asked to be addressed by the pronouns they/them.”

Neither dictionary has any kind of warning label or cautionary note. Other dictionaries, however, are more conservative on the subject, merely observing in usage notes that the nonbinary “they” is out there, but not including it among the standard definitions of “they.”

For instance, Dictionary.com (based on Random House Unabridged) says in a usage note that the use of “they” and its forms “to refer to a single clearly specified, known, or named person is uncommon and likely to be noticed and criticized. … Even so, use of they, their, and them is increasingly found in contexts where the antecedent is a gender-nonconforming individual or one who does not identify as male or female.”

And Lexico (the former Oxford Dictionaries Online) has this in a usage note: “In a more recent development, they is now being used to refer to specific individuals (as in Alex is bringing their laptop). Like the gender-neutral honorific Mx, the singular they is preferred by some individuals who identify as neither male nor female.”

The Oxford English Dictionary, an etymological dictionary based on historical evidence, added the nonbinary use of “they” and its forms in an October 2019 update.

This is now among the OED’s definitions of “they”: “Used with reference to a person whose sense of personal identity does not correspond to conventional sex and gender distinctions, and who has typically asked to be referred to as they (rather than as he or she).”

The dictionary’s earliest example is from a Twitter post (by @thebutchcaucus, July 11, 2009): “What about they/them/theirs? #genderqueer #pronouns.” Oxford also has two later citations:

“Asher thought they were the only nonbinary person at school until a couple weeks ago” (the Harvard-Westlake Chronicle, Los Angeles, Sept. 25, 2013).

“In 2016, they got a role on Orange Is the New Black as a wisecracking white supremacist” (from a profile of Asia Kate Dillon on the Cut, a blog published by New York magazine, June 3, 2019).

We agree with you that this usage can confuse a reader. When a writer uses “they” in an article, we’re sometimes left to wonder how many people are meant.

But a careful writer can overcome this problem. The use of “they” in that last OED citation (“they got a role”) is not confusing because it links the pronoun with a single role. And elsewhere in the article, the author, Gabriella Paiella, took pains to be clear (“they’re arguably Hollywood’s most famous nonbinary actor, one whose star turn came on an unlikely television series”).

As we noted in our nonbinary “they” post, “Clarity is just as important as sensitivity. Be sure to make clear when ‘they’ refers to only one person and when it refers to several people.” We also noted that “when ‘they’ is the subject of a verb, the verb is always plural, even in reference to a single person: ‘Robin says they are coming to the lunch meeting, so order them a sandwich.’ ”


Complementary remarks

Q: I teach writing to foreign students and was asked a question that I just cannot answer. Both of these sentences are normal English: “I was happy being alone” … “I was happy to be alone.” But only the first of these is normal: “I became [or got] lonely being alone” … “I became [or got] lonely to be alone.” What’s wrong with that last sentence?

A: Your question has to do with adjectives and their complements—that is, the words or phrases that complete them. In this case, you’re attempting to complement the adjectives “happy” and “lonely” with two different phrases: one formed with a gerund or “-ing” participle (“being alone”), and the other with a “to” infinitive (“to be alone”).

As any native speaker of English would know immediately, the “happy” sentences work with both complements, but the “lonely” sentences don’t. “I got lonely to be alone” doesn’t sound like normal English.

This is because “lonely” is not among the adjectives that can be complemented by an infinitive. If you replace “lonely” with “afraid,” the original examples work: “I became [or got] afraid being alone” … “I became [or got] afraid to be alone.”

We wish we could tell you that there’s a predictable pattern here—that certain types of adjectives can always be complemented by both participles and infinitives, while other types are always restricted to one or the other.

Unfortunately, no clear pattern emerges. Different adjectives simply act differently in different contexts.

For instance, with another subject and another verb, those “lonely” sentences work with both complements: “It is lonely being alone” … “It is lonely to be alone.” (There “it” is a dummy subject; the real subject is the complement: “Being alone is lonely” … “To be alone is lonely.”)

While we can’t give you a rule about all this, we can make a few broad observations.

Dozens of evaluative adjectives (like “educational,” “interesting,” “lovely,” “pleasant,” etc.) can be used with either “to” infinitives or “-ing” participles if the subject is a dummy “it” and the verb is a form of “be” (like “is,” “was,” “might have been,” and so on). With adjectives like these, the complements are pretty much interchangeable: “It was lovely to see you” … “It was lovely seeing you.”

Many adjectives that modify a subject, and that have to do with the subject’s attitude or capabilities, are often complemented by infinitives. These include “able,” “afraid,” “anxious,” “bound,” “delighted,” “determined,” “eager,” “happy,” “hesitant,” “liable,” “likely,” “quick,” “reluctant,” and “unwilling.” (Example: “The pianist was delighted to perform.”)

Some of those adjectives can also be complemented by “-ing” participles if a preposition is added, like “about” (as in “delighted about performing”), or “in” (“quick in replying”).

Still other adjectives, ones that refer to the experiencing or doing of something rather than to the thing itself, can be complemented by “to” infinitives. In a sentence like “This piece is difficult to perform,” the adjective “difficult” refers more to the performing than to the piece. These adjectives include “boring,” “delicious,” “difficult,” “easy,” “enjoyable,” “hard,” “impossible,” “tough,” and “tiresome.”

Adjectives that are usually complemented by “-ing” participles are much less numerous than the other kind. Among them are “busy,” “pointless,” “useless,” “worth,” and “worthwhile.” For instance, we can say, “She’s busy eating,” but not “She’s busy to eat.”

As for “busy,” notice what happens when we modify the adjective with “too”—both complements work: “She’s too busy eating” … “She’s too busy to eat.” Completely different meanings! This is because “too busy eating” implies a missing element—“… to do [something else].”

Some adjectives that are usually complemented by infinitives—like “absurd,” “annoying,” “awkward,” “fortunate,” “happy,” “logical,” “odd,” and “sad”—can be complemented with participles as well.

Here a point should be made. Sometimes the choice of adjective complement—infinitive or participle—makes no difference in meaning, especially if the subject is the dummy “it.” (Examples: “It’s exhausting to cook for twenty” … “It’s exhausting cooking for twenty.”)

But sometimes a different complement produces a different meaning. “He was happy to carry your suitcase” does not mean “He was happy carrying your suitcase.” Similarly, “I became afraid to be alone” is not the same as “I became afraid being alone.”

This is also true of verbs with these complements or objects. For instance, “I stopped to think” does not mean “I stopped thinking,” and “I remembered to call” does not mean “I remembered calling.” We’ve written several posts, most recently in 2019, about verbs that can have infinitives or gerunds or both as their objects or complements.

With verbs, too, linguists have found no clear pattern that could help a foreign student predict which types work with gerunds, or with infinitives, or with both. As we wrote in 2014, there are only broad outlines that don’t work reliably in all cases.

You can find more on this subject in The Cambridge Grammar of the English Language, by Rodney Huddleston and Geoffrey K. Pullum (pp. 1246, 1259), and A Comprehensive Grammar of the English Language, by Randolph Quirk et al. (pp. 1224, 1230-31, 1392-93).


Not a man but felt this terror

Q: I have a question about the strange use of “but” in the following letter of Emerson to Carlyle: “Not a reading man but has a draft of a new Community in his waistcoat pocket.” I see no modern definition of “but” that fits here. Is the usage archaic?

A: Yes, Ralph Waldo Emerson’s use of “but” is archaic in that sentence, but the usage is still occasionally seen in contemporary historical novels.

The sentence is from a letter Emerson wrote to Thomas Carlyle on Oct. 30, 1840. In it, Emerson refers to the plans of American social reformers to set up utopian communities inspired by the ideas of the French social theorist Charles Fourier.

The passage is especially confusing because it has principal and subordinate clauses with elliptical, or missing, subjects. The “but” is being used to replace a missing pronoun (the subject) in the subordinate clause and to make the clause negative.

Here’s the sentence with all the missing or substitute parts in place: “There is not a reading man who has not a draft of a new Community in his waistcoat pocket.”

As the Oxford English Dictionary explains, “but” is being used here “with the pronominal subject or object of the subordinate clause unexpressed, so that but acts as a negative relative: that … not, who … not (e.g. Not a man but felt this terror, i.e. there was not a man who did not feel this terror, they all felt this terror). Now archaic and rare.”

The earliest OED example of the usage is from a medieval romance: “There be none othir there that knowe me, but wold be glad to wite me do wele” (“There are none there that know me who would not gladly expect me to act well”). From The Three Kings’ Sons, circa 1500. Frederick James Furnivall, who edited the manuscript in 1895 for the Early English Text Society, suggested that David Aubert, a French calligrapher for the Duke of Burgundy, may have been the author.

The most recent Oxford example for this use of “but” is from a 20th-century historical novel for children:

“There is scarce one among us but knows the fells as a man knows his own kale-garth” (“There is scarce one among us who doesn’t know the hills as a man knows his own cabbage garden”). From The Shield Ring, 1956, by Rosemary Sutcliff.


Ethos, logos, pathos

Q: A friend and I were recently discussing “ethos,” “logos,” and “pathos.” Having studied classical Greek, I asserted they should be pronounced as the ancients did: eth-ahs, lah-gahs, and pa-thahs. My friend said English has adopted the words so the commonly used pronunciations of eth-ohs, loh-gohs, and pay-thohs are now acceptable. Any help?

A: As you know, ἦθος (“ethos”), λόγος (“logos”), and πάθος (“pathos”) are in Aristotle’s ῥητορική (Rhetoric), a treatise on the art of persuasion. In the work, he uses “ethos” (character), “logos” (reason), and “pathos” (emotion) in describing the ways a speaker can appeal to an audience. (The classical Greek terms have several other meanings, which we’ll discuss later.)

When English adopted the terms in the 16th and 17th centuries, they began taking on new senses. Here are the usual English meanings now: “ethos,” the spirit of a person, community, culture, or era; “logos,” reason, the word of God, or Jesus in the Trinity; “pathos,” pity or sympathy as well as a quality or experience that evokes them.

So how, you ask, should an English speaker pronounce these Anglicized words?

In referring to the Rhetoric and other ancient texts, we’d use reconstructed classical Greek pronunciations (EH-thahs, LAH-gahs, PAH-thahs), though there’s some doubt as to how Aristotle and others actually pronounced the terms. But in their modern English senses, we’d use standard English pronunciations for “ethos,” “logos,” and “pathos.”

As it turns out, the 10 online standard dictionaries we regularly consult list a variety of acceptable English pronunciations that include the reconstructed ones:

  • EE-thohs, EE-thahs, EH-thohs, or EH-thahs;
  • LOH-gohs, LOH-gahs, or LAH-gahs;
  • PAY-thohs, PAY-thahs, PAY-thaws, PAH-thohs, or PAH-thahs.

(Our preferences would be EE-thohs, LOH-gohs, and PAY-thohs for the modern senses, though these aren’t terms we use every day in conversation.)

When English adopts a word from another language, the spelling, pronunciation, meaning, number, or function of the loanword often changes—if not at once, then over the years. This shouldn’t be surprising, since English itself changes over time. The Old English spoken by the Anglo-Saxons is barely recognizable now to speakers of modern English.

Similarly, the Attic dialect used by Aeschylus (circa 525-455 BC) differed from the Attic of Aristotle (384-322 BC), the Doric dialect of Pindar (c. 518-438 BC), the Aeolic of Sappho (c. 630-570 BC), and the Ionic of the eighth-century BC Homeric epics, the Iliad and the Odyssey.

You were probably taught a reconstructed generic Attic pronunciation of the fifth century BC. The reconstruction originated with Erasmus in the early 16th century and was updated by historical linguists in the 19th and 20th centuries. The linguists considered such things as the meter in poetry, the way animal sounds were written, the spelling of Greek loanwords in Latin, usage in medieval and modern Greek, and the prehistoric Indo-European roots of the language.

But Attic, the dialect of classical Greek spoken in the Athens area, wasn’t generic—it was alive and evolving. And to use a fifth-century BC Attic reconstruction for all classical Greek spoken from the eighth to the fourth centuries BC is like using a generic Boston pronunciation of the 19th century for the English spoken in Alabama, New York, Ohio, and Maine from the 18th to the 21st centuries.

As for the etymology, English borrowed “ethos” from the classical Latin ēthos, which borrowed it in turn from ancient Greek ἦθος, according to the Oxford English Dictionary. In Latin, the word meant character or the depiction of character. In Greek, it meant custom, usage, disposition, character, or the delineation of character in rhetoric.

When “ethos” first showed up in English in the late 17th century, the OED says, it referred to “character or characterization as revealed in action or its representation.” The first Oxford example is from Theatrum Poetarum (1675), by the English writer Edward Phillips:

“As for the Ethos … I shall only leave it to consideration whether the use of the Chorus … would not … advance then diminish the present.” Some scholars believe that the poet John Milton, an uncle who educated Phillips, contributed to the work, which is a list of major poets with critical commentary.

In the mid-19th century, according to the OED, “ethos” came to mean “the characteristic spirit of a people, community, culture, or era as manifested in its attitudes and aspirations; the prevailing character of an institution or system.”

The first citation is from Confessions of an Apostate, an 1842 novel by Anne Flinders: “ ‘A sentiment as true as it is beautiful,’ I replied, ‘like the “austere beauty of the Catholic Ethos,” which we now see in perfection.’ ”

English adopted “logos” in the late 16th century from λόγος in classical Greek, where it meant word, speech, discourse, or reason. The OED’s first English citation uses it as “a title of the Second Person of the Trinity,” or Jesus:

“We cal him Logos, which some translate Word or Speech, and othersome Reason” (from A Woorke Concerning the Trewnesse of the Christian Religion, a 1587 translation by Philip Sidney and Arthur Golding of a work by the French Protestant writer Philippe de Mornay).

The OED says modern writers use the term “untranslated in historical expositions of ancient philosophical speculation, and in discussions of the doctrine of the Trinity in its philosophical aspects.”

English got “pathos” in the late 16th century from the Greek πάθος, which meant suffering, feeling, emotion, passion, or an emotional style or treatment. In English, it first meant “an expression or utterance that evokes sadness or sympathy,” a usage that Oxford describes as rare today.

The dictionary’s earliest English example is from The Shepheardes Calender (1579), the first major poetical work by the Elizabethan writer Edmund Spenser: “And with, A very Poeticall pathos.” (The original 1579 poem uses παθός, but a 1591 version published during Spenser’s lifetime uses “pathos.”)

In the mid-17th century, according to the OED, the term took on the modern sense of “a quality which evokes pity, sadness, or tenderness; the power of exciting pity; affecting character or influence.” The first citation is from “Of Dramatic Poesie,” a 1668 essay by John Dryden: “There is a certain gayety in their Comedies, and Pathos in their more serious Playes.”


Shrink, shrank, shrunk

Q: Is it OK to use “shrunk” as the past tense of “shrink,” as in Honey, I Shrunk the Kids?

A: Yes, it’s OK if you’re American, like that 1989 Disney film. However, British dictionaries are divided over the usage.

As we wrote in 2010, most standard American dictionaries recognize either “shrank” or “shrunk” as a legitimate past tense of “shrink.” So as far back as nine years ago, a sentence like “His trousers shrunk in the laundry” was widely accepted as standard in the US.

These were the recommended American forms: “shrink” as the present tense; “shrank” or “shrunk” as the past tense; “shrunk” or “shrunken” as the past participle (the form used in perfect tenses, requiring an auxiliary like “have” or “had”).

Today, acceptance of the past tense “shrunk” is even more pronounced, as we found in checking the 10 standard American and British dictionaries we usually consult.

All five of the American and three out of the five British dictionaries now accept “shrunk” as well as “shrank.” (One of those last three, Cambridge, qualified its acceptance by saying that “shrunk” is standard in the US but not in the UK.)

Only two holdouts insist on “shrank” alone as the past tense, the British dictionaries Longman and Lexico (formerly Oxford Dictionaries Online). They accept “shrunk” solely as a past participle.

Despite the increasing respectability of the past tense “shrunk,” it’s apparently regarded by some as casual or informal.

Merriam-Webster’s Dictionary of English Usage says that while “shrunk” is “undoubtedly standard” in the past tense, “shrank” is the usual preference in written English. (As we’ll show later, “shrunk” is widely preferred in common usage, if not in edited writing.)

However, we see no reason to avoid “shrunk,” even in formal writing. The Oxford English Dictionary, an etymological dictionary based on historical evidence, says “shrunk” has been used this way since the 1300s. It apparently fell out of favor—at least in written English—sometime in the 19th century and became respectable again in the latter half of the 20th.

The fact that modern lexicographers have come around to accepting “shrunk” is not an indication that standards are slipping or that English is becoming degraded. On the contrary, this development echoes a pattern seen with other verbs of that kind. Here’s the story.

The verb “shrink” was first recorded around the year 1000, as scrincan in Old English. It was inherited from other Germanic languages, with cousins in Middle Dutch (schrinken), Swedish (skrynkato), and Norwegian (skrekka, skrøkka).

In Anglo-Saxon days “shrink” had two past-tense forms—“shrank” (scranc) in the singular and “shrunk” (scruncon) in the plural—along with the past participle “shrunken” (gescruncen). So originally (and we’ll use the modern spellings here), the past-tense vowel changed only when the verb shifted from singular to plural, as in “I shrank” vs. “we shrunk.”

But in the 14th century, the dictionary says, the originally plural past tense “shrunk” began appearing with a singular subject (as in “I shrunk,” “he shrunk”). The dictionary’s earliest example is dated circa 1374:

“Sche constreynede and schronk hir seluen lyche to þe comune mesure of men” (“She contracted and shrunk herself to the common measure of men”). From Geoffrey Chaucer’s translation of Boethius’s De Consolatione Philosophiae.

This use of “shrunk,” the OED says, went on to become “frequent in the 15th cent.,” and was “the normal past tense in the 18th cent.”

Dictionaries of the time agree. A New Dictionary of the English Language (William Kenrick, 1773) and A General Dictionary of the English Language (Thomas Sheridan, 1780) both prefer “shrunk” over “shrank” as the past tense. They use the same illustration—“I shrunk, or shrank”—treating “shrank” as a secondary variant.

The preference for “shrunk” persisted among some writers well into the 19th century, as these OED citations show:

“Wherever he went, the enemy shrunk before him” (Washington Irving, A History of New York, 1809) … “Isaac shrunk together, and was silent” (Sir Walter Scott, Ivanhoe, 1819) … “She shrunk back from his grasp” (Scott’s novel Kenilworth, 1821) … “Opinions, which he never shrunk from expressing” (Edward Peacock’s novel Narcissa Brendon, 1891).

But in the meantime “shrank” was also being used, and during the 19th century its popularity gradually revived in written English. Soon it came to be regarded as the better choice.

By the early 20th century, textbooks and usage guides were recommending “shrank” as the proper past-tense form. Henry Fowler, in A Dictionary of Modern English Usage (1926), said “shrunk” had become archaic. (He was wrong, as we now know. Far from being archaic, “shrunk” had stubbornly persisted in common use.)

Here a question arises. If “shrunk” was the normal past tense in the 18th century, why did commentators in the early 20th century suggest that “shrank” was better?

Apparently arbiters of the language felt that forms of “shrink”—the present, past, and perfect tenses—should conform with those of similar verbs:  “drink/drank/drunk,” “sink/sank/sunk,” “swim/swam/swum,” and so on. They felt that the legitimate past tenses should be spelled with “a,” the past participles with “u,” and the distinction preserved.

But they overlooked the fact that many similar verbs had adopted “u” in the past tense with no objections. These all belong to a class that in Old English had “i” as the present-tense vowel and had two past-tense vowels: “a” in the singular (“I shrank,” “he shrank”) and “u” in the plural (“we shrunk,” “they shrunk”).

Examples of verbs like this include “cling,” “spin,” “swing,” and “wring.” By the 18th century, they had abandoned the old past tenses spelled with “a” (“clang,” “span,” “swang,” “wrang”) and adopted “u” forms identical to their past participles (“clung,” “spun,” “swung,” “wrung”).

The linguist Harold B. Allen has described “shrink” as “typical” of that class—Old English verbs that “in moving toward a single form for past and participle have popularly used the vowel common to both” (The English Journal, February 1957).

Unlike those other verbs, however, “shrink” was arrested in the process. Instead of dropping its “a” form completely, it has kept both past tenses, “shrank” and “shrunk.” (The same is true of the verbs “spring” and “stink,” which have retained both of their old past tense forms, “sprang/sprung” and “stank/stunk.”)

As we mentioned above, “shrunk” is the past tense favored in common usage. More than 60 years ago, Allen wrote that although textbooks listed “shrank” as the proper past tense, “shrunk” was more popular.

“The findings of the fieldwork for The Linguistic Atlas of the Upper Midwest,” he wrote, “indicate that 86.5% of all informants responding to this item use shrunk as the preterit [past tense].” And there was no evidence of a “small educated minority clinging to a favored shrank.”

The preference for “shrunk,” he said, was “nearly the same in all three groups: 89% of the uneducated, 89% of the high school graduates, and 86% of the college graduates.” Though preferences were divided, he wrote, “the general dominance of shrunk is certain, despite the contrary statements of the textbooks.”

A final word about “shrunken,” which dictionaries still list alongside “shrunk” as a past participle. Today it’s “rarely employed in conjugation with the verb ‘to have,’ ” the OED says. There, too, “shrunk” has become the popular choice (as in “The trousers have shrunk”), and “shrunken” is seen mostly as a participial adjective (“the shrunken trousers”).

The same thing has happened with the verb “drink.” The usual past participle is now “drunk” (as in “he had drunk the poison”), while the old past participle “drunken” is now used only as an adjective.

But as for its past tense, “drink” has held on to “drank” in modern English, and a usage like “he drunk the poison” is not considered standard.

Help support the Grammarphobia Blog with your donation. And check out our books about the English language. For a change of pace, read Chapter 1 of Swan Song, a comic novel.

Subscribe to the Blog by email

Enter your email address to subscribe to the Blog by email. If you’re a subscriber and not getting posts, please subscribe again.

Categories
English English language Etymology Expression Grammar Language Linguistics Phrase origin Usage Word origin Writing

Like more, only more so

Q: I’m seeing “more so” or “moreso” where I would expect “more.” Am I suffering from the usual recency illusion? Can I change it to “more” when editing? I sometimes have trouble knowing whether a language change is far enough along to indulge it.

A: The two-word phrase “more so” is standard English and showed up nearly three centuries ago. You can find it in two of James Madison’s essays in The Federalist Papers and in Jane Austen’s novel Emma.

The one-word version “moreso” has been around for almost two centuries, though it’s not accepted by any modern standard dictionary. The Oxford English Dictionary, an etymological reference, says it’s mainly an American usage.

The OED says “more so” (as well as “moreso”) is derived from the earlier use of “more” with “ellipsis of the word or sentence modified.” That is, it comes from the use of “more” by itself to modify missing words, as in “I found the first act delightful and the second act even more.” (Here “delightful” is missing after “more” but understood.)

The earliest Oxford example for this elliptical “more” usage, which we’ll expand here, is from a Middle English translation of a 13th-century French treatise on morality:

“He ssolde by wel perfect and yblissed ine þise wordle and more ine þe oþre” (“He shall be morally pure and blessed in this world and more in the other”; from Ayenbite of Inwyt, a 1340 translation by the Benedictine monk Michel of Northgate of La Somme des Vices et des Vertus, 1279, by Laurentius Gallus).

Today, the OED says in a December 2002 update to its online third edition, the usage is seen “frequently with anaphoric so” in the phrase “more so (also, chiefly U.S., moreso).” An anaphoric term refers back to a word or words used earlier, as in “I saw the film and so did she.”

The dictionary’s first citation for “more so” is from an early 18th-century treatise by the Irish philosopher George Berkeley: “This is so plain that nothing can be more so” (A Defence of Free-Thinking in Mathematics, 1735). Berkeley, California, was named after the philosopher, who was also the Anglican bishop of Cloyne, Ireland.

The next Oxford example is from a Federalist essay in which Madison discusses the size of districts that choose senators: “Those of Massachusetts are larger than will be necessary for that purpose. And those of New-York still more so” (Federalist No. 57, “The Alleged Tendency of the New Plan to Elevate the Few at the Expense of the Many Considered in Connection with Representation,” Feb. 19, 1788).

In the OED’s citation from Emma, published in 1815, Emma and Mr. Knightley are discussing Harriet’s initial rejection of Mr. Martin: “ ‘I only want to know that Mr. Martin is not very, very bitterly disappointed.’  ‘A man cannot be more so,’ was his short, full answer.”

The one-word version “moreso” soon appeared in both the US and the UK. The earliest British example that we’ve seen is from a clinical lecture on amputation delivered at St. Thomas’s Hospital, London, Nov. 25, 1823:

“In all these cases, it is of infinite importance to be prompt in your decision, moreso almost than in any other cases to be met with in the practice of the profession” (from an 1826 collection of surgical and clinical lectures published by the Lancet).

The earliest American example we’ve found is from an Indiana newspaper: “Cure for the Tooth ache—This is one of the most vexatious of the ills that flesh (or rather nerves) is heir to. The following simple prescription can do no injury, & from actual experiment, we know it to be highly efficacious, moreso than any specific the dread of cold iron ever induced the sufferer to” (Western Sun & General Advertiser, Vincennes, April 29, 1826).

A few months later, the one-word spelling appeared in the published text of a Fourth of July speech at the University of Vermont in Burlington. Here’s the relevant passage from the speech by George W. Benedict, a professor of natural philosophy and chemistry at the university:

“Much has been said of the ingratitude of popular governments. That in those of ancient times, the very individuals to whom they were under the greatest obligations were as liable as others, sometimes apparently moreso, the victims of sudden resentment or the objects of a cold, unfeeling neglect, is doubtless true.”

The OED’s only example for “moreso” is from a late 20th-century book published in Glasgow: “Anyone perceived as being different from society’s norms was a potential target—no-one moreso than the local wise-woman” (Scottish Myths and Customs, 1997, by Carol P. Shaw).

However, the dictionary does have a hyphenated example from the 19th century: “The English servant was dressed like his master, but ‘more-so’ ” (The Golden Butterfly, an 1876 novel by the English writers Walter Besant and James Rice).

The linguist Arnold Zwicky notes in a May 30, 2005, post on the Language Log that “more” could replace “more so” or “moreso” in all of the OED citations, though the anaphoric versions (those with “so”) may add contrast or emphasis:

“The choice between one variant and the other is a stylistic one. One relevant effect is that, in general, explicit anaphora, as in more so, tends to be seen as more emphatic or contrastive than zero anaphora, as in plain more.”

In the 21st century, people seem to be using the one-word “moreso” in several new nonstandard senses. For example, Zwicky points out that “moreso” is now being used as a simple emphatic version of “more,” without referring back to a word or words used earlier: “alternating more and moreso have been reinterpreted as mere plain and emphatic counterparts, with no necessary anaphoricity.”

Here’s a recent example from an NPR book review of Brynne Rebele-Henry’s Orpheus Girl, an updated version of the myth of Orpheus and Eurydice (Oct. 13, 2019): “Moreso than Hades’s mythic underworld of old, this camp is Actual Hell (and all the trigger warnings that go with that).”

In another innovative reinterpretation, Zwicky says in his 2005 post, “moreso” is being used as a sentence adverb “without any specific standard of comparison implicated.”

It means “moreover” or “furthermore” in this recent sighting on Amazon.com: “Moreso, infants and preschoolers do not have the ability to express feelings of sadness in apt language” (from a description of How to Detect and Help Children Overcome Depression, 2019, by J. T. Mike).

And  “moreso” is being used in the sense of “rather” in this example: “Scientist Kirstie Jones-Williams, who will be helping to train and guide the volunteer researchers, says the goal of the program isn’t to create more scientists, but moreso global ambassadors on the dangers of pollution and more” (from a Sept. 25, 2019, report on NBC Connecticut about a trip to Antarctica).

We’ve occasionally seen the two-word “more so” used in such creative ways too, perhaps influenced by the newer uses of “moreso.” The phrase is a sentence adverb meaning “more importantly” in this query about the hip-hop career of the former Pittsburgh Steelers wide receiver Antonio Brown:

“But we have to know, if Brown starts releasing music, will you listen? More so, will you buy it? Let us know” (an item that appeared Oct. 16, 2019, on Instagram from USA Today’s Steelers Wire).

Although lexicographers are undoubtedly aware of the evolution of “moreso” in the 21st century, none of these new senses have made it into either the OED or the 10 standard dictionaries we regularly consult. Webster’s New World, the only standard dictionary to take note of “moreso,” merely labels it a “disputed” spelling of “more so.” The online collaborative Wiktionary says it’s a “nonstandard” spelling of the phrase.

Getting back to your question, are you suffering from the recency illusion? Well, perhaps a bit. The term, coined by Zwicky, refers to the belief that things you recently notice are in fact recent. Yes, the anaphoric use of “more so” and “moreso” has been around for centuries, but “moreso,” with its new senses, seems to have increased in popularity in recent years.

Historically, “moreso” has been relatively rare in relation to “more so,” according to Google’s Ngram viewer, which compares words and phrases in digitized books published through 2008. However, recent searches with the more up-to-date iWeb corpus, a database of 14 billion words from 22 million web pages, suggest that “moreso” sightings may now be on the rise. Here’s what we found: “moreso,” 8,022 hits; “more so,” 107,837.
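To put those raw iWeb counts in perspective, here’s a minimal sketch that converts hit counts into percentage shares; the `share` function is ours, and the counts are the ones reported above:

```python
# Convert raw corpus hit counts into each variant's percentage share.
# The counts below are the iWeb figures quoted in the text:
# "moreso": 8,022 hits; "more so": 107,837 hits.

def share(hits: dict) -> dict:
    """Return each variant's percentage of the combined hit count."""
    total = sum(hits.values())
    return {variant: round(100 * count / total, 1) for variant, count in hits.items()}

iweb_hits = {"moreso": 8_022, "more so": 107_837}
print(share(iweb_hits))  # {'moreso': 6.9, 'more so': 93.1}
```

So even with “moreso” apparently on the rise, it still accounts for only about 7 percent of the combined occurrences in that corpus.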

Should you change “moreso” or “more so” to “more” when editing? That depends.

Since “moreso” isn’t standard English, we’d change it to an appropriate standard term, depending on the sense—“more,” “more so,” “moreover,” “rather,” and so on.

As for “more so,” we’d leave it alone if it’s being used anaphorically. Otherwise, we’d change it to an appropriate standard term.

But as you note in your question, the English language is evolving. If you ask us about this in a few years, we may have a different answer.


Can not, cannot, and can’t

Q: Can you please dwell in some detail on why “can not” is now usually written as “cannot”? Is there a linguistic reason for this uncontracted form? Or is it just one of those irregularities that cannot be accounted for?

A: When the usage showed up in Old English, the language of the Anglo-Saxons, it was two words.

One of the oldest examples in the Oxford English Dictionary is from the epic poem Beowulf, perhaps written as early as the 700s: “men ne cunnon” (“men can not”).

And here’s an expanded version that offers context as well as a sense of the Anglo-Saxon poetry:

“ac se æglæca etende wæs, / deorc deaþscua duguþe ond geogoþe, / seomade ond syrede; sinnihte heold / mistige moras; men ne cunnon / hwyder helrunan hwyrftum scriþað” (“all were in peril; warriors young and old were hunted down by that dark shadow of death that lurked night after night on the misty moors; men on their watches can not know where these fiends from hell will walk”).

The combined form “cannot” showed up in the Middle English period (1150 to 1450), along with various other spellings: cannat, cannatte, cannouȝt, connat, connott, conot, conott, cannote, connot, and cannott.

The earliest OED example with the modern spelling is from Cursor Mundi, an anonymous Middle English poem that the dictionary dates at sometime before 1325: “And þou þat he deed fore cannot sorus be” (“And thou that he [Jesus] died for cannot be sorrowful”).

In contemporary English, both “cannot” and “can not” are acceptable, though they’re generally used in different ways. The combined form, as you point out, is more common (Lexico, formerly Oxford Dictionaries Online, says it’s three times as common in the Oxford English Corpus).

Here’s an excerpt from the new, fourth edition of Woe Is I, Pat’s grammar and usage book, on how the two terms, as well as the contraction “can’t,” are generally used today:

CAN NOT / CANNOT / CAN’T. Usually, you can’t go wrong with a one-word version—can’t in speech or casual writing, cannot in formal writing. The two-word version, can not, is for when you want to be emphatic (Maybe you can hit high C, but I certainly can not), or when not is part of another expression, like “not only . . . but also” (I can not only hit high C, but also break a glass while doing it). Then there’s can’t not, as in The diva’s husband can’t not go to the opera.

Getting back to your question, why is “cannot” more popular than “can not”? We believe the compound is more common because the two-word phrase may be ambiguous.

Consider this sentence: “You can not go to the party.” It could mean either “You’re unable to go” or “You don’t have to go.” However, the sentence has only the first meaning if you replace “can not” with “cannot” (or the contraction “can’t”).

In The Cambridge Grammar of the English Language (2002), Rodney Huddleston and Geoffrey K. Pullum say that “You can’t/cannot answer their letters” means “It is not possible or permitted for you to answer their letters,” while “You can not answer their letters” means “You are permitted not to answer their letters.”

In speech, Huddleston and Pullum write, any ambiguity is cleared up by emphasis and rhythm: “In this use, the not will characteristically be stressed and prosodically associated with answer rather than with can by means of a very slight break separating it from the unstressed can.” The authors add that “this construction is fairly rare, and sounds somewhat contrived.”


Pat reviews 4 language books

Read Pat in the New York Times Book Review on four new books about the English language.


Making sense of mixing tenses

Q: I mixed tenses in two news items I wrote about a legal decision. In the original, I wrote, “the judge ruled such passenger fees are constitutional.” After a settlement months later, I wrote, “he said such fees were legal.” Both seem right, but I’m not sure why I used the present tense in the first and the past in the second.

A: Both seem right to us too, even though you combined the tenses differently. The first verb in each passage is in the past tense, but the tense of the second verb varies. As we’ll explain, this mixing of tenses is allowed.

The problem you raise—how to use tenses in a sequence—is particularly common among journalists, who are often required to use what The Cambridge Grammar of the English Language calls “indirect reported speech.”

This construction is used to report what somebody said, but not in a direct quote. The principal verb in your examples is in the past tense (“the judge ruled” … “he said”), but then you’re faced with the problem of what tense to use in the verbs that follow.

As we wrote in a 2015 post, the tenses that follow need not be identical to the first; in some cases either tense will do.

For instance, even when the second verb expresses something that is still true (those fees are still legal now), a writer may prefer to echo the past tense of the first verb. In fact, the default choice here is the past tense; the present tense may be used, but it’s not required.

In explaining how this works, the Cambridge Grammar begins with this quotation spoken by a woman named Jill: “I have too many commitments.”

Her “original speech,” the book says, may be reported indirectly as either “Jill said she has too many commitments” or “Jill said she had too many commitments.”

“The two reports do not have the same meaning,” write the authors, Rodney Huddleston and Geoffrey K. Pullum, “but in many contexts the difference between them will be of no pragmatic significance.”

So when would the difference matter? One factor that might make a writer choose one tense over the other is the time elapsed between the original speech and the reporting of it. Did Jill say this last year or five minutes ago?

In a sentence like “Jill said she had/has a headache,” the authors say, “Jill’s utterance needs to have been quite recent for has to be appropriate.”

In the case you raise, the original version is closer in time to the judge’s ruling, and the present tense is reasonable: “ruled that such passenger fees are constitutional.” But your follow-up story came much later, which may be why the past tense seemed better to you: “he said such fees were legal.”

In a post that we wrote in 2012, we note that the simple past tense takes in a lot of territory—the very distant as well as the very recent past. A verb like “said” can imply a statement made moments, years, or centuries ago—about situations long dead or eternally true. So the verbs that follow can be challenging.

As the Cambridge Grammar explains, there are no “rules” for this. But in our opinion, if an experienced writer like you thinks the tense in a subordinate clause is reasonable and logical, it probably is.


Hear Pat on Iowa Public Radio

She’ll be on Talk of Iowa today from 10 to 11 AM Central time (11 to 12 Eastern) to discuss the wonders of adjectives, and to take questions from callers.


Are you down on “up”?

Q: How did “heat up” replace “heat” in referring to heating food? And why has the equally awful “early on” become so popular?

A: “Heat up” hasn’t replaced “heat” in the kitchen, but the use of the phrasal verb in this sense has apparently increased in popularity in recent years while the use of the simple verb has decreased.

A search with Google’s Ngram Viewer, which compares phrases in digitized books, indicates that “heat the soup” was still more popular than “heat up the soup” as of 2008 (the latest searchable date), though the gap between them narrowed dramatically after the mid-1980s.

However, we haven’t found any standard dictionary or usage guide that considers “heat up” any less standard than “heat” in the cooking sense.

Merriam-Webster online defines the phrasal verb as “to cause (something) to become warm or hot,” and gives this example: “Could you heat up the vegetables, please?”

You seem to think that “heat up” is redundant. We disagree.

As you probably know, “up” is an adverb as well as a preposition. In the phrasal verb “heat up,” it’s an adverb that reinforces the meaning of the verb. (A phrasal verb consists of a verb plus one or more linguistic elements, usually an adverb or a preposition.)

In a 2012 post entitled “Uppity Language,” we quote the Oxford English Dictionary as saying the adverb “up” in a phrasal verb can express “to or towards a state of completion or finality,” a sense that frequently serves “to emphasize the import of the verb.”

The OED, an etymological dictionary based on historical evidence, doesn’t mention “heat up” in that sense, but it cites “eat up,” “swallow up,” “boil up,” “beat up,” “dry up,” “finish up,” “heal up,” and many other phrasal verbs in which “up” is used to express bringing something to fruition, especially for emphasis.

Our impression is that people may also feel that it’s more informal to “heat up” food than simply “heat” it, though dictionaries don’t make that distinction. The phrasal verb “hot up” is used similarly in British English as well as in the American South and South Midland, and dictionaries generally regard that usage as informal, colloquial, or slang.

We also feel that people may tend to use “heat up” for reheating food that’s already cooked, and “heat” by itself for heating food that’s prepared from scratch. An Ngram search got well over a hundred hits for “heat up the leftovers,” but none for “heat the leftovers.” However, we haven’t found any dictionaries that make this distinction either.

In addition to its food sense, “heat up” can also mean “to become more active, intense, or angry,” according to Merriam-Webster online, which cites these examples: “Their conversation started to heat up” … “Competition between the two companies is heating up.”

And the adverb “up” can have many other meanings in phrasal verbs: from a lower level (“pick up,” “lift up”), out of the ground (“dig up,” “sprout up”), on one’s feet (“get up,” “stand up”), separate or sever (“break up,” “tear up”), and so on.

When the verb “heat” appeared in Old English (spelled hǽtan, haten, hatten, etc.), it was intransitive (without an object) and meant to become hot. The earliest citation in the Oxford English Dictionary is from a Latin-Old English entry in the Epinal Glossary, which the OED dates at sometime before 700: “Calentes, haetendae.”

The first OED citation for the verb used transitively (with an object) to mean make (something) hot is from Old English Leechdoms, a collection of medical remedies dating from around 1000: “hæt scenc fulne wines” (“heat a cup full of wine”).

As far as we can tell, the phrasal verb “heat up” appeared in the second half of the 19th century, though not in its cooking sense. The earliest example we’ve seen is from an April 9, 1878, report by the US Patent Office about an invention in which a system of pipes “is employed to heat up the feedwater of a steam-boiler.”

A lecture in London a few years later touches on cooking: “Now a Bunsen burner will roast meat very well, provided that the products of combustion are not poured straight on to whatever is being cooked; the flame must be used to heat up the walls of the roaster, and the radiant heat from the walls must roast the meat.” (The talk on the use of coal gas was given on Dec. 15, 1884, and published in the Journal of the Society of Arts, Jan. 9, 1885.)

The earliest example we’ve seen for “heat up” used in the precise sense you’re asking about is from a recipe for shrimp puree in Mrs. Roundell’s Practical Cookery Book (1898), by Mrs. Charles Roundell (Julia Anne Elizabeth Roundell):

“bring to the boil, skimming off any scum that may rise, then cool, and pass all through the sieve into another stewpan, stir in the shrimps that were reserved for garnish and heat up.”

As for the adverbial phrase “early on,” it’s been used regularly since the mid-18th century to mean “at an initial or early stage,” according to the OED. The dictionary also cites examples of the variant “earlier on” from the mid-19th century.

Oxford’s earliest example of “early on” is from a 1759 book about tropical diseases by the English physician William Hillary: “When I am called so early on in the Disease … I can strictly pursue it” (from Observations on the Changes of the Air, and the Concomitant Epidemical Diseases in the Island of Barbados).

And the first “earlier on” example is from the Manchester Guardian, April 21, 1841: “It took place earlier on in the year.”

You’re right that “early on” has grown in popularity lately, though “earlier on” has remained relatively stable, according to a comparison of the phrases in the Ngram Viewer.

However, we don’t see why the usage bothers you. The four online standard dictionaries we’ve consulted (Merriam-Webster, American Heritage, Oxford, and Longman) list it without comment—that is, as standard English.


Why foxes have fur, horses hair

Q: Why do we say some animals have “hair” while others have “fur”?

A: All mammals have hair—dogs, cats, foxes, pigs, gerbils, horses, and people. Even dolphins have a few whiskers early in their lives. Scientifically speaking, there’s no difference between hair and fur.

“This is all the same material,” Dr. Nancy Simmons, a mammalogist with the American Museum of Natural History, said in a 2001 interview with Scientific American. “Hair and fur are the same thing.”

She added that there are many norms for hair length, and that different kinds of hair can have different names, such as a cat’s whiskers and a porcupine’s quills.

Well, science is one thing, but common English usage is another. Most of us do have different ideas about what to call “hair” and what to call “fur.”

For example, we regard humans as having “hair,” not “fur.” And we use “hair” for what grows on livestock with thick, leathery hides—horses, cattle, and pigs.

But we generally use “fur” for the thick, dense covering on animals like cats, dogs, rabbits, foxes, bears, raccoons, beavers, and so on.

Why do some animals have fur and others hair? The answer lies in the origins of the noun “fur,” which began life as an item of apparel.

In medieval England, “fur” meant “a trimming or lining for a garment, made of the dressed coat of certain animals,” according to the Oxford English Dictionary.

The source, the dictionary suggests, is the Old French verb forrer, which originally meant to sheathe or encase, then “developed the sense ‘to line,’ and ‘to line or trim with fur.’ ”

When the word “fur” first entered English, it was a verb that meant to line, trim, or cover a garment with animal hair. The earliest OED use is from Kyng Alisaunder, a Middle English romance about Alexander the Great, composed in the late 1200s or early 1300s:

“The kyng dude of [put on] his robe, furred with meneuere.” (The last word is “miniver,” the white winter pelt of a certain squirrel.)

The noun followed. Its first known use is from The Romaunt of the Rose, an English translation (from 1366 or earlier) of an Old French poem. The relevant passage refers to a coat “Furred with no menivere, But with a furre rough of here [hair].”

The noun’s meaning gradually evolved over the 14th and 15th centuries. From the sense of a lining or trimming, “fur” came to mean the material used to make it. Soon it also meant entire garments made of this material, as well as the coats of the animals themselves.

Oxford defines that last sense of “fur” this way: “The short, fine, soft hair of certain animals (as the sable, ermine, beaver, otter, bear, etc.) growing thick upon the skin, and distinguished from the ordinary hair, which is longer and coarser. Formerly also, the wool of sheep” [now obsolete].

Note that this definition establishes the distinction between the special hair we call “fur” (short, fine, soft), and “ordinary hair” (longer, coarser).

The dictionary’s earliest citation is a reference to sheep as bearing “furres blake and whyte” (circa 1430). The first non-sheep example was recorded in the following century, a reference to the “furre” of wolves (Edmund Spenser, The Shepheardes Calender, 1579).

From the 17th century on, examples are plentiful. Shakespeare writes of “This night wherin … The Lyon, and the belly-pinched Wolfe Keepe their furre dry” (King Lear, 1608). And Alexander Pope writes of “the strength of Bulls, the Fur of Bears” (An Essay on Man, 1733).

But a mid-18th-century example in the OED stands out—at least for our purposes—because it underscores that “fur” was valued because it was soft and warm: “Leave the Hair on Skins, where the Fleece or Fir is soft and warm, as Beaver, Otter, &c.” (From An Account of a Voyage for the Discovery of a North-west Passage, 1748, written by the ship’s clerk.)

Elsewhere in the account, the author notes that deer or caribou skins were “cleared of the Hair” to make use of the skin as leather.

As for “hair,” it’s a much older word than “fur,” and it came into English from Germanic sources rather than from French.

Here’s the OED definition: “One of the numerous fine and generally cylindrical filaments that grow from the skin or integument of animals, esp. of most mammals, of which they form the characteristic coat.”

The word was spelled in Old English as her or hær, Oxford says, and was first recorded before the year 800 in a Latin-Old English glossary: “Pilus, her.” (In Latin pilus is a single hair and pili is the plural.)

By around the year 1000, “hair” was also used as a mass or collective noun, defined in the OED as “the aggregate of hairs growing on the skin of an animal: spec. that growing naturally upon the human head.”

In summary, most of us think of “fur” as soft, cuddly, warm, and dense. We don’t regard “hair” in quite the same way (even though it technically includes “fur”). “Hair,” in other words, covers a lot more bases.

But in practice, English speakers use the words “hair” and “fur” inconsistently. People often regard some animals, especially their pets, as having both “fur” and “hair.”

They may refer to Bowser’s coat as “fur,” but use the word “hair” for what he leaves on clothes and furniture. And when he gets tangles, they may say that either his “hair” or his “fur” is matted and needs combing out.

Furthermore (no pun intended), two different people might describe the same cat or dog differently—as having “hair” or “fur,” as being “hairy” or “furry,” and (particularly in the case of the cat) as throwing up a “hairball” or a “furball.” They simply perceive the animal’s coat differently.

Our guess is that people base their choice of words on what they perceive as the thickness, density, or length of a pet’s coat. The heavy, dense coat of a Chow dog or a Persian cat is likely to be called “fur.” And the short, light coat of a sleek greyhound or a Cornish Rex is likely to be called “hair.”

Help support the Grammarphobia Blog with your donation.
And check out our books about the English language.

Subscribe to the Blog by email

Enter your email address to subscribe to the Blog by email. If you are an old subscriber and not getting posts, please subscribe again.


A new ‘Woe Is I’ for our times

[This week Penguin Random House published a new, fourth edition of Patricia T. O’Conner’s bestselling grammar and usage classic Woe Is I: The Grammarphobe’s Guide to Better English in Plain English. To mark the occasion, we’re sharing the Preface to the new edition.]

Some books can’t sit still. They get fidgety and restless, mumbling to themselves and elbowing their authors in the ribs. “It’s that time again,” they say. “I need some attention here.”

Books about English grammar and usage are especially prone to this kind of behavior. They’re never content with the status quo. That’s because English is not a stay-put language. It’s always changing—expanding here, shrinking there, trying on new things, casting off old ones. People no longer say things like “Forsooth, methinks that grog hath given me the flux!” No, time doesn’t stand still and neither does language.

So books about English need to change along with the language and those who use it. Welcome to the fourth edition of Woe Is I.

What’s new? Most of the changes are about individual words and how they’re used. New spellings, pronunciations, and meanings develop over time, and while many of these don’t stick around, some become standard English. This is why your mom’s dictionary, no matter how fat and impressive-looking, is not an adequate guide to standard English today. And this is why I periodically take a fresh look at what “better English” is and isn’t.

The book has been updated from cover to cover, but don’t expect a lot of earthshaking changes in grammar, the foundation of our language. We don’t ditch the fundamentals of grammar and start over every day, or even every generation. The things that make English seem so changeable have more to do with vocabulary and how it’s used than with the underlying grammar.

However, there are occasional shifts in what’s considered grammatically correct, and those are reflected here too. One example is the use of they, them, and their for an unknown somebody-or-other, as in “Somebody forgot their umbrella”—once shunned but now acceptable. Another has to do with which versus that. Then there’s the use of “taller than me” in simple comparisons, instead of the ramrod-stiff “taller than I.” (See Chapters 1, 3, and 11.)

Despite the renovations, the philosophy of Woe Is I remains unchanged. English is a glorious invention, one that gives us endless possibilities for expressing ourselves. It’s practical, too. Grammar is there to help, to clear up ambiguities and prevent misunderstandings. Any “rule” of grammar that seems unnatural, or doesn’t make sense, or creates problems instead of solving them, probably isn’t a legitimate rule at all. (Check out Chapter 11.)

And, as the book’s whimsical title hints, it’s possible to be too “correct”—that is, so hung up about correctness that we go too far. While “Woe is I” may appear technically correct (and even that’s a matter of opinion), the lament “Woe is me” has been good English for generations. Only a pompous twit—or an author trying to make a point—would use “I” instead of “me” here. As you can see, English is nothing if not reasonable.

(To buy Woe Is I, visit your local bookstore or Amazon.com.)


Who, me?

Q: In Michelle Obama’s memoir, Becoming, she uses this sentence to describe the sacrifices her parents made in raising her and her brother Craig: “We were their investment, me and Craig.” Surely that should be “Craig and I.”

A: Not necessarily. We would have written “Craig and I.” But the sentence as written is not incorrect. It’s informal, but not ungrammatical.

Here the compound (“me and Craig”) has no clear grammatical role. And as we wrote in 2016, a personal pronoun without a clear grammatical role—one that isn’t the subject or object of a sentence—is generally in the objective case.

In our previous post, we quoted the linguist Arnold Zwicky—the basic rule is “nominative for subjects of finite clauses, accusative otherwise.” In other words, when the pronoun has no distinctly defined role, the default choice is “me,” not “I.”

The Merriam-Webster Online Dictionary has this usage note: “I is now chiefly used as the subject of an immediately following verb. Me occurs in every other position.” The examples given include “Me too” … “You’re as big as me” … “It’s me” … “Who, me?”

“Almost all usage books recognize the legitimacy of me in these positions,” M-W says.

As we said, we think the compound “me and Craig” has no clear grammatical role. But digging deeper, we could interpret it as placed in apposition to (that is, as the equivalent of) the subject of the sentence: “we.” And technically, appositives should be in the same case, so the pronoun in apposition to “we” should be a subject pronoun: “I [not “me”] and Craig.”

That’s a legitimate argument, and if the author were aiming at a more formal style, she no doubt would have taken that route.

On the other hand, the same argument could be made against “Who, me?” Those two pronouns could be interpreted as appositives, but forcing them to match (“Whom, me?” or “Who, I?”) would be unnatural.

In short, the choice here is between formal and informal English (not “correct” versus “incorrect”), and the author chose the informal style.

By the way, as we wrote in 2012, the order in which the pronoun appears in a compound (as in “me and Craig” versus “Craig and me”) is irrelevant. There’s no grammatical rule that a first-person singular pronoun has to go last. Some people see a politeness issue here, but there’s no grammatical foundation for it.

That said, when the pronoun is “I,” it does seem to fall more naturally into the No. 2 slot. “Tom and I are going” seems to be a more natural word order than “I and Tom are going.” This is probably what’s responsible for the common (and erroneous) use of “I” when it’s clearly an object—as in “Want to come with Tom and I?”
