The Grammarphobia Blog

Like father, like son

Q: What is the history of the phrase “like father, like son”? Does it hark back to a time when this sort of parallel construction was common?

A: The expression “like father, like son” is an old English proverb with roots in classical Latin. Like many other English proverbs, it doesn’t conform to the usual syntax (that is, the arrangement of words and phrases into sentences).

In “Proverbs,” an essay in the Encyclopaedia of the Linguistic Sciences, the philologist Neal R. Norrick explains that proverbs like the one you’re asking about don’t adhere to the traditional use of noun phrases and verb phrases.

“Many proverbs such as Like father, like son and The nearer the bone, the sweeter the meat adhere to formulas, here like X, like Y and The X-er, the Y-er, which do not conform to customary NP + VP syntactic structure,” Norrick writes. “So special interpretative rules beyond regular compositional semantic principles are necessary to assign these proverbs even literal readings.”

Such literal readings, he says, “provide the basis on which figurative interpretations are determined.”

“One interpretative rule will relate the formula like X, like Y to the reading ‘Y is like X’ to derive for Like father, like son the interpretation ‘the son is like the father’; another rule relates the formula The X-er, the Y-er to ‘Y is proportional to X’ to interpret The nearer the bone, the sweeter the meat as ‘the sweetness of the meat is proportional to the nearness of the bone’; and so on for other recurrent formulas.”
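Norrick’s interpretive rules work like pattern matching: a proverb is checked against a known formula, and the formula is rewritten into a literal reading. The short Python sketch below is our own illustration, not Norrick’s; the `literal_reading` function and its paraphrase wordings are assumptions made for demonstration.

```python
import re

def literal_reading(proverb):
    """Sketch of Norrick-style interpretive rules: match a proverb
    against a known formula and rewrite it as a literal reading."""
    text = proverb.strip().rstrip(".").lower()

    # Rule 1: "like X, like Y" -> "Y is like X"
    m = re.match(r"like (.+), like (.+)", text)
    if m:
        x, y = m.groups()
        return f"the {y} is like the {x}"

    # Rule 2: "the X-er ..., the Y-er ..." -> "Y is proportional to X"
    m = re.match(r"(the \w+er [^,]+), (the \w+er .+)", text)
    if m:
        x, y = m.groups()
        return f"{y} in proportion as {x}"

    return None  # no formula matched; other rules would be needed

print(literal_reading("Like father, like son"))
# -> the son is like the father
print(literal_reading("The nearer the bone, the sweeter the meat"))
# -> the sweeter the meat in proportion as the nearer the bone
```

Note that an elliptical proverb like “once bitten, twice shy” falls through to None here, which matches Norrick’s point that such proverbs need expansion into a full paraphrase before any literal reading can be assigned.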

Norrick, who holds the chair of English philology at Saarland University in Saarbrücken, Germany, says other proverbs, like “once bitten, twice shy” and “sow the wind, reap the storm,” are “radically elliptical, rather than formulaic, as such.”

“They require expansion before they can receive grammatical analyses interpretable by regular compositional principles,” he adds. “This suggests a cognitive procedure in which a person constructs a complete paraphrase of the elliptical proverb, then assigns the interpretation derived from the paraphrase.”

Norrick’s analysis can be heavy going for lay readers, so we’ll simply say that proverbs are often idiomatic expressions that don’t necessarily conform to the traditional rules of English.

The McGraw-Hill Dictionary of American Idioms and Phrasal Verbs defines the proverb “like father, like son” this way: “Fathers and sons resemble each other, and sons tend to do what their fathers did before them.”

The American Heritage Dictionary of Idioms, which defines the adage as “In the same manner from generation to generation,” says, “This ancient proverb has been stated in English in slightly varying versions since the 1300s.”

American Heritage cites this 17th-century variation: “Like father, like son; like mother, like daughter,” from Bibliotheca Scholastica Instructissima (1616), a book of proverbs collected by the English theologian Thomas Draxe.

The dictionary also cites two anonymous Latin sayings, Qualis pater, talis filius (“as the father, so the son”) and patris est filius (“he is his father’s son”), as the source of the English proverb “like father, like son.”

However, a mother-daughter version appears in the Hebrew text of the Old Testament (Ezekiel 16:44): “As the mother, so her daughter.”

Help support the Grammarphobia Blog with your donation. And check out our books about the English language.

Why do the English drop aitches?

Q: Is there a linguistic relationship between the missing “h” sound in French and Eliza Doolittle’s aitch-dropping in Pygmalion and My Fair Lady?

A: The English have been dropping their aitches in speech and in spelling since Anglo-Saxon times, but the process accelerated as Old English gave way to Middle English in the 11th century.

Is French responsible for this “h”-dropping in English?

Well, Anglo-Norman, spoken by the Francophile upper classes in England for several hundred years after the Norman Conquest, is responsible for some of the “h” loss in Middle English, but not for Eliza’s cockney “h”-dropping.

Anglo-Norman, as well as Old French and Middle French, clearly influenced the absence of the “h” sound in some loanwords of Latin origin in Middle English, such as “honor,” “honest,” and “hour.”

But it’s uncertain whether Anglo-Norman, a Romance language formed from various French dialects, is responsible for any of the “h”-dropping in Middle English words of Anglo-Saxon origin.

One problem for linguists is determining how much of the “h”-dropping in Old English and Middle English writing reflected “h”-dropping in speech.

Some linguists have argued that the increase in “h”-dropping in Middle English texts was merely the result of errors by scribes who spoke Anglo-Norman, with its silent “h.”

But other linguists have said that the “h”-dropping in Middle English writing reflected “h”-dropping in speech, and that this was the result of the inherent weakness and instability of the phoneme, or unit of sound, represented by the letter “h.”

Today, “h”-dropping is associated with the cockney speech of working-class Londoners, but this loss of the “h” sound in words like “hammer,” “hat,” “house,” and “behind” is common in most regions of England, according to linguists.

In fact, “h”-dropping is not unknown in Received Pronunciation, the standard British accent. In addition to dropping the “h” sound in the Gallic loanwords mentioned above, RP speakers used to drop it in “historic,” resulting in uses like “an ’istoric.”

RP speakers now pronounce all the letters of “historic,” but they’ve kept the indefinite article “an,” even though the article “a” would be standard before a word beginning with a sounded “h,” the phonetician John C. Wells writes in Accents of English (1982).

In A Course in Phonetics (1982), the phonetician Peter Ladefoged says “h” acts “like a consonant, but from an articulatory point of view is simply the voiceless counterpart of the following vowel.”

“It does not have a specific place of articulation,” he writes, “and its manner of articulation is the same as that of a vowel, only the state of the glottis is different.” (The glottis is made up of the vocal cords and the opening between them.)

As the linguist Larry Trask explains, “h” is “a very weak consonant, almost the last trace of anything we can call a consonant at all, and it disappears very easily.”

In classical times, Trask points out in a contribution to the Linguist List, the “h” sound “was completely gone in popular Latin speech by the first century BC, though it may have been retained for a while by a few pedants.”

“The Romance languages sometimes continue to write this long-lost /h/ in their orthographies,” he adds, “but this is purely for old times’ sake.”

However, the “h” sound was alive and well in Old English, according to linguists who have reconstructed Anglo-Saxon speech based on things like the rhyme in verse, the spelling of Latin loanwords, and related words in other Germanic languages.

The letter “h” had several pronunciations in Old English, which was spoken from about the 5th through the 11th centuries:

● In front of vowels, “h” sounded much as it does today.

● In front of consonants, it had a breathy sound.

● After a vowel pronounced at the front of the mouth (like “e” or “i”), “h” sounded like the “ch” in the German ich.

● After a vowel pronounced at the back of the mouth (like “a” or “o”), it sounded like the “ch” in the Scottish loch.

The use of “h” before consonants at the beginning of words began dying out in Old English and Middle English texts, according to citations in the Oxford English Dictionary.

For example, the noun “ring” (the finger ornament) was hringae, hringiae, etc. in early Old English, but came to be spelled ringce, ryngc, ring, and so on in later Old English.

The noun “nut” (the seed) was originally hnut- or hnute- (in compounds) in Anglo-Saxon writing, and then nut-, nute, etc., in later Old English.

The adjective “loud” was hlúd in Old English and then lud(e), loude, lowd(e), and so on in Middle English.

The “h”-dropping in Old English texts presumably reflected the loss of the “h” sound in speech, according to phoneticians, linguists who specialize in phonetics.

However, scholars have debated the cause of the “h” loss in Middle English writing.

The 19th-century philologist Walter William Skeat attributed the loss of the letter “h” in Middle English writing to spelling errors by Anglo-Norman scribes.

But James Milroy, a 20th-century linguist, believed the scribes were representing the “h”-dropping in speech.

Milroy, who exhaustively studied “h”-dropping in England, writes in the Cambridge History of the English Language that in certain regions of medieval England “the syllable initial [h] was not present, or only variably present,” in speech.

Trask, a professor of linguistics at the University of Sussex, raises an interesting point on the Linguist List about contemporary “h” dropping in working-class speech in England.

Although the “h” sound in words of Anglo-Saxon origin (like “hair,” “heart,” “harm,” and “hit”) is “completely gone in the vernacular speech of almost all of England,” Trask writes, there’s no sign of such “h”-dropping in North America.

(The “h”-less US pronunciation of “herb” is not an American version of cockney “h”-dropping. It’s the original pronunciation in Middle English, when the Old French loanword was usually spelled “erbe.” As the OED notes, in British speech “the h was mute until the 19th cent.”)

Why is cockney-style “h”-dropping common among the English, but unknown among Americans?

In Accents of English, Wells, a professor emeritus at University College London, suggests that the American colonists didn’t take such “h”-dropping with them to the New World because they left before its widespread appearance in England.

“The fact that H dropping is unknown in North America strongly suggests that it arose in England only well after the American colonies were founded,” he writes.

Although “h”-dropping did occur in Old English and Middle English, as we’ve said, it apparently wasn’t common enough in England to get the attention of language commentators and novelists until the latter half of the 18th century.

In Talking Proper (1995), Lynda Mugglestone, an Oxford historian of the English language, says the first language writer to complain about “h”-dropping was the actor-educator Thomas Sheridan.

In A Course of Lectures on Elocution (1762), Sheridan criticizes “the omission of the aspirate in many words by some, and in most by others.”

And in Propriety Ascertained in Her Picture (1786), a pronunciation and spelling guide, James Elphinston condemns the “lowliness” and “impropriety” of pronunciations like “uman,” “umor,” and “umbel” (for “human,” “humor,” and “humble”).

Later, Lindley Murray’s influential English Grammar (1795) describes the “h” sound as a requirement for “educated” speech, and blames “the negligence of tutors” and “the inattention of pupils” for its loss.

As for fiction, Winifred Jenkins, a maid in Tobias Smollett’s last novel, The Expedition of Humphry Clinker (1771), drops her aitches on and off, referring to “heart” as “art,” and “harm” as “arm.”

By the mid-19th century, working-class characters routinely dropped their aitches in novels. As Uriah Heep says in David Copperfield (1850): “I am well aware that I am the umblest person going.”

(Although “humble” was the standard spelling of the word in Dickens’s day, its original spelling in Middle English was “umble.”)

We can’t conclude this discussion of “h”-dropping without mentioning the many Old English words that began with “hw” but now begin with “wh,” including hwæt (“what”), hwanne (“when”), hwǽr (“where”), hwæs (“whose”), hwā (“who”), hwí (“why”), hwelc (“which”), hwæðer (“whether”), and so on.

The OED says the “normal Old English spelling hw was generally preserved in early Middle English,” and the “modern spelling wh is found first in regular use in the Ormulum,” a 12th-century religious work in which whillc is used for “which.”

“In Old English the pronunciation symbolized by hw was probably in the earliest periods a voiced bilabial consonant preceded by a breath,” according to the dictionary. (A voiced bilabial consonant is one in which the vocal cords vibrate and the air flow is restricted by the lips.)

Interestingly, the words that began with “hw” in Old English have given us two types of “wh” words today: those in which the “w” sound predominates (“why,” “where,” “when,” etc.) and those in which the “h” sound predominates (“who,” “whom,” “whose”).

In case you’re wondering, “whore” was originally spelled hóre in Old English, and retained its “h” pronunciation when the “wh” spelling of the word arose in the 16th century.

An 1830 edition of Walker’s Critical Pronouncing Dictionary gives two pronunciations, “höör, or höre,” and adds: “If there can be a polite pronunciation of this vulgar word, it is the first of these, rhyming with poor.”

If you’d like to read more, we’ve written several posts about “herb” and “historic,” including Herbal remedies in 2009 and Historic article in 2012.


On chicks and chickens

Q: In a book about exotic chickens, I read that linguistic purists say “chicken” is plural for “chick,” akin to “children” and “child” or “oxen” and “ox.” What say you?

A: That book is wrong. The word “chicken” is singular and has been since Old English. As we’ll explain, the “-en” in “chicken” was originally a diminutive, not a plural ending.

Only in rural dialects, mostly in the 19th century in the southwest of England, has “chicken” ever been used as a plural for “chick.”

We found this explanation in an 1895 grammar book: “Chicken is not a plural word, though it is used as such in country districts” (from English Grammar for Beginners, by Alfred S. West).

The Oxford English Dictionary quotes this regional definition: “Chicken, in Mid-Sussex used as the plural of chick” (from William Douglas Parish’s A Dictionary of the Sussex Dialect and Collection of Provincialisms, 1875).

In fact, “chicken” dates from around 950 and perhaps as early as 700. But “chick” didn’t appear in writing until 1320, as an abbreviation for “chicken.”

“Chicken” (spelled cicen, ciken, and ciccen in Old English) is similar to words in other Germanic languages.

“Chicken is a widespread Germanic word,” John Ayto writes in the Dictionary of Word Origins.

The OED notes versions in such Germanic languages as Dutch (kieken, kuiken), German (küchlein), Old Norse (kjúklingr), Swedish (kjukling), and Danish (kylling).

The ultimate source of these words, etymologists believe, is a prehistoric Proto-Germanic word reconstructed as kiukinan (or kiukinam).

This ancient word was “imitative, like Old English cocc, of the sound of the bird,” according to the Chambers Dictionary of Etymology.

The ancient kiukinan was formed, Ayto says, by adding a diminutive suffix to the root keuk-. And some etymologists, according to the OED, suggest that this root was a form of kuk-, the source of “cock.”

“If that is so,” Ayto comments, “a chicken would amount etymologically to a ‘little cock’ (and historically the term has been applied to young fowl, although nowadays it tends to be the general word, regardless of age).”

The diminutive notion makes sense, considering that the earliest meaning of “chicken” in the OED is “the young of the domestic fowl.”

This sense of the word was first recorded in the Lindisfarne Gospels, an illuminated manuscript that the OED dates from around 950 (though some scholars trace it to as early as circa 700). 

Here’s a relatively reader-friendly 1526 citation: “He … cherissheth vs, as the egle her byrdes: the broode henne her cheykyns” (from William Bonde’s treatise The Pylgrimage of Perfection).

In the early 19th century, “chicken” came to mean “a domestic fowl of any age,” the OED says, although the abbreviation “chick” kept its older meaning: a young bird.

However, “chicken” has retained its youthful associations in some senses.

Since the early 18th century, according to the OED, it’s been used in writing to mean “a youthful person: one young and inexperienced.” Thus “no chicken,” the dictionary says, can mean “no longer young.”

The OED’s earliest citation for this sense is from Richard Steele, writing in the Spectator in 1711: “You ought to consider you are now past a Chicken; this Humour, which was well enough in a Girl, is insufferable in one of your Motherly Character.”

This later citation shows how “no chicken” was (and sometimes still is) used: “He must have been well forward in years—or at all events, as they say, no chicken” (from Edward Walford’s Tales of Our Great Families, 1877).

In case you’re wondering, “spring chicken” referred to a young fowl when the phrase first appeared in the late 18th century.

A “spring chicken,” the OED says, simply meant “a small chicken (esp. a roasting bird)” or more specifically “one aged between eleven and fourteen weeks.”

The dictionary’s earliest example of the phrase used literally is from a 1770 entry in the diary of an English clergyman, James Woodforde: “We had for dinner … three nice Spring Chicken rosted.”

The figurative use of “spring chicken” to mean a young person—and of “no spring chicken” for someone not so young—was an American invention, the OED says.

Oxford’s earliest citation is from a 1910 issue of the National Police Gazette: “She wasn’t a Spring chicken, by any means, yet she wasn’t old.”

But we’ve found several earlier examples, and they aren’t all American. The first two are from boys’ adventure novels written by a Scot and published in London:

“I’m going on for fifty. That ain’t a spring chicken” (from Wild Life in the Land of the Giants, by Gordon Stables, 1888).

“And you wouldn’t be wrong in calling her ‘old’ either. My mither’s no’ a spring chicken, but—she’s a marvel. Ay, mither’s a marvel” (from Our Home in the Silver West, by Gordon Stables, 1891).

“He ain’t no spring chicken, Bert ain’t” (from A Little Norsk, a novel by the American writer Hamlin Garland, 1892).

“I was no spring chicken in the ways of the world and the awful abysses of human degradation” (from “The Pen,” a short story of prison life by Jack London, Cosmopolitan, 1907).

Finally, we should mention that the slang use of “chick” for a girl or young woman showed up in the US in the early 20th century, according to citations in the OED.

The dictionary’s earliest example is from Sinclair Lewis’s 1927 novel Elmer Gantry: “He didn’t want to marry this brainless little fluffy chick.”


When push comes to shove

Q: “When push comes to shove” does not come up in my QPB Encyclopedia of Word and Phrase Origins. What do you have to say about the evolution of this phrase?

A: The expression “when (or if) push comes to shove” originated in 19th-century African-American usage, according to the Oxford English Dictionary.

The OED labels it colloquial—more likely to be found in speech than in formal writing—and says it means “when action must back up words” or “if or when one must commit oneself to an action or decision.”

People generally talk about a problem before finally doing something about it. So think of talking as the “push” and acting as the “shove.”

The expression wasn’t recorded until the 1890s, according to OED citations, but no doubt it was used conversationally for years before it ever showed up in print.

Oxford gives a hint of the reasoning behind the saying in this 1873 citation from Thomas De Witt Talmage, writing in the United Methodist Free Churches’ Magazine:

“The proposed improvement is about to fail, when Push comes up behind it and gives it a shove, and Pull goes in front and lays into the traces; and, lo! the enterprise advances, the goal is reached!”

A version of the expression that used “pinch” instead of “push” appeared in a February 1897 issue of a Georgia newspaper, the Macon Telegraph:

“But, ‘if pinch comes to shove’ as old Sol … was wont to say, will these gentlemen put on the habilaments of war and prove ‘more than a match’ for British ironclads or Spanish machetes?”

The same newspaper printed the more familiar version in February 1898: “When ‘push comes to shove’ will editors of the Yellow Kid organs enlist?”

A prominent African-American newspaper, the Chicago Defender, printed the expression in 1924 (“what Uncle Sam can do if push comes to shove”), and in a 1948 piece by the poet Langston Hughes:

“Civilizations, like clocks, have a way of running down—only to be replaced by new versions. One can always buy another clock, or even tell time by the sun, if push comes to shove.”

While the expression originated in the United States, it’s not unknown elsewhere. The OED’s citations include examples from Canada and Scotland:

“If push comes to shove, make good the threat.” (From an Alberta newspaper, the Calgary Herald, 1970.)

“I can see you taking legal advice on your position so that you’ll know what to do if push comes to shove, but you’ll try to work things out first.” (From the Sunday Post, Glasgow, 1997.)


The 3 “ch” sounds: sh, tch, k

Q: Do English words with “ch” pronounced as sh (e.g., “Chicago,” “chute”) generally have French origins?

A: The short answer is yes—but there’s more to the story.

As you know, there are three ways to pronounce the letter combination “ch” in English.

It can sound like k (as in “chasm” or “school”), like sh (as in “charade” or “brochure”), or like tch (as in “champion” or “child”).

The “ch” words with the k sound are generally derived from classical Greek, while the “ch” words with the sh sound generally come from modern French.

Most of the “ch” words with the tch sound come from Old English and are Germanic in origin (like “child,” “church,” and “each”).

However, some tch-sound words (such as “chase,” “challenge,” and “chance”) are derived from Old French, where “ch” was pronounced tch.

The “ch” letter combination didn’t exist in Old English, which used the letter “c” for both k and tch sounds, according to the Oxford English Dictionary.

After the Norman Conquest, Middle English scribes introduced the Gallic “ch” spelling. It was used in words from Old French that were already spelled with “ch,” as well as in Old English words pronounced with tch and formerly spelled with “c.”

“French spelling habits were applied to native English vocabulary,” the American Heritage Guide to Contemporary Usage and Style says, “and the word spelled cild in Old English, for instance, came to be spelled child in Middle and Modern English.”

Interestingly, the “ch” letter combination pronounced tch in Old French later came to be pronounced sh in modern French. But the English words with “ch” that came from Old French tended to retain the earlier tch pronunciation.

Finally, US place names in which “ch” is pronounced sh (like “Chicago” and “Michigan”) generally come from French versions of American Indian names.


How traditional is a tradition?

Q: I recall reading that a tradition is a custom passed from one generation to the next. But I often hear people referring to customs (esp. within families) that are typically only a few years old, as in “We traditionally have pizza on Christmas Eve.”

A: How traditional is a tradition? Most of the standard dictionaries we’ve checked say “tradition” can refer to a long-established custom as well as one passed on from generation to generation.

However, “tradition” did indeed have the generational sense when the noun showed up in Middle English in the late 1300s.

At that time, according to the Oxford English Dictionary, it meant a “belief, statement, custom, etc., handed down by non-written means (esp. word of mouth, or practice) from generation to generation.”

By the late 1500s, the dictionary says, the word was being used for “any practice or custom which is generally accepted and has been established for some time within a society, social group, etc. (in later use not necessarily one passed down from generation to generation).”

It’s unclear from the OED citations exactly when a “tradition” came to mean any long-established custom, “not necessarily one passed down from generation to generation.”

It obviously occurred sometime between the dates for the oldest and newest examples of the word in the dictionary.

Here’s the OED’s earliest example: “Throw a way respect, / Tradition, forme, and ceremonious duetie,” from Shakespeare’s Richard II (1597).

And here’s the most recent: “The release in 1998 of The McGarrigle Hour … established an intermittent tradition of hootenanny-style get-togethers,” from the Jan. 20, 2010, issue of the Independent (London).

English borrowed the word from Anglo-Norman and Middle French, where a tradicion or tradition referred to the handing over of an object or the transmitting of an idea.

The Latin source of the word is the verb trādere (to hand over, deliver, or entrust), but the ultimate source is the reconstructed Indo-European root dō- (to give).

Why did a verb meaning to hand over or give inspire the noun “tradition”? Because etymologically, a tradition is something passed on, given, handed down.

Interestingly, “tradition” once meant a betrayal, but that sense is now considered obsolete or archaic.

When used in this negative way, the OED explains, “tradition” referred to “the action or an act of surrendering a person into the power of another; betrayal.”

The dictionary notes that the term was also used in the early Christian church in reference to the “surrender of sacred books and vessels to the Roman authorities in times of persecution, esp. during the persecution under the emperor Diocletian in the early 4th cent. a.d.”

However, the OED doesn’t have any Old English or Middle English citations for “tradition” used in the sense of surrendering a person or a sacred book, which suggests that the dictionary is referring here to the classical Latin or late Latin ancestors of “tradition.”

In classical Latin, a trāditor is a “traitor, betrayer,” according to Oxford, and in late Latin it’s a “person who hands over sacred books to their persecutors.”

And, yes, trādere (to hand over, give, entrust) is the classical Latin source of both “tradition” and “traitor.”


Why do we con-VICT a CON-vict?

Q: Why do words such as “refuse” and “project” have one pronunciation as a verb and another as a noun?

A: The usual pattern with these pairs is that the noun is accented on the first syllable while the verb is accented on the second, as with CON-vict (n.) and con-VICT (v.), REC-ord (n.) and re-CORD (v.).

This is a long-established convention of English pronunciation, one that 18th-century lexicographers commented on.

Samuel Johnson, in A Dictionary of the English Language (1755), had this to say about such two-syllable pairs:

“Of disyllables, which are at once nouns and verbs, the verb has commonly the accent on the latter, and the noun on the former syllable.”

He gave several examples, including con-TRACT (v.) and CON-tract (n.).

“This rule has many exceptions,” Johnson added. “Though verbs seldom have their accent on the former, yet nouns often have it on the latter syllable,” he said, as with de-LIGHT and per-FUME.

There are scores (we’ve seen lists with more than 150) of these two-syllable pairs in English. They’re often called heteronyms or heterophones, a subject we wrote about in a 2012 post.

Obviously, there’s an advantage in having different pronunciations. The speaker can distinguish one word from the other and avoid ambiguity, an advantage that we don’t have in written English. (A linguist would say the differing pronunciations serve to “disambiguate” the words.)

Occasionally, as with the noun “record,” the accent varied in early pronouncing dictionaries, and only later did the first-syllable stress become the norm.

Johnson, in the entry for “record” in his 1755 dictionary, was on the fence: “The accent of the noun is indifferently on either syllable; of the verb always on the last.”

Thomas Sheridan, in A General Dictionary of the English Language (1780), stressed only the second syllable of the noun (re-CORD).

And John Walker, in A Critical Pronouncing Dictionary and Expositor of the English Language (1791), stressed the first syllable of the noun (REC-ord).

Walker noted that “the noun record was anciently, as well as at present, pronounced with the accent either on the first or second syllable,” but he urged speakers to accent the first.

Accenting the second syllable, he said, “is overturning one of the most settled analogies of our language, and … it would be to the advantage of pronunciation to lean to the obvious analogy in disyllable nouns and verbs of the same form.”

The convention of accenting the nouns and verbs differently, Walker said, “seems an instinctive effort in the language … to compensate in some measure for the want of different terminations for these different parts of speech.”

In the case of “record,” Walker’s advice was somewhat slow to take hold. As the Oxford English Dictionary notes, “Examples of stress on the second syllable can still be found in verse in the 19th cent.”


Phobias, inside and out

Q: If people who spend all their time inside suffer from “agoraphobia,” do people who spend all (or much) of their time outside suffer from “claustrophobia”?

A: If “agoraphobia” is defined as fear of open spaces and “claustrophobia” as fear of closed spaces, then the two words would be opposites.

Those are the most common definitions in standard dictionaries, but some dictionaries have expanded on them to make the meanings overlap to a considerable degree.

Cambridge Dictionaries Online, for example, has the usual definitions, with “agoraphobia” defined as “fear of going outside and being in open spaces or public places” and “claustrophobia” as “fear of being in closed spaces.”

The online Oxford Dictionaries, however, defines “agoraphobia” as “extreme or irrational fear of crowded spaces or enclosed public places,” and “claustrophobia” as “extreme or irrational fear of confined places.”

We don’t see all that much difference between those Oxford definitions: “crowded spaces or enclosed public places” could well be described as “confined places.”

The Oxford English Dictionary (a different entity from Oxford Dictionaries online) expands the definition of “agoraphobia” further to include fear “of leaving one’s own home.”

The OED defines “agoraphobia” as “fear of entering open or crowded places, of leaving one’s own home, or of being in places from which escape is difficult.” It defines “claustrophobia” as “a morbid dread of confined places.”

So what do the two terms really mean? With dictionaries at odds, it’s your call. Pick whichever dictionary definition you’re comfortable with.

Getting back to your question, we might use those terms loosely to describe pathological fears that would keep people inside (“agoraphobia”) or outside (“claustrophobia”).

The noun “agoraphobia” was borrowed from the German Agoraphobie, a term coined by Carl Friedrich Otto Westphal in 1871, according to the OED. The word appeared later that year in the British journal Clinic:

“Agorophobia [sic].—With this name Westphal denotes a neuropathetic affection which he has recently occasionally encountered. Its most essential symptom, is a most acute anxiety or fear, experienced in open places, long passages, theatres, concert saloons, etc., with no other cerebral disturbance.”

Westphal originally conceived of “agoraphobia” as simply the fear of large open spaces, though the word soon acquired wider meanings in psychiatric terminology.

The German psychiatrist formed it from the Greek agora (a public open space or marketplace) and -phobia (fear of).

“Claustrophobia” also has classical roots. It was formed from the Latin claustrum (confined space), the source of “cloister,” according to the OED.

The noun was coined by an English-born French medical professor, Benjamin Ball, in his article “On Claustrophobia,” published in the British Medical Journal in September 1879.

It’s interesting that in his paper, which was published shortly afterward in Paris under the title “De la Claustrophobie,” Ball compared the two disorders.

He characterized “claustrophobia” as “a state of mind in which there was a morbid fear of closed spaces … apparently different from, but in reality similar to, agoraphobia or the dread of open spaces.”

One last point. The pronunciation of “agoraphobia” has evolved in recent years for many speakers, with the secondary accent moving from the first syllable (AG-or-a-PHO-bi-a) to the second (a-GOR-a-PHO-bi-a).

The American Heritage Dictionary of the English Language (5th ed.) says in a usage note that the “variant has quickly gained acceptance” and is now accepted by almost three-quarters of its usage panel.

American Heritage now accepts both pronunciations. However, five of the other standard dictionaries we’ve checked list only the traditional pronunciation (AG-or-a-PHO-bi-a).

Help support the Grammarphobia Blog with your donation.
And check out
our books about the English language.

A risky preposition

Q: I see both “risk of” and “risk for” regularly, particularly in the health context. “Risk for cancer,” “risk of dying prematurely,” etc. How do you know when to use “of” or “for”? Are both acceptable?

A: There’s no clear answer here. Both “risk of” and “risk for” are used by educated writers, and many of them—medical writers in particular—seem to use the two interchangeably.

In searches of scholarly databases, we found scores of books and articles in which both “risk of” and “risk for”—or “at risk of” and “at risk for”—appear in otherwise identical phrases.

Some examples: “assessing risk of violence” and “assessing risk for violence” (2010) … “at high risk of death” and “at high risk for death” (2001) … “risk for dementia” and “risk of dementia” (1999) … “at risk of falling” and “at risk for falling” (1998) … “at risk for school failure” and “at risk of school failure” (1989) … “the risk of reinfection” and “the risk for reinfection” (1986).

We have the impression that in some cases the writer (or editor) alternated the pattern merely for the sake of variety.

Scholarly usage aside, people in general tend to prefer “risk of” to “risk for,” whether or not the phrase is preceded by “at.” Google hits for “at risk of” outnumber “at risk for” by almost two to one.

If there’s a pattern here, it may have to do with the noun or noun phrase that follows “of” or “for” and whether it represents the danger itself or whatever is in danger.

We’ve concluded that both “risk of” and “risk for” are common when the object of the preposition is the noun or noun phrase for the danger—the disease or other misfortune.

But “risk of” is more popular, especially when the object is a gerund (an “-ing” word), as in “Climbers run the risk of falling” … “He spoke up at the risk of sounding foolish.”

The Oxford English Dictionary’s entry for “risk” has many citations, from the 1660s to the present, in which “risk of” precedes the noun or noun phrase for the hazard or misfortune.

A sampling: “an heavy Risk of wickedness” (1660) … “the Risque of being hang’d” (1697) … “the Risque of an Insult” (1740) … “the risk of flooding” (1934) … “great risk of wildfire” (2003).

In fact, within its “risk” entry the OED has no citations at all for “risk for.” However, elsewhere in the dictionary are numerous examples of “risk for,” all from the 20th century or later and almost all from medical writing.

So it would appear that “risk for” is a relatively recent usage, at least in the sense that we’re discussing. (We’re ruling out constructions like “he ran a risk for her sake” or “he put his life at risk for his country.”)

On the other hand, when the “risk” phrase precedes the thing at risk, not the hazard or misfortune, we generally find “risk to” (sometimes “risk for”), as in “Strong chemicals are a risk to (or for) nail salon workers” … “Pollution poses risks to (or for) the environment.”

Oxford has many examples in which “risk to” precedes what’s in danger: “at great risk to himself” (1805) … “at risk to their lives” (1905) … “a risk to others” (1979) … “at grave risk to his career” (2002) … “a risk to himself and others” (2002).

In 2011 the linguist Mark Liberman wrote an article on the Language Log in rebuttal to a reader who insisted that “at risk for cancer” is grammatically incorrect.

In his article, which he filed under “Prescriptivist Poppycock,” Liberman suggested the reader’s peeve was an “individual quirk.”

A couple of commenters suggested that “at risk for” became established largely because of its use in epidemiology. Another noted, “Once ‘at risk’ becomes an expression that stands on its own, it becomes quite natural to use ‘for’ to specify what they are at risk for (eh, of).”

The noun “risk” first appeared in written English in the 17th century, according to OED citations.

Its ancestors were recorded in medieval Italian (rischio) and post-classical Latin (resicum, risicum, etc.), but can’t be traced back further than the mid-1100s (as Oxford puts it, “further etymology uncertain and disputed”).

The noun came into Middle French in the 16th century as risque, meaning “danger or inconvenience, predictable or otherwise,” the OED says. And English speakers borrowed the word from French in the following century.

The first known example in writing is from The Wise Vieillard, or Old Man, an anonymous 1621 translation of a work by the French theologian Simon Goulard:

“The couetous [covetous] Marchant to runne vpon all hazards and risques for a handfull of yellow earth.”

The OED notes that the noun appears “freq. with of.” The earliest such example is from John Sadler’s mock-utopian work Olbia (1660), in a reference to “an heavy Risk of wickedness.”

Strove Monday

Q: A recent editorial in the Washington Post says many of Donald Trump’s “rivals have strove to mimic him.” Shouldn’t that be “have striven”?

A: “Strove” or “strived” is the past tense of the verb “strive.” The past participle (used with forms of “have”) is “striven” or “strived.”

So the Post’s editorial writers should have said Trump’s rivals “have striven” or “have strived” to mimic him.

Although the use of “strove” as a past participle has been around for several hundred years, it’s not considered standard English today.

As the Oxford English Dictionary explains, “strove” appeared as a past participle “in the 17th cent., and remained somewhat common down to the middle of the 19th cent., but is now confined to illiterate use.”

The OED compares this use of “strove” to “stroven,” which appeared as a past participle in the 16th and 17th centuries. (The dictionary’s last citation for “stroven” is from the early 1600s.)

When the verb “strive” showed up in Middle English in the 13th century, it meant to be in a state of hostility.

The English word was adapted from the Old French estriver (to quarrel or contend). The OED says the French verb is “of disputed origin,” but it’s “commonly believed to be of Germanic etymology.”

The OED has a questionable citation for the verb from the Ancrene Riwle (circa 1225), an anonymous guide for monastic women. The earliest definite example is a 1297 entry in The Chronicle of Robert of Gloucester: “he striuede wiþ his wiue” (“he strived with his wife”).

Meanwhile, “strive” took on the sense of to contend or carry on a conflict. The OED’s earliest example, dated around 1290, is from The South English Legendary, a Middle English collection of writings about biblical and other religious figures:

“And striuede for holi churche aȝen þe kinge and his” (“And strived for holy church against the king and his”).

The verb took on its usual modern sense (to endeavor or exert much energy) in the 14th century. The earliest Oxford example is from the Wycliffe Bible of 1384, an English translation from Latin:

“And therfore we stryuen [L. contendimus] whether absent, whether present, for to plese him.”

As for those two past tenses, “strove” appeared somewhat earlier than “strived,” according to the OED, but “strived” would “be normal for a verb adopted < French, and has always been more frequent of the two.”

In Google searches, however, “he strove” appears three times as often as “he strived.” The six standard dictionaries we’ve checked include both past tenses, though “strove” is always listed first.

When “less” is “minus”

Q: Is it OK to use the phrase “less than” when teaching numeracy in elementary school? Example: “What is one less than five?” I suspect that many children confuse “less than” (meaning “smaller than”) with “less” (meaning “minus”).

A: We’ve written several times on the blog about “less” vs. “fewer,” including posts in 2014 and 2010, but there’s more to be said about “less.”

The word “less” has had many meanings since it showed up in Old English in the ninth century, and one of the oldest, dating back to Anglo-Saxon times, involves its use as “minus” in subtraction.

However, an even older meaning—the oldest example for “less” in the Oxford English Dictionary—is “fewer,” a usage that was acceptable for hundreds of years but is frowned on today.

The “fewer” sense of “less,” which the OED observes is “freq. found but generally regarded as incorrect,” first showed up in writing in King Ælfred’s Old English translation (circa 888) of Boethius’s De Consolatione Philosophiae:

“Swa mid læs worda swa mid ma, swæðer we hit gereccan magon” (“So we may prove it with less words as with more, whichever of the two”).

Over the years, as we’ve said, “less” took on other senses, including “to a smaller extent” (c. 900), as in “none the less”; “inferior” (c. 950), as in “no less a person”; “not so great an extent” (c. 1000), as in “less time to eat”; and “a smaller amount” (c. 1330), as in “less money.”

The use of “less” to mean “minus,” the OED explains, indicates “that the number or quantity indicated is to be subtracted from a larger one mentioned or implied.”

This sense of “less” first showed up in writing in the Anglo-Saxon Chronicle, an Old English work believed to have been updated regularly from the late 9th to the mid-12th centuries.

Here’s the citation, which the OED says was written sometime before 1160: “He rixode twa læs .xxx. geara” (“He ruled for 30 years less two”).

In early writing, “less” followed the number being subtracted (as in “twa læs” above), but it now precedes the number (as in “less two”), according to the OED.

All the modern examples in the dictionary show the unsubtracted quantity (the “minuend”) followed by “less” and then the amount to be subtracted (the “subtrahend”).

This modern example is from the March 25, 1930, issue of the Times (London): “A full year’s dividend on the Preference Shares, less tax, absorbing £16,800.”

The latest OED example of the usage is from the Sept. 2, 1972, issue of the Times (London): “Cost of paint … Less VAT input tax … £500.”

We also checked six standard dictionaries and all their examples show “less” by itself following the minuend and preceding the subtrahend. Here’s an example from The American Heritage Dictionary of the English Language (5th ed.): “Five less two is three.”

Getting back to your question, is it OK for a teacher in primary school to ask pupils, “What is one less than five?”

When we were learning subtraction in elementary school many moons ago, our teachers would have said “What is five less one?” (or “five minus one” or perhaps even “five take away one”).

We think “five less one” or “five minus one” is the simplest and clearest way of expressing “5 – 1” in words, because the words follow the order of the numerals and the minus sign. And the use of “less than” here might lead to confusion between the minus sign (–) and the “less than” sign (<).

A perfect example of such confusion can be seen in an Aug. 19, 2001, question to the Math Forum, a website sponsored by Drexel University in Philadelphia:

“Why is this expression driving me crazy when at first it seems so simple: ‘three less than a number’? I believe it is x – 3 but I am being challenged that it is 3 – x.”

The response by the Math Forum’s staff includes this comment: “Many people get confused by this sort of expression, because they expect to translate directly from English to Mathish, word for word. Then ‘three (3) less than (–) a number (x)’ would seem to be ‘3 – x.’ But it isn’t. What’s even more confusing is that ‘3 less a number’ does mean ‘3 – x’ because ‘less’ as a preposition means the same as ‘minus.’ ”
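The Math Forum’s distinction can be made concrete with a few lines of arithmetic (an illustrative sketch of our own, not from the forum; the variable names are hypothetical):

```python
x = 10  # "a number"

# "3 less a number": here "less" is a preposition meaning "minus,"
# so the phrase translates word for word to 3 - x.
three_less_x = 3 - x

# "three less than a number": a comparison, naming the value
# that sits 3 below x -- that is, x - 3, not 3 - x.
three_less_than_x = x - 3

print(three_less_x)       # -7
print(three_less_than_x)  # 7
```

With x = 10, the two readings give different answers (-7 versus 7), which is exactly why the questioner found the expression so confusing.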

We’d be wary of using “less than” in teaching subtraction to young children. But our online searches suggest that elementary school teachers generally distinguish between the use of “less” and “less than.”

From looking at educational websites that discuss basic subtraction, our sense is that the traditional wording (“five minus one” or “five less one”) is used in speaking about actual subtraction. The “less than” wording is used to compare two numbers, rather than to subtract one from the other (“four is one less than five” or “one less than five is four”).

Despite the possible confusion between “less” and “less than” in teaching subtraction, educators have been using “less than” for comparisons for nearly two centuries, according to our searches of online databases.

Here’s an example from A Manual of Instruction for Infants’ Schools, an 1829 book by William Wilson, the vicar of Walthamstow in northeast London: “Four are one less than five; four are two less than six; four are three less than seven, &c.”

And this example is from A Manual of Elementary Instruction for Schools and Normal Classes (1862), by Edward Austin Sheldon, M. E. M. Jones, and Hermann Krüsi: “The class may repeat, ‘Five is one more than four; four is one less than five.’ ”

Apostrophic illnesses

Q: I’m a physician who’s irritated by the increasing tendency for writers to omit the apostrophe in a disease named for a person, as in “Parkinson disease.” I resist this, and write “Parkinson’s disease,” which I think is correct.

A: You’re in an unfortunate position here. As a doctor, you’re caught between the recommended usage in the medical profession and standard usage everywhere else.

The AMA Manual of Style (10th ed.), for example, recommends dropping the ’s in such diseases, as does the 27th edition of Stedman’s Medical Dictionary.

Although Dorland’s Illustrated Medical Dictionary (30th ed.) says the ’s “is becoming increasingly less common,” it includes some diseases with the ending and some without to “reflect this ongoing change in usage.”

However, Merriam-Webster’s Medical Dictionary, which is intended for a broader audience, generally considers the ’s versions the usual forms, though it sometimes includes the stripped-down forms as acceptable variants.

As for common usage, the six standard dictionaries we’ve checked usually list only the ’s versions for these terms, though bare versions are sometimes given as acceptable or equal variants.

Merriam-Webster’s Collegiate Dictionary (11th ed.), for example, lists only “Parkinson’s” while The American Heritage Dictionary of the English Language (5th ed.) gives “Parkinson’s” as more common, but includes “Parkinson” as an acceptable variant.

The American Medical Association’s style guide acknowledges that the issue is still somewhat controversial, but says that the use of the ’s in medical eponyms, the technical term for things named after people, is a thing of the past.

“There is some continuing debate over the use of the possessive form for eponyms, but a transition toward the nonpossessive form has taken place,” the AMA guide says.

The AMA editors recommend dropping the ’s to represent “the adjectival and descriptive, rather than possessive, sense of eponyms” and to “promote clarity and consistence in scientific writing.”

We take issue here with the AMA editors. Technically, the ’s here is not possessive but genitive. As we’ve written before on our blog, genitives show associations and relationships much broader than ownership.

In a genitive construction like “last night’s mashed potatoes,” we’re not talking about ownership. The ’s here means “associated with” or “related to,” not “possessed by.”

Nevertheless, the misconception persists. The National Down Syndrome Society, in its Preferred Language Guide, gives this explanation for opposing the ’s:

“Down syndrome is named for the English physician John Langdon Down, who characterized the condition, but did not have it. An ‘apostrophe s’ connotes ownership or possession.”

In fact, the AMA stylebook cites the Down Syndrome Society’s language guide in support of its belief that a transition toward non-genitive eponyms has taken place:

“A major step toward preference for the nonpossessive form occurred when the National Down Syndrome Society advocated the use of Down syndrome, rather than Down’s syndrome, arguing that the syndrome does not actually belong to anyone.”

Other critics argue against medical eponyms whether they have apostrophes or not, saying the names may credit the wrong people or are out of date.

Victor A. McKusick, for example, says in Mendelian Inheritance in Man (11th ed.) that “often the person whose name is used was not the first to describe the condition … or did not describe the full syndrome as it has subsequently become known.”

Although “Down syndrome” is now more common than “Down’s syndrome” and standard dictionaries prefer the shorter form, most other medical eponyms still have the ’s in dictionary entries.

Of the 11 eponyms we’ve checked, “Alzheimer’s,” “Addison’s,” “Parkinson’s,” “Bright’s,” “Crohn’s,” “Hansen’s,” “Hodgkin’s,” and “Raynaud’s” diseases usually have the ’s. Only “Down,” “Munchausen,” and “Tourette” syndromes are usually bare.

In fact, searches with Google’s Ngram viewer indicate that medical eponyms with ’s are overwhelmingly more popular in books than the stripped-down versions.

However, medical toponyms (diseases named after a place) don’t have apostrophes, as in “Rocky Mountain spotted fever” and “Lyme disease” (named for Lyme, CT).

Note that the capitalized name in a medical eponym or toponym is traditionally followed by a lowercase generic term, as in “Lou Gehrig’s disease” or “West Nile virus.”

The old tradition of naming diseases or parts of the body for their discoverers dates back to the use of Latin medical terms.

An example is tuba Fallopii for the structures first described by the 16th-century anatomist Gabriele Falloppio, also known by his Latin name, Fallopius. Today we say “fallopian tubes,” which many standard dictionaries give with a lowercase “f.”

Since you are a physician, you may be interested in an excellent article we came across on the history of medical eponyms.

John H. Dirckx, a doctor who has written frequently about the language of medicine, says such terms “are cherished by most physicians who have a sense of history.”

Besides, he writes in a 2001 issue of the journal Panace@, they “are often embraced as a pleasant relief from polysyllabic terms derived from classical languages.”

They also have a “value as euphemisms,” he adds. A term like “Hansen’s disease,” for example, is a welcome replacement for “leprosy” and all that it conveys.

As for the ’s, he writes, “Some of the arguments offered by editors and others to justify exclusion of the genitive from eponyms are simply ludicrous.” (He mentions the objections we noted above, that the person didn’t have the disease or possess it.)

Such critics, Dr. Dirckx writes, “display ignorance of linguistics, a superficial and mechanistic view of language, disdain for tradition, and, sometimes, the arrogance of authority.”

He concludes, probably with tongue in cheek: “Will even the homely lay term Adam’s apple (nuez, prominentia laryngea) eventually come under the universal ban?”

Quite frankly

Q: What is the origin of the phrase “quite Frankly,” and why do I brace myself when somebody begins a sentence with it?

A: Why do you brace yourself? Because “quite frankly,” which means “in an honest, open, or candid manner,” is often used to introduce an opinion that might not be welcome.

The phrase itself is relatively new, showing up in the 19th century, but the words “quite” and “frankly” are quite old, dating back to the Middle Ages.

Before going on, we should mention that there’s no reason to capitalize the “f” of “frankly” (as you’ve done), though it’s ultimately derived from a proper noun in medieval Latin.

“Frankly” is an adverbial form of the adjective “frank,” which Middle English got from franc in Old French around 1300. At that time, both the English and French adjectives meant free.

The French in turn got the word franc from francus, a medieval Latin word used as an adjective for free and as a noun for a member of the Frankish tribes that conquered Roman Gaul and gave France its name.

The Oxford English Dictionary says it’s “usually believed that the Franks were named from their national weapon,” the javelin, which is frankon in reconstructed prehistoric Germanic.

So how did the Latin word for a member of a Germanic tribe come to mean free in French and English?

After the Franks conquered Gaul in the fifth century, “full political freedom was granted only to ethnic Franks or those of the subjugated Celts who were specifically brought under their protection,” according to John Ayto’s Dictionary of Word Origins.

“Hence, franc came to be used as an adjective meaning ‘free’—a sense it retained when English acquired it from Old French,” Ayto writes.

The OED notes confusion as far back as the Middle Ages over which came first, the use of the Latin francus for an ethnic Frank or in the sense of free:

“The notion that the ethnic name is derived from the adjective meaning ‘free’ was already current in the 10th century; but the real relation between the words seems to be the reverse of this.”

Ayto explains that the “free” sense of the adjective “frank” in English “gradually progressed semantically via ‘liberal, generous’ and ‘open’ to ‘candid.’ ”

The “candid” sense of “frank” and “frankly” showed up in the 1500s, according to citations in the OED.

We’ve already discussed the adverb “quite” on the blog, noting that it was an intensifier (meaning completely or to the utmost degree) when it showed up in Middle English around 1300 or perhaps earlier.

In the early 19th century, English speakers began using it as a “moderating adverb” as well, meaning somewhat, rather, relatively, and so on.

In the phrase “quite frankly,” the word “quite” is being used as an intensifier to emphasize “frankly.”

So while the adverb “frankly” by itself means “honestly, openly, or candidly,” the adverbial phrase “quite frankly” says the same thing more emphatically.

Like “quite frankly,” the word “frankly” is often “used for emphasizing that what you are about to say is your honest opinion, even though the person you are talking to might not like it,” according to the online Macmillan Dictionary.

The phrase “quite frankly” can be used adverbially in two different ways:

(1) It can modify a particular verb, as in “He spoke quite frankly about his past” or “The doctors said quite frankly that it was hopeless.”

(2) It can modify the entire sentence or clause that follows, as in “Quite frankly, I was happy to see them go,” or “I returned the dress because, quite frankly, it was too expensive.”

Generally, when “quite frankly” appears at the beginning of a sentence or clause as in #2, it’s being used as what’s called a sentence adverb. (We wrote about sentence adverbs in a 2011 post.)

The OED doesn’t have an entry for “quite frankly,” but we’ve found examples of the phrase dating back to the early 1800s.

The earliest example we found in searches of online databases uses the phrase simply to modify an individual verb.

In the citation, from Rebuilding a Lost Faith (1826), John L. Stoddard writes that some Anglican clergymen take oaths to accept the faith’s doctrines, and then reject their literal meaning:

“Such clergymen, however, say quite frankly: — ‘The Thirty-Nine Articles and the Prayer-Book do not mean what you think they mean.’ ”

The use of “quite frankly” as a sentence adverb didn’t emerge until many decades later.

The earliest example we found is from an anonymous poem, “To Maud,” published in Punch on Feb. 17, 1894:

“Here’s a Valentine for you—lace, tinsel, and satin,
With Cupids all over it up to such tricks;
There’s gauze in profusion, and, oh, it is pat in
The language of love!—for it cost three-and-six.
Quite frankly I wouldn’t be thought to defend it
(Though I swear that I bought it as perfectly new);
And the reason, in fact, why I happen to send it,
Is to have an excuse for a letter—to you.”

And here’s a less romantic example, from “My Methods in Breeding Poultry,” a 1900 pamphlet by Henry P. McKean: “Quite frankly, I am a great believer in Mr. Darwin’s little phrase, ‘Like begets like.’ ”

We’ve also found several examples dating from the 1860s of sentences and clauses beginning “to speak quite frankly.” The writers used the longer phrase much like a sentence adverb, to modify everything that followed.

Was this the forerunner of the sentence adverb “quite frankly”? Perhaps. Quite frankly, we can’t say for sure.

We should mention that “quite” is used to modify many sentence adverbs besides “frankly.” The OED has a citation for “quite seriously” used this way as early as 1872, and we found one for “quite honestly” from 1893.
