(n.) a linguistic phenomenon in which different forms of the same word derive from different etymological roots
Why does something that’s good become better when it’s more good than something else, and the best when it’s the most good of all? Why not gooder and goodest? And why is it that I am something, but you are something, and yesterday we all were something else? The answer is suppletion.
Suppletion is a linguistic phenomenon in which different forms or subforms of the same word derive from different etymological roots. Strictly speaking, though, the term suppletion only really applies to different inflected forms of the same word—and if you want to get really technical about it, the different forms aren’t just ‘different’, they’re properly said not to be cognates.
So, linguists, skip ahead a few paragraphs. For the sake of everyone else, here’s what all that actually means.
HERE COMES THE SCIENCE BIT
Cognates are simply words that share the same etymology. Pasta and paste. Coffee and café. Assault, insult, exult, desultory and somersault are all cognates, as they all come from saltus, a Latin word for a jump. And words like neonatal, nascent and innate are all cognates of—well, cognate, because they all derive from (g)nasci, a Latin word meaning to be born.
As for inflection, it’s easy for us English speakers to dismiss it as nothing more than the –s we tag onto words to change their number, like cats and dogs. But inflection is much more of an umbrella term, under which the likes of verb conjugations, noun declensions, adjective comparisons, and all sorts of other lexical alterations relating to grammatical categories like tense, voice, person, and mood are found.
English is a so-called analytic language, meaning that it doesn’t make particularly extensive use of inflection, and tends instead to rely on word order to communicate meaning—which is why The dog chased the man is different in meaning to The man chased the dog, despite the words themselves being identical. Nevertheless, in English the –s, –ed and –ing forms of verbs are all inflected forms, as are the comparative and superlative –er and –est forms of adjectives, some pronoun declensions, and, yes, the –s we tag onto the end of plurals. (By comparison some other languages, known as synthetic languages, utilise much more complex systems of inflection than ours.)
HERE COMES ANOTHER SCIENCE BIT
Linguists, keep skipping. Everyone else, brace yourselves—we’ve still got some ground rules to lay down here.
Suppletion occurs when one form (or several forms) of a word that would ordinarily be created by one of these regular inflections is instead taken from a completely different root. This in turn creates what’s called an irregular paradigm.
A paradigm is essentially the pattern by which a word is inflected in different grammatical contexts—so the full paradigm of the word walk in English would include the likes of walks, walking, walked and walker. A word with an irregular paradigm doesn’t follow the expected pattern for a word of its type, meaning its various subforms can’t be predicted—like good, better and best, or bad, worse and worst.
Irregularities like these cause all kinds of problems for language learners, as their rule-breaking derivatives have to be learned individually. Someone learning English, for instance, might know that the regular paradigm for adjectives involves tagging an –er or an –est onto the end of the root to create comparatives and superlatives. Hot, hotter, hottest. Cold, colder, coldest. Confusing, confusinger, confusingest. If a learner were not aware that good had an irregular paradigm, however, they might unknowingly try to use words like gooder and goodest in a misguided attempt to follow the more usual system.
That’s not to say that blunders like these always end up creating ungrammatical forms, of course. Gooder and goodest might not be standard English words—but take a word like person. Its regular plural form, persons, is still a perfectly acceptable word; it’s just that it tends only to be used in a handful of fairly formal contexts in our language, like legal documents. In more casual English, there might be one person waiting for a bus, but there are a dozen people on it when it finally turns up. Person and people are an example of suppletion, as they don’t share the same etymological root—it’s just that one of those suppletive forms, people, happens to coexist alongside a perfectly regular non-suppletive form, persons, set aside for a handful of contexts.
Nor, for that matter, are all inflectional irregularities examples of suppletion. Plurals like children, mice and geese, or past tenses like came, swam and sang, are not examples of suppletion: as curious and rule-bending as these forms may be, they derive from the same etymological roots as child, mouse, goose, come, swim and sing. If the plural of mouse were cats, for instance, then we’d be dealing with something more suppletive in nature.
So, if that’s what is and is not suppletion, there’s still an elephant in the English class here: why do we have suppletive forms at all?
Generally speaking, there are actually a few ways in which a word can end up with suppletive derivatives. Sometimes, the paradigm of a word—like walk, walks, walking, walked, walker—has gaps in it (known as lacunas) where a word simply hasn’t been used with enough frequency in a certain form to establish what that particular form of it should be.
Say that no one had ever walked anywhere before, and so our language had never needed a specific word meaning ‘someone who walks’. That would give us a gap, or a lacuna, in the word’s paradigm: walk, walks, walking, walked, [ x ].
Ordinarily in English, the way to create the ‘someone who’ form of a verb (i.e. the agent noun) is to add –er onto the end of it, giving us walker from walk. But imagine if walking suddenly became popular, and everyone who enjoyed walking decided instead to call themselves a sétáló (using, for some reason, the completely unrelated Hungarian word for someone who walks). If that word were to be parachuted in and fill in the lacuna in our language, it would create a suppletion—and clearly give us an irregular paradigm overall: walk, walks, walking, walked, sétáló.
Yes, okay, this is a fairly facetious example. But as silly as it may seem, it nevertheless shows how a lack of established paradigmatic content in a language can prompt these gaps to be filled in a haphazard, first-come-first-served way, with words that don’t adhere to the regular rules. Normally, of course, irregularities like these aren’t just made up out of thin air, nor are they chosen without any rhyme or reason; realistically, there’s little point in English speakers randomly adopting a Hungarian word when we have a perfectly robust vocabulary in place to fill the gap instead.
But sometimes, that robust vocabulary itself can be part of the problem.
SAME TREE, DIFFERENT ROOTS
The fact that languages evolve organically from one another, and bounce off other languages that they come into contact with regionally, means that their vocabularies often end up tolerating not only words derived from different roots, but also multiple versions of the same word derived from different roots. This situation can spark suppletion when some inflected forms of a word are lifted from one source, and different forms of the same word from another.
Good, for instance, comes from the Old English god, which in turn has its roots in an ancient Proto-Germanic stem, godaz. Better and best, on the other hand, come from another Old English word, betera, which derives from a different Germanic stem, batizô. In this instance, the root form of the adjective, good, was taken from one direction, while both the comparative (better) and superlative (best) forms were taken from another, giving us an irregular paradigm overall. Same word tree in English, different roots etymologically.
The same goes for bad (which is perhaps from another Proto-Germanic stem, baidijaną) vs. worse and worst (from Old English wyrsa). Likewise, one and two (which ultimately come from the Proto-Indo-European numerals oino and dwuo) have nothing in common with their ordinal forms, first and second (which come from PIE pre-isto and sekw-ondo). And the verb be is just one colossal mishmash, with the B-forms be, being and been coming from Proto-Germanic beuną (via Old English beon); the W-forms was and were coming from Proto-Germanic wesaną (via Old English wesan); am coming from Proto-Germanic izm– (via Old English eom); and are coming from Proto-Germanic ar (via Old English earun).
Not that it’s just English that displays such inconsistency, of course. Take a look at this:

Present: vais, vas, va, allons, allez, vont
Imperfect: allais, allais, allait, allions, alliez, allaient
Future: irai, iras, ira, irons, irez, iront
Participles: allant, allé

That’s a fairly truncated conjugation of just one French verb—aller, meaning ‘to go’.
Most French verb paradigms behave fairly predictably, with the first half of the root surviving throughout all its derivatives, and only the inflection attached to the end of it altering. The same table for a perfectly regular verb like manger, meaning ‘to eat’, for instance, would be filled with an endless chain of immediately similar-looking words: mange, manges, mangeons, mangez, mangent, mangeais, mangeait, mangions, mangiez, mangeaient, mangerai, mangeras, mangera, mangerons, mangerez, mangeront, mangeant and mangé.
With aller, however, we clearly have a lot more variation. Part of the root form, all–, survives in the imperfect tense, j’allais (‘I was going’), and in both the present and past participles, allant and allé (‘going’ and ‘gone’). But in the present tense, only the nous and vous forms, allons and allez (‘we go’ and ‘you go’), appear similar, with all the others beginning with V. Meanwhile, in the future tense, the all– element is jettisoned altogether in favour of a completely different list of words beginning with I.
So what’s going on? Again, it’s suppletion. The all– forms here come from the Latin verb ambulare, meaning ‘to walk’. The v– forms come from Latin vadere, meaning ‘to rush’ (or, more loosely, ‘to go’). And the i– forms come from Latin ire, meaning ‘to go’. Three entirely different roots, each supplying different forms of the same single word, aller:

From ambulare: allons, allez, allais, allant, allé
From vadere: vais, vas, va, vont
From ire: irai, iras, ira, irons, irez, iront
OK ... BUT WHY?
Despite inconsistencies like these, languages on the whole—and the English language in particular—tend to be driven by an inexorable zeal for ever-greater simplicity. So why do they tolerate such confusing, irregular patterns and paradigms?
Oddly, at least part of the answer here is that they don’t. Or, at least, they don’t any longer.
If we were to create a language from scratch today, it’s highly unlikely we would intentionally introduce irregularities and manufacture words and word forms that break the rules. Instead, we would create a set of rules, and then build our words and derivatives around them.
In reality, however, languages don’t behave like that. They grow organically and messily. Rules emerge, but don’t catch on everywhere, nor at the same time. Or they establish themselves too late to be applied to every word, and no one thinks to apply them retrospectively to words that are already well established.
Notice how all the examples we’ve dealt with in this entire blog are ancient, basic words: good, bad, go, be, one, two. We’re right back in the mists of etymological time here, dealing not just with our language’s Germanic roots, but their roots in Proto-Germanic, and their roots in turn in Proto-Indo-European. No one back then was consciously thinking about creating a regular language that would survive the test of time and make immutably perfect sense—and nor were they concerned with actively codifying and formalizing these nascent rules to make sure they all worked in the same way. The Proto-Indo-Europeans were just speaking and communicating with one another, picking up words where they could, inventing them when they couldn’t, and filling in gaps in their vocabulary as best as possible—even if that meant selecting words that, in hindsight, look inconsistent, and still act inconsistently today.
Take good vs. best. To us, it makes little sense that two forms of the same word should be so clearly unrelated. But etymologically, there’s a hint as to why this suppletion emerged in the fact that batizô gave Old English another word, bot, meaning ‘remedy’ or ‘improvement’ (a word that clings on to existence in our language today only in the stock expression to boot). Better, etymologically, could ultimately be said not just to mean ‘more good’, but rather ‘improved’ or ‘restored’. As it developed its vocabulary, English (or one of its ancestors) needed a word to fill the ‘more good’ gap in its paradigm, and better proved a readily available stopgap.
Elsewhere, one and two become first and second because the Proto-Indo-Europeans saw no need to connect the cardinal numbers one and two to their ordinal equivalents in their number system. Instead, they simply had one word stem, pre-isto, that essentially meant ‘foremost’ or ‘frontmost’, and another, sekw-ondo, that meant ‘following’ or ‘subordinate’; it was these that developed into first and second. When the need for more precise ordinal numbers emerged much later in the development of our language, first and second stepped up to fill in the gap—despite their dissimilarity to one and two.
Hindsight is everything, then. It’s easy for us to look at suppletives as annoyances and irregularities that would be better off discarded and replaced with more formalized, rule-following forms. But instead, perhaps we should be looking at them as tantalizing evidence of how our languages have developed and interacted over the centuries—as well as also their speakers’ ability to maintain them even despite their awkwardness.