Ben Zimmer in The New York Times:
We like to think that modern fiction, particularly American fiction, is free from the artificial stylistic pretensions of the past. Richard Bridgman expressed a common view in his 1966 book “The Colloquial Style in America.” “Whereas in the 19th century a very real distinction could be made between the vernacular and standard diction as they were used in prose,” Bridgman wrote, “in the 20th century the vernacular had virtually become standard.” Thanks to such pioneers as Mark Twain, Stephen Crane, Gertrude Stein and Ernest Hemingway, the story goes, ornate classicism was replaced by a straight-talking vox populi.
Now in the 21st century, with sophisticated text-crunching tools at our disposal, it is possible to put Bridgman’s theory to the test. Has a vernacular style become the standard for the typical fiction writer? Or is literary language still a distinct and peculiar beast?
Scholars in the growing field of digital humanities can tackle this question by analyzing enormous numbers of texts at once. When books and other written documents are gathered into an electronic corpus, one “subcorpus” can be compared with another: all the digitized fiction, for instance, can be stacked up against other genres of writing, like news reports, academic papers or blog posts.
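COCA itself is queried through a web interface rather than a programming library, but the subcorpus comparison Zimmer describes is easy to sketch in code. Here is a rough illustration in Python using NLTK and the freely available Brown corpus, whose "fiction" and "news" categories stand in for COCA's much larger genre slices; the words compared are arbitrary examples, not figures from the article.

```python
import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)  # fetch the corpus data on first run

def per_million(word, category):
    """Relative frequency of `word`, per million tokens, in one Brown subcorpus."""
    tokens = [w.lower() for w in brown.words(categories=category)]
    return tokens.count(word) * 1_000_000 / len(tokens)

# Stack the fiction subcorpus up against news, as the article describes.
for word in ["grimaced", "said", "economy"]:
    fic = per_million(word, "fiction")
    news = per_million(word, "news")
    print(f"{word:10s} fiction: {fic:7.1f}/M   news: {news:7.1f}/M")
```

Normalizing to a per-million rate, rather than comparing raw counts, is what makes subcorpora of different sizes comparable, which is the whole point of stacking one genre against another.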
One such research enterprise is the Corpus of Contemporary American English, or COCA, which brings together 425 million words of text from the past two decades, with equally large samples drawn from fiction, popular magazines, newspapers, academic texts and transcripts of spoken English. The fiction samples cover short stories and plays in literary magazines, along with the first chapters of hundreds of novels from major publishers. The compiler of COCA, Mark Davies at Brigham Young University, has designed a freely available online interface that can respond to queries about how contemporary language is used. Even grammatical questions are fair game, since every word in the corpus has been tagged with a part of speech.
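Part-of-speech tagging is what turns a word list into something that can answer grammatical questions. As a loose sketch of the kind of query COCA's interface supports, the following compares the grammatical profile of two genres using the Brown corpus's tagged words; the choice of tags to compare is mine, purely for illustration.

```python
import nltk
from collections import Counter
from nltk.corpus import brown

nltk.download("brown", quiet=True)
nltk.download("universal_tagset", quiet=True)  # coarse tags: NOUN, VERB, PRON, ...

def pos_profile(category):
    """Share of each part of speech in one Brown subcorpus."""
    tags = Counter(tag for _, tag in
                   brown.tagged_words(categories=category, tagset="universal"))
    total = sum(tags.values())
    return {tag: count / total for tag, count in tags.items()}

fiction, news = pos_profile("fiction"), pos_profile("news")
# A grammatical question: is fiction heavier on pronouns, lighter on nouns?
for tag in ["NOUN", "PRON", "ADJ", "VERB"]:
    print(f"{tag:5s} fiction: {fiction[tag]:.1%}   news: {news[tag]:.1%}")
```

Because every token carries a tag, queries like "adjectives followed by a proper noun" or "past-tense verbs of speech" become simple pattern matches over the tagged stream, which is exactly what makes grammatical questions "fair game" in a corpus like COCA.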