
Sunday, November 8, 2015

Grammar: Eventually the brain opts for the easy route

The grammar of languages keeps reorganizing itself. A prime example of this is the omission of case endings in the transition from Latin to Italian. And in some instances, case systems are remodeled entirely -- such as in the transition from Sanskrit to Hindi, which has completely new grammatical cases.

Simplifications found in all languages

An international team of researchers headed by linguist Balthasar Bickel from the University of Zurich conducted statistical analyses of the case systems in more than 600 languages and recorded the changes over time. They then tested these adaptations experimentally in test subjects, measuring the electrical brain responses elicited during language comprehension. The scientists were thus able to demonstrate that brain activity is stronger for complex case constructions than for simple ones.

"Certain case constructions tax the brain more, which is why they are eventually omitted from languages all over the world -- independently of the structural properties of the languages or socio-historical factors," explains Bickel, a professor of general linguistics at the University of Zurich. In other words, biological processes are also instrumental in grammatical changes. "Our findings pave the way for further studies on the origin and development of human language and a better understanding of speech disorders."
_________________
References:

Science Daily. 2015. “Grammar: Eventually the brain opts for the easy route”. Science Daily. Posted: August 13, 2015. Available online: http://www.sciencedaily.com/releases/2015/08/150813092822.htm

Sunday, December 21, 2014

Links between grammar, rhythm explored by researchers

A child's ability to distinguish musical rhythm is related to his or her capacity for understanding grammar, according to a recent study from a researcher at the Vanderbilt Kennedy Center.

Reyna Gordon, Ph.D., a research fellow in the Department of Otolaryngology and at the Vanderbilt Kennedy Center, is the lead author of the study, which was published online recently in the journal Developmental Science. She notes that the study is the first of its kind to show an association between musical rhythm and grammar.

Though Gordon emphasizes that more research will be necessary to determine how to apply the knowledge, she looks forward to the possibilities of using musical education to improve grammar skills. For example, rhythm could be taken into account when measuring grammar in children with language disorders.

"This may help us predict who would be the best candidate for particular types of therapy or who's responding the best," she said. "Is it the child with the weakest rhythm that needs the most help or is it the child that starts out with better rhythm that will then benefit the most?"

Gordon studied 25 typically developing 6-year-olds, first testing them with a standardized test of music aptitude. A computer program prompted the children to judge if two melodies -- either identical or slightly different -- were the same or different. Next, the children played a computer game that the research team developed called a beat-based assessment. The children watched a cartoon character play two rhythms, then had to determine whether a third rhythm was played by "Sammy Same" or "Doggy Different."

To measure the children's grammar skills, they were shown a variety of photographs and asked questions about them. They were measured on the grammatical accuracy of their answers, such as competence in using the past tense. Though the grammatical and musical tests were quite different, Gordon found that children who did well on one kind tended to do well on the other, regardless of IQ, music experience and socioeconomic status.

To explain the findings, Gordon suggested first considering the similarities between speech and music -- for example, they each contain rhythm.

In grammar, children's minds must sort the sounds they hear into words, phrases and sentences, and the rhythm of speech helps them to do so. In music, rhythmic sequences give structure to musical phrases and help listeners figure out how to move to the beat. Perhaps children who are better at detecting variations in musical timing are also better at detecting variations in speech and therefore have an advantage in learning language, she suggested.

Gordon is passionate about music education, which has declined nationally over the last few decades. She hopes her research may help reverse the trend.

"I've been thinking a lot about this idea ... Is music necessary?" Gordon said. "Those of us in the field of music cognition, we know -- it does have a unique role in brain development."

Ron Eavey, M.D., chair of the Department of Otolaryngology, commented about the importance of music research -- especially in Nashville.

"We live in Music City," said Eavey, director of the Bill Wilkerson Center and Guy M. Maness Professor of Otolaryngology. "Why is music appealing? We need to delve beyond peripheral organs into fundamental neuroscience."
_________________
References:

Science Daily. 2014. “Links between grammar, rhythm explored by researchers”. Science Daily. Posted: November 5, 2014. Available online: http://www.sciencedaily.com/releases/2014/11/141105101238.htm

Friday, May 9, 2014

Why grammar isn't cool – and why that may be about to change

A 15-year-old boy made headlines last week after writing a passionate letter of complaint to Tesco regarding bad grammar on its bottles of orange juice. Tesco claimed it used the "most tastiest" oranges, rather than "tastiest", "most tasty" or "distinctly average".

The fact it was deemed newsworthy shows how rare it is to see enthusiastic pedantry at such a young age (especially if there's no strong family history of it). But before any grammar enthusiasts get excited, he admitted language was not the only motivation – he expected some Tesco vouchers for his ordeal.

Grammar rarely makes headlines, and when it does it's often due to conflict over something the size of an apostrophe. But there's a much greater issue that needs addressing. We complain that children cannot construct a sentence as they used to, but this nostalgic attitude towards literacy abilities has always been around. What we need to focus on is grammar's reputation among the young.

Last month I attended a talk on grammar. In the weeks leading up to it I told a few people and their reactions ranged from laughter to looks of disappointment to disbelief. It didn't get much better at the talk, where the discussion often steered towards the fact that students find grammar boring.

We are supposedly most receptive to learning a second language in childhood. But when it comes to grammar, it's difficult to imagine a typical group of 10-year-olds debating whether or not to precede a gerund with a possessive noun or pronoun.

It's a challenge for anything to be accepted as "cool" among younger generations, but we'd need to worry less about the future of society if grammar could finally earn some street cred.

Its current sorry state can be attributed to several factors. The first and possibly most insidious barrier to grammar's image is the trail of fear left behind by old-fashioned grammarians and their pedantic followers. Instead of explanations and advice, grammatical errors are often corrected with scorn and ancient rules. This can project a sense of inadequacy that isn't conducive to learning, and it perpetuates the misconception that grammar is black and white, right or wrong.

I don't entirely blame them – the pleasure of finding a typo is unbeatable – but pedants should confine such self-righteous pleasures to the privacy of the home. For the unconfident learner, the best advice was given by William Strunk Jr, author of The Elements of Style, who is alleged to have said: "If you don't know how to pronounce a word, say it loud."

Grammar's second barrier is the argument between prescriptivists and descriptivists, and the confusion this causes. I was taught never to put a comma before "and", but what if I went to the shops with my parents, a sheep and a goat?

Outdated grammar rules are off-putting when they create a barrier to clear communication. If I were to sneakily split an infinitive, would I not be understood? Grammar is instinctive. I never understood what it meant to enclose parenthetic phrases in commas, probably because it sounds too confusing, but I know to do it.

The third hindrance to grammar is its reputation. When we think of grammar we picture dusty textbooks, evil teachers holding canes and dry lesson plans. But grammar is colourful, and its ability to completely change the meaning of a sentence is fascinating.

The good news is that there have been a few small "cool" victories recently. YouTube channel jacksfilms regularly uploads Your Grammar Sucks videos for its 1.3 million subscribers. Perhaps the premise – laughing at grammatical errors – is one we should be steering away from, but it puts grammar in the spotlight.

Another example is the small victory for the word "selfie", named Word of the Year last year by Oxford Dictionaries. A modern word that adds clarity in its own, self-obsessed way caught the attention of younger generations. If they can be excited about a word, grammar can't be far behind.

Not everyone thinks grammar is doomed. Bas Aarts, professor of English linguistics at University College London, believes we are experiencing a grammar renaissance.

"Things have changed in recent years. Grammar was perceived as boring, but it was taught prescriptively and put people off. Language develops the way it wants to develop, and no amount of prescriptiveness will help. A lot of people who are against splitting the infinitive can't even explain why."

Aarts says the enjoyment of grammar depends on how it is taught. "There is a renewed interest in grammar, partly because of improved teaching, partly due to some very successful books on language."

To test the grammar renaissance theory, I asked a class of primary school children to describe grammar in one word. Three said "interesting", three said "helpful" and one said "boring". I also asked a class of year 8 pupils: nine described it as "confusing", two said "good" and the rest ranged from "useless" to "brilliant". In another secondary school, the teacher said that, in his class, almost everyone said it was boring or dull, and a few said "pointless".

The way we view grammar is subjective, and, as it turns out, the way we view how everyone else views grammar is also subjective. Perhaps grammar-lovers are just too uncool to know what's cool.

But I do know anything trying to be cool is automatically uncool, and grammar shouldn't have to try.
________________
References:

Brown, Jessica. 2014. “Why grammar isn't cool – and why that may be about to change”. The Guardian. Posted: March 21, 2014. Available online: http://www.theguardian.com/media/mind-your-language/2014/mar/21/mind-your-language-cool-grammar

Saturday, December 21, 2013

Grammatical structures as a window into the past

New world atlas of colonial-era languages reveals massive traces of African and Pacific source languages

A new large-scale database and atlas of key structural properties of mixed languages from the Americas, Africa and Asia-Pacific has been published by researchers at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, in a joint project with colleagues at the University of Gießen and the University of Zurich, and involving a consortium of over 80 other researchers from around the world. These languages mostly arose as a result of colonial contacts between European traders and colonizers and indigenous and slave populations. The Atlas of Pidgin and Creole Language Structures, published by Oxford University Press and as a free online publication, contains in-depth comparable information on syntactic and phonological patterns of 76 languages. While most of these languages have words derived from the languages of the European (and sometimes Arab) colonizers, their grammatical patterns can often be traced back to the African and Pacific languages originally spoken by the indigenous populations, as the new atlas shows clearly.

Following the model of the highly successful World Atlas of Language Structures, the Leipzig team and their colleagues assembled a consortium of linguists who are specialists in 76 pidgin, creole and other languages arising from intensive language contact in the last few centuries. "Experts on understudied languages often work in isolation, but in order to see the bigger picture, we needed to bring their expertise together and create large-scale comparable datasets", explains Susanne Maria Michaelis of the Max Planck Institute for Evolutionary Anthropology. She and her colleagues worked with experts on 25 languages of the Americas, 25 African languages, and 26 Asia-Pacific languages over several years. The result is an atlas of 130 maps showing a selection of grammatical features, plus two dozen maps showing sociolinguistic information as well as a substantial number of maps on the kinds of sound segments used.

Many of the maps reveal striking similarities between Caribbean languages such as Jamaican and Haitian Creole and the languages spoken by the slaves who were forced to work for the European colonists from the 17th century. Since the great majority of slaves in the New World colonies were brought from Africa, the Caribbean languages in many ways resemble the African languages. "You cannot see this easily in the words, which typically sound like Spanish, French or English, but closer examination of grammatical patterns such as tense and aspect systems leads us directly to African and Asian languages", says Philippe Maurer of the University of Zurich. For example, in Jamaican, the past tense of action verbs requires no special tense marker, unlike in English: For 'The men dug the hole', Jamaican has "Di man-dem dig di huol". This pattern occurs widely in West African languages. A number of such African patterns can even be found in the vernacular English variety of African-Americans in the United States.

"Grammatical structures have the potential to preserve older historical states and thus to serve as a window into the human past, but they are also rather difficult to compare across languages", comments Martin Haspelmath of the Max Planck Institute for Evolutionary Anthropology. "Finding comparative concepts that allow experts coming from different research traditions to characterize their highly diverse languages in a comparable way has been a major challenge." But with the new database and the atlas built from it, researchers can now address a wide variety of questions more systematically.

While individual similarities between African languages and the languages spoken by the descendants of the slaves had long been noted, the Atlas of Pidgin and Creole Language Structures now provides far more systematic data on a much wider variety of structural features. "What is striking is that you see the influence of the indigenous languages also in Asia and the Pacific, areas which traditional creolists often neglected", says Susanne Maria Michaelis. For example, in the Portuguese creole variety of Sri Lanka, 'I like it' is literally 'To me it is liking', as in a typical South Asian language.
__________________________
References:

EurekAlert. 2013. “Grammatical structures as a window into the past”. EurekAlert. Posted: November 4, 2013. Available online: http://www.eurekalert.org/pub_releases/2013-11/m-gsa110413.php

Thursday, July 11, 2013

Grammar May Be Hidden in Toddler Babble

The little sounds and puffs of air that toddlers often inject into their baby babble may actually be subtle stand-ins for grammatical words, new research suggests.

For the study, Cristina Dye, a Newcastle University researcher in child language development, and her colleagues recorded tens of thousands of utterances from French-speaking children between 23 and 37 months old.

Dye and her colleagues analyzed each sound the kids made and the context in which it was produced. The team said they documented a pattern of sounds and puffs of air that seemed to replace grammatical words in many cases. Their findings suggest that toddlers may properly use little words (as, a, an, can, is) sooner than thought.

"Many of the toddlers we studied made a small sound, a soft breath, or a pause, at exactly the place that a grammatical word would normally be uttered," Dye said in a statement.

"The fact that this sound was always produced in the correct place in the sentence leads us to believe that young children are knowledgeable of grammatical words. They are far more sophisticated in their grammatical competence than we ever understood."

Though Dye was studying French-speaking toddlers, she and her colleagues expect their findings to apply to other languages as well. She also thinks their results could have implications for understanding language delay in children.

"When children don't learn to speak normally it can lead to serious issues later in life," Dye said in a statement. "For example, those who have it are more likely to suffer from mental illness or be unemployed later in life. If we can understand what is 'normal' as early as possible then we can intervene sooner to help those children."

Previous research has shown that toddlers, before they articulate full sentences themselves, may be able to understand complex grammar. A 2011 study published in the journal Cognitive Science found that as early as 21 months, children could match made-up verbs with pictures that made sense grammatically. For example, if they were told "The rabbit is glorping the duck," they would point to a picture of a rabbit lifting a duck's leg rather than the duck lifting its leg on its own.

The new research on the French-speaking toddlers was detailed in the Journal of Linguistics.
__________________________
References:

Gannon, Megan. 2013. “Grammar May Be Hidden in Toddler Babble”. Live Science. Posted: June 17, 2013. Available online: http://www.livescience.com/37502-grammar-may-be-hidden-in-toddler-babble.html

Wednesday, June 26, 2013

Eating naartjies in the bioscope: a little guide to South African English

The vocabulary and grammar of spoken South African English are coated in a fine layer of Afrikaans dust. It's been there so long that most of us no longer notice

The first English lesson I ever gave was in a little language school in a sprawling Taiwanese city. The theme was Fruit, a subject about as straightforward as it gets for a native English speaker. Unless you're from South Africa.

To prepare, I flipped through the previous teacher's handmade flashcards and consulted my English guidebook for the names of the "exotic" fruits found in Asia – apple-shaped Chinese pears and otherworldly dragon fruit. But when I flipped over the card showing an innocuous-looking orange citrus fruit, my stomach dropped. Everyone I knew would call it a naartjie ("naah-chi"), and I suddenly realised that this wasn't actually an English word.

I'd heard of clementines and satsumas, but were either of these naartjie in English? I had to enlist the help of my bemused Chinese co-teacher, who told me "tangerine", and, later, "cantaloupe". (Spanspek, the Afrikaans word for "cantaloupe" that all South Africans use, is literally translated as "Spanish bacon", allegedly because a 19th-century Cape governor had a Spanish wife who always chose fresh fruit over a big English breakfast. Their mystified Afrikaans servants coined the word.)

After that first lesson, I had endless opportunities to marvel at how stealthily my mother tongue had colonised the Afrikaans lexicon. I would tell my students that I was holding thumbs for them (from the Afrikaans idiom duim vashou) before they wrote a level-check test. I asked my co-teacher if he could help me move my new couch in his bakkie (bak means "container": add another "k" and an "ie" to turn it into a diminutive, and you've got the affectionate Afrikaans name for a small truck). I texted my British friend asking her if I could borrow a pair of takkies (from tekkies, Afrikaans for trainers). When one of my kindergarten students went through a stage of eating beetles she'd found in the car park, my kneejerk reaction of disgust wasn't "yuck", but sis! (from sies).

Occasionally, the direct translations of Afrikaans prepositions slip out in the wrong context, such as when you used to go sleep by your friend's house when your parents went out to the bioscope (this now defunct English word survived here because of the Afrikaans bioskoop).

Of course, it makes sense for English to have been given a good lick here and there by other native tongues – our country does have 11 official languages. According to the results of the 2011 census as quoted in this Daily Maverick article by Rebecca Davis, English is the fourth most widely spoken mother tongue behind isiZulu, isiXhosa and Afrikaans.

It stands to reason that Afrikaans, which became the language of power when the National party took over in 1948, has influenced South African English more than any other. Afrikaans was used as a tool to suppress the masses throughout apartheid, itself an Afrikaans word that has been appropriated not only by South African English speakers, but in English the world over. As Davis writes, knowledge of Afrikaans was a barrier to entry into the civil service from the late 1940s, which meant that black people were frozen out of high-prestige positions as they were being schooled only in "indigenous" languages.

Fast-forward one generation, and you see the Soweto uprising breaking out on 16 June 1976 in protest against the Bantu Education Department's decree that the compulsory medium of instruction in local high schools was to be Afrikaans, specifically for subjects like maths and arithmetic. Hundreds of people, mostly high school students, were killed in violent confrontations with police that day.

Now, 19 years after our transition to democracy and after nearly two decades of English being the dominant medium of politics, the media, commerce and education, the shards of Afrikaans that were left behind in English still occasionally poke through.

So it's remarkably easy, even for an armchair etymologist, to write a litany of South African regionalisms that English has pilfered from Afrikaans. But there are two words that are foremost in my mind when I think about how Afrikaans has shaped the way I speak.

One is ja (with a soft "y"), meaning "yes", whose ubiquity might be attributed to its pronunciation. It takes so little effort to say that it's basically an exhale.

The other is lekker, which is like "great", but better. To me, our cheerfully patriotic mantra "local is lekker" is true for everything from our biltong (if you've never sampled our best export, get on the South West Trains line from London Waterloo to Surbiton, get off anywhere between Clapham Junction and Raynes Park, and head for the nearest shop flying a South African flag: you can thank me later) to the way we speak. No matter how ambivalent we may be about our homeland, our hodgepodge potjie of English, subtly spiced with Afrikaans, is just that: ours.
__________________________
References:

Edwards, Michelle. 2013. “Eating naartjies in the bioscope: a little guide to South African English”. The Guardian. Posted: May 24, 2013. Available online: http://www.guardian.co.uk/media/mind-your-language/2013/may/24/mind-your-language-south-africa

Wednesday, June 12, 2013

Grammar errors? The brain detects them even when you are unaware

University of Oregon neuroscientists document unconscious processing of syntactic miscues that we miss

Your brain often works on autopilot when it comes to grammar. That theory has been around for years, but University of Oregon neuroscientists have captured elusive hard evidence that people indeed detect and process grammatical errors with no awareness of doing so.

Participants in the study -- native English-speaking people, ages 18-30 -- had their brain activity recorded using electroencephalography, from which researchers focused on a signal known as the Event-Related Potential (ERP). This non-invasive technique allows for the capture of changes in the brain's electrical activity during an event. In this case, the events were short sentences presented visually one word at a time.

Subjects were given 280 experimental sentences, including some that were syntactically (grammatically) correct and others containing grammatical errors, such as "We drank Lisa's brandy by the fire in the lobby," or "We drank Lisa's by brandy the fire in the lobby." A 50 millisecond audio tone was also played at some point in each sentence. A tone appeared before or after a grammatical faux pas was presented. The auditory distraction also appeared in grammatically correct sentences.

This approach, said lead author Laura Batterink, a postdoctoral researcher, provided a signature of whether awareness was at work during processing of the errors. "Participants had to respond to the tone as quickly as they could, indicating if its pitch was low, medium or high," she said. "The grammatical violations were fully visible to participants, but because they had to complete this extra task, they were often not consciously aware of the violations. They would read the sentence and have to indicate if it was correct or incorrect. If the tone was played immediately before the grammatical violation, they were more likely to say the sentence was correct even if it wasn't."

When tones appeared after grammatical errors, subjects detected 89 percent of the errors. In cases where subjects correctly declared errors in sentences, the researchers found a P600 effect, an ERP response in which the error is recognized and corrected on the fly to make sense of the sentence.

When the tones appeared before the grammatical errors, subjects detected only 51 percent of them. The tone before the event, said co-author Helen J. Neville, who holds the UO's Robert and Beverly Lewis Endowed Chair in psychology, created a blink in their attention. The key to conscious awareness, she said, is based on whether or not a person can declare an error, and the tones disrupted participants' ability to declare the errors. But even when the participants did not notice these errors, their brains responded to them, generating an early negative ERP response. These undetected errors also delayed participants' reaction times to the tones.

"Even when you don't pick up on a syntactic error your brain is still picking up on it," Batterink said. "There is a brain mechanism recognizing it and reacting to it, processing it unconsciously so you understand it properly."

The study was published in the May 8 issue of the Journal of Neuroscience.

The brain processes syntactic information implicitly, in the absence of awareness, the authors concluded. "While other aspects of language, such as semantics and phonology, can also be processed implicitly, the present data represent the first direct evidence that implicit mechanisms also play a role in the processing of syntax, the core computational component of language."

It may be time to reconsider some teaching strategies, especially how adults are taught a second language, said Neville, a member of the UO's Institute of Neuroscience and director of the UO's Brain Development Lab.

Children, she noted, often pick up grammar rules implicitly through routine daily interactions with parents or peers, simply hearing and processing new words and their usage before any formal instruction. She likened such learning to "Jabberwocky," the nonsense poem introduced by writer Lewis Carroll in 1871 in "Through the Looking Glass," where Alice discovers a book in an unrecognizable language that turns out to be written inversely and readable in a mirror.

For a second language, she said, "Teach grammatical rules implicitly, without any semantics at all, like with jabberwocky. Get them to listen to jabberwocky, like a child does."
__________________________
References:

EurekAlert. 2013. “Grammar errors? The brain detects them even when you are unaware”. EurekAlert. Posted: May 13, 2013. Available online: http://www.eurekalert.org/pub_releases/2013-05/uoo-get051313.php

Monday, April 29, 2013

Nouns before verbs?

New research agenda could help shed light on early language, cognitive development

Researchers are digging deeper into whether infants' ability to learn new words is shaped by the particular language being acquired.

A new Northwestern University study cites a promising new research agenda aimed at bringing researchers closer to discovering the impact of different languages on early language and cognitive development.

For decades, researchers have asked why infants learn new nouns more rapidly and more easily than new verbs. Many researchers have asserted that the early advantage for learning nouns over verbs is a universal feature of human language.

In contrast, other researchers have argued that early noun-advantage is not a universal feature of human language but rather a consequence of the particular language being acquired.

Sandra Waxman, lead author of the study and Louis W. Menk Professor of Psychology at Northwestern, shows in her research that even before they begin to produce many verbs in earnest, infants acquiring either noun-friendly or verb-friendly languages already appreciate the concepts underlying both noun and verb meaning.

In all languages examined to date, researchers see a robust ability to map nouns to objects, Waxman said, but when it comes to mapping verbs to events, infants' performance is less robust and more variable. Their ability to learn new verbs varied not only as a function of the native language being acquired, but also with the particular linguistic context in which the verb was presented.

Based on new evidence, a shift in the research agenda is necessary, according to Waxman and her colleagues.

"We now know that by 24 months infants acquiring distinctly different languages can successfully map novel nouns to objects and novel verbs to event categories," Waxman said. "It is essential that we shift the research focus to include infants at 24 months and younger, infants who are engaged in the very process of acquiring distinctly different native languages."

Waxman said the implications are clear. "Rather than characterizing languages as either 'noun friendly' or 'verb friendly,' it would be advantageous to adopt a more nuanced treatment of the syntactic, semantic, morphologic and pragmatic properties of each language and the consequences of these properties on infants' acquisition of linguistic structure and meaning."
__________________________
References:

EurekAlert. 2013. “Nouns before verbs?”. EurekAlert. Posted: March 25, 2013. Available online: http://www.eurekalert.org/pub_releases/2013-03/nu-nbv032513.php

Monday, March 25, 2013

Bilingual babies know their grammar by 7 months

Babies as young as seven months can distinguish between, and begin to learn, two languages with vastly different grammatical structures, according to new research from the University of British Columbia and Université Paris Descartes.

Published today in the journal Nature Communications and presented at the 2013 Annual Meeting of the American Association for the Advancement of Science (AAAS) in Boston, the study shows that infants in bilingual environments use pitch and duration cues to discriminate between languages – such as English and Japanese – with opposite word orders.

In English, a function word comes before a content word (the dog, his hat, with friends, for example) and the duration of the content word is longer, while in Japanese or Hindi the order is reversed and the pitch of the content word is higher.

"By as early as seven months, babies are sensitive to these differences and use these as cues to tell the languages apart," says UBC psychologist Janet Werker, co-author of the study.

Previous research by Werker and Judit Gervain, a linguist at the Université Paris Descartes and co-author of the new study, showed that babies use frequency of words in speech to discern their significance.

"For example, in English the words 'the' and 'with' come up a lot more frequently than other words – they're essentially learning by counting," says Gervain. "But babies growing up bilingual need more than that, so they develop new strategies that monolingual babies don't necessarily need to use."
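The "learning by counting" idea — that high-frequency function words such as "the" and "with" stand out statistically from the rest of the speech stream — can be sketched with a simple frequency count. The toy corpus below is illustrative only and is not drawn from the study:

```python
from collections import Counter

# Toy corpus standing in for the natural speech an infant hears.
sentences = [
    "the dog ran with the ball",
    "she played with the dog in the park",
    "the ball rolled under the bench",
]

# Count how often each word appears across the corpus.
counts = Counter(word for s in sentences for word in s.split())

# Function words like "the" and "with" dominate the frequency ranking --
# the statistical cue ("learning by counting") the researchers describe.
print(counts.most_common(3))
```

Even in this tiny sample, "the" occurs several times more often than any content word, which is the kind of distributional signal a monolingual learner could exploit; bilingual infants, as the article notes, must supplement it with additional cues such as pitch and duration.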

"If you speak two languages at home, don't be afraid, it's not a zero-sum game," says Werker. "Your baby is very equipped to keep these languages separate and they do so in remarkable ways."
__________________________
References:

EurekAlert. 2013. “Bilingual babies know their grammar by 7 months”. EurekAlert. Posted: February 14, 2013. Available online: http://www.eurekalert.org/pub_releases/2013-02/uobc-bbk021113.php

Monday, May 23, 2011

Artificial Grammar Reveals Inborn Language Sense, JHU Study Shows

Parents know the unparalleled joy and wonder of hearing a beloved child’s first words turn quickly into whole sentences and then babbling paragraphs. But how human children acquire language, which is so complex and has so many variations, remains largely a mystery. Fifty years ago, linguist and philosopher Noam Chomsky proposed an answer: humans are able to learn language so quickly because some knowledge of grammar is hardwired into our brains. In other words, we know some of the most fundamental things about human language unconsciously at birth, without ever being taught.

Now, in a groundbreaking study, cognitive scientists at The Johns Hopkins University have confirmed a striking prediction of the controversial hypothesis that human beings are born with knowledge of certain syntactical rules that make learning human languages easier.

“This research shows clearly that learners are not blank slates; rather, their inherent biases, or preferences, influence what they will learn. Understanding how language is acquired is really the holy grail in linguistics,” said lead author Jennifer Culbertson, who worked as a doctoral student in Johns Hopkins’ Krieger School of Arts and Sciences under the guidance of Geraldine Legendre, a professor in the Department of Cognitive Science, and Paul Smolensky, a Krieger-Eisenhower Professor in the same department. (Culbertson is now a postdoctoral fellow at the University of Rochester.)

The study not only provides evidence remarkably consistent with Chomsky’s hypothesis but also introduces an interesting new approach to generating and testing other hypotheses aimed at answering some of the biggest questions concerning the language-learning process.

In the study, a small, green, cartoonish “alien informant” named Glermi taught participants, all of whom were English-speaking adults, an artificial nanolanguage named Verblog via a video game interface. In one experiment, for instance, Glermi displayed an unusual-looking blue alien object called a “slergena” on the screen and instructed the participants to say “geej slergena,” which in Verblog means “blue slergena.” Then participants saw three of those objects on the screen and were instructed to say “slergena glawb,” which means “slergenas three.”

Although the participants may not have consciously known this, many of the world’s languages use both of those word orders; that is, in many languages adjectives precede nouns, and in many, nouns are followed by numerals. However, very rarely are both of these rules used together in the same human language, as they are in Verblog.

As a control, other groups were taught different made-up languages that matched Verblog in every way but used word order combinations that are commonly found in human languages.

Culbertson reasoned that if knowledge of certain properties of human grammars (such as where adjectives, nouns and numerals should occur) is hardwired into the human brain from birth, the participants tasked with learning alien Verblog would have a particularly difficult time, which is exactly what happened.

The adult learners who had had little to no exposure to languages with word orders different from those in English quite easily learned the artificial languages that had word orders commonly found in the world’s languages but failed to learn Verblog. It was clear that the learners’ brains “knew” in some sense that the Verblog word order was extremely unlikely, just as predicted by Chomsky a half-century ago.

The results are important for several reasons, according to Culbertson.

“Language is something that sets us apart from other species, and if we understand how children are able to quickly and efficiently learn language, despite its daunting complexity, then we will have gained fundamental knowledge about this unique faculty,” she said. “What this study suggests is that the problem of acquisition is made simpler by the fact that learners already know some important things about human languages; in this case, that certain word orders are likely to occur and others are not.”

This study was done with the support of a $3.2 million National Science Foundation grant called the Integrative Graduate Education and Research Traineeship grant, or IGERT, a unique initiative aimed at training doctoral students to tackle investigations from a multidisciplinary perspective.

According to Smolensky, the goal of the IGERT program in Johns Hopkins’ Cognitive Science Department is to overcome barriers that have long separated the way that different disciplines have tackled language research.

“Using this grant, we are training a generation of interdisciplinary language researchers who can bring together the now widely separated and often divergent bodies of research on language conducted from the perspectives of engineering, psychology and various types of linguistics,” said Smolensky, principal investigator for the department’s IGERT program.

Culbertson used tools from experimental psychology, cognitive science, linguistics and mathematics in designing and carrying out her study.

“The graduate training I received through the IGERT program at Johns Hopkins allowed me to synthesize ideas and approaches from a broad range of fields in order to develop a novel approach to a really classic question in the language sciences,” she said.
___________________
References:

Johns Hopkins. 2011. "Artificial Grammar Reveals Inborn Language Sense, JHU Study Shows". Johns Hopkins Media Release. Posted: May 12, 2011. Available online: http://releases.jhu.edu/2011/05/12/artificial-grammar-reveals-inborn-language-sense-jhu-study-shows/

Thursday, April 8, 2010

Languages use different parts of brain

The part of the brain that’s used to decode a sentence depends on the grammatical structure of the language it’s communicated in, a new study suggests.

Brain images showed that subtly different neural regions were activated when speakers of American Sign Language saw sentences that used two different kinds of grammar. The study, published online this week in Proceedings of the National Academy of Sciences, suggests neural structures that evolved for other cognitive tasks, like memory and analysis, may help humans flexibly use a variety of languages.

“We’re using and adapting the machinery we already have in our brains,” says study coauthor Aaron Newman of Dalhousie University in Halifax, Canada. “Obviously we’re doing something different [from other animals], because we’re able to learn language. But it’s not because some little black box evolved specially in our brain that does only language, and nothing else.”

Most spoken languages express relationships between the subject and object of a sentence — the “who did what to whom,” Newman says — in one of two ways. Some languages, like English, encode information in word order. “John gave flowers to Mary” means something different than “Mary gave flowers to John.” And “John flowers Mary to gave” doesn’t mean anything at all.

Other languages, like German or Russian, use “tags,” such as helping words or suffixes, that make words’ roles in the sentence clear. In German, for example, different forms of “the” carry information about who does what to whom. As long as words stay with their designated “the” they can be moved around German sentences much more flexibly than in English.

Like English, American Sign Language can convey meaning via the order in which signs appear. But altering the sign by, for instance, moving hands through space or signing on one side of the body, can add information like how often an action happens (“John gives flowers to Mary every day”) or how many objects there are (“John gave a dozen flowers to Mary”) without the need for extra words.

“In most spoken languages, if word order is a cue for who’s doing what to whom, it’s mandatory,” Newman says. But in ASL, which uses both word order and tags to encode grammar, “the tags are actually optional.”

The researchers showed 14 deaf signers who had learned ASL from birth a video of coauthor Ted Supalla, who is a professor of brain and cognitive sciences at the University of Rochester in New York and a native ASL signer, signing two versions of a set of sentences. One version used only word order to convey grammatical information, and the other added signed tags.

The sentences, which included “John’s grandmother feeds the monkey every morning” and “The prison warden says all juveniles will be pardoned tomorrow,” were carefully constructed to sound natural to fluent ASL signers and to mean the same thing regardless of which grammar structure they used.

Brain scans taken using functional MRI while participants watched the videos showed that overlapping but slightly different regions were activated by word-order sentences compared to word-tag sentences.

The regions that lit up only for word-order sentences are known to be involved with short-term memory. The regions activated by word tags are involved in procedural memory, the kind of memory that controls automatic tasks like riding a bicycle.

This could mean that listeners need to hold words in their short-term memory to understand word-order sentences, while processing word-tag sentences is more automatic, Newman says.

“It’s very elegant, probably nicer results than we could have hoped for,” he says.

“The story they’re telling makes sense to me,” says cognitive neuroscientist Karen Emmorey of San Diego State University, who also studies how the brain processes sign language. But, she adds, “it’ll be important to also look at spoken languages that differ in this property to see if these things they’re finding hold up.”
__________________
References:

Grossman, Lisa. 2010. "Languages use different parts of brain". Science News. Posted: April 5, 2010. Available online: http://www.sciencenews.org/view/generic/id/57944/title/Languages_use_different_parts_of_brain