
Saturday, July 2, 2016

Why children confuse simple words

Imagine, for a moment, you are a parent trying to limit how much dessert your sugar-craving young children can eat.

"You can have cake or ice cream," you say, confident a clear parental guideline has been laid out.

But your children seem to ignore this firm ruling, and insist on having both cake and ice cream. Are they merely rebelling against a parental command? Perhaps. But they might be confusing "or" with "and," as children do at times, something studies have shown since the 1970s. What seems like a restriction to the parent sounds like an invitation to the child: Have both!

But why does this happen? Now a study by MIT linguistics professors and a team from Carleton University, based on an experiment with children between the ages of 3 and 6, proposes a new explanation, with a twist: In examining this apparent flaw, the researchers conclude that children deploy a more sophisticated mode of logical analysis than many experts have previously realized.

Indeed, say the linguists, when disjunction and conjunction ("or" and "and") are involved, children use almost entirely the same approach as adults to evaluate potentially ambiguous sentences, testing and "strengthening" them into sentences with more precise meanings.

While using this common approach, however, children do not test how a sentence would change if "and" were directly substituted for "or." This more modest procedural problem is what leads to the confusion about cake and ice cream.

"Children seem to interpret disjunction like conjunction," observes Danny Fox, the Anshen-Chomsky Professor in Language and Thought at MIT and co-author of a paper detailing the study. However, Fox adds, although "it has been claimed children are very different from adults in the interpretation of logical words," the study's larger implication is almost the opposite -- namely that "the child is [otherwise] identical to the adult, but there is a very small parameter that distinguishes them."

Quirky as this finding seems, it confirms a specific prediction Fox and some other researchers had made, based on previous studies in formal semantics (the area of linguistics that investigates the logic of natural language use). As such, the study reinforces what we know about the procedures both children and adults deploy in "and/or" matters.

"There's a certain kind of computation we can now say both children and adults do," says Raj Singh PhD '08, an associate professor of cognitive science at Carleton University and the lead author of the new report.

The paper, "Children interpret disjunction as conjunction: Consequences for theories of implicature and child development," is being published in the journal Natural Language Semantics. The co-authors are Singh; Fox; Ken Wexler, emeritus professor of psychology and linguistics at MIT; Deepthi Kamawar, an associate professor of psychology at Carleton University; and Andrea Astle-Rahim, a recent PhD graduate from Carleton University.

What adults do: the two-step

To understand how children conflate "or" with "and," first consider how adults normally clarify what sentences mean. Suppose you have a dozen cookies in a jar on your desk at work, and go to a meeting. When you come back, a colleague tells you, "Marty ate some of the cookies."

Now suppose you find out that Marty actually ate all 12 cookies. The previous sentence -- "Marty ate some of the cookies" -- may still be true, but it would be more accurate to say, "Marty ate all of the cookies."

To make this evaluation, adults compute "scalar implicatures," a technical phrase for thinking about the implications of the logical relationship between a sentence and its alternatives. For "Marty ate some of the cookies," there is a two-step computation. The first step is to think through some alternatives, such as what happens if you substitute "all" for "some" (leading to "Marty ate all of the cookies"). The second step is to realize that this alternative spells out a specific new meaning -- that all 12 cookies have been eaten, not just a few of them.

We then realize the sentence "Marty ate some of the cookies" more accurately means: "Marty ate some, but not all, of the cookies." And now we have a "strengthened" version of the first sentence.

The same process applies to the sentence, "Jane ate cake or ice cream." The sentence is true if Jane ate one or the other, and still technically true if she ate both. But once we compute the scalar implicatures, we realize that "Jane ate cake or ice cream" is a "strengthened" way of saying she ate one or the other, but not both.

Fox has conducted extensive research over the last decade formalizing our computations of scalar implicatures and identifying areas where tiny differences in the logical "space of alternatives" can have far-reaching consequences. The current paper stems in part from work Singh pursued as a doctoral student collaborating with Fox at MIT.

Why "or" and "and" merge for children

The research team conducted the study's experiment by testing 59 English-speaking children and 26 adults in the Ottawa area. The children ranged in age from 3 years, 9 months, to 6 years, 4 months. The linguists gave the subjects a series of statements along with pictures, and asked them to say whether the statements were true or false.

For instance: The children were shown a picture with three boys holding an apple or a banana, along with the statement, "Every boy is holding an apple or a banana," and then asked to say if the statement was true or false. The children were asked to do this for a full range of scenarios -- such as one boy holding one type of fruit and two boys holding the other -- along with a varying set of "and/or" statements. The researchers repeated five sets of such trials, with the pictures changing each time.

The results suggest that children are computing scalar implicatures when they evaluate the statements -- but they largely do not substitute disjunctions and conjunctions when testing out the possible meaning of sentences, as adults do.

That means when children hear "cake or ice cream," they are generally not replacing "or" in the phrase with "and," to test what would happen. Without that contrast, the children still "strengthen" the meaning of "or," but they strengthen it to mean "and." Thus "or" and "and" can blur together for children.

"They [children] don't use 'cake and ice cream' as an alternative," Fox says. "As a result, 'cake or ice cream' is expected, if we are right about the nature of the computation, to become 'cake and ice cream' for the children."

And while we tend to think children are wrong to draw that conclusion, it is still the result of computing scalar implicatures -- it just happens that, as Singh observes, those computations create divergent outcomes for children and adults.

A universal process

The researchers agree that how children eventually make the transition to the adult pattern of strengthening deserves further examination. In the meantime, they hope colleagues will consider the additional evidence the study provides about the formal logic underlying our language use.

"The computational system of language is actually telling us how to do certain kinds of thinking," Wexler suggests. "It isn't us just trying to [understand] things pragmatically."

Additionally, the scholars believe evidence from other languages besides English supports their conclusions. In both Warlpiri, a language of indigenous Australians, and American Sign Language, there is a single connective word that functions as both "or" and "and" and appears subject to the strengthening process identified for children. And, Singh notes, linguists are now replicating the study's findings in French and Japanese.

In general, Fox observes, across languages, and for children and adults alike, "The remarkable logical fact is that when you take 'and' out of the space of alternatives, 'or' becomes 'and.' This, of course, relies on the nature of the computation that we've postulated, and, hence, the results of the study provide confirmation of a form that I find rather exciting."

So, yes, your children may not understand what you mean about dessert. Or perhaps they are just being willful. But if they confuse "or" with "and," then they are not being childish -- at least not in the way you may think.
_________________
Reference:

EurekAlert. 2016. “Why children confuse simple words”. EurekAlert. Posted: May 23, 2016. Available online: http://www.eurekalert.org/pub_releases/2016-05/miot-wcc052316.php

Monday, June 27, 2016

Words, more words ... and statistics

To segment words, the brain could be using statistical methods

Have you ever racked your brains trying to make out even a single word of an uninterrupted flow of speech in a language you hardly know at all? It is naïve to think that in speech there is even the smallest of pauses between one word and the next (like the space we conventionally insert between words in writing): in actual fact, speech is almost always a continuous stream of sound. However, when we listen to our native language, word "segmentation" is an effortless process. What, linguists wonder, are the automatic cognitive mechanisms underlying this skill? Clearly, knowledge of the vocabulary helps: memory of the sound of individual words helps us to pick them out. However, many linguists argue, there are also automatic, subconscious "low-level" mechanisms that help us even when we do not recognise the words or when, as in the case of very young children, our knowledge of the language is still only rudimentary. These mechanisms, they think, rely on the statistical analysis of the frequency (estimated based on past experience) of the syllables in each language.

One indicator that could contribute to segmentation processes is "transitional probability" (TP), which provides an estimate of the likelihood of two syllables co-occurring in the same word, based on the frequency with which they are found associated in a given language. In practice, if every time I hear the syllable "TA" it is invariably followed by the syllable "DA," then the transitional probability for "DA," given "TA," is 1 (the highest). If, on the other hand, whenever I hear the syllable "BU" it is followed half of the time by the syllable "DI" and half of the time by "FI," then the transitional probability of "DI" (and "FI"), given "BU," is 0.5, and so forth. The cognitive system could be implicitly computing this value by relying on linguistic memory, from which it would derive the frequencies.
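
As a toy illustration of how such values could be estimated from a syllabified stream (not the study's actual code), a few lines of Python suffice; the stream below simply replays the "TA/DA" and "BU/DI/FI" example.

```python
from collections import Counter

def transitional_probabilities(syllable_stream):
    """TP(y | x): how often syllable x is followed by y, divided by how often x occurs."""
    pair_counts = Counter(zip(syllable_stream, syllable_stream[1:]))
    first_counts = Counter(syllable_stream[:-1])
    return {(x, y): count / first_counts[x] for (x, y), count in pair_counts.items()}

# A toy stream in which "TA" is always followed by "DA", while "BU" is followed
# by "DI" half of the time and by "FI" the other half.
stream = ["TA", "DA", "BU", "DI", "TA", "DA", "BU", "FI"]
tps = transitional_probabilities(stream)
print(tps[("TA", "DA")])  # 1.0
print(tps[("BU", "DI")])  # 0.5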

The study conducted by Amanda Saksida, research scientist at the International School for Advanced Studies (SISSA) in Trieste, with the collaboration of Alan Langus, SISSA research fellow, under the supervision of SISSA professor Marina Nespor, used TP to segment natural language, applying two different approaches.

Based on rhythm

Saksida's study is based on work with corpora, that is, bodies of texts specifically collected for linguistic analysis. In the case at hand, the corpora consisted of transcriptions of the "linguistic sound environment" that infants are exposed to. "We wanted to have an example of the type of linguistic environment in which a child's language develops," explained Saksida. "We wondered whether a low-level mechanism such as transitional probability worked with real-life language cues, which are very different from the artificial cues normally used in the laboratory, which are more schematic and free of sources of 'noise'. Furthermore, the question was whether the same low-level cue is equally efficient in different languages." Saksida and colleagues used corpora from no fewer than nine different languages, and to each they applied two different TP-based models.

First, they calculated the TP values at each point of the language flow for all of the corpora, and then they "segmented" the flow using two different methods. The first was based on absolute thresholding: a fixed reference TP value was established, below which a boundary was identified. The second method was based on relative thresholding: boundaries were placed at local minima of the TP sequence.
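
A simplified sketch of the two thresholding strategies, reusing the toy transitional_probabilities() helper above (again an illustration, not the model the authors ran on their corpora):

```python
def segment(syllables, tps, mode="absolute", threshold=0.5):
    """Insert a word boundary wherever the TP between two adjacent syllables is
    low: below a fixed value ("absolute") or a local minimum of the TP sequence
    ("relative")."""
    values = [tps.get(pair, 0.0) for pair in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i, (syll, tp) in enumerate(zip(syllables[1:], values)):
        if mode == "absolute":
            boundary = tp < threshold
        else:  # relative: the TP dips below both neighbouring transitions
            left = values[i - 1] if i > 0 else float("inf")
            right = values[i + 1] if i + 1 < len(values) else float("inf")
            boundary = tp < left and tp < right
        if boundary:
            words.append(current)
            current = []
        current.append(syll)
    words.append(current)
    return ["".join(word) for word in words]

# A toy stream built from the "words" pretty, baby and doggy:
stream = ["pre", "ty", "ba", "by", "do", "ggy", "ba", "by", "pre", "ty", "do", "ggy"]
tps = transitional_probabilities(stream)  # helper sketched above
print(segment(stream, tps, mode="absolute", threshold=0.6))
print(segment(stream, tps, mode="relative"))
# Both recover most boundaries ('pretty', 'baby', 'doggy', ...); one boundary is
# missed because "ggy" happens to be followed by "ba" every time in this tiny stream.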

In all cases, Saksida and colleagues found that transitional probability was an effective tool for segmentation (49% to 86% of words identified correctly) irrespective of the segmentation algorithm used, which confirms TP efficacy. Of note, while both models proved to be quite efficient, when one model was particularly successful with one language, the alternative model always performed significantly worse.

"This cross-linguistic difference suggests that each model is better suited than the other for certain languages and viceversa. We therefore conducted further analyses to understand what linguistic features correlated with the better performance of one model over the other," explains Saksida. The crucial dimension proved to be linguistic rhythm. "We can divide European languages into two large groups based on rhythm: stress-timed and syllable-timed." Stress-timed languages have fewer vowels and shorter words, and include English, Slovenian and German. Syllable-timed languages contain more vowels and longer words on average, and include Italian, Spanish and Finnish. The third rhythmic group of languages does not exist in Europe and is based on "morae" (a part of the syllable), such as Japanese. This group is known as "mora-timed" and contains even more vowels than syllable-timed languages.

The absolute threshold model proved to work best on stress-timed languages, whereas relative thresholding was better for the mora-timed ones. "It's therefore possible that the cognitive system learns to use the segmentation algorithm that is best suited to one's native language, and that this leads to difficulties segmenting languages belonging to another rhythmic category. Experimental studies will clearly be necessary to test this hypothesis. We know from the scientific literature that immediately after birth infants already use rhythmic information, and we think that the strategies used to choose the most appropriate segmentation could be one of the areas in which information about rhythm is most useful."

The study is in fact unable to say whether the cognitive system (of both adults and children) really uses this type of strategy. "Our study clearly confirms that this strategy works across a wide range of languages," concludes Saksida. "It will now serve as a guide for laboratory experiments."
_________________
Reference:

Science Daily. 2016. “Words, more words ... and statistics”. Science Daily. Available online: https://www.sciencedaily.com/releases/2016/05/160517131637.htm

Wednesday, May 18, 2016

Speaking two languages for the price of one

In everyday conversation, bilingual speakers often switch between languages mid-sentence with apparent ease, despite the fact that many studies suggest that language-switching should slow them down. New research suggests that consistency may allow bilingual speakers to avoid the costs that come with switching between languages, essentially allowing them to use two languages for the price of one.

The research is published in Psychological Science, a journal of the Association for Psychological Science.

"Our findings show that if bilinguals switch languages at the right times, they can do it without paying any cost," says study author Daniel Kleinman of the University of Illinois at Urbana-Champaign. "This goes against both popular belief and scientific wisdom that juggling two tasks should impair performance. But our results suggest that multi-tasking may be easier than it seems as long as people switch at the right times."

Kleinman and co-author Tamar Gollan of the University of California, San Diego speculated that people may show different outcomes in the lab than they do in everyday conversations because lab studies typically require bilingual speakers to switch languages on command and at times when those switches are likely to be inefficient. If bilingual speakers were allowed to choose a language for a particular object or concept and then stick with it, the researchers hypothesized, they might be able to switch between languages without slowing down.

In other words, consistently using English to say "dog" and Spanish to say "casa" over the course of a conversation that toggles between the two languages could eliminate the costs that typically come with language-switching.

Across two studies, a total of 171 bilingual university students completed a picture-naming task. The participants, who spoke English and Spanish fluently, were presented with a series of black-and-white drawings of objects organized in four separate blocks.

In one block, the participants were instructed to name each picture in whichever language was easier and to stick with that language every time that particular picture appeared. In another block, the participants were given a cue that told them which language to use in naming each picture. And in the remaining two blocks, the participants were instructed to use only English or only Spanish to name the objects displayed.

The results showed that consistency is key: Participants didn't slow down when switching languages between pictures as long as they consistently used the same language each time a particular picture appeared.

Switching languages between pictures noticeably slowed their response times, however, when they followed cues telling them which language to use to name each picture, or if they did not follow the instruction to be consistent about which language they used for each picture.

But additional findings suggest that bilingual speakers don't necessarily use consistency as a strategy on their own. When participants were free to choose which language to use, language-switching led to slower response times because most speakers didn't consistently associate each picture with a particular language.

These findings show that even experienced language-switchers have room for improvement.

"Although bilinguals have been switching between languages for their entire lives, the strategies they use to decide when to switch may vary depending on context," Kleinman explains. "While speakers may sometimes adopt switching strategies that incur costs, these studies show that all bilinguals can be redirected quickly and easily to switch for free."
_________________
Reference:

Science Daily. 2016. “Speaking two languages for the price of one”. Science Daily. Posted: April 7, 2016. Available online: https://www.sciencedaily.com/releases/2016/04/160407083739.htm

Monday, March 28, 2016

How learning languages translates into health benefits for society

The advantages of speaking a second language - for health and mental ability - are to come under the spotlight at an event at the AAAS annual meeting in Washington, DC.

Experts in bilingualism will examine how learning a second language at any age not only imparts knowledge and cultural understanding, but also improves thinking skills and mental agility. It can delay brain ageing and offset the initial symptoms of dementia.

During the symposium, researchers will examine how findings from bilingualism research are currently applied, and how they could best benefit society through education, policymaking and business. Experts will examine current research themes related to bilingualism from infancy to old age, and explore their implications for society.

Professor Antonella Sorace of the University of Edinburgh, who established and directs the Bilingualism Matters Centre, will focus on research on minority languages, such as Gaelic and Sardinian. She will discuss whether the benefits associated with minority languages are consistent with those of learning more prestigious languages.

Professor Sorace will be joined by researchers from San Diego State University, Pennsylvania State University, Concordia University, Nizam's Institute of Medical Sciences, the Chinese University of Hong Kong and the University of Connecticut.

The symposium, entitled 'Bilingualism Matters', is directly inspired by the Bilingualism Matters Centre at the University of Edinburgh, which is at the forefront of public engagement in this field and has a large international network. The event will take place from 1.30-4.30pm on Saturday 13 February in the Marshall Ballroom South, Marriott Wardman Park, Washington DC.

Professor Sorace, of the University of Edinburgh's School of Philosophy, Psychology and Language Sciences, said: "We are excited to reflect on Edinburgh's experiences in bilingualism as an international example of cutting-edge scientific research and public engagement, and to share the current state of research in this area and its relevance for the general public."
_________________
Reference:

EurekAlert. 2016. “How learning languages translates into health benefits for society”. EurekAlert. Posted: February 13, 2016. Available online: http://www.eurekalert.org/pub_releases/2016-02/uoe-hll020516.php

Sunday, February 28, 2016

Learning a second language may depend on the strength of brain's connections

Learning a second language is easier for some adults than others, and innate differences in how the various parts of the brain "talk" to one another may help explain why, according to a study published January 20 in the Journal of Neuroscience.

"These findings have implications for predicting language learning success and failure," said study author Xiaoqian Chai.

The various regions of our brains communicate with each other even when we are resting and aren't engaged in any specific tasks. The strength of these connections -- called resting-state connectivity -- varies from person to person, and differences have previously been linked to differences in behavior including language ability.

Led by Chai and Denise Klein, researchers at McGill University explored whether differences in resting-state connectivity relate to performance in a second language. To study this, the group at the Montreal Neurological Institute scanned the brains of 15 adult English speakers who were about to begin an intensive 12-week French course, and then tested their language abilities both before and after the course.

Using resting state functional magnetic resonance imaging (fMRI), the researchers examined the connectivity within the subjects' brains prior to the start of the French course. They looked at the strength of connections between various areas in the brain and two specific language regions: an area of the brain implicated in verbal fluency, the left anterior insula/frontal operculum (AI/FO), and an area active in reading, the visual word form area (VWFA).

The researchers tested the participants' verbal fluency and reading speed both prior to the course and after its completion. To test verbal fluency, the researchers gave subjects a prompt and asked them to speak for two minutes in French. The researchers counted the number of unique words that were used correctly. To test reading speed, the researchers had participants read French passages aloud, and they calculated the number of words read per minute.

Participants with stronger connections between the left AI/FO and an important region of the brain's language network called the left superior temporal gyrus showed greater improvement in the speaking test. Participants with greater connectivity between the VWFA and a different part of the language area in the left superior temporal gyrus showed greater improvement in reading speed by the end of the 12-week course.

"The most interesting part of this finding is that the connectivity between the different areas was observed before learning," said Arturo Hernandez, a neuroscientist at the University of Houston who studies second-language learning and was not involved in the study. "This shows that some individuals may have a particular neuronal activity pattern that may lend itself to better learning of a second language."

However, that doesn't mean success at a second language is entirely predetermined by the brain's wiring. The brain is very plastic, meaning that it can be shaped by learning and experience, Chai said.

The study is "a first step to understanding individual differences in second language learning," she added. "In the long term it might help us to develop better methods for helping people to learn better."
_________________
Reference:

Science Daily. 2016. “Learning a second language may depend on the strength of brain's connections”. Science Daily. Posted: January 20, 2016. Available online: https://www.sciencedaily.com/releases/2016/01/160120202512.htm

Tuesday, October 27, 2015

How language gives your brain a break

Here's a quick task: Take a look at the sentences below and decide which is the most effective.

(1) "John threw out the old trash sitting in the kitchen."

(2) "John threw the old trash sitting in the kitchen out."

Either sentence is grammatically acceptable, but you probably found the first one to be more natural. Why? Perhaps because of the placement of the word "out," which seems to fit better in the middle of this word sequence than the end.

In technical terms, the first sentence has a shorter "dependency length" -- a shorter total distance, in words, between the crucial elements of a sentence. Now a new study of 37 languages by three MIT researchers has shown that most languages move toward "dependency length minimization" (DLM) in practice. That means language users have a global preference for more locally grouped dependent words, whenever possible.
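
As a rough illustration of the measure itself, total dependency length can be computed by summing the word-position distance of each head-dependent pair; the three dependencies below are a hand-picked, simplified subset of a full parse, chosen only to show why the first ordering of the trash sentence comes out shorter.

```python
def dependency_length(dependencies):
    """Total dependency length: the sum of word-position distances between
    each head and its dependent."""
    return sum(abs(head - dep) for head, dep in dependencies)

# Hand-picked (and simplified) head-dependent pairs, using 1-based word positions;
# only the dependencies that differ between the two orderings matter here.
# "John threw out the old trash sitting in the kitchen."
sentence_1 = [(2, 1),   # threw <- John
              (2, 3),   # threw <- out
              (2, 6)]   # threw <- trash
# "John threw the old trash sitting in the kitchen out."
sentence_2 = [(2, 1),   # threw <- John
              (2, 10),  # threw <- out (now at the end of the sentence)
              (2, 5)]   # threw <- trash

print(dependency_length(sentence_1))  # 6
print(dependency_length(sentence_2))  # 12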

"People want words that are related to each other in a sentence to be close together," says Richard Futrell, a PhD student in the Department of Brain and Cognitive Sciences at MIT, and a lead author of a new paper detailing the results. "There is this idea that the distance between grammatically related words in a sentence should be short, as a principle."

The paper, published this week in the Proceedings of the National Academy of Sciences, suggests people modify language in this way because it makes things simpler for our minds -- as speakers, listeners, and readers.

"When I'm talking to you, and you're trying to understand what I'm saying, you have to parse it, and figure out which words are related to each other," Futrell observes. "If there is a large amount of time between one word and another related word, that means you have to hold one of those words in memory, and that can be hard to do."

While the existence of DLM had previously been posited and identified in a couple of languages, this is the largest study of its kind to date.

"It was pretty interesting, because people had really only looked at it in one or two languages," says Edward Gibson, a professor of cognitive science and co-author of the paper. "We though it was probably true [more widely], but that's pretty important to show. ... We're not showing perfect optimization, but [DLM] is a factor that's involved."

From head to tail

To conduct the study, the researchers used four large databases of sentences that have been parsed grammatically: one from Charles University in Prague, one from Google, one from the Universal Dependencies Consortium (a new group of computational linguists), and a Chinese-language database from the Linguistic Data Consortium at the University of Pennsylvania. The sentences are taken from published texts, and thus represent everyday language use.

To quantify the effect of placing related words closer to each other, the researchers compared the dependency lengths of the sentences to a couple of baselines for dependency length in each language. One baseline randomizes the distance between each "head" word in a sentence (such as "threw," above) and the "dependent" words (such as "out"). However, since some languages, including English, have relatively strict word-order rules, the researchers also used a second baseline that accounted for the effects of those word-order relationships.
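
A deliberately crude version of such a baseline, using the simplified dependencies from the sketch above and ignoring the projectivity and word-order constraints the researchers actually respected, might look like this:

```python
import random

def dependency_length(dependencies):
    return sum(abs(head - dep) for head, dep in dependencies)

def shuffled_baseline(dependencies, n_words, samples=1000, seed=0):
    """Average dependency length when word order is shuffled at random while the
    head-dependent pairs are kept fixed (far cruder than the paper's baselines)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        order = list(range(1, n_words + 1))
        rng.shuffle(order)
        position = {word: pos for pos, word in enumerate(order, start=1)}
        total += sum(abs(position[h] - position[d]) for h, d in dependencies)
    return total / samples

# Simplified dependencies for "John threw out the old trash sitting in the kitchen."
sentence_1 = [(2, 1), (2, 3), (2, 6)]
print(dependency_length(sentence_1))              # 6 in the attested word order
print(shuffled_baseline(sentence_1, n_words=10))  # roughly 11 on average when shuffled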

In both cases, Futrell, Gibson, and co-author Kyle Mahowald found, the DLM tendency exists, to varying degrees, among languages. Italian appears to be highly optimized for short sentences; German, which has some notoriously indirect sentence constructions, is far less optimized, according to the analysis.

And the researchers also discovered that "head-final" languages such as Japanese, Korean, and Turkish, where the head word comes last, show less length minimization than is typical. This could be because these languages have extensive case-markings, which denote the function of a word (whether a noun is the subject, the direct object, and so on). The case markings would thus compensate for the potential confusion of the larger dependency lengths.

"It's possible, in languages where it's really obvious from the case marking where the word fits into the sentence, that might mean it's less important to keep the dependencies local," Futrell says.

Futrell, Gibson, and Mahowald readily note that the study leaves larger questions open: Does the DLM tendency occur primarily to help the production of language, its reception, a more strictly cognitive function, or all of the above?

"It could be for the speaker, the listener, or both," Gibson says. "It's very difficult to separate those."
_________________
Reference:

EurekAlert. 2015. “How language gives your brain a break”. EurekAlert. Posted: August 3, 2015. Available online: http://www.eurekalert.org/pub_releases/2015-08/miot-hlg080315.php

Friday, July 17, 2015

Say what? How the brain separates our ability to talk and write

Out loud, someone says, "The man is catching a fish." The same person then takes pen to paper and writes, "The men is catches a fish."

Although the human ability to write evolved from our ability to speak, writing and talking are now such independent systems in the brain that someone who can't write a grammatically correct sentence may be able to say it aloud flawlessly, discovered a team led by Johns Hopkins University cognitive scientist Brenda Rapp.

In a paper published in the journal Psychological Science, Rapp's team found it's possible to damage the speaking part of the brain but leave the writing part unaffected -- and vice versa -- even when dealing with morphemes, the tiniest meaningful components of the language system including suffixes like "er," "ing" and "ed."

"Actually seeing people say one thing and -- at the same time -- write another is startling and surprising. We don't expect that we would produce different words in speech and writing," said Rapp, a professor in the Department of Cognitive Science in the university's Krieger School of Arts and Sciences. "It's as though there were two quasi-independent language systems in the brain."

The team wanted to understand how the brain organizes knowledge of written language -- reading and spelling -- since there is a genetic blueprint for spoken language but not for written language. More specifically, they wanted to know if written language was dependent on spoken language in literate adults. If it was, then one would expect to see similar errors in speech and writing. If it wasn't, one might see that people don't necessarily write what they say.

The team, which included Simon Fischer-Baum of Rice University and Michele Miozzo of Columbia University, both cognitive scientists, studied five stroke victims with aphasia, or difficulty communicating. Four of them had difficulties writing sentences with the proper suffixes, but had few problems speaking the same sentences. The last individual had the opposite problem -- trouble with speaking but unaffected writing.

The researchers showed the individuals pictures and asked them to describe the action. One person would say, "The boy is walking," but write, "the boy is walked." Or another would say, "Dave is eating an apple" and then write, "Dave is eats an apple."

The findings reveal that writing and speaking are supported by different parts of the brain -- and not just in terms of motor control in the hand and mouth, but in the high-level aspects of word construction.

"We found that the brain is not just a 'dumb' machine that knows about letters and their order, but that it is 'smart' and sophisticated and knows about word parts and how they fit together," Rapp said. "When you damage the brain, you might damage certain morphemes but not others in writing but not speaking, or vice versa."

This understanding of how the adult brain differentiates word parts could help educators as they teach children to read and write, Rapp said. It could lead to better therapies for those suffering aphasia.
_________________
Reference:

Science Daily. 2015. “Say what? How the brain separates our ability to talk and write”. Science Daily. Posted: May 5, 2015. Available online: http://www.sciencedaily.com/releases/2015/05/150505112216.htm

Friday, June 12, 2015

Mapping language in the brain

The exchange of words, speaking and listening in conversation, may seem unremarkable for most people, but communicating with others is a challenge for people who have aphasia, an impairment of language that often happens after stroke or other brain injury. Aphasia affects about 1 in 250 people, making it more common than Parkinson's Disease or cerebral palsy, and can make it difficult to return to work and to maintain social relationships. A new study published in the journal Nature Communications provides a detailed brain map of language impairments in aphasia following stroke.

"By studying language in people with aphasia, we can try to accomplish two goals at once: we can improve our clinical understanding of aphasia and get new insights into how language is organized in the mind and brain," said Daniel Mirman, PhD, an assistant professor in Drexel University's College of Arts and Sciences who was lead author of the study.

The study is part of a larger multi-site research project funded by grants from the National Institutes of Health and led by senior author Myrna Schwartz, PhD of the Moss Rehabilitation Research Institute. The researchers examined data from 99 people who had persistent language impairments after a left-hemisphere stroke. In the first part of the study, the researchers collected 17 measures of cognitive and language performance and used a statistical technique to find the common elements that underlie performance on multiple measures.

They found that spoken language impairments vary along four dimensions or factors:

  • Semantic Recognition: difficulty recognizing the meaning or relationship of concepts, such as matching related pictures or matching words to associated pictures.
  • Speech Recognition: difficulty with fine-grained speech perception, such as telling "ba" and "da" apart or determining whether two words rhyme.
  • Speech Production: difficulty planning and executing speech actions, such as repeating real and made-up words or the tendency to make speech errors like saying "girappe" for "giraffe."
  • Semantic Errors: making semantic speech errors, such as saying "zebra" instead of "giraffe," regardless of performance on other tasks that involved processing meaning.
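
In spirit, this step is a standard factor analysis: reduce the 99-patient-by-17-measure score matrix to a small number of underlying dimensions. The sketch below uses random placeholder scores and an assumed varimax rotation, so it shows only the mechanics, not the study's data or exact settings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder data: 99 patients x 17 cognitive/language measures.
rng = np.random.default_rng(0)
scores = rng.standard_normal((99, 17))

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
patient_factors = fa.fit_transform(scores)  # 99 x 4: one score per factor per patient
loadings = fa.components_                   # 4 x 17: how each measure loads on each factor
print(patient_factors.shape, loadings.shape)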

Mapping the Four Factors in the Brain

Next, the researchers determined how individual performance differences for each of these factors were associated with the locations in the brain damaged by stroke. This procedure created a four-factor lesion-symptom map of hotspots in the language-specialized left hemisphere where damage from a stroke tended to cause deficits for each specific type of language impairment. One key area was the left Sylvian fissure: speech production and speech recognition were organized as a kind of two-lane, two-way highway around the Sylvian fissure. Damage above the Sylvian fissure, in the parietal and frontal lobes, tended to cause speech production deficits; damage below the Sylvian fissure, in the temporal lobe, tended to cause speech recognition deficits. These results provide new evidence that the cortex around the Sylvian fissure houses separable neural specializations for speech recognition and production.
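
Voxel-based lesion-symptom mapping of this general kind can be caricatured in a few lines: relate damage at each location to a factor score across patients. The arrays below are random placeholders and the cutoff is arbitrary; the study's actual procedure was more elaborate.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
lesions = rng.integers(0, 2, size=(99, 5000)).astype(bool)  # patients x voxels (toy lesion masks)
speech_production = rng.standard_normal(99)                 # one factor score per patient

t_map = np.full(lesions.shape[1], np.nan)
for voxel in range(lesions.shape[1]):
    damaged = speech_production[lesions[:, voxel]]
    spared = speech_production[~lesions[:, voxel]]
    if len(damaged) >= 5 and len(spared) >= 5:  # skip rarely lesioned voxels
        t_map[voxel] = ttest_ind(damaged, spared).statistic
# Voxels with extreme t values are candidate "hotspots" where damage tracks the deficit.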

Semantic errors were most strongly associated with lesions in the left anterior temporal lobe, a location consistent with previous research findings from these researchers and several other research groups. This finding also made an important comparison point for its opposite factor -- semantic recognition, which many researchers have argued critically depends on the anterior temporal lobes. Instead, Mirman and colleagues found that semantic recognition deficits were associated with damage to an area they call a "white matter bottleneck" -- a region of convergence between multiple tracts of white matter that connect brain regions required for knowing the meanings of words, objects, actions and events.

"Semantic memory almost certainly involves a widely distributed neural system because meaning involves so many different kinds of information," said Mirman. "We think the white matter bottleneck looks important because it is a point of convergence among multiple pathways in the brain, making this area a vulnerable spot where a small amount of damage can have large functional consequences for semantic processing."

In a follow-up article soon to be published in the journal Neuropsychologia, Mirman, Schwartz and their colleagues also confirmed these findings with a re-analysis using a new and more sophisticated statistical technique for lesion-symptom mapping.

These studies provide a new perspective on diagnosing different kinds of aphasia, which can have a big impact on how clinicians think about the condition and how they approach developing treatment strategies. The research team at the Moss Rehabilitation Research Institute works closely with its clinical affiliate, the MossRehab Aphasia Center, to develop and test approaches to aphasia rehabilitation that meet the individualized, long-term goals of the patients and are informed by scientific evidence.

According to Schwartz, "A major challenge facing speech-language therapists is the wide diversity of symptoms that one sees in stroke aphasia. With this study, we took a major step towards explaining the symptom diversity in relation to a few primary underlying processes and their mosaic-like representation in the brain. These can serve as targets for new diagnostic assessments and treatment interventions."

Studying the association between patterns of brain injury and cognitive deficits is a classic approach, with roots in 19th century neurology, at the dawn of cognitive neuroscience. Mirman, Schwartz and their colleagues have scaled up this approach, both in terms of the number of participants and the number of performance measures, and combined it with 21st century brain imaging and statistical techniques. A single study may not be able to fully reveal a system as complex as language and brain, but the more we learn, the closer we get to translating basic cognitive neuroscience into effective rehabilitation strategies.
_________________
Reference:

Science Daily. 2015. “Mapping language in the brain”. Science Daily. Posted: April 16, 2015. Available online: http://www.sciencedaily.com/releases/2015/04/150416113248.htm

Sunday, May 17, 2015

After learning new words, brain sees them as pictures

When we look at a known word, our brain sees it like a picture, not a group of letters needing to be processed. That's the finding from a Georgetown University Medical Center (GUMC) study published in the Journal of Neuroscience, which shows the brain learns words quickly by tuning neurons to respond to a complete word, not parts of it.

Neurons respond differently to real words, such as turf, than to nonsense words, such as turt, showing that a small area of the brain is "holistically tuned" to recognize complete words, says the study's senior author, Maximilian Riesenhuber, PhD, who leads the GUMC Laboratory for Computational Cognitive Neuroscience.

"We are not recognizing words by quickly spelling them out or identifying parts of words, as some researchers have suggested. Instead, neurons in a small brain area remember how the whole word looks -- using what could be called a visual dictionary," he says. This small area in the brain, called the visual word form area, is found in the left side of the visual cortex, opposite from the fusiform face area on the right side, which remembers how faces look. "One area is selective for a whole face, allowing us to quickly recognize people, and the other is selective for a whole word, which helps us read quickly," Riesenhuber says.

The study asked 25 adult participants to learn a set of 150 nonsense words. The brain plasticity associated with learning was investigated with functional magnetic resonance imaging (fMRI), both before and after training.

Using a specific fMRI technique known as fMRI-rapid adaptation, the investigators found that the visual word form area changed as the participants learned the nonsense words. Before training, the neurons responded to the training words as if they were nonsense words, but after training they responded to the learned words as if they were real words. "This study is the first of its kind to show how neurons change their tuning with learning words, demonstrating the brain's plasticity," says the study's lead author, Laurie Glezer, PhD. The findings not only help reveal how the brain processes words, but also provide insights into how to help people with reading disabilities, says Riesenhuber. "For people who cannot learn words by phonetically spelling them out -- which is the usual method for teaching reading -- learning the whole word as a visual object may be a good strategy."

In fact, after the team's first groundbreaking study on the visual dictionary was published in Neuron in 2009, Riesenhuber says they were contacted by a number of people who had experienced reading difficulties, as well as teachers helping people with reading difficulties, reporting that learning words as visual objects helped a great deal. That study revealed the existence of a neural representation for whole written real words -- also known as an orthographic lexicon. The current study now shows how novel words can become incorporated into this lexicon after learning.

"The visual word form area does not care how the word sounds, just how the letters of the word look together," he says. "The fact that this kind of learning only happens in one very small part of the brain is a nice example of selective plasticity in the brain."
_________________
Reference:

Science Daily. 2015. “After learning new words, brain sees them as pictures”. Science Daily. Posted: March 24, 2015. Available online: http://www.sciencedaily.com/releases/2015/03/150324183623.htm

Saturday, January 17, 2015

Carnegie Mellon researchers identify brain regions that encode words, grammar, story

Some people say that reading "Harry Potter and the Sorcerer's Stone" taught them the importance of friends, or that easy decisions are seldom right. Carnegie Mellon University scientists used a chapter of that book to learn a different lesson: identifying what different regions of the brain are doing when people read.

Researchers from CMU's Machine Learning Department performed functional magnetic resonance imaging (fMRI) scans of eight people as they read a chapter of that Potter book. They then analyzed the scans, cubic millimeter by cubic millimeter, for every four-word segment of that chapter. The result was the first integrated computational model of reading, identifying which parts of the brain are responsible for such subprocesses as parsing sentences, determining the meaning of words and understanding relationships between characters.

As Leila Wehbe, a Ph.D. student in the Machine Learning Department, and Tom Mitchell, the department head, report today in the online journal PLOS ONE, the model was able to predict fMRI activity for novel text passages with sufficient accuracy to tell which of two different passages a person was reading with 74 percent accuracy.

"At first, we were skeptical of whether this would work at all," Mitchell said, noting that analyzing multiple subprocesses of the brain at the same time is unprecedented in cognitive neuroscience. "But it turned out amazingly well and now we have these wonderful brain maps that describe where in the brain you're thinking about a wide variety of things."

Wehbe and Mitchell said the model is still inexact, but might someday be useful in studying and diagnosing reading disorders, such as dyslexia, or to track the recovery of patients whose speech was impacted by a stroke. It also might be used by educators to identify what might be giving a student trouble when learning a foreign language.

"If I'm having trouble learning a new language, I may have a hard time figuring out exactly what I don't get," Mitchell said. "When I can't understand a sentence, I can't articulate what it is I don't understand. But a brain scan might show that the region of my brain responsible for grammar isn't activating properly, or perhaps instead I'm not understanding the individual words."

Researchers at Carnegie Mellon and elsewhere have used fMRI scans to identify activation patterns associated with particular words or phrases or even emotions. But these have always been tightly controlled experiments, with only one variable analyzed at a time. The experiments were unnatural, usually involving only single words or phrases, but the slow pace of fMRI -- one scan every two seconds -- made other approaches seem unfeasible.

Wehbe nevertheless was convinced that multiple cognitive subprocesses could be studied simultaneously while people read a compelling story in a near-normal manner. She believed that using a real text passage as an experimental stimulus would provide a rich sample of the different word properties, which could help to reveal which brain regions are associated with these different properties.

"No one falls asleep in the scanner during Leila's experiments," Mitchell said.

They devised a technique in which people see one word of a passage every half second -- or four words for every two-second fMRI scan. For each word, they identified 195 detailed features -- everything from the number of letters in the word to its part of speech. They then used a machine learning algorithm to analyze the activation of each cubic centimeter of the brain for each four-word segment.
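
In spirit, though not in its specifics, this resembles a standard fMRI encoding-model analysis: fit a regularized linear map from word features to voxel activity, then ask which of two candidate passages the held-out scans match better. The sketch below uses random placeholder data and illustrative names throughout.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_scans, n_features, n_voxels = 1200, 4 * 195, 5000  # 4 words x 195 features per 2-second scan
X = rng.standard_normal((n_scans, n_features))        # placeholder word features per scan
Y = rng.standard_normal((n_scans, n_voxels))          # placeholder fMRI volumes

train, test = slice(0, 1000), slice(1000, 1200)
model = Ridge(alpha=1.0).fit(X[train], Y[train])      # one linear map from features to voxels

def match(observed, predicted):
    """Mean voxel-wise correlation between observed and predicted activity."""
    o = observed - observed.mean(axis=0)
    p = predicted - predicted.mean(axis=0)
    return float(np.mean((o * p).sum(axis=0) /
                         (np.linalg.norm(o, axis=0) * np.linalg.norm(p, axis=0) + 1e-9)))

# Decide which of two candidate passages the held-out scans came from.
features_a = X[test]                                  # features of the passage actually shown
features_b = rng.standard_normal((200, n_features))   # features of some other passage
score_a = match(Y[test], model.predict(features_a))
score_b = match(Y[test], model.predict(features_b))
guess = "A" if score_a > score_b else "B"
print(guess)
# With random placeholders the guess is at chance; with real data this comparison
# is where the reported 74 percent accuracy comes from.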

Bit by bit, the algorithm was able to associate certain features with certain regions of the brain, Wehbe said.

"The test subjects read Chapter 9 of Sorcerer's Stone, which is about Harry's first flying lesson," she noted. "It turns out that movement of the characters -- such as when they are flying their brooms - is associated with activation in the same brain region that we use to perceive other people's motion. Similarly, the characters in the story are associated with activation in the same brain region we use to process other people's intentions."

Exactly how the brain creates these neural encodings is still a mystery, they said, but it is the beginning of understanding what the brain is doing when a person reads.

"It's sort of like a DNA fingerprint -- you may not understand all aspects of DNA's function, but it guides you in understanding cell function or development," Mitchell said. "This model of reading initially is that kind of a fingerprint."

A complementary study by Wehbe and Mitchell, presented earlier this fall at the Conference on Empirical Methods in Natural Language Processing, used magnetoencephalography (MEG) to record brain activity in subjects reading Harry Potter. MEG can record activity every millisecond, rather than every two seconds as in fMRI scanning, but can't localize activity with the precision of fMRI. Those findings suggest how words are integrated into memory -- how the brain first visually perceives a word and then begins accessing the properties of the word, and fitting it into the story context.
_________________
EurekAlert. 2015. “Carnegie Mellon researchers identify brain regions that encode words, grammar, story”. EurekAlert. Posted: November 26, 2014. Available online: http://www.eurekalert.org/pub_releases/2014-11/cmu-cmr112414.php

Saturday, November 8, 2014

How a Second Language Trains Your Brain for Math

Speaking more than one language has some advantages beyond ordering food in the local tongue—psychologists believe that bilingualism has many other positive side effects. Now, researchers have evidence connecting bilinguals’ talents to stronger so-called executive control in the brain.

Much has been made recently of growing up learning more than one language, as about one in five do. There's evidence that children who grow up speaking two languages may be more creative, that bilingualism might stave off dementia, and that bilinguals are better at tasks that involve switching attention between different objects. That led some researchers to suspect that speaking two languages might improve our brains' executive functions, the high-level circuits that control our ability to switch between tasks, among other things.

To get a little more sense of the matter, Andrea Stocco and Chantel Prat of the University of Washington screened 17 bilingual and 14 monolingual people for language proficiency and other factors and then tested them using a series of arithmetic problems. Each problem was defined by a set of operations and two inputs—divide x by two, add one to y, and subtract the second result from the first, for example—with x and y specified uniquely for each problem. First, participants ran through 40 practice problems using just two operation sets. Next, they went through another 40 problems, this time a mix of 20 new ones, each with a unique set of operations and inputs, and another 20 featuring the previously studied arithmetic operations, but with new inputs for x and y. Finally, the groups worked through 40 more problems, again a mix of familiar and novel, but this time, they completed them inside an fMRI brain scanner.

While bilinguals and monolinguals solved the problems with equal accuracy and took about the same amount of time on arithmetic with familiar sets of operations, bilinguals beat out monolinguals, on average, by about half a second on novel problems. What’s more, fMRI results showed that the basal ganglia, a brain region previously linked to learning about rewards and motor functions, responded more to novel math problems than old ones, but only in bilinguals.

That’s interesting, Stocco says, because more recent studies suggest the basal ganglia’s real role is to take information and prioritize it before passing it on to the prefrontal cortex, which then processes the information. If that’s correct, the new results suggest that learning multiple languages trains the basal ganglia to switch more efficiently between the rules and vocabulary of different languages, and these are skills it can then transfer to other domains such as arithmetic.

“Language is one of the hardest things the brain does,” Prat says, though we often realize that only when we try to learn a new language—a task that is “at least an order of magnitude” more difficult than learning the first one. But just as working on your core has benefits outside the gym, working your basal ganglia hard may be the key to promoting other cognitive skills, especially your hidden math genius.
_________________
Collins, Nathan. 2014. “How a Second Language Trains Your Brain for Math”. Pacific Standard Magazine. Posted: September 24, 2014. Available online: http://www.psmag.com/navigation/health-and-behavior/language-trains-brain-math-91289/

Saturday, October 25, 2014

Language evolution: Quicker on the uptake

The ability to acquire and creatively manipulate spoken language is unique to humans. "The genetic changes that occurred over the past 6 million years of human evolution to make this possible are largely unknown, but Foxp2 is the best candidate gene we now have," says Wolfgang Enard, Professor of Anthropology and Human Biology at LMU. In his efforts to understand the molecular biological basis of language Enard has now taken an important step forward. The results of his latest study, undertaken in collaboration with scientists at several universities, including the Massachusetts Institute of Technology in Cambridge and the Max Planck Institute for Evolutionary Anthropology, have recently appeared in the journal Proceedings of the National Academy of Sciences (PNAS).

The human homolog of Foxp2 codes for a protein – a so-called transcription factor – that regulates the activity of hundreds of genes expressed in various mammalian cell types. Individuals who carry only one functional copy of the gene instead of the usual two experience specific difficulties in learning to speak and in language comprehension. "Genetic mutations that occurred during the 6 million years since our lineage diverged from that of chimpanzees have resulted in localized alterations in two regions of the Foxp2 protein. That is quite striking when one considers that the normal mouse version differs from that found in chimps by only a single mutation, although these two species are separated by over 100 million years of evolution. The question is how the human variant of this transcription factor contributes to the process of language acquisition," says Enard.

Enard and his coworkers had previously shown that the alterations in the human gene for Foxp2 specifically affect certain regions of the brain. When the two human-specific substitutions were introduced into the mouse version of the gene, he and his team observed anatomical changes exclusively in two neuronal circuits in the basal ganglia of the mouse brain, which are involved in the control of motor function. "These circuits play a crucial role in the acquisition of habitual behaviors and other cognitive and motor capabilities," Enard explains.

Conscious and unconscious learning processes

In their latest work with the same mouse model, Enard and his collaborators found that, under certain conditions, the human version of Foxp2 actually enhances learning. "We have shown for the first time that the evolved alterations in the human gene have an effect on learning ability. The human version modifies the balance between declarative and motor neuron circuits in the brain. As a result, the mice take less time to associate a given stimulus with the appropriate response, and hence learn more rapidly," says Enard.

Learning to speak clearly requires interactions between conscious "declarative" knowledge and the unconscious effects of repetitive stimulation of particular patterns of neural activity. "As we learn, the underlying neuronal processes become automated, they are converted into routine procedures, enabling us to learn faster," Enard explains. Using various tests, the researchers demonstrated that the human-specific mutations enhance cooperative interactions between the two affected circuits in the basal ganglia of the mouse brain. "The human variant of the Foxp2 gene modulates the associative and sensorimotor nerve connections formed, as well as levels of the neurotransmitter dopamine in the basal ganglia, during the learning process. The increased ability to switch between conscious and unconscious forms of learning may play a role in the acquisition of language," Enard concludes.

Foxp2 is the only gene so far that has been shown to be directly associated with the evolution of language, and studies of Foxp2 function promise to throw new light on the evolution of the human brain. The mutation that first revealed the link with language was discovered in a kindred, many of whose members displayed severe speech difficulties, primarily as a consequence of defective control of the muscles of the larynx, the lips and the face.
_________________
EurekAlert. 2014. “Language evolution: Quicker on the uptake”. EurekAlert. Posted: September 18, 2014. Available online: http://www.eurekalert.org/pub_releases/2014-09/lm-leq091814.php

Friday, July 18, 2014

Speaking 2 languages benefits the aging brain

New research reveals that bilingualism has a positive effect on cognition later in life. Findings published in Annals of Neurology, a journal of the American Neurological Association and Child Neurology Society, show that individuals who speak two or more languages, even those who acquired the second language in adulthood, may slow down cognitive decline from aging.

Bilingualism is thought to improve cognition and delay dementia in older adults. While prior research has investigated the impact of learning more than one language, ruling out "reverse causality" has proven difficult. The crucial question is whether people improve their cognitive functions through learning new languages or whether those with better baseline cognitive functions are more likely to become bilingual.

"Our study is the first to examine whether learning a second language impacts cognitive performance later in life while controlling for childhood intelligence," says lead author Dr. Thomas Bak from the Centre for Cognitive Aging and Cognitive Epidemiology at the University of Edinburgh.

For the current study, researchers relied on data from the Lothian Birth Cohort 1936, comprising 835 native speakers of English who were born in and lived in the area of Edinburgh, Scotland. The participants took an intelligence test in 1947, at age 11, and were retested in their early 70s, between 2008 and 2010. Two hundred and sixty-two participants reported being able to communicate in at least one language other than English; of those, 195 had learned the second language before age 18 and 65 thereafter.

Findings indicate that those who spoke two or more languages had significantly better cognitive abilities than would be expected from their baseline scores. The strongest effects were seen in general intelligence and reading, and they were present in those who acquired their second language early as well as in those who acquired it late.
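To make the logic of "better than expected from baseline" concrete, here is a minimal sketch using simulated numbers rather than the Lothian data, and a simple linear fit rather than the study's actual statistical model: late-life scores are regressed on childhood scores, and the residuals of bilingual and monolingual participants are then compared.

```python
# A minimal sketch (not the study's actual analysis) of "better than expected
# from baseline": regress late-life scores on childhood scores, then compare
# the residuals of bilingual and monolingual participants. All numbers here
# are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200

childhood_iq = rng.normal(100, 15, n)        # hypothetical age-11 test scores
bilingual = rng.random(n) < 0.3              # hypothetical group labels
# Simulated age-70s scores: driven by childhood IQ plus a small bilingual boost.
late_score = 0.7 * childhood_iq + rng.normal(0, 10, n) + 3.0 * bilingual

# The "expected" late-life score is a linear function of childhood IQ.
slope, intercept = np.polyfit(childhood_iq, late_score, 1)
residual = late_score - (slope * childhood_iq + intercept)

print(f"mean residual, bilingual:   {residual[bilingual].mean():+.2f}")
print(f"mean residual, monolingual: {residual[~bilingual].mean():+.2f}")
```

A positive mean residual for the bilingual group in this toy setup corresponds to performing better at age 70-plus than childhood intelligence alone would predict.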

The Lothian Birth Cohort 1936 forms part of the Disconnected Mind project at the University of Edinburgh, funded by Age UK. The work was undertaken by the University of Edinburgh Centre for Cognitive Ageing and Cognitive Epidemiology, part of the cross-council Lifelong Health and Wellbeing Initiative (MR/K026992/1), and was made possible by funding from the Biotechnology and Biological Sciences Research Council (BBSRC) and the Medical Research Council (MRC).

"The Lothian Birth Cohort offers a unique opportunity to study the interaction between bilingualism and cognitive aging, taking into account the cognitive abilities predating the acquisition of a second language" concludes Dr. Bak. "These findings are of considerable practical relevance. Millions of people around the world acquire their second language later in life. Our study shows that bilingualism, even when acquired in adulthood, may benefit the aging brain."

After reviewing the study, Dr. Alvaro Pascual-Leone, an Associate Editor for Annals of Neurology and Professor of Medicine at Harvard Medical School in Boston, Mass., said, "The epidemiological study by Dr. Bak and colleagues provides an important first step in understanding the impact of learning a second language on the aging brain. This research paves the way for future causal studies of bilingualism and cognitive decline prevention."
________________
References:

EurekAlert. 2014. “Speaking 2 languages benefits the aging brain”. EurekAlert. Posted: June 2, 2014. Available online: http://www.eurekalert.org/pub_releases/2014-06/w-stl052914.php

Thursday, May 8, 2014

Language learning: what motivates us?

"Where's your name from?"

I wasn't expecting to be the subject of my interview with John Schumann, but the linguistics professor had picked up on my Persian surname. Talking to me from California, where he is one of the world's leading academic voices on language learning, he effortlessly puts my own Farsi to shame.

Schumann learned Farsi in Iran, where he was director of the country's Peace Corps Teaching English as a Second Language (TESL) programme. He then went into academia, becoming a professor at the University of California, Los Angeles (UCLA), where he specialises in how we learn languages and in the neurobiology of language learning.

Schumann's work, and that of his colleagues in UCLA's Neurobiology of Language Research Group, is concerned with the processes that happen within the brain when we learn a language. Such work holds the answer to the holy grail of the field: what motivates learning?

In 2009, Schumann published The Interactional Instinct: The Evolution and Acquisition of Language. The work marked a crucial development in the study of language learning.

"We've developed a theory called 'the interactional instinct'," Schumann says. "We show that children are born with a natural tendency to attach, bond and affiliate with caregivers. They essentially have a drive to become like members of the same species. The child becomes motivated to learn their primary language through this innate interactional instinct."

Could this interactional instinct, then, be the key to learning additional languages? Schumann argues that the situation is different in the case of foreign languages. "The motivation for second language acquisition varies across individuals, the talent and aptitude for it varies across individuals, and the opportunity for it varies across individuals," he says. "Therefore we don't get uniform success across second language acquisition as we do – generally – in primary language acquisition."

For more than 50 years, two terms have categorised motivation in language learning: integrative and instrumental. Though distinct, these types of motivation are closely linked.

"Integrative motivation is the motivation to learn a language in order to get to know, to be with, to interact with and perhaps become like the speakers of the target language," Schumann says. "Children have integrative motivation in acquiring their first language. Instrumental motivation alongside this characterises second language acquisition."

"Instrumental motivation is language learning for more pragmatic or practical purposes," he explains. "Such as fulfilling a school requirement, getting a job, getting a promotion in that job, or being able to deal with customers."

So then, for an aspiring language learner, which kind of motivation might see them achieve the most success? "I wouldn't argue for the supremacy of one over the other in second language acquisition," Schumann says. "In most cases of language learning motivation, we have a mixture of integrative and instrumental influences."

Closer to home, significant research into language acquisition and language learning motivation is taking place at the University of York, where the Psycholinguistics Research Group brings together researchers working on many aspects of language acquisition.

Danijela Trenkic is a member of this group and a senior lecturer in the Department of Education at York. She highlights the importance of socialisation in staying motivated to learn a language. "The social relevance and social aspects of learning seem hugely important for sustaining motivation and so determining the outcome of learning," she says.

Alongside Trenkic, student Liviana Ferrari conducted a study into language learning motivation as part of her PhD. Her research investigated what kept adult English learners of Italian motivated during a beginners' course. Though the students joined the classes for a variety of reasons and were taught by different teachers using different approaches, it quickly became apparent that maintaining motivation was closely connected to the social elements involved.

"We found that those most likely to stick with it were the ones who developed a social bond within a group," Trenkic explains. "For them, learning Italian became part of their social identity: something they do one evening a week with a group of pleasant and like-minded people. For both groups [in the study], social participation was the driving force for sustaining motivation."

Native English speakers are notoriously bad at mastering foreign languages, and this example of integrative motivation at work suggests one way learners might see more success. But English occupies a position unlike that of any other language.

Both Trenkic and Schumann believe that native English speakers are at a unique disadvantage in trying to learn other languages. The key issue in motivating English-speaking language learners is the prevalence of English as the world's lingua franca, an issue that has been explored and debated by experts for more than a decade.

"We speak natively the language that the world is trying to learn. For us, it's never clear that we need to learn a second language, and if we decide to, it's hard for us to pick which one," Schumann asserts. "It's also very difficult to maintain a conversation with a German if your German isn't good, because they'll quickly switch to English, and they're often more comfortable doing so."

"One of the main reasons there are more successful learners of English than of other languages is that there's more 'material' out there, and it's more socially relevant in the sense that people you know are likely to share your enthusiasm for the material – films and music, for example," Trenkic adds.

Does this mean that all hope is lost for native English speakers learning foreign languages? Not necessarily. Schumann argues that many European states succeed in cultivating bilingual societies through active societal support and the importance placed on language learning at a national level.

"In countries like Holland and Sweden, the society has realised they have to learn a more international language. They start teaching English very early but with no magic method," Schumann says. "The Dutch put on a lot of television in English with Dutch subtitles. In the entertainment media, they give a preference to English. Nationally, they give their communities a language they can use in the world."

English's role as a global lingua franca might make foreign language acquisition more of an effort, but the motivation – as Schumann puts it – "to get to know, to be with, to interact with and perhaps become like the speakers of [a] target language" remains intact. For English speakers, the focus must be on the cultural and social benefits of learning languages – on the symptoms of integrative motivation, which go beyond employment prospects and good grades.
________________
References:

Razavi, Lauren. 2014. “Language learning: what motivates us?”. The Guardian. Posted: March 19, 2014. Available online: http://www.theguardian.com/education/2014/mar/19/language-learning-motivation-brain-teaching

Friday, April 25, 2014

Areas of the brain process read and heard language differently

The brain processes read and heard language differently. This is the key new finding of a study at the University Department of Radiology and Nuclear Medicine at the MedUni Vienna, unveiled on the eve of the European Radiology Congress in Vienna. The researchers were able to identify the brain areas involved using speech-processing tests with the aid of functional magnetic resonance imaging (fMRI).

The results of the study, published in Frontiers in Human Neuroscience, offer the field of radiology new opportunities for the pre-operative determination of areas that need to be protected during neurosurgical procedures -- for example the removal of brain tumours -- in order to maintain certain abilities. With regard to the speech-processing parts of the brain in particular, individual mapping is especially important, since individuals differ in the location of their speech-processing centres. "This also gives radiologists a tool with which they can decide whether it makes more sense during testing to present the words in visual or audible form," says Kathrin Kolindorfer, who, together with Veronika Schöpf (both from the University Department of Radiology and Nuclear Medicine at the MedUni Vienna), headed up the study.

Personalised planning of radiological investigations

For the test design, healthy test subjects either heard simple nouns through headphones or saw them on a screen, and then had to form matching verbs from them. "Depending on whether the words were heard or seen, the neurons fired at different locations in the network," says Kolindorfer.

"Our results therefore show that the precise and personalised planning of radiological investigations is of tremendous importance," says Schöpf. Following this investigation, the best proposed solution is then drawn up within the multidisciplinary team meetings with the patient.

The study falls within the remit of two of MedUni Vienna's five research clusters, Medical Neurosciences and Medical Imaging, specialist areas in which the university is increasingly concentrating its fundamental and clinical research. The other three research clusters are Immunology, Cancer Research/Oncology and Cardiovascular Medicine.
________________
References:

Science Daily. 2014. “Areas of the brain process read and heard language differently”. Science Daily. Posted: March 7, 2014. Available online: http://www.sciencedaily.com/releases/2014/03/140307084007.htm

Monday, March 17, 2014

Revealing how the brain recognizes speech sounds

UC San Francisco researchers are reporting a detailed account of how speech sounds are identified by the human brain, offering an unprecedented insight into the basis of human language. The finding, they said, may add to our understanding of language disorders, including dyslexia.

Scientists have known for some time the location in the brain where speech sounds are interpreted, but little has been discovered about how this process works. Now, in Science Express (January 30th, 2014), the fast-tracked online version of the journal Science, the UCSF team reports that the brain does not respond to the individual sound segments known as phonemes -- such as the b sound in "boy" -- but is instead exquisitely tuned to detect simpler elements, which are known to linguists as "features."

This organization may give listeners an important advantage in interpreting speech, the researchers said, since the articulation of phonemes varies considerably across speakers, and even in individual speakers over time.

The work may add to our understanding of reading disorders, in which printed words are imperfectly mapped onto speech sounds. But because speech and language are a defining human behavior, the findings are significant in their own right, said UCSF neurosurgeon and neuroscientist Edward F. Chang, MD, senior author of the new study.

"This is a very intriguing glimpse into speech processing," said Chang, associate professor of neurological surgery and physiology. "The brain regions where speech is processed in the brain had been identified, but no one has really known how that processing happens."

Although we usually find it effortless to understand other people when they speak, parsing the speech stream is an impressive perceptual feat. Speech is a highly complex and variable acoustic signal, and our ability to instantaneously break that signal down into individual phonemes and then build those segments back up into words, sentences and meaning is a remarkable capability.

Because of this complexity, previous studies have analyzed brain responses to just a few natural or synthesized speech sounds, but the new research employed spoken natural sentences containing the complete inventory of phonemes in the English language.

To capture the very rapid brain changes involved in processing speech, the UCSF scientists gathered their data from neural recording devices that were placed directly on the surface of the brains of six patients as part of their epilepsy surgery.

The patients listened to a collection of 500 unique English sentences spoken by 400 different people while the researchers recorded from a brain area called the superior temporal gyrus (STG), which includes Wernicke's area and which previous research has shown to be involved in speech perception. The utterances contained multiple instances of every English speech sound.

Many researchers have presumed that brain cells in the STG would respond to phonemes. But the researchers found instead that regions of the STG are tuned to respond to even more elemental acoustic features that reference the particular way that speech sounds are generated from the vocal tract. "These regions are spread out over the STG," said first author Nima Mesgarani, PhD, now an assistant professor of electrical engineering at Columbia University, who did the research as a postdoctoral fellow in Chang's laboratory. "As a result, when we hear someone talk, different areas in the brain 'light up' as we hear the stream of different speech elements."

"Features," as linguists use the term, are distinctive acoustic signatures created when speakers move the lips, tongue or vocal cords. For example, consonants such as p, t, k, b and d require speakers to use the lips or tongue to obstruct air flowing from the lungs. When this occlusion is released, there is a brief burst of air, which has led linguists to categorize these sounds as "plosives." Others, such as s, z and v, are grouped together as "fricatives," because they only partially obstruct the airway, creating friction in the vocal tract.

The articulation of each plosive creates an acoustic pattern common to the entire class of these consonants, as does the turbulence created by fricatives. The Chang group found that particular regions of the STG are precisely tuned to robustly respond to these broad, shared features rather than to individual phonemes like b or z.
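The idea of responding to shared features rather than to individual phonemes can be illustrated with a toy grouping, sketched below. The phoneme symbols, the simplified feature table and the numbers standing in for neural responses are all assumptions made for the example; this is not the UCSF team's data or analysis.

```python
# A toy illustration (not the UCSF analysis) of pooling responses by shared
# articulatory features such as "plosive" and "fricative" rather than by
# individual phonemes. The feature table and response values are invented.
from collections import defaultdict

PHONEME_FEATURES = {
    "p": "plosive", "t": "plosive", "k": "plosive",
    "b": "plosive", "d": "plosive",
    "s": "fricative", "z": "fricative", "f": "fricative", "v": "fricative",
}

def pool_by_feature(responses):
    """Average per-phoneme response values within each feature class."""
    grouped = defaultdict(list)
    for phoneme, value in responses.items():
        grouped[PHONEME_FEATURES.get(phoneme, "other")].append(value)
    return {feature: sum(vals) / len(vals) for feature, vals in grouped.items()}

# Hypothetical response values: similar within a class, different across classes.
fake_responses = {"p": 0.90, "t": 0.85, "b": 0.88, "s": 0.35, "z": 0.40, "v": 0.30}
print(pool_by_feature(fake_responses))   # {'plosive': ~0.88, 'fricative': ~0.35}
```

Grouping the responses this way makes the within-class similarity explicit, which is the kind of organization the recordings revealed in the STG.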

Chang said the arrangement the team discovered in the STG is reminiscent of feature detectors in the visual system for edges and shapes, which allow us to recognize objects, like bottles, no matter which perspective we view them from. Given the variability of speech across speakers and situations, it makes sense, said co-author Keith Johnson, PhD, professor of linguistics at the University of California, Berkeley, for the brain to employ this sort of feature-based algorithm to reliably identify phonemes.

"It's the conjunctions of responses in combination that give you the higher idea of a phoneme as a complete object," Chang said. "By studying all of the speech sounds in English, we found that the brain has a systematic organization for basic sound feature units, kind of like elements in the periodic table."
________________
References:

Science Daily. 2014. “Revealing how the brain recognizes speech sounds”. Science Daily. Posted: January 30, 2014. Available online: http://www.sciencedaily.com/releases/2014/01/140130141305.htm

Sunday, December 1, 2013

Learning Dialects Shapes Brain Areas That Process Spoken Language

Using advanced imaging to visualize brain areas used for understanding language in native Japanese speakers, a new study from the RIKEN Brain Science Institute finds that the pitch-accent in words pronounced in standard Japanese activates different brain hemispheres depending on whether the listener speaks standard Japanese or one of the regional dialects.

In the study, published in the journal Brain and Language, Drs. Yutaka Sato, Reiko Mazuka and their colleagues examined whether speakers of a non-standard dialect process spoken words using the same brain areas as native speakers of the standard dialect, or instead like people who acquired a second language later in life.

When we hear language, our brain dissects the sounds to extract meaning. However, two people who speak the same language may have trouble understanding each other due to regional accents, such as Australian and American English. In some languages, such as Japanese, these regional differences are more pronounced than an accent and are called dialects.

Unlike different languages that may have major differences in grammar and vocabulary, the dialects of a language usually differ at the level of sounds and pronunciation. In Japan, in addition to the standard Japanese dialect, which uses a pitch-accent to distinguish identical words with different meanings, there are other regional dialects that do not.

Similar to the way a shift in stress can change the meaning of an English word -- as in "PROduce" (the noun) versus "proDUCE" (the verb) -- identical words in standard Japanese have different meanings depending on their pitch-accent. Each syllable of a word carries either a high or a low pitch, and the pattern of pitch-accents across a word imparts it with a particular meaning.

The experimental task was designed to test the participants' responses when they distinguish three types of word pairs: (1) words such as /ame'/ (candy) versus /kame/ (jar) that differ in one sound, (2) words such as /ame'/ (candy) versus /a'me/ (rain) that differ in their pitch-accent, and (3) words such as /ame/ (candy, spoken with declarative intonation) versus /ame?/ (candy, spoken with question intonation) that differ only in intonation.

RIKEN neuroscientists used Near Infrared Spectroscopy (NIRS) to examine whether the two brain hemispheres are activated differently in response to pitch changes embedded in a pair of words in standard and accent-less dialect speakers. This non-invasive way to visualize brain activity is based on the fact that when a brain area is active, blood supply increases locally in that area and this increase can be detected with an infrared laser.

It is known that pitch changes activate both hemispheres, whereas word meaning is preferentially processed in the left hemisphere. When the participants heard the word pair that differed in pitch-accent, /ame'/ (candy) vs /a'me/ (rain), the left hemisphere was predominantly activated in standard dialect speakers, whereas accent-less dialect speakers did not show this left-dominant activation. Thus, standard Japanese speakers use the pitch-accent to access word meaning, while accent-less dialect speakers process pitch changes in much the same way as individuals who learn a second language later in life.
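Hemispheric dominance of this kind is commonly summarized with a laterality index. The sketch below is a generic illustration of that index, not the analysis pipeline used in the RIKEN study, and the response amplitudes are hypothetical.

```python
# A generic sketch of a laterality index, LI = (L - R) / (L + R), where L and R
# are response amplitudes from left- and right-hemisphere channels. Positive
# values indicate left dominance. This is an illustration, not the analysis
# used in the RIKEN study; the amplitudes below are hypothetical.
def laterality_index(left_amplitude: float, right_amplitude: float) -> float:
    total = left_amplitude + right_amplitude
    return 0.0 if total == 0 else (left_amplitude - right_amplitude) / total

print(laterality_index(0.8, 0.3))   # clearly left-dominant (LI ~ +0.45)
print(laterality_index(0.5, 0.5))   # bilateral (LI = 0.0)
```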

The results are surprising because both groups are native Japanese speakers who are familiar with the standard dialect. "Our study reveals that an individual's language experience at a young age can shape the way languages are processed in the brain," comments Dr. Sato. "Sufficient exposure to a language at a young age may change the processing of a second language so that it is the same as that of the native language."
__________________________
References:

Science Daily. 2013. “Learning Dialects Shapes Brain Areas That Process Spoken Language”. Science Daily. Posted: October 18, 2013. Available online: http://www.sciencedaily.com/releases/2013/10/131018132054.htm

Thursday, October 10, 2013

Study finds language and tool-making skills evolved at the same time

Research by the University of Liverpool has found that the same brain activity is used for language production and making complex tools, supporting the theory that they evolved at the same time.

Researchers from the University tested the brain activity of 10 expert stone tool makers (flint knappers) as they undertook a stone tool-making task and a standard language test. They measured the participants' cerebral blood flow as they performed both tasks, using functional transcranial Doppler ultrasound (fTCD), a technique commonly used in clinical settings to test patients' language function after brain damage or before surgery.

The researchers found that brain activity patterns for the two tasks correlated, suggesting that both draw on the same areas of the brain. Language and stone tool-making are considered to be unique features of humankind that evolved over millions of years. Darwin was the first to suggest that tool-use and language may have co-evolved, because both depend on complex planning and the coordination of actions, but until now there has been little evidence to support this.
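As a rough illustration of what "correlated blood-flow patterns" means in practice, the sketch below computes the correlation between two simulated blood-flow time courses, one standing in for the tool-making task and one for the language task. The sampling rate, signal shape and noise level are assumptions made for the example, not the Liverpool group's fTCD data or pipeline.

```python
# A rough, generic illustration of correlating two blood-flow time courses,
# one standing in for the tool-making task and one for the language task.
# The sampling rate, signal shape and noise are assumptions for the example,
# not the Liverpool group's fTCD data or pipeline.
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0, 10, 250)              # first 10 seconds at 25 Hz (assumed)
shared = np.sin(2 * np.pi * 0.2 * t)     # a common underlying response shape
toolmaking = shared + 0.3 * rng.normal(size=t.size)
language = shared + 0.3 * rng.normal(size=t.size)

r = np.corrcoef(toolmaking, language)[0, 1]
print(f"correlation between task time courses: r = {r:.2f}")
```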

Dr Georg Meyer, from the University Department of Experimental Psychology, said: "This is the first study of the brain to compare complex stone tool-making directly with language.

"Our study found correlated blood-flow patterns in the first 10 seconds of undertaking both tasks. This suggests that both tasks depend on common brain areas and is consistent with theories that tool-making and language co-evolved and share common processing networks in the brain."

Dr Natalie Uomini from the University's Department of Archaeology, Classics & Egyptology, said: "Nobody has been able to measure brain activity in real time while making a stone tool. This is a first for both archaeology and psychology."
__________________________
References:

EurekAlert. 2013. “Study finds language and tool-making skills evolved at the same time”. EurekAlert. Posted: September 2, 2013. Available online: http://www.eurekalert.org/pub_releases/2013-09/uol-sfl090213.php

Monday, October 7, 2013

Learning a New Language Alters Brain Development

The age at which children learn a second language can have a significant bearing on the structure of their adult brain, according to a new joint study by the Montreal Neurological Institute and Hospital -- The Neuro at McGill University and Oxford University. The majority of people in the world learn to speak more than one language during their lifetime. Many do so with great proficiency, particularly if the languages are learned simultaneously or early in development.

The study concludes that the pattern of brain development is similar if you learn one or two languages from birth. However, learning a second language later in childhood, after gaining proficiency in the first (native) language, does in fact modify the brain's structure, specifically the brain's inferior frontal cortex: the left inferior frontal cortex became thicker and the right inferior frontal cortex became thinner. The cortex is a multi-layered mass of neurons that plays a major role in cognitive functions such as thought, language, consciousness and memory.

The study suggests that the task of acquiring a second language after infancy stimulates new neural growth and connections among neurons in ways seen in acquiring complex motor skills such as juggling. The study's authors speculate that the difficulty that some people have in learning a second language later in life could be explained at the structural level.

"The later in childhood that the second language is acquired, the greater are the changes in the inferior frontal cortex," said Dr. Denise Klein, researcher in The Neuro's Cognitive Neuroscience Unit and a lead author on the paper published in the journal Brain and Language. "Our results provide structural evidence that age of acquisition is crucial in laying down the structure for language learning."

Using a software program developed at The Neuro, the study examined MRI scans of 66 bilingual and 22 monolingual men and women living in Montreal. The work was supported by a grant from the Natural Sciences and Engineering Research Council of Canada and by an Oxford McGill Neuroscience Collaboration Pilot project.
__________________________
References:

Science Daily. 2013. “Learning a New Language Alters Brain Development”. Science Daily. Posted: August 29, 2013. Available online: http://www.sciencedaily.com/releases/2013/08/130829124351.htm

Sunday, October 6, 2013

Why your brain may work like a dictionary

Does your brain work like a dictionary? A mathematical analysis of the connections between definitions of English words has uncovered hidden structures that may resemble the way words and their meanings are represented in our heads.

"We want to know how the mental lexicon is represented in the brain," says Stevan Harnad of the University of Quebec in Montreal, Canada.

As every word in a dictionary is defined in terms of others, the knowledge needed to understand the entire lexicon is there, as long as you first know the meanings of an initial set of starter, or "grounding", words. Harnad's team reasoned that finding this minimal set of words and pinning down its structure might shed light on how human brains put language together.

The team converted each of four different English dictionaries into a mathematical structure of linked nodes known as a graph. Each node in this graph represents a word, which is linked to the other words used to define it – so "banana" might be connected to "long", "bendy", "yellow" and "fruit". These words then link to others that define them.

This enabled the team to remove all the words that don't define any others, leaving what they call a kernel. The kernel formed roughly 10 per cent of the full dictionary – though the exact percentages depended on the particular dictionary. In other words, 90 per cent of the dictionary can be defined using just the other 10 per cent.
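The kernel idea can be sketched on a toy dictionary: represent each word as a node linked to the words used in its definition, then repeatedly strip words that are not used to define any remaining word. The miniature lexicon below is invented for illustration and is far smaller and tidier than a real dictionary, so its kernel is not representative of the roughly 10 per cent reported in the study; the code is a sketch of the concept, not the team's algorithm.

```python
# A sketch of the "kernel" idea on an invented miniature dictionary: each word
# maps to the set of words used in its definition, and words that are not used
# to define any remaining word are stripped, repeatedly, until everything left
# does defining work. Not the algorithm or data from Harnad's study.
toy_dictionary = {
    "banana": {"long", "yellow", "fruit"},
    "fruit":  {"part", "plant", "eat"},
    "yellow": {"colour", "see"},
    "long":   {"big"},
    "plant":  {"thing", "grow"},
    "colour": {"see", "thing"},
    "big":    {"thing"},
    "part":   {"thing"},
    "thing":  {"see", "know"},
    "eat":    {"thing", "grow"},
    "grow":   {"big", "thing"},
    "see":    {"know", "thing"},
    "know":   {"see", "thing"},
}

def kernel(dictionary):
    """Iteratively remove words that are not used to define any remaining word."""
    words = set(dictionary)
    while True:
        used = {d for w in words for d in dictionary[w] if d in words}
        remaining = words & used
        if remaining == words:
            return words
        words = remaining

print(sorted(kernel(toy_dictionary)))   # ['know', 'see', 'thing']
```

On this toy lexicon the process bottoms out in the small circular core of "know", "see" and "thing"; a real dictionary's kernel is proportionally much larger because its definitions are far more densely interconnected.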

But even this tiny set is not the smallest number of words you need to produce the whole dictionary, as many of these words can in turn be fully defined by others in the kernel. This is known as the minimal grounding set (MGS), which Harnad explores in his most recent work. Unlike the kernel, which forms a unique set of words for each dictionary, there are many possible word combinations that can be used to create an MGS – though it is always about half the size of the kernel.

What's more, the kernel has a deeper structure. The team found that half of its words made up a core group in which every word connects to every other via a chain of definitions. The other half was divided into satellite groups that didn't link to each other, but did connect with the core.

And this structure seems to relate to meaning: words in the satellites tend to be more abstract than those in the core, and an MGS is always made up of words from both the core and satellites, suggesting both abstract and concrete words are needed to capture the full range of meaning.

So what, if anything, can this tell us about how our brains represent words and concepts? To find out, Harnad's team looked at data on how children acquire words and found a pattern: as you move in from the full dictionary towards the kernel and finally the MGS, words tend to have been acquired at a younger age, be used more frequently, and refer to more concrete concepts. "The effect gets stronger as you go deeper into the kernel," Harnad says.

That doesn't mean children learn language in this way, at least not exactly. "I don't really believe you just have to ground a certain number of things and from then on close the book on the world and do the rest by words alone," says Harnad. But the correlation does suggest that our brains may structure language somewhat like a dictionary. To learn more, the team has created an online game that asks players to define an initial word, then define the words used in those definitions. The team can then compare whether players' mental dictionaries are similar in structure to actual ones.

Phil Blunsom at the University of Oxford isn't convinced word meanings can be reduced to a chain of definitions. "It's treating words in quite a symbolic fashion that is going to lose a lot of the meaning." But Mark Pagel of the University of Reading, UK, expects the approach to lead to new insights – at least for adult brains. "This will be most useful in giving us a sense of how our minds structure meaning," he says. For example, one question raised by the relatively small size of the MGS is why we burden ourselves with so much extraneous vocabulary.
__________________________
References:

Aron, Jacob. 2013. “Why your brain may work like a dictionary”. New Scientist. Posted: August 29, 2013. Available online: http://www.newscientist.com/article/mg21929322.700-why-your-brain-may-work-like-a-dictionary.html#.Uj2z87z3tDs