
Wednesday, February 5, 2014

When being mean is actually being nice ... and when it is just being mean

It's a harsh world in here, in academia. We all know by now that academic science is not a Care Bear tea party, and apparently things are now worse than ever as far as jobs for Ph.D.s and funding go.

A brief interruption, for an ad:
Use Grammarly's plagiarism checker online because it's better to have a computer criticize you than a person.
 And now back to your regularly scheduled blogramming.

Scicurious has a great post up at Neurotic Physiology about what it is like to be out in the 'real world' and out of academia. She has some fascinating points about how academia has skewed her perspective on things, but one in particular jumped out at me: That now she has to re-learn how to take criticism.

Scicurious says:
"I remember a time when I took criticism well. I did a lot of theater and music, it was something you HAD to take well. I took it, I improved, worked harder, fixed things, and did better. Sometime during grad school, however, criticism began to paralyze me. Every critique felt like a critique of me, as a scientist. Since a scientist was what I WAS, all criticism began to feel like criticism of me, as a person. Sometimes it was indeed phrased that way. You are careless. You are not smart enough, why don't you get this?! You are not focused."
This got me thinking because, honestly, I feel exactly the opposite. I think I learned how to take criticism in grad school partly by learning how to give it.

It's for your own good! (source)

When I am editing a paper or grant for someone, I am trying to help them: the more critical I am, the better their paper or grant will be. The paper is headed to peer review, which will determine whether it gets published, and the grant is headed to a study section, which decides whether it gets funded. Both are grueling, rigorous examinations of quality and scientific merit. These review processes matter so much because published papers and funded grants are 'science currency' and will determine your future. In some cases, the funding status of a grant can determine whether a lab stays open or a PI gets tenure.

If there is a paragraph that doesn't make sense, or (gasp) a typo, it is obviously better for me to catch it than for someone important to see it and get confused or frustrated.

Understanding this concept, that constructive criticism is the nicest thing one scientist can do for another, taught me to take criticism much better than I had previously. I was one of those students who was always enough better than the other students that teachers rarely bothered to push me toward true excellence. So I was really not used to criticism, and the initial slings and arrows of graduate school did sting. At some point, though, it really sank in that these criticisms were making me better... better at everything: writing, presenting, scientific thinking.

So that is when being mean is actually being nice.

That said, I never had anyone tell me I wasn't 'smart enough,' as Scicurious describes. Just because constructive, thorough critique can sound mean while actually being nice doesn't mean there is no such thing as meanness in academia.

Sometimes being mean is really just being mean. A criticism that does not help me improve in any way is just mean. 'You are not smart enough to be a scientist' does not help anyone become a better scientist. It is a completely different kind of criticism from 'You really need to read more about X because you don't understand how X works.' Both are directed at 'you' personally, but one says you can't do it, while the other says you can do it and even suggests how.


© TheCellularScale

Monday, May 6, 2013

Everyone should learn everything.

Today I am getting on a bit of a soapbox, specifically about the things scientists should learn.
Scientists should learn everything (source)
In an ideal world everyone would be good at everything, but as you have probably noticed, this is NOT the case. Some people are good at lots of things, some are really good at specific things but terrible at others, and some unfortunate people are generally bad at a lot of things and mediocre at a few.

Recently, I've been hearing increasing calls for scientists (or scientists-in-training) to learn X, whatever X is: 'Scientists should learn art'; 'Scientists should learn creative writing'; 'Scientists should learn how to communicate with the public more clearly'; 'Scientists should learn managerial skills'; and so forth.

This bothers me for a couple of reasons.

1. Why should the scientists learn all this stuff? Why aren't people clamoring for artists to learn microbiology, or for novelists to brush up on their molecular genetics?

and

2. What is wrong with some people being good at science and NOT being good at much else?

Yes, if waving a magic wand could suddenly make scientists good communicators, artists, and managers, I wouldn't object. But these things (like science itself) take training. And god knows, graduate students already get a lot of training.

And yes, running a lab takes managerial skills, and grant writing requires clear communication and story-telling skills. But instead of requiring one person to be good at all these things, why not divide up the labor a little and have a 'lab manager' help run the lab and a 'departmental grants guru' help polish the grants?

It is really easy to say 'scientists should learn X' because...

1. there is a perception that scientists are smart and can learn things easily

and

2. it is essentially impossible to argue that things wouldn't be better if scientists were good at X. (Wouldn't it be great if all scientists were excellent public speakers? Yes, of course.)

The problem is deciding how to implement the extensive training in X that a scientist supposedly needs, and which part of the current training it should replace. Therefore I propose that all 'scientists should learn X' statements be adjusted to 'scientists should get extensive training in X rather than Y'.

© TheCellularScale

Saturday, April 27, 2013

LMAYQ: Mirror Neurons

Mirror neurons really excite people. They've been hyped as the root of empathy and essential to human nature. I've addressed some of this hype, but questions remain. So for this edition of Let Me Answer Your Questions, we will focus on mirror neurons. As always, the LMAYQ series can be found here.

Escher's Mirror (source)


1. "What do mirror neurons look like?" 
Good question, and guess what? I have addressed this directly.

2. "Do mirror neurons fire when you die?"
Another good question. Ultimately, all neurons stop firing when you die, including mirror neurons. But this doesn't happen immediately. In fact, if the death is due to something traumatic such as decapitation, the neurons might fire more when the nerves between the spinal cord and the brain are severed. But this just raises questions about the moment of death. Is it when the heart stops or the head is severed? Or is it when the neurons stop firing? Can a 'person' be dead when some of their cells are still alive?

In a lot of cellular-level research, cells are kept alive after the animal that they came from has died. Electrophysiologists keep slices of brain alive for hours to record electrical signals from their neurons. Still other projects involve culturing neurons that have been extracted from an animal. These neurons are carefully tended for days, weeks, and even months. These neurons not only stay alive in little dishes, but they can also grow and even control robots.

There are living neurons in there (source)
3. "what does it mean to have a mirrored brain?"
Well, nothing really. I have never heard the term 'mirrored brain' before, and it sounds like something that might be in a pseudo-scientific quiz along the lines of Are you left-brained or right-brained? "Do you have a mirrored brain? Take our quiz and find out!"

4. "Is love nothing but mirror cells?"
I love and hate these kinds of questions. The idea that love is nothing if it can be explained by a biological mechanism really gets me. If love is just neurons firing (mirror or otherwise), so what? Why would that make LOVE any less meaningful?

Heart Mirror (source)
On the other hand, this is a really interesting question if it is asking whether mirror neurons have anything to do with love. Again, mirror neurons are neurons that fire both when you do something and when you see someone else do that thing. Specifically, they were discovered in monkeys, whose neurons fired when the monkeys reached for something and also when they saw other hands reach for something. Then the concept got hyped up. It's easy to imagine that neurons which fire both when you do something and when you see someone else do that same thing might have something to do with 'feeling another's pain' and thus empathy. So it's not a huge step from there to think that mirror neurons could have something to do with building relationships and love.

But the speculation here is WAY beyond the science. There isn't good solid evidence for mirror neurons controlling empathy, and certainly not for being the basis of love.

 © TheCellularScale

Wednesday, April 17, 2013

Van Gogh was afraid of the moon and other lies

I remember the first time I realized just how easily false information gets spread about.

A terrifying starry night
I was in French class in high school. Our homework had been to find out one interesting fact about Van Gogh and tell it to the class. When it was my turn, I said some boring small fact that I no longer remember. My friend sitting behind me, however, had a fascinating fact: when Van Gogh was a young child, he was actually afraid of the moon.

The teacher and the class were all quite impressed, musing about how interesting that was and how the fact might be reflected in the way he painted The Starry Night. Though this fact was new to everyone, including the teacher, no one even thought to question its truth.

In fact, the teacher was so enthralled by this idea that she passed the information on to all the other French classes that day.

When talking to my friend later that day, he admitted that he had not done the assignment, and just made the 'fact' up. I was completely surprised, not only that someone had not done their homework *gasp*, but that I hadn't even thought to question whether this was true or not. 
The best lies have an element of truth (source)
Misinformation like this spreads like wildfire and is exceptionally difficult to undo. The more things you can link a piece of information to in your brain, the more true it seems, and even after you learn that it's not true, you might still inadvertently believe it or fit new ideas into the context it creates. The myth that the corpus callosum is bigger in women than in men is just one of those things that is easy to believe.

An interesting paper by Lewandowsky et al. (2012) explains how this kind of persistent misinformation is detrimental to individuals and to society, using the example of the claim that vaccines cause autism. This particular piece of misinformation is widely believed despite numerous attempts to publicize the correct information and the most recent scientific findings showing no evidence for a link between the two.

The authors of this paper give some recommendations for making the truth more vivid and effectively replacing the misinformation with new, true information. For example:
"Providing an alternative causal explanation of the event can fill the gap left behind by retracting misinformation. Studies have shown that the continued influence of misinformation can be eliminated through the provision of an alternative account that explains why the information was incorrect." Lewandowsky et al. (2012)
Misinformation can be replaced with correct information, but it takes more work to displace a 'false fact' than to get the truth out there first. It is much better to keep misinformation from spreading at all than to correct it retroactively.

This paper is also covered over at The Jury Room.


© TheCellularScale


Lewandowsky, S., Ecker, U., Seifert, C., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing Psychological Science in the Public Interest, 13 (3), 106-131 DOI: 10.1177/1529100612451018

Tuesday, March 19, 2013

What is up with the "Dopamine Project"?

Someone is trying to make me eat my words.

yum. (source)
That someone is the Dopamine Project. I am on record as saying "It is better for the public to learn simplified bite-size science morsels than to learn nothing at all." And my specific example was that it's better for people to know that 'dopamine is a reward molecule' than to not even know the term dopamine.

But sometimes things just go too far. The "Dopamine Project" is a website run by Charles Lyell with a stated 'self-help' purpose:
"The Dopamine Project was founded to foster positive change by encouraging open-minded individuals to share readily available research into the connections between dopamine and a growing list of addictive behaviors." -About Tab
Doesn't sound too terrible, right? Share research about dopamine? Sign me up! ... However, I don't see ANY research, or even references to research, on this website. In fact, it's quite wootastic. Going through the posts, you get some gems like

"A Message from you Dopamine Angel"

and

"Keeping a Dopamine Diary: Wrestling with Dopamine-Induced Ignorance"

It's all about how 'good dopamine' makes you want things you should want (food) and 'bad dopamine' makes you want things that will hurt you later (addictive drugs for example). Basically the website's message is a self-help, self-control one with the word dopamine sprinkled all over it.

The worst part is that not only does the website not include a single citation to a research paper, it actively rails against science.

"The future depends on how long it takes scientists to discover what they haven’t been interested in discovering so far. Rather than wait for the mainstream scientists and media to get started, we’re reaching out to anyone interested in fostering positive change by raising dopamine awareness." -Welcome to the Dopamine Project
Trust me, scientists want to understand dopamine. At the IBAGS conference half the talks related to dopamine, and there is a conference completely devoted to dopamine coming up in May. The specific action of dopamine is really really complex, and scientists are working really really hard to unravel its intricacies. This Charles Lyell guy is pulling out a typical woo card, implying that he knows what scientists don't want you to know.
"If the thought of fostering positive change through dopamine awareness triggers a shot of dopamine that brings a smile to your face, this might be your chance to be among the top .001% who go on record as the first to understand and apply what we know about dopamine to make a difference."  -Welcome to the Dopamine Project
 He also seems to feel personally attacked by Steven Poole's New Statesman article on Neurobollocks.

Is the 'Dopamine Project' ridiculous and unscientific? Absolutely.

Is it harmful and dangerous to people? ... Honestly, I'm not sure. Reading it certainly makes me want to throw up, but there are worse things for pseudoscience to encourage than self-control. I'm not sure if I should devour my earlier words quite yet.

© TheCellularScale

To read more on the confusing line between science and pseudoscience, see Michael Shermer's Scientific American article:


Shermer M (2011). What is pseudoscience? Scientific American, 305 (3) PMID: 21870452

Friday, March 15, 2013

Is it 'Important' or is it 'valuable'?

We've recently discussed dopamine as a reward prediction signal. But that is really just the start of the complicated dopamine story.

Dopamine's role in reward and punishment (by the hiking artist)
Some research groups have also found that dopamine neurons respond to aversive stimuli, like an air puff to the face or an electric shock. This finding seems to be completely incompatible with the idea that dopamine is a signal for reward.

Luckily some scientists took the time to try to resolve this discrepancy. Bromberg-Martin, Matsumoto, and Hikosaka (2010) have written an excellent review paper explaining that some dopamine neurons do code for value (reward), but other dopamine neurons code for salience (importance).

Differential Dopamine Coding (Bromberg-Martin et al., 2010 Fig 4)

When researchers record from a value-coding dopamine neuron, the neuron responds to the reward and actually reduces its firing in response to the air puff. This makes sense as a 'dopamine = good' signal.

However, when researchers record from a salience-coding dopamine neuron, the neuron responds roughly equally to the good thing (reward) and the bad thing (air puff). This is confusing if you think 'dopamine = good', but makes sense if you think 'dopamine = important'. When the cue comes on (a light or tone signaling that a reward or an air puff is coming next), these dopamine neurons fire if that cue means something.
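
To make the distinction concrete, here is a minimal toy sketch in Python (my own illustration, with made-up numbers, not code or data from the paper) of how the two kinds of dopamine neurons respond to reward-predicting and airpuff-predicting cues:

# Toy response patterns (arbitrary "change in firing rate" units) for the two
# dopamine neuron types described by Bromberg-Martin et al. (2010).
# The numbers are invented for illustration.

def value_neuron(cue):
    """Value-coding neuron: fires more for good outcomes, less for bad ones."""
    return {"reward_cue": +1.0, "airpuff_cue": -0.5, "neutral_cue": 0.0}[cue]

def salience_neuron(cue):
    """Salience-coding neuron: fires more for anything important, good or bad."""
    return {"reward_cue": +1.0, "airpuff_cue": +1.0, "neutral_cue": 0.0}[cue]

for cue in ("reward_cue", "airpuff_cue", "neutral_cue"):
    print(f"{cue:12}  value: {value_neuron(cue):+.1f}  salience: {salience_neuron(cue):+.1f}")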


Instead of just being confused about why dopamine would sometimes code for value and sometimes for salience, Hikosaka's group showed that these two types of neurons are actually separate populations that even seem to be separated in space.
(Bromberg-Martin et al., 2010 Fig 7B)
The value dopamine neurons are more ventral in the (monkey) brain, while the salience dopamine neurons are more dorsolateral. Importantly, these two populations of neurons project to slightly different parts of the striatum and receive signals from different parts of the brain. The review suggests that the salience-coding neurons receive their input from the central nucleus of the amygdala, while the value-coding neurons receive their input from the lateral habenula-RMTg pathway.

The important thing here is that dopamine does not do just one thing to the brain. It doesn't just tell the rest of the brain 'yay, you won!' or 'you want that' etc... It says different things depending on different specific conditions. 

Dopamine doesn't 'mean' anything on its own; the cell it comes from and the cell it goes to determine what it does. It certainly can't be classified as the 'love molecule.'

 © TheCellularScale


Bromberg-Martin ES, Matsumoto M, & Hikosaka O (2010). Dopamine in motivational control: rewarding, aversive, and alerting. Neuron, 68 (5), 815-34 PMID: 21144997


Sunday, February 10, 2013

Why scientists should play games

I have just finished reading Jane McGonigal's book Reality is Broken: Why games make us better and how they can change the world. It is a fascinating book which presents a strong case for games (including video games) doing good in the world.

Reality is Broken by Jane McGonigal

I have to admit, part of me wanted to read this book to make me feel better about my own video game habit. It certainly helped solidify the vague ideas I had about what good they might be doing me.

Specifically, the book made me think that scientists of all people might benefit greatly from playing games. There is one major reason why:

Games make you more resistant to failure

If there is one thing scientists need in order to persist in their research, it's resilience in the face of failure. If you didn't know this already, just start following some 'life in academia' bloggers on Twitter. Failure is a staple of scientific life.

Just yesterday I awoke to a small grant rejection. I started thinking about just how many things I had applied for during my (still new) scientific career, and just what proportion of those applications had resulted in rejections. I tallied it up on a chart (similar to a failure C.V.), and discovered that for about every 3.5 things I have applied for, only one was successful. This includes grant applications, travel fellowships, paper submissions and re-submissions, and miscellaneous things like applying to be an SfN Neuroblogger. (I did not include abstract submissions or applications to graduate school.) I actually think this is a relatively good ratio, and I expect this ratio to get worse in the future, because the competition for the things I am applying for will be even tougher.

Part of the reason I wanted to calculate my success/attempt ratio was to see how many things I had actually applied for. I was glad that the list was long, and that I applied for lots of things, even if it means that my 'ratio' is the worse for it. I would posit that having a good success/attempt ratio is not really that great if you only ever apply for a few things that are easy to get.
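
For anyone who wants to run this kind of tally themselves, here is a minimal sketch of the arithmetic (the entries below are hypothetical placeholders, not my actual application list):

# Hypothetical failure-CV tally: each entry is (item, was_successful).
applications = [
    ("small grant", False),
    ("travel fellowship", True),
    ("paper submission", False),
    ("paper re-submission", False),
    ("SfN Neuroblogger application", True),
    ("large grant", False),
    ("conference travel award", False),
]

attempts = len(applications)
successes = sum(1 for _, ok in applications if ok)
print(f"{successes} successes out of {attempts} attempts: "
      f"one success per {attempts / successes:.1f} applications")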

In science, you will fail; there is absolutely no scientist EVER who hasn't been rejected from something.

So back to games. Reality is Broken explains that games teach you to persist in the face of failure, and that games increase your optimism.

"Learning to stay urgently optimistic in the face of failure is an important emotional strength that we can learn in games and apply to our real lives. When we're energized by failure, we develop emotional stamina. And emotional stamina makes it possible for us to hang on longer, to do much harder work, and to tackle more complex challenges. We need this kind of optimism in order to thrive as human beings." -Reality is Broken, chapter 4

When I think of my own resistance to failure (which is decent, but could be better), I think of my time spent learning from games that failure is not the end of the world. Ever since I repeatedly failed to jump Mario over the first Goomba, video games were teaching me to try again, and again, and again.

Mario and Goomba level 1. (source)
Jane McGonigal brings up Tetris, one of the most popular video games of all time. Tetris is a game with no possible outcome except failure. You keep playing until you lose, and yet the game is immensely fun and ultimately rewarding. Each time you fail you want to try again, and you feel that you will probably do better next time.

In summary, games reward persistence and desensitize you to failure. When you play video games you learn implicitly that trying again is worth it and that failing isn't the end of the world. These skills are great to have in life and are essential to have in an academic career.

Reality is Broken lists 13 other ways that games 'fix' reality. Some of these fixes are about personal betterment (like persistence in the face of failure), but some of these fixes are about how games can ultimately change the larger reality. Games that combat global warming, for example, or games like Fold-It that actually further scientific progress and human knowledge. Whether you already play games or not, you can get something out of this book.

A nice addition to this book is the appendix "Practical advice for gamers," in which Jane McGonigal lays out some guidelines for getting the most out of games. For example, one rule is to never play more than 21 hours in a week. While video games have benefits, there are problems that can result from compulsive video game play, and you shouldn't think that you are doing something healthy if you play video games for 50 hours a week and completely ignore reality. The idea is that playing games can help you function in reality. If you never venture into reality, you won't make any use of the benefits that the games might have given you.


© TheCellularScale

Here are further reviews of Reality is Broken:

Ferguson, C. (2011). Reality is broken, and the video game research field along with it. PsycCRITIQUES, 56 (48) DOI: 10.1037/a0026131
 
Farhangi, S. (2012). Reality is broken to be rebuilt: how a gamer’s mindset can show science educators new ways of contribution to science and world? Cultural Studies of Science Education, 7 (4), 1037-1044 DOI: 10.1007/s11422-012-9426-y

Wednesday, January 30, 2013

Intuition or a sense of Smell?

I've long been fascinated by the idea that those feelings often attributed to 'intuition' or 'following your gut' might occur physiologically in the form of odor cues that we don't consciously register.

Intuition or Olfactuation? (source)
An example of this might be when you can just 'tell something is wrong' in a situation and decide to leave, and later find out that something bad happened that evening. These sorts of stories are often used as evidence that people have psychic powers of some kind, and are equally often dismissed as just a coincidence.

But another possibility is that humans communicate through scents more than we realize. Maybe you could actually 'smell something is wrong' rather than supernaturally 'tell something is wrong' in the above hypothetical situation.

Researchers in the Netherlands tested whether the feelings of 'disgust' and 'fear' could be communicated through smell. They had guys watch scary parts of horror movies or disgusting graphic parts of MTV's Jackass while wearing 'sweat pads' in their armpits.

Who knew this would contribute to SCIENCE?

They then had female volunteers smell the sweat pads and measured their facial motions to see if the expressions they made were more like fear or disgust.

Importantly, the protocol was double-blind: neither the experimenters handing out the sweat-pad vials nor the participants had any idea which 'emotion' had been sweated into them.

And they found what they thought they would find: the 'fear muscle' (medial frontalis) was most active in the women smelling the sweat of the horror-watching men, and the 'disgust muscle' (levator labii) was most active in the women smelling the sweat of the Jackass-watching men. In the authors' words (stats removed for readability):
"Moreover, fear chemosignals generated an expression of fear and not disgust, disgust chemosignals induced a facial configuration of disgust rather than fear, and neither fear, nor disgust, were evoked in the control condition" de Groot et al. (2012)
So at very very close range (like nose in armpit), it seems that emotional signals can be transmitted through scent.
The smell of fear (source)

A quick side note: the scent in this study was created by men and smelled by women. I wonder if this specific gender combination is necessary for the scent-based communication. You would think men smelling men and women smelling women would have the same effect, but they did not investigate other combinations.

If you learn anything from this, let it be not to go see a disgusting movie on a first date; you might end up repulsing each other with your 'disgust sweat' later.

© TheCellularScale

de Groot JH, Smeets MA, Kaldewaij A, Duijndam MJ, & Semin GR (2012). Chemosignals communicate human emotions. Psychological science, 23 (11), 1417-24 PMID: 23019141

Friday, January 11, 2013

On Selling and Over-Selling Science

Science!!! (source)
Science communication is a persistent topic of ... well, communication. Who is responsible for communicating science? How can science best be communicated to the public? What can we do to stop sensationalist and misleading articles from controlling which findings are generally accepted in the public sphere?

All these questions rise up in science blogs and on Twitter and then fade back into the background. Then something happens and a flurry of posts about communicating science floats to the surface again.

I have decided to join this party and have written a Guest Editorial in the Biological Bulletin.

It's called "On Selling and Over-Selling Science" and is about trying to find that perfect balance between communicating a scientific finding accurately and accessibly.

I'd love to hear new opinions on this. So feel free to follow the link and leave a comment about it here. 

© TheCellularScale

I was not able to use my 'blogging name' like Neuroskeptic was, so here is the article and my identity along with it:

Evans RC (2012). Guest editorial on selling and over-selling science. The Biological bulletin, 223 (3), 257-8 PMID: 23264470


Sunday, November 4, 2012

Ketamine for depression via neurogenesis?

A lot of fuss has been made recently about the street drug "Special K" (ketamine). It's basically an anesthetic used in labs and veterinary offices to tranquilize mice, rats, cats, and (famously) horses, but recently it's been lauded as a newer, faster anti-depressant.

Ketamine: from the dealer or from the doctor? (image source)
The possibility that it might have near immediate anti-depressant effects on humans has been around for a little while, but the concept is picking up steam as new research finds mechanisms for how it might actually work in depressed patients. (I briefly mention one new study in an SfN neuroblogging post. )

An emerging theory is that depression is not so much a chemical imbalance as it is a loss of neurons. Thus the cure for depression is not restoring the balance of serotonin or dopamine, but restoring the growth of new neurons. Some suggest that this is how classic anti-depressants (like Zoloft) work, by fixing the neuron atrophy problem. This could also explain why these anti-depressants take so long to work, though I have expressed skepticism about this hypothesis.

So the question is: does ketamine cause the growth of new neurons, help in their maturation, or prevent neuronal atrophy? Ketamine is an NMDA receptor antagonist, so it inhibits synaptic transmission. It doesn't inhibit all synaptic transmission the way deadly poisons (tetrodotoxin, for example) do, but it inhibits enough to change something in the brain. Even knowing something about NMDA receptors, I found it hard to conceive of a connection between blocking them and neuronal growth.

A nice review by Duman and Li (2012) spells it out for me, explaining new research that links ketamine with the growth of new synapses.

Duman and Li 2012 figure 3
The idea is that ketamine blocks the NMDA receptors on GABAergic (inhibitory) neurons, so there is less inhibition and more glutamate. When there is more glutamate, there is more BDNF (brain-derived neurotrophic factor). BDNF helps synapses grow by triggering a cascade of events (via mTOR) that causes more AMPA receptors to be inserted into the synapse, making the synapse stronger, more stable, and more mature.

The authors cite their previous Li et al., 2010 Science paper, explaining that when they block mTOR with the drug rapamycin, both the effects of ketamine on new spine growth and its anti-depressant effects disappear. However, this is a study in rats, and assessing the depressed state of a rat is as tricky as assessing a rat's post-traumatic stress. So the claim here isn't so much that ketamine causes neurogenesis, but that it could help new neurons become synaptically mature, and thus functionally useful. (Carter et al. are investigating this further.)

As shiny and interesting as this is, I am not quite sold on it. I don't see how the NMDA antagonist is going to inhibit the inhibitory neurons more than the excitatory neurons, and I would love to see research showing how ketamine causes glutamate accumulation.

And as far as actually using it as a treatment for depression goes, there are some serious side effects. Ketamine is a hallucinogenic street drug which can cause a schizophrenia-like state. Therefore, it seems unlikely that ketamine itself will ever be prescribed as an anti-depressant, but new research could reveal (or synthesize) other molecules that activate mTOR directly or somehow bypass the hallucinogenic aspect of ketamine.

For more, see some skeptical and critical analyses of human ketamine studies.

© TheCellularScale

Duman RS, & Li N (2012). A neurotrophic hypothesis of depression: role of synaptogenesis in the actions of NMDA receptor antagonists. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 367 (1601), 2475-84 PMID: 22826346

Li N, Lee B, Liu RJ, Banasr M, Dwyer JM, Iwata M, Li XY, Aghajanian G, & Duman RS (2010). mTOR-dependent synapse formation underlies the rapid antidepressant effects of NMDA antagonists. Science (New York, N.Y.), 329 (5994), 959-64 PMID: 20724638

Tuesday, October 23, 2012

Can you turn a rat gay?

What does it take to 'turn a rat gay'? This question may have crossed your mind, but a group in Mexico actually did the experiments to test it.

A weak first attempt (source)
Triana-Del Rio et al., 2011 used a co-habitation conditioning paradigm to see if they could condition a male rat to prefer a male partner.
The basic paradigm was to house the 'experimental rat' with the 'stimulus rat' (who was scented with almond) for a full day every 4 days. Under these conditions, the experimental rat did not show any preference for the almond-scented stimulus rat later on. However, if the experimental rat was injected with quinpirole, which stimulates dopamine D2 receptors, he did develop a preference for the almond-scented rat. This preference was not sexual in nature: preference was measured by time spent together, and these guys just wanted to hang out.

Triana-Del Rio et al., 2011 (figure 1)

The authors then did a separate experiment where instead of using 'sexually naive' rats as the stimulus rats, they used 'sexually expert' rats. They created these Casanovas by riling them up with very 'receptive' female rats at least 10 separate times, which they refer to as 'sexual training.' When the sexually expert rats were used as stimulus rats, the experimental rats developed a sexual preference when injected with quinpirole. These experimental rats strongly preferred their almond-scented partners, as measured by time spent together, mounting, and 'genital investigation.'

So what does this mean? First of all, even the most drastic change was not permanent; partner preference dissipated after 45 days. And as I mentioned in my SfN summary, this protocol did not have the same effect in female rats. I do not think that the researchers here 'turned a rat gay.' While they did succeed in biasing the preference of the experimental rat toward the guy he was housed with, they certainly didn't change the rat's sexual preference in a deep or universal way. There is no evidence that the experimental rat preferred males in general over females, just that he really liked the one guy he was hanging out with.

So this study does not really tell us anything about the biological basis of homosexuality, and it certainly does not tell us how to make a gay bomb. The most interesting implication of this study is the involvement of the D2 dopamine receptor, which may play a role in pair-bonding. I would be interested to see what some ex vivo cellular studies would reveal about this treatment. Does quinpirole application cause a change in the number or location of D2 dopamine receptors, or in the activity of the neuron?


© TheCellularScale


Triana-Del Rio R, Montero-Domínguez F, Cibrian-Llanderal T, Tecamachaltzi-Silvaran MB, Garcia LI, Manzo J, Hernandez ME, & Coria-Avila GA (2011). Same-sex cohabitation under the effects of quinpirole induces a conditioned socio-sexual partner preference in males, but not in female rats. Pharmacology, biochemistry, and behavior, 99 (4), 604-13 PMID: 21704064
 

Monday, September 3, 2012

The Optimism Bias in Science

"I have always believed that scientific research is another domain where a form of optimism is essential to success: I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers"     -Daniel Kahneman

The Brain: Irrational, Positive, Deceptive
I just finished reading The Optimism Bias by Tali Sharot.  The book explains that most people have an "Optimism Bias," a tendency to over-estimate how smart, good-looking, and capable they are as well as the likelihood that good things will happen to them. 

Sharot points out that in a 1981 study (Svenson, O.), 93% of participants rated themselves as being in the top 50% (i.e., 'above average') for driving ability. Other studies have shown that this "Better than Average Effect" applies to many aspects of our self-image. Think about yourself right now... do you think you are smarter than average? Better looking than average? Nicer than average? You probably do. And even though it is logically impossible for 93% of people to be above the 50% mark, you probably still think that you are actually better/smarter/nicer.

So even though you think you are smarter than most people, the reality is that most people think they are smarter than most people.

Similarly, people underestimate the likelihood that bad things will happen in their lives and overestimate the likelihood that good things will happen. Ask any newly engaged couple what they think their chances of divorce are, and if they are not too offended by such a rude question, they will probably rate the chance of divorce as very low or even zero. However, reality says that they actually have a 41-50% chance of divorce.

divorce cake (source)

But as Sharot claims, this optimistic skew to reality is actually beneficial. Which newly engaged couple would actually get married if they fully realized and believed that their chances of staying married were no better than the chance of flipping heads or tails on a coin? The irrational belief that we are somehow exceptional is motivating. Sharot even suggests that the optimism bias is so prevalent in our species and culture that people who realistically evaluate their situation are not the norm, and may even be clinically depressed. 

While The Optimism Bias has a great premise and recounts some exciting research, I thought the book in general was way too long.  Some very simple concepts (like that people have an optimism bias) were repeated over and over and over, and some (interesting) concepts were introduced that had pretty much nothing to do with optimism (like that memories are unreliable). 

The book didn't really teach me much about how the brain works, but it did set me thinking about how a strong optimism bias is an essential trait in academia. As the Kahneman quote above states, most scientists face critique after critique and failure after failure. Successes are few and far between, and the same sense of realism that would prevent many a marriage would also prevent a potential scientist from entering a Ph.D. program. Who would even apply to graduate school if they fully understood and believed the dismal statistics about finishing Ph.D. programs and the subsequent tenure-track job search?

We have to believe that we are special, that our work is crucial, and that our contributions are significant.  No scientist will succeed if they get their peer-reviewed paper back from a journal and immediately think: 'yep, the third reviewer is correct, this work is flawed and has little impact, I should quit and become a cab driver.' A near-delusional sense of significance and an "it's not me, it's them" attitude is required to stand by your ideas and abilities in the face of these kinds of criticisms. 


© TheCellularScale


Sharot T (2011). The optimism bias. Current biology : CB, 21 (23) PMID: 22153158


Sunday, July 22, 2012

Do small men think like big women?

Endless research has been conducted on the neurological differences between women and men. However, a study out of the University of Florida explains that almost all of the anatomical differences previously reported can be accounted for simply by adjusting for total brain size.

(Lady Gaga is an excellent source of exaggerated imagery)
Leonard et al. (2008) recruited 100 men and 100 women and imaged their brains. They showed that men generally have larger brains than women (not surprising: men generally have larger bodies than women).

Leonard et al., 2008 Figure 2

But what is fascinating is that when comparing specific regions, the gender of the brain mattered less than the size of the whole brain. 

In other words if you had a small male brain, it would look almost indistinguishable from a large female brain.  (See their Figure 3)


What I find most interesting in this paper is that it refutes the much purported "Corpus Callosum Myth".

Corpus Callosum (source)
The corpus callosum is the main white matter connection between the two hemispheres of the brain. The "Corpus Callosum Myth" is that female brains have larger corpus callosa than male brains.

I have to admit that I am not immune from gender bias.  When I first heard that women had larger corpus callosa than men, my immediate thoughts were towards how that could make sense.  I thought "ah, well then maybe that is why women are better at seeing the big picture or at multi-tasking" and other thoughts along those lines.  

What I definitely did NOT think was "I bet that was a small, poorly controlled study which did not even reach statistical significance."  Well as it turns out, I should have.  DeLacoste-Utamsing and Holloway (1982) analyzed only 14 brains (9 male and 5 female), and found that

"The average area of the posterior fifth of the corpus callosum was larger in females than in males (p=0.08)" DeLacoste-Utamsing and Holloway (1982) p. 1431

A result hardly worth speculating upon.

Leonard et al., 2008 also found some corpus callosum differences between the genders, but when they graphed the size of the corpus callosum against the size of the whole brain...

Figure 3B (female brains white circles, male brains filled squares)
They found a continuum. The difference in size between the female and male corpus callosum is entirely due to the difference in size of the female and male brain as a whole. 
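
Here is a minimal sketch of the logic behind that kind of analysis, using simulated numbers rather than Leonard et al.'s data: if corpus callosum area simply scales with total brain volume, a raw male/female comparison shows a 'difference' that disappears once brain volume is regressed out.

import numpy as np

rng = np.random.default_rng(0)
n = 100

# Simulated data (made-up means): men have larger total brain volumes on average,
# and corpus callosum area scales with brain volume by the same rule for everyone.
brain_vol_f = rng.normal(1130, 90, n)                   # cm^3
brain_vol_m = rng.normal(1260, 90, n)
cc_area_f = 0.55 * brain_vol_f + rng.normal(0, 30, n)   # mm^2
cc_area_m = 0.55 * brain_vol_m + rng.normal(0, 30, n)

# Raw comparison shows an apparent "sex difference" in corpus callosum area...
print("mean CC area, F vs M:", cc_area_f.mean(), cc_area_m.mean())

# ...but after regressing out total brain volume, the residuals overlap.
vol = np.concatenate([brain_vol_f, brain_vol_m])
cc = np.concatenate([cc_area_f, cc_area_m])
slope, intercept = np.polyfit(vol, cc, 1)
resid = cc - (slope * vol + intercept)
print("residual means, F vs M:", resid[:n].mean(), resid[n:].mean())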

As with Von Economo neurons, maybe brains of different sizes work similarly, but have to be shaped differently to do so.

So rather than wildly speculating that women are better at this or that because they have stronger connections between their hemispheres, we should put our efforts into discovering evolutionary reasons why small men would be better multi-taskers than large men.


© TheCellularScale
 
UPDATE (7/23/12): I just want to be perfectly clear. I don't actually think that small men think like women.  The whole point of this post is to show that popular studies explaining that 'men and women's brains are different' may sound like they make sense, but there is often another explanation. In this case: if you are going to claim that the size of the corpus callosum means that women are better multi-taskers, then you have to ALSO claim that small men are better multi-taskers. And that large women are worse multi-taskers.  (These seem like totally ridiculous claims to me, but feel free to construct an experiment to test these hypotheses). 

For more on gender and gender differences (or lack thereof) in the brain, see my previous posts:

DeLacoste-Utamsing C, & Holloway RL (1982). Sexual dimorphism in the human corpus callosum. Science (New York, N.Y.), 216 (4553), 1431-2 PMID: 7089533

Leonard CM, Towler S, Welcome S, Halderman LK, Otto R, Eckert MA, & Chiarello C (2008). Size matters: cerebral volume influences sex differences in neuroanatomy. Cerebral cortex (New York, N.Y. : 1991), 18 (12), 2920-31 PMID: 18440950

Sunday, July 1, 2012

A little stress goes a long way

.... toward preventing PTSD symptoms.
Post Traumatic Stress Disorder

This may surprise you, as the S in PTSD stands for STRESS. How on earth could stress prevent it? But you heard correctly. A new paper by Rao et al. (2012) in Biological Psychiatry shows that a little stress in the form of glucocorticoids, given prior to an acute stress event, actually prevents PTSD-like symptoms in rats.

First of all, how do you tell if a rat has PTSD?
This study uses two measures: one behavioral and one cellular.

To test anxiety in a rat, you can put it on an Elevated Plus Maze (EPM). Rats don't love heights, and they do love dark corners. But they are also somewhat naturally curious. The EPM makes use of these rat characteristics to test how anxious the rat is.
Elevated Plus Maze (source)
The EPM has four arms, two are open (but far enough off the ground that the rat can't just step off the maze) and two are enclosed with walls. Normal rats tend to explore all the arms of the maze roughly equally, but anxious rats tend to strongly avoid the open arms. The amount of time spent in the open arm area is a generally accepted measure of how anxious the rat is.

An earlier paper from the same lab found that rats who had undergone the single stress event were more anxious (spent less time in the open arms of the EPM) 10 days after the event, but NOT 1 day after the event. The single stress event and the delayed symptom onset are why this study is more relevant to PTSD than to chronic stress.

Rao et al., 2012 Fig 4B
As interesting as the behavioral experiments are, the cellular-level experiments are where it gets really cool (The Cellular Scale is not biased or anything). They used the Golgi stain to visualize neurons in the amygdala. They measured how long the dendrites were and also how many spines they had on them. (Spines are the little protrusions that come off dendrites to receive synaptic inputs.)


They found that the stressed rats had more dendritic spines on the amygdala neurons than the non-stressed rats.  Not only that, but this increase in spine density was apparent 10 days after the stress event, but not 1 day after.  


You might think dendritic spine growth is a good thing that likely signifies synaptic plasticity and pathway strengthening... but remember, this is the amygdala, a structure critical for FEAR learning, so more spines here may not be beneficial. Stronger pathways onto these amygdala neurons likely mean that they fire more easily.


Now that we understand how PTSD is measured in a rat, we can move on to how they 'cured' it in this paper.  

Rao et al. found that when they injected vehicle (a fancy science term for 'nothing' or 'placebo' or 'saline') into the rat 30 minutes before the 2-hour stress event, the rat no longer showed either the increase in anxiety (fewer open-arm entries on the EPM) or the increase in dendritic spine density.

Pretty weird, considering they were injecting vehicle prior to the stress event.  How could inactive saline (essentially nothing) cure PTSD symptoms?

They figured out that the actual injection process was stressing the rat out a little bit. When animals (including humans) are stressed, they release glucocorticoid hormones: cortisol in humans and corticosterone in rats.


Rao et al., 2012 Fig 1C,D,E

They found that the 2 hour stress event caused a huge rise in corticosterone (right and left panels), while the injection (vehicle) alone caused a small rise (middle panel). 

Because they were injecting nothing, they hypothesized that the corticosterone produced by the small stress of being injected was somehow protecting against the large 2 hour stress event.

The rest of their paper is basically confirming this. They add corticosterone to the water of the rats and this also prevents the PTSD-like symptoms.  They find that all their manipulations isolating the corticosterone confirm that this is what is protecting the rats from the delayed impact of the stress event.  

Interestingly there is evidence that 'small stress' can help prevent 'big stress' in humans too. They cite clinical studies reporting that intensive care unit (ICU) patients who receive injections of stress-level cortisol during treatment are less likely to develop ICU-related PTSD symptoms.

It is a puzzling paradox at the moment, but the next step is to figure out how exactly this little stress can reduce big stress.


Epilogue: 

I was lucky enough to see Dr. Chattarji, the principal investigator of this study, give a talk at a conference a few months ago. And one interesting piece of information that you can get from a talk, but will never read in a paper, is how the scientists originally stumbled upon their finding. In this case, Chattarji's lab didn't start their study by injecting vehicle. They were actually testing a real drug that they thought might help alleviate PTSD. They had a beautiful result showing that when you injected "drug X" before the 2-hour stress event, you eliminated the PTSD symptoms. The natural conclusion is to think that "drug X" is a new cure for PTSD.

 But therein lies the importance of the control group. To control for any effects of simply injecting the rat, they injected vehicle. When they saw that the vehicle prevented the PTSD symptoms just like the actual drug, they were crushed! This is the ultimate demise of an experiment.  The control group shows the same thing as the drug group, which means that the drug does not work! Luckily they were flexible and smart enough to investigate what they did see, that the injection alone could protect against the PTSD symptoms.

Also, if someone would like to explain the difference between cortisol and corticosterone, please do. I clearly do not have a full understanding here.

© TheCellularScale




Rao RP, Anilkumar S, McEwen BS, & Chattarji S (2012). Glucocorticoids Protect Against the Delayed Behavioral and Cellular Effects of Acute Stress on the Amygdala. Biological psychiatry PMID: 22572034

Mitra R, Jadhav S, McEwen BS, Vyas A, & Chattarji S (2005). Stress duration modulates the spatiotemporal patterns of spine formation in the basolateral amygdala. Proceedings of the National Academy of Sciences of the United States of America, 102 (26), 9371-6 PMID: 15967994