My Reflection on Obsessive Compulsive Disorder

 Anonymous | 30 JAN 2019

‘You’ve got that compulsive… bloody… disorder.’ – my dad

Words from my dad. I was keeping him awake most nights, pacing the hallway outside his room. The thing was, I couldn’t go to bed without walking in and out of my room until it felt… right. First, I would make my bed and shake the duvet five times. If I lost concentration or had a certain thought, I had to shake the duvet five times more. Then I would go to the loo and back to my room – that’s where the pacing came in. Closing the door had to be done correctly too, a certain number of times or until it felt safe to move on. All of this was just too loud for my dad.

On a good night, this took around an hour. On a bad night, it could take two and a half. It was like there were two parts to me, one part was saying ‘this is so stupid, go to bed,’ and the OCD part was saying ‘walk in and out of your room again, do it again, do it again, do it again or something awful will happen.’

Obsessive Compulsive Disorder (OCD) affects 1.2% of the population – 12 in every 1000 people. It is a condition characterised by intrusive thoughts (obsessions) and behaviours aimed at ‘neutralising’ these thoughts (compulsions). Everyone has intrusive thoughts. They are those unwanted, nasty thoughts or images that infiltrate your consciousness without warning. If you are driving and you stop at a red light, you might think ‘what would happen if I took my foot off the brake and hit all of those people?’ When people without OCD experience intrusive thoughts, they are quickly dismissed and left unexamined. In OCD the thoughts dominate, causing a great deal of anxiety.

‘Your dad will attack you.’ That was my main thought. Most nights, just thinking the word ‘attack’ while I was in the middle of a compulsive behaviour meant I had to start the whole thing again. There was nothing in the physical world to give that thought any meaning. Intrusive thoughts are like that, rarely grounded in reality. As a sufferer of OCD you worry that because you thought something dreadful, it might actually happen.

I was about 8 when I first started showing symptoms of OCD. At 13 my dad said the ‘bloody… disorder’ thing. I didn’t think that could describe me – my actions were about getting into bed, not washing my hands. That’s what OCD is, right? I had no idea what it really was until I was around 15, when I learned more about mental health disorders. At 17 I went to the doctor’s, made some awkward jokes, and blurted out that I thought I had OCD.

In OCD there is often a vicious cycle: intrusive thoughts cause anxiety, which is (somewhat) reduced by compulsions. In an attempt to reduce future anxiety, the compulsions are repeated. The compulsion also reinforces the thought. You never get to learn that nothing bad happens when you don’t act on the thought, so the thought’s grip gets stronger.

A common compulsion is checking things; checking doors are locked, checking you turned the hob off. Everyone checks, but it becomes extreme in OCD with people checking many times. There is evidence that the more frequently someone checks something, the less they are able to remember what state it was in (Radomsky, Gilchrist & Dussault, 2006).  Researchers tasked healthy individuals with checking whether a virtual ‘hob’ was on, either twice or twenty times. Those who checked the hob twenty times were less certain whether it had been on last time they checked. This shows how compulsions in OCD perpetuate the disorder. They feel like they’re making it better, but actually they’re making it worse.

What can help someone with OCD? There are a number of therapies, and even medications, that can help. In particular, cognitive behavioural therapy has been shown to be very effective. It educates sufferers about the disorder, teaches them to recognise obsessive thoughts, and helps them change the compulsive behaviours. Understanding my disorder was the thing that helped me most. I’m now the best I’ve been in years, and I’m glad. There was a stage where I had to tap each of my limbs 16 times before I got into bed. Imagine doing that when you want to sleep over at your boyfriend’s house.

In order to teach myself that my thoughts had no determinative effect on the outside world, every time I had an intrusive thought I would force myself to have another thought – ‘in 10 seconds your hand will explode.’ I would count to 10. My hand never exploded.

Edited by Sophie Waldron, Jon Fagg, Josh Stevenson-Hoare & Oly Bartley

Can’t or Won’t – An Introduction To Apathy.

 Megan Jackson | 19 DEC 2018

Often, when a person hears the word apathy, an image comes to mind: a glassy-eyed teenager scrolling vacantly through their phone while their parent looks on in despair. While comical, this image does not reflect what apathy really is: a complex symptom with real clinical significance.

In 1956, a study was published describing a group of Americans released from Chinese prison camps following the Korean War [1]. As a reaction to the severe stress they had suffered during their time in prison, the men were observed to be ‘listless’, ‘indifferent’ and ‘lacking emotion’. The scientists decided to call this pattern of behaviours apathy. At that point, however, there was no formal way to measure apathy. It was acknowledged that it could manifest in varying degrees, but that was the extent of it. It was over 30 years before apathy was given a proper definition and recognised as a true clinical construct.

As time went on, scientists noticed that apathy doesn’t just arise in times of extreme stress, like time in a prison camp, but also appears in a variety of clinical disorders. A proper definition and a way of assessing apathy were needed. In 1990, Robert Marin defined apathy as ‘a loss of motivation not attributable to current emotional distress, cognitive impairment, or diminished level of consciousness’. As this is a bit of a mouthful, it was summarised as ‘a measurable reduction in goal-directed behaviour’. This definition makes it easy to imagine an individual who no longer cares about, likes, or wants anything and therefore does nothing. However, this is not always the case. There are different subtypes of apathy, each involving different brain regions and thought processes:

Cognitive – in which the individual does not have the cognitive ability to put a plan into action. This may be due to disruption to the dorsolateral prefrontal cortex.

Emotional-affective – in which the individual can’t link their behaviour or the behaviour of others with emotions. This may be due to disruption to the orbital-medial prefrontal cortex.

Auto-activation – in which the individual can no longer self-initiate actions. This may be due to disruption to parts of the globus pallidus.

It’s much easier to picture how the different types of apathy affect behaviour with an example. Take Bob. Bob has apathy, yet Bob likes cake. When somebody asks Bob whether he would like cake he responds with a yes. However, Bob makes no move to go and get it. Bob still likes cake, but he can no longer process how to obtain that cake. He has cognitive apathy. In another example, Bob may want cake but does not want to get up and get it. However, if someone told him to, he probably would go and get it. This is auto-activation apathy, which is the most severe and the most common kind. If Bob could no longer associate cake with the feeling of happiness or pleasure, then he would have emotional-affective apathy.

So, whatever subtype of apathy Bob has, he doesn’t get his cake. A shame, but this seems a little trivial. Should we really care about apathy? Absolutely! Imagine not being able to get out of your chair and do the things you once loved. Imagine not being able to feel emotions the way you used to. Love, joy, interest, humour – all muted. Think of the impact it would have on your family and friends. It severely diminishes quality of life, and greatly increases caregiver burden. It is extremely common in people with neurodegenerative diseases like dementia [2], psychiatric disorders like schizophrenia [3], and in people who’ve had a stroke [4]. It can even occur in otherwise healthy individuals.

Elderly people are particularly at risk, though scientists haven’t yet figured out why. Could it be altered brain chemistry? Inevitable degeneration of important brain areas? One potential explanation is that apathy is caused by a disruption to the body clock. Every person has a body clock – a tiny area of the brain called the suprachiasmatic nucleus – which controls the daily rhythms of our bodies: when we wake up, when we go to sleep, and a load of other really important physiological processes such as hormone release. Disruption to the body clock can cause a whole host of health problems, from diabetes to psychiatric disorders like depression. Elderly people have disrupted daily rhythms compared to young, healthy people, and it is possible that the prevalence of apathy in the elderly is explained by this disrupted body clock. Much more research is needed to find out if this is indeed the case – and why!

Figuring out how or why apathy develops is a vital step in developing a treatment for it, and it’s important that we do. While apathy is often a symptom rather than a disease by itself, there’s now a greater emphasis on treating neurological disorders symptom by symptom rather than as a whole, because the underlying disease mechanisms are so complex. So, developing a treatment for apathy will benefit a whole host of people, from the elderly population, to people suffering from a wide range of neurological disorders.

Edited by Sam Berry & Chiara Casella

References:

  • Chase, T.N. (2011). Apathy in neuropsychiatric disease: diagnosis, pathophysiology, and treatment. Neurotox Res, 19(2):266-278.
  • Chow, T.W. (2009). Apathy Symptom Profile and Behavioral Associations in Frontotemporal Dementia vs. Alzheimer’s Disease. Arch Neurol, 66(7):888-893.
  • Gillette, M.U. (1999). Suprachiasmatic nucleus: the brain’s circadian clock. Recent Prog Horm Res, 54:33-58.
  • Strassman, H.D. (1956). A Prisoner of War Syndrome: Apathy as a Reaction to Severe Stress. Am J Psychiatry, 112(12):998-1003.
  • van Dalen, J.W. (2013). Poststroke apathy. Stroke, 44:851-860.

The Story of Adult Human Neurogenesis or: How I learned to Stop Worrying and Love The Bomb

Dr Niels Haan | 5 DEC 2018

Recently, the debate about adult human neurogenesis seems to be just a dame short of a panto. Do adult humans form new neurons? Oh no, they don’t! Oh yes, they do! There are not many fields where people debate the very existence of the phenomenon they are studying. What do nuclear bombs have to do with it? We’ll come to that later.

What is the big deal?

For many decades, neuroscience dogma was that once the brain was formed after childhood, that was it. All you could do was lose cells. This was first challenged by Joseph Altman in the 1960s, when he showed that new neurons were formed in adult rodents, but his work was largely ignored at the time. A second wave of evidence came along in the late 1980s and 90s, starting in songbirds, and later came the confirmation that adult neurogenesis does take place in rodents.

In the years that followed, it was shown that rodent adult neurogenesis takes place in two main areas of the brain: the wall of the lateral ventricles, and the hippocampus. The real importance lies in the function of these new neurons. In rodents, these cells are involved in things like discrimination of similar memories, spatial navigation, and certain forms of fear and anxiety.

Obviously, the search for adult neurogenesis in humans started pretty much immediately, but decades later we still haven’t really reached a conclusion.

Why is there so much controversy?

To definitively show adult neurogenesis, you need to be able to show that any given neuron was born in the adult animal or human, rather than in the womb or during childhood. This means using a way to show cell division, as the birth of a neuron requires a stem cell to divide and produce at least one daughter cell that ends up being a neuron.

In animals, this is straightforward. Cell division requires the copying of a cell’s DNA. You inject a substance that gets built into new DNA, detect this later once the new neuron has matured, and say “this cell was born after the injection”. To test what these cells are used for, we tend to reduce the numbers of stem cells with chemicals or genetic tricks, and see what the effect on the behaviour of the animal is.

However, injecting chemicals into the brains of humans tends to be frowned upon. Similarly, killing off all their stem cells and doing behavioural tests doesn’t tend to go down well with volunteers. So, we can’t use our standard methods. What we’re left with then is to detect certain proteins that are only found in stem cells or newly born neurons, to show they are present in the adult brain. However, that’s easier said than done.

Although there are proteins that mainly mark things like dividing cells, stem cells, or newly born neurons, these are not necessarily only found in those cells. All these markers have been found time and again in the human hippocampus. However, because they are not always unique to stem cells and newly born neurons, there is endless debate on which proteins – or indeed which combinations of proteins – to look at, and what it means when cells have them.

What is the evidence?

Dozens, if not hundreds of papers have tried to address this question, and I don’t have the space – or the energy – to discuss all of them. Let’s look at some influential and recent studies that made the headlines, to show how radically different some people are thinking about this subject.

One of the most influential studies in the field came from the lab of Jonas Frisen in 2013. They used a clever way to get round the problem of detecting dividing cells. When DNA is copied to make a new cell, a lot of carbon is used. Nuclear bomb testing in the 50s and 60s introduced small amounts of (harmless) radioactive carbon into the atmosphere, and so eventually into the DNA of cells born from then on. The end of nuclear testing has led to a slow decline in that radioactive carbon. So, by measuring how much radioactive carbon is in a cell’s DNA, you can determine when the cell was born. Frisen and his group did just this, and showed that people have neurons born throughout their lives in their hippocampus, with about 700 new cells being born per day.
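To make the logic of that birthdating trick concrete, here is a toy sketch in Python: compare the radioactive carbon level measured in a cell’s DNA against a record of atmospheric levels over the years, and the closest match gives the approximate birth year. The years and values below are invented placeholders for illustration, not real calibration data.

```python
# Toy sketch of bomb-pulse birthdating: match the carbon-14 level in a
# cell's DNA to the year with the same atmospheric level.
# The "atmospheric" numbers below are invented placeholders, NOT real data.
atmospheric_c14 = {
    1965: 0.70, 1975: 0.35, 1985: 0.20,
    1995: 0.11, 2005: 0.06, 2015: 0.03,
}

def estimate_birth_year(dna_c14_excess):
    """Return the year whose (made-up) atmospheric 14C level is closest to
    the level measured in the cell's DNA, i.e. the approximate birth year."""
    return min(atmospheric_c14,
               key=lambda year: abs(atmospheric_c14[year] - dna_c14_excess))

# A cell whose DNA carries a 14C excess of ~0.12 matches 1995 in this toy
# record - long after the donor's childhood, so it was born in adulthood.
print(estimate_birth_year(0.12))  # -> 1995
```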

This didn’t convince everyone though. That was shown earlier this year, when a widely publicised paper came out in Nature. This group did not do any birthdating of cells, but looked for the characteristic markers of stem cells and immature neurons in brains from people of a wide range of ages. According to them, the only way to reliably detect a newly born neuron is to look for two different markers on the same cell. They could only find one of the markers in adults, so by this measure, they found no new neurons after childhood.

The very next month, a very similar paper came out, using essentially identical methods, and showed the exact opposite. They did find new neurons in brains across a wide range of ages, and when counting them, found very similar rates of neurogenesis to those Frisen had reported with his completely different methods.

So, who is right?

That depends on who you ask, and on the question you’re asking (isn’t science fun?). The majority of studies have shown evidence for some neurogenesis in one form or another. How convincing this evidence is comes down to seemingly petty technical arguments, and to the biases of whoever you ask. The biggest questions are about which markers to use to find the cells, as shown by the two studies mentioned above, and nobody agrees on this yet.

Barring some spectacular technical breakthrough that gives us the same sorts of tools in humans as we have in animals, this debate will undoubtedly keep going for some years yet. The bigger question, which we haven’t addressed at all yet, is whether these adult-born cells actually do anything in humans. That’s the next big debate to have…

Edited by Lauren Revie, Chiara Casella, and Rae Pass

Reading Without Seeing

Melissa Wright | 13 NOV 2018

When the seeing brain goes blind

In the late 90s, a blind 63-year-old woman was admitted to a university hospital emergency room. After complaining to co-workers of light-headedness that morning, she had collapsed and become unresponsive. Within the 48 hours following her admission, after what was found to be a bilateral occipital stroke, she recovered with no apparent motor or neurological problems. It was only when she tried to read that an extraordinary impairment became apparent: despite the damage occurring only in the occipital lobe, which is typically devoted to vision, she had completely and specifically lost the ability to read Braille. Braille is a tactile substitute for written letters, consisting of raised dots that can be felt with the fingertips. Before this, she had been a proficient Braille reader with both hands, a skill she had used extensively during her university degree and career in radio (Hamilton, Keenan, Catala, & Pascual-Leone, 2000). So what happened?

The Visual Brain

It is estimated that around 50% of the primate cortex is devoted to visual functions (Van Essen, Anderson, & Felleman, 1992), with the primary visual areas located right at the back of the brain within the occipital lobe (also known as the visual cortex). Visual information from the retina first enters the cortex here, in an area named V1. Within V1, this information is organised to reflect the outside world, with neighbouring neurons responding to neighbouring parts of the visual field. This map (called a retinotopic map) is biased towards the central visual field (the most important part!) and is so accurate that researchers have even managed to work out which letters a participant is reading, simply by looking at their brain activity (Polimeni, Fischl, Greve, & Wald, 2010). These retinotopic maps are found in most visual areas in some form. As information is passed forward in the brain, the role of these visual areas becomes more complex, from motion processing, to face recognition, to visual attention. Even basic visual actions, like finding a friend in a crowd, require a hugely complex chain of processes. With so much of the cortex devoted to processing visual information, what happens when visual input from the retina never occurs? Cases such as the one above, where a person is blind, suggest that the visual cortex is put to use in a whole new way.

Cortical Changes

In sighted individuals, lexical and phonological reading processes activate frontal and parietal-temporal areas (e.g. Rumsey et al., 1997), while touch involves the somatosensory cortex. It was thought that Braille reading activated these areas, causing some reorganisation of the somatosensory cortex. However, as the case above suggests, this does not seem to be the whole story (Burton et al., 2002). Remember, in this case the damage was to the occipital lobe, which is normally involved in vision – but as the lady was born blind, it had never received any visual information. Although you might expect that damage to this area would not be a problem for someone who is blind, it turned out instead to impair abilities associated with language and touch! This seriously went against what scientists had understood about brains and their specialised areas, and had to be investigated.

Neuroimaging, such as functional Magnetic Resonance Imaging (fMRI), allows us to look inside the brain and see what area is activated when a person performs a certain task. Using this technique, researchers have found that in early blind individuals, large portions of the visual cortex are recruited when reading Braille (H. Burton et al., 2002). This activity was less apparent for those who became blind in their later years, though was still present, and it wasn’t there at all for sighted subjects. That the late-blind individuals had less activity in this region seems to show that as we get older and as brain regions become more experienced, they become less adaptable to change. A point to note however – fMRI works by correlating increases in blood oxygen (which suggests an increase in energy demand and therefore neural activity) with a task, such as Braille reading. As any good scientist will tell you, correlation doesn’t equal causation! Perhaps those who cannot see are still somehow ‘visualising’ the characters?

So is there any other evidence that the visual areas can change their primary function? Researchers have found that temporarily disrupting the neural activity at the back of the brain (using a nifty technique called Transcranial Magnetic Stimulation) can impair Braille reading, or even induce tactile sensations on the reading fingertips (e.g. Kupers et al., 2007; Ptito et al., 2008)!

Other fMRI studies have investigated the recruitment of the occipital lobe in non-visual tasks and found it also occurs in a variety of other domains, such as hearing (e.g. Burton, 2003) and working memory (Burton, Sinclair, & Dixit, 2010). This reorganisation seems to have a functional benefit: researchers have found that the amount of reorganisation during a verbal working memory task correlates with performance (Amedi, Raz, Pianka, Malach, & Zohary, 2003). It has also been reported that blind individuals can perform better on tasks such as sound localisation (though not quite as well as Marvel’s Daredevil!) (Nilsson & Schenkman, 2016).

But Is It Reorganisation?

This is an awesome example of the brain’s ability to change and adapt, even in areas that seem so devoted to a single modality. How exactly this happens is still unknown, and could fill several reviews on its own! One possibility is that neuronal inputs from other areas grow and invade the occipital lobe, although this is difficult to test non-invasively in humans because we can’t look at individual neurons with an MRI scan. The fact that much more occipital lobe activity is seen in early-blind than late-blind individuals (e.g. Burton et al., 2002) suggests that whatever is changing is much more accessible to a developing brain. However, findings show that some reorganisation can still occur in late-blind individuals, and even in sighted individuals who undergo prolonged blindfolding or sensory training (Merabet et al., 2008). This rapid adaptation suggests that the mechanism involved may make use of pre-existing multi-sensory connections that multiply and strengthen following sensory deprivation.

Cases of vision restoration in later life are rare, but one such example came from a humanitarian project in India, which found and helped a person called SK (Mandavilli, 2006). SK was born with aphakia, a rare condition in which the eye develops without a lens. He grew up near-blind, until the age of 29 when project workers gave him corrective lenses. 29 years with nearly no vision! Conventional wisdom said there was no way his visual cortex could have developed properly, having missed the often-cited critical period that occurs during early development. Indeed, his acuity (the ability to see detail, tested with those letter charts at the optometrist’s) showed an initial improvement after correction, but it did not improve further over time, suggesting his visual cortex was not adapting to the new input. However, the researchers also looked at other forms of vision, and there they found exciting improvements. For example, when shown a cow, he was unable to integrate the patches of black and white into a whole until it moved. After 18 months, he was able to recognise such objects even without movement. While SK had not been completely without visual input (he had still been able to detect light and movement), this suggests that perhaps some parts of the visual cortex are more susceptible to vision restoration. Or perhaps multi-sensory areas, which seem able to reorganise during visual deprivation, are more flexible in regaining vision?

So Much Left to Find Out!

From this whistle-stop tour, the most obvious conclusion is that the brain is amazing and can show huge amounts of plasticity in the face of input deprivation (see the recent report of a boy missing the majority of his visual cortex who can still see well enough to play football and video games; https://tinyurl.com/yboqjzlx). The question of what exactly happens in the brain when it’s deprived of visual input is incredibly broad. Why do those blind in later life have visual hallucinations (see Charles Bonnet Syndrome)? Can we influence this plasticity? What of deaf or deaf-blind individuals? Within my PhD, I am currently investigating how the cortex reacts to another eye-related disease, glaucoma. If you want to read more on this fascinating and broad topic, check out these reviews by Merabet and Pascual (2010), Ricciardi et al. (2014) or Proulx (2013).

Edited by Chiara Casella & Sam Berry

References:

  • Amedi, A., Raz, N., Pianka, P., Malach, R., & Zohary, E. (2003). Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nature Neuroscience, 6(7), 758–766. https://doi.org/10.1038/nn1072
  • Burton, H. (2003). Visual cortex activity in early and late blind people. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 23(10), 4005–4011.
  • Burton, H., Sinclair, R. J., & Dixit, S. (2010). Working memory for vibrotactile frequencies: Comparison of cortical activity in blind and sighted individuals. Human Brain Mapping. https://doi.org/10.1002/hbm.20966
  • Burton, H., Snyder, A. Z., Conturo, T. E., Akbudak, E., Ollinger, J. M., & Raichle, M. E. (2002). Adaptive Changes in Early and Late Blind: A fMRI Study of Braille Reading. Journal of Neurophysiology, 87(1), 589–607. https://doi.org/10.1152/jn.00285.2001
  • Fine, I., Wade, A. R., Brewer, A. A., May, M. G., Goodman, D. F., Boynton, G. M., … MacLeod, D. I. A. (2003). Long-term deprivation affects visual perception and cortex. Nature Neuroscience, 6(9), 915–916. https://doi.org/10.1038/nn1102
  • Hamilton, R., Keenan, J. P., Catala, M., & Pascual-Leone, A. (2000). Alexia for Braille following bilateral occipital stroke in an early blind woman. Neuroreport, 11(2), 237–240.
  • Kupers, R., Pappens, M., de Noordhout, A. M., Schoenen, J., Ptito, M., & Fumal, A. (2007). rTMS of the occipital cortex abolishes Braille reading and repetition priming in blind subjects. Neurology, 68(9), 691–693. https://doi.org/10.1212/01.wnl.0000255958.60530.11
  • Mandavilli, A. (2006). Look and learn: Visual neuroscience. Nature, 441(7091), 271–272. https://doi.org/10.1038/441271a
  • Merabet, L. B., Hamilton, R., Schlaug, G., Swisher, J. D., Kiriakopoulos, E. T., Pitskel, N. B., … Pascual-Leone, A. (2008). Rapid and Reversible Recruitment of Early Visual Cortex for Touch. PLoS ONE, 3(8), e3046. https://doi.org/10.1371/journal.pone.0003046
  • Merabet, L. B., & Pascual-Leone, A. (2010). Neural reorganization following sensory loss: the opportunity of change. Nature Reviews Neuroscience, 11(1), 44–52. https://doi.org/10.1038/nrn2758
  • Nilsson, M. E., & Schenkman, B. N. (2016). Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences. Hearing Research, 332, 223–232. https://doi.org/10.1016/j.heares.2015.09.012
  • Park, H.-J., Lee, J. D., Kim, E. Y., Park, B., Oh, M.-K., Lee, S., & Kim, J.-J. (2009). Morphological alterations in the congenital blind based on the analysis of cortical thickness and surface area. NeuroImage, 47(1), 98–106. https://doi.org/10.1016/j.neuroimage.2009.03.076
  • Polimeni, J. R., Fischl, B., Greve, D. N., & Wald, L. L. (2010). Laminar analysis of 7T BOLD using an imposed spatial activation pattern in human V1. NeuroImage, 52(4), 1334–1346. https://doi.org/10.1016/j.neuroimage.2010.05.005
  • Proulx, M. (2013, February). Blindness: remapping the brain and the restoration of vision. Retrieved 28 March 2018, from http://www.apa.org/science/about/psa/2013/02/blindness.aspx
  • Ptito, M., Fumal, A., de Noordhout, A. M., Schoenen, J., Gjedde, A., & Kupers, R. (2008). TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers. Experimental Brain Research, 184(2), 193–200. https://doi.org/10.1007/s00221-007-1091-0
  • Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neuroscience & Biobehavioral Reviews, 41, 64–77. https://doi.org/10.1016/j.neubiorev.2013.10.006
  • Rumsey, J. M., Horwitz, B., Donohue, B. C., Nace, K., Maisog, J. M., & Andreason, P. (1997). Phonological and orthographic components of word recognition. A PET-rCBF study. Brain: A Journal of Neurology, 120 ( Pt 5), 739–759.
  • Van Essen, D. C., Anderson, C. H., & Felleman, D. J. (1992). Information processing in the primate visual system: an integrated systems perspective. Science (New York, N.Y.), 255(5043), 419–423.

It’s Fright Night!

Lucy Lewis | 31 OCT 2018

Halloween is the time for all things scary, and the best thing for scaring people is to put on a good horror movie, preferably something with creepy dark figures appearing suddenly after a long, tense build up. But what is it about these films that gets us so freaked out?

Fear is an integral part of our survival mechanisms: it is part of the ‘fight or flight’ response that prepares us to combat or escape a potential threat. Deep down, you know the demons from Insidious aren’t real, and the clown from It isn’t going to appear behind the sofa, and yet we jump at their appearance on screen, hide behind pillows when the music goes tense and even continue thinking about them once the film is finished.


Scene from ‘A Nightmare on Elm Street’ (1984).

This is because horror movies tap into this innate survival mechanism by presenting us with a situation or character that looks like a threat. When we see something potentially threatening, this activates a structure in the brain known as the amygdala, which flags the threat as something to be fearful of (Adolphs, 2013). The amygdala acts as a messenger, informing several other areas of the brain that a threat is present, resulting in multiple physiological changes, including the release of adrenaline (Steimer, 2002), which causes the typical bodily responses we experience during fear, e.g. an increased heart rate.

Even though we know the movie isn’t real, the amygdala takes over our logical reasoning to produce these responses, a phenomenon known as the “amygdala hijack” (Goleman, 1995).  This is when the amygdala stimulates these physiological changes to prepare us against this fearful threat before the prefrontal cortex of the brain, responsible for executive functions, can assess the situation and regulate our reactions accordingly (Steimer, 2002).

In fact, studies looking into fear conditioning, which is when associations are made between a neutral stimulus like a clicking sound and an aversive stimulus like an electric shock (Gilmartin et al 2015), have shown that inactivation of the medial prefrontal cortex results in a deficit in the ability to disconnect this association, suggesting that the prefrontal cortex is necessary for the extinction of a fear memory (Marek et al, 2013).


A schematic diagram of the fear pathway, adapted from a VideoBlocks image.

You’ve probably noticed that you can remember something that scared you far better than what you had for breakfast. Forming memories of the events that cause us to feel fear is important for survival, because it helps us to avoid those threats in the future. Evidence suggests that the stress hormones released following exposure to fearful events contribute to the consolidation of memories. For example, Okuda et al. (2004) gave rats an injection of corticosterone (the major stress hormone in rodents) immediately after a single training session of a task requiring the animals to remember objects presented to them in an arena. They showed that, 24 hours later, the animals that received this injection showed significantly greater recognition of these objects than controls, suggesting that this stress hormone increased their ability to remember the objects around them. Therefore, the scarier the film, the better we remember it.

So, don’t worry if you are a big scaredy-cat when it comes to horror films – it’s perfectly natural! It’s just your brain’s response to a threatening situation, and you can’t control it no matter how hard you try. But if you are planning a film night full of ghosts and ghouls this Halloween, go and get yourself a big pillow to hide behind and prepare for that good old amygdala to get into action!

Edited By Lauren Revie & Sophie Waldron

References:

  • Adolphs, R. 2013. The Biology of Fear. Current Biology, 23:R79-R93.
  • Gilmartin, M. R., Balderston, N. L., Helmstetter, F. J. 2015. Prefrontal cortical regulation of fear learning. Trends in Neurosciences, 37:455-464.
  • Goleman, D. 1995. Emotional Intelligence: Why It Can Matter More Than IQ. New York: Bantam Books.
  • Marek, R., Strobel, C., Bredy, T. W., Sah, P. 2013. The amygdala and medial prefrontal cortex: partners in the fear circuit. Journal of Physiology, 591: 2381-2391.
  • McIntyre, C. K. and Roozendaal, B. 2007. Adrenal Stress Hormones and Enhanced Memory for Emotionally Arousing Experiences. In: Bermudez-Rattoni, F. Neural Plasticity and Memory: From Genes to Brain Imaging. Boca Raton (FL):CRC Press/Taylor & Francis, Chapter 13.
  • Okuda, S., Roozendaal, B., McGaugh, J. L. 2004. Glucocorticoid effects on object recognition memory require training-associated emotional arousal. Proceedings of the National Academy of Sciences, USA, 101:853-858.
  • Steimer, T. 2002. The biology of fear- and anxiety-related behaviors. Dialogues in Clinical Neuroscience, 4:231-249.
  • VideoBlocks, AeMaster still photo from: 3D Brain Rotation with alpha. https://www.videoblocks.com/video/3d-brain-rotation-with-alpha-4u23dfufgik3slexf

Square eyes and a fried brain, or a secret cognitive enhancer – how do video games affect our brain?

Shireene Kalbassi | 3 SEP 2018

If, like me, you spent your childhood surrounded by Gameboys and computer games, you have probably heard warnings from your parents that your eyes will turn square, and that your brain will turn to mush. While we can safely say that we are not suffering from an epidemic of square-eyed youths, it is less clear what gaming is doing to our brain.

In support of worried parents all around the world, there is a disorder associated with gaming. Internet gaming disorder is defined as an addictive behaviour, characterised by an uncontrollable urge to play video games. In 2013, internet gaming disorder was added to the Diagnostic and Statistical Manual of Mental Disorders (DSM), with a footnote saying that more research on the matter is needed (American Psychiatric Association, 2013). Similarly, in 2018, the World Health Organization (WHO) added internet gaming disorder to the section ‘disorders due to addictive behaviours’ (World Health Organization, 2018).

There is evidence to suggest that internet gaming does lead to changes in brain regions associated with addiction. Structurally, individuals diagnosed with internet gaming disorder show an increase in the size of a brain region known as the striatum, a region associated with pleasure, motivation, and drug addiction (Cai et al., 2016; Robbins & Everitt, 2002). The brains of those with internet gaming disorder also show altered responses to stimuli related to gaming. In one study, two groups of participants were assessed: one with internet gaming addiction, and one without. All the participants with internet gaming disorder were addicted to the popular multiplayer online role-playing game World of Warcraft. The participants were shown a mixture of visual cues, some associated with World of Warcraft and others neutral, whilst their brain activity was measured using fMRI. When shown visual cues relating to gaming, the participants with internet gaming disorder showed increased activation of brain regions associated with drug addiction, including the striatum and the prefrontal cortex. The activation of these brain regions was positively correlated with self-reported ‘craving’ for these games: the higher the craving for the game, the higher the level of activation (Ko et al., 2009). These studies, among others, suggest that gaming may deserve a place on the list of non-substance-related addictive disorders.

But don’t uninstall your games yet; it is important to note that not everyone who plays computer games will become addicted. And what if there is a brighter side to gaming? What if all those hours of grinding away on World of Warcraft, thrashing your friends on Mario Kart, or chilling on Minecraft might actually benefit you in some way? There is a small, but growing, amount of research that suggests that gaming might be good for your brain.

What we have learnt about how the brain responds to the real world is now being applied to how it responds to the virtual world. In the famous work of Maguire et al. (2000), it was demonstrated that London taxi drivers show an increased volume of the hippocampus, a region associated with spatial navigation and awareness. This increased volume was attributed to their acquiring a detailed spatial representation of London. Following on from this, some researchers asked how navigation through a virtual world might affect the hippocampus.

In one of these studies, the researchers investigated how playing Super Mario 64, a game in which you spend a large amount of time running and jumping around a virtual world (sometimes on a giant lizard) impacts the hippocampus. When compared to a group that did not train on Super Mario 64, the group that trained on Super Mario 64 for 2 months showed increased volumes of the hippocampus and the prefrontal cortex. As reduced volumes of the hippocampus and the prefrontal cortex are associated with disorders such as post-traumatic stress disorder, schizophrenia and neurodegenerative diseases, the researchers speculate that video game training may have a future in their treatment (Kühn et al 2014). In another study, the impact of training on Super Mario 64 on the hippocampus of older adults, who are particularly at risk of hippocampus-related pathology, was assessed. It was observed that the group that trained by playing Super Mario 64 for 6 months showed an increased hippocampal volume and improved memory performance compared to participants who did not train on Super Mario 64 (West et al 2017). So, it appears that navigating virtual worlds, as well as the real world, may lead to hippocampal volume increase and may have positive outcomes on cognition.


A screenshot of Super Mario 64. This game involves exploration of a virtual world. Image taken from Kühn et al. (2014).

Maybe it makes sense that the world being explored doesn’t have to be real to have an effect on the hippocampus, and games like Super Mario 64 have plenty to offer in terms of world exploration and navigation. But what about the most notorious of games, the first-person shooter action games? It has been suggested that first-person shooters can lead to increased aggressive behaviour in those who play them, although researchers do not agree on whether this effect exists (Markey et al., 2015; Greitemeyer & Mügge, 2014). Nevertheless, can these action games also have more positive effects on the cognitive abilities of the brain? Unlike Super Mario 64, these games require the player to respond quickly to stimuli and rapidly switch between different weapons and devices depending upon the given scenario. Some researchers have investigated how playing action games, such as Call of Duty, Red Dead Redemption, or Counterstrike, impacts short-term memory. Participants who either did not play action games, casually played action games, or were experienced action gamers were tested for visual attention capabilities. The participants completed numerous visual attention tests, involving recall and identification of cues that were flashed briefly on a screen. The researchers observed that those who played action games showed significantly better encoding of visual information into short-term memory, dependent on their gaming experience, compared to those who did not (Wilms et al., 2013).

In another study, the impact of playing action games on working memory was assessed. Working memory is a cognitive system involved in the active processing of information, unlike short-term memory, which involves the recall of information following a short delay (Baddeley, 2003). In this study, the researchers tested groups of participants who either did or did not play action games. They tested the participants’ working memory using a cognitive test known as the “n-back test”. This test involves watching a sequence of squares that are displayed on a screen in alternating positions. As the test progresses, the participants have to remember the positions of the squares from the previous trials whilst memorising the squares being shown to them at that moment. The researchers observed that people who played action games outperformed those who did not on this test; they were better able to remember the previous trials whilst simultaneously memorising the current trials (Colzato et al., 2013). From these studies, it appears that action games may have some benefit for the cognitive abilities of the players, leading to improved short-term processing of information in those who play them.
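To make the paradigm a little more concrete, here is a minimal Python sketch of a positional n-back task of the kind described above. It illustrates the general idea only, not the exact procedure used in the study; the number of positions, the value of n, and the scoring rule are all invented for the example.

```python
import random

N_BACK = 2            # how many trials back to compare against (illustrative)
POSITIONS = range(8)  # eight possible square positions on the screen (illustrative)

def run_trials(num_trials, respond):
    """Present random positions and score responses against the n-back rule.
    `respond(position)` should return True if the player thinks the current
    position matches the one shown N_BACK trials earlier."""
    history, correct = [], 0
    for _ in range(num_trials):
        pos = random.choice(POSITIONS)
        # A trial is a "match" if the same position appeared N_BACK trials ago.
        is_match = len(history) >= N_BACK and history[-N_BACK] == pos
        if respond(pos) == is_match:
            correct += 1
        history.append(pos)
    return correct / num_trials

# Example: a "player" who always answers no is only right on non-match trials.
print(run_trials(100, respond=lambda pos: False))
```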

A screen grab from the first-person shooter games Call of Duty: WW2 (left) and Halo (right). These fast-paced games involve quickly reacting to stimuli and making quick decisions to bypass enemies and progress in the game.

So, for the worried parents, and for the individuals who enjoy indulging in video games, maybe it’s not all bad. As long as you are not suffering from a form of gaming addiction (and if you think you might be, please see a health expert), all these hours of gaming may not be as bad for your brain as it might seem. But ultimately, much more research is needed to understand how a broader range of games, played across childhood development and over periods of years and decades, affects our brains and mental health.

If you think you may be suffering from a gaming addiction, see the NHS page for more information.

Edited by Lauren Revie & Monika Śledziowska

References:

  • American Psychiatric Association, 2013. Diagnostic and statistical manual of mental disorders (DSM-5®). American Psychiatric Pub.
  • Baddeley, A., 2003. Working memory: looking back and looking forward. Nature Reviews Neuroscience, 4(10), p.829.
  • Cai, C., Yuan, K., Yin, J., Feng, D., Bi, Y., Li, Y., Yu, D., Jin, C., Qin, W. and Tian, J., 2016. Striatum morphometry is associated with cognitive control deficits and symptom severity in internet gaming disorder. Brain Imaging and Behavior, 10(1), pp.12-20.
  • Colzato, L.S., van den Wildenberg, W.P., Zmigrod, S. and Hommel, B., 2013. Action video gaming and cognitive control: playing first person shooter games is associated with improvement in working memory but not action inhibition. Psychological Research, 77(2), pp.234-239.
  • Greitemeyer, T. and Mügge, D.O., 2014. Video games do affect social outcomes: A meta-analytic review of the effects of violent and prosocial video game play. Personality and Social Psychology Bulletin, 40(5), pp.578-589.
  • Ko, C.H., Liu, G.C., Hsiao, S., Yen, J.Y., Yang, M.J., Lin, W.C., Yen, C.F. and Chen, C.S., 2009. Brain activities associated with gaming urge of online gaming addiction. Journal of Psychiatric Research, 43(7), pp.739-747.
  • Kühn, S., Gleich, T., Lorenz, R.C., Lindenberger, U. and Gallinat, J., 2014. Playing Super Mario induces structural brain plasticity: gray matter changes resulting from training with a commercial video game. Molecular Psychiatry, 19(2), p.265.
  • Maguire, E.A., Gadian, D.G., Johnsrude, I.S., Good, C.D., Ashburner, J., Frackowiak, R.S. and Frith, C.D., 2000. Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, 97(8), pp.4398-4403.
  • Markey, P.M., Markey, C.N. and French, J.E., 2015. Violent video games and real-world violence: Rhetoric versus data. Psychology of Popular Media Culture, 4(4), p.277.
  • Robbins, T.W. and Everitt, B.J., 2002. Limbic-striatal memory systems and drug addiction. Neurobiology of Learning and Memory, 78(3), pp.625-636.
  • West, G.L., Zendel, B.R., Konishi, K., Benady-Chorney, J., Bohbot, V.D., Peretz, I. and Belleville, S., 2017. Playing Super Mario 64 increases hippocampal grey matter in older adults. PLoS ONE, 12(12), p.e0187779.
  • Wilms, I.L., Petersen, A. and Vangkilde, S., 2013. Intensive video gaming improves encoding speed to visual short-term memory in young male adults. Acta Psychologica, 142(1), pp.108-118.
  • World Health Organization [WHO]. 2018. ICD-11 beta draft – Mortality and morbidity statistics. Mental, behavioural or neurodevelopmental disorders.

Can’t hear someone at a loud party? Just turn your head!

Josh Stevenson-Hoare | 8 AUG 2018

At one point or another, we have all been in this uncomfortable situation. You are talking to someone at a party, and then comes the realisation – you can’t understand a word they are saying. You smile and nod along until they get to what sounds like a question. You ask them to repeat it, and you still can’t hear what they say. So you ask again. We all know how it goes from there, and how awkward it gets by the fifth time of asking.

Fortunately, there is a way to avoid this happening. Researchers at Cardiff University have developed a way to maximise your chances of hearing someone at a party on the first try. You don’t need fancy equipment, or to start learning sign language. All you have to do is turn your head.

If you are trying to listen to something that is difficult to hear, it makes sense to turn your head 90 degrees away from it. This is the same angle as when someone is whispering in your ear, or when you only have one working earbud. If you do this, one ear will get most of the useful sounds, and the other ear will only get un-useful sounds. You can then ignore any sounds from the un-useful ear, and focus on the sounds you want – the words.

However, for speech, the sounds made by the person speaking are not the only useful source of information. Lip-reading is something we can all do a little bit, without training. You may not be able to follow a whole conversation, but what you can do is tell the difference between small sounds. These are called phonemes.

A phoneme is the smallest bit of sound that has meaning. It’s the difference between bog and bag, or tend and mend. If you miss hearing part of a word you often use the shape of the speaker’s mouth to help you figure out what the word was. This can lead to some interesting illusions, such as the McGurk effect. This is when the same sound paired with different mouth shapes causes you to hear different syllables.

If you want to read someone’s lips while they talk, you have to be able to see their face. Unless you have a very wide field of vision, the best way to lip-read is to face someone.

So, to improve hearing you should face away from someone, and for best lip-reading you should face towards them. Not the easiest things to do at the same time.

To find out which is more useful, the researchers played participants video clips of Barack Obama speaking. Over this they played white noise, which sounds like a radio tuned between stations. The participants turned their heads to different angles as instructed by the researchers. The white noise was then made louder and quieter until participants could only just hear what Obama was saying.
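That “louder and quieter until they could only just hear” step is a classic adaptive threshold procedure. Below is a generic Python sketch of one simple version of the idea, where the noise level rises after a correct answer and falls after a mistake, so it homes in on the point where speech is only just intelligible. The rule and step size here are illustrative assumptions, not the procedure the researchers actually used.

```python
def adaptive_noise_level(trial_was_correct, start_level=50.0, step=2.0, n_trials=30):
    """Run a simple 1-up/1-down staircase and return the visited noise levels.
    `trial_was_correct(level)` should return True if the listener understood
    the speech at that noise level. All numbers are illustrative."""
    level, levels = start_level, []
    for _ in range(n_trials):
        levels.append(level)
        if trial_was_correct(level):
            level += step   # understood the speech: make it harder (more noise)
        else:
            level -= step   # missed the speech: make it easier (less noise)
    return levels

# Toy listener who copes with noise up to 60 units; the staircase settles there.
track = adaptive_noise_level(lambda level: level < 60)
print(track[-5:])  # the last few levels hover around the ~60 threshold
```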

Participants found it easier to understand the speech when they pointed their heads away from the speech. The best angle, though, changed depending on where the white noise was coming from. 

You might expect that pointing directly away from the white noise would be the best strategy. But the researchers found that the best strategy was to point around halfway between the speech and the white noise – not so far that participants couldn’t see Obama’s face, but far enough to have one ear pointed at him.

The best angle to face when trying to hear someone in a loud environment. Picture Credit: Josh Stevenson-Hoare.  

In real life, noise comes from lots of different angles at once. It would be very difficult to work out what angle would be best all the time. But if there is one major source of noise, such as a pneumatic drill or a music speaker, it can be useful to remember this technique.

It might look a little silly at first, but it could save you from an embarrassing faux pas next time you go to a party. So now you can hear all the details of Uncle Jimmy’s fishing trip with ease. For the third time this year. You’re welcome.

Edited by Lucy Lewis & Sophie Waldron

Microglia: Guardians of the Brain

Ruth Jones | 9 JUL 2018

Our immune system works hard to protect us from unwanted bugs and to help repair damage to our bodies. In our brains, cells called microglia are the main immune players. Microglia are part of a group of immune cells called macrophages, which translates from Greek as “big eaters”. Much like students faced with free food, macrophages will eat just about anything, or at least anything out of the ordinary. Whether it’s cell debris, dying cells or an unknown entity, macrophages will eat and digest it. Microglia were first described in the 1920s by W. Ford Robertson and Pio del Rio-Hortega, who saw that these unidentified cells seemed to act as a rubbish disposal system. Research into these mysterious cells was side-lined during WWII, but was eventually picked up again in the early 1990s.

When I think about how microglia exist in the brain, I am always reminded of the blockade scene in Marvel’s Guardians of the Galaxy. Each “ship” or cell body stays in its own territory while its branches extend out to touch other cells and the surrounding environment. The microglia ships can then use their branches to touch other cells or secrete chemicals to communicate, together monitoring the whole brain.

Nova Corps blockade scene © Guardians of the Galaxy (2014). [DVD] Directed by J. Gunn. USA: Marvel Studios.

Microglia have evolved into brilliant multi-taskers. They squirt out chemicals called chemokines that affect the brain’s environment by quickly promoting inflammation to help remove damaged cells, before dampening inflammation to protect the brain from further damage. They also encourage the growth of new nerve cells in the brain and can remove old “unused” connections between neurones, just like clearing out your Facebook friend list. Microglia keep the brain functioning smoothly by taking care of and cleaning up after the other cells day in, day out.

Today, microglia are a trending topic, believed to play a part in multiple brain diseases. Genetic studies have linked microglia to both neuropsychiatric disorders like autism and neurodegenerative diseases such as Alzheimer’s disease.

Do microglia help or hurt in Alzheimer’s disease? This is a complicated question. Scientists have found that microglia can both speed up and slow down Alzheimer’s disease. For a while, scientists have thought that in disease these microglia “ships” are often destroyed, or become too aggressive in their efforts to remove sticky clumps of amyloid-β protein (a big problem in Alzheimer’s disease).

What is going on in the Alzheimer’s disease brain? Recent advances in technology mean scientists are now starting to discover more answers. One problem when working with microglia is they have a whole range of personalities, resulting in a spectrum of protein expression and behaviour. Therefore, when you just look at the entire microglial population you may miss smaller, subtle differences between cells. It is likely that microglia don’t start out as “aggressive” or harmful, so how can we see what causes microglia behaviour to change?

Luckily, a new approach called “single-cell sequencing” has been able to overcome this. An individual cell is placed into a dish, where its mRNA – the instructions to make protein – can be extracted and measured. This means you can compare the variation across the entire microglial population, which could make it easier to find the “trouble-maker” microglia in disease. This approach could also be used to see how individual cell types change across the course of Alzheimer’s disease.
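To give a flavour of why looking cell by cell helps, here is a toy Python sketch of the idea: score each individual cell’s expression profile and flag the ones that stand out from the crowd. The gene names and counts are invented for illustration, and a real single-cell RNA-sequencing analysis involves many more steps (quality control, normalisation, clustering and so on).

```python
import numpy as np

# Toy data: rows are individual microglia, columns are mRNA counts for a few
# marker genes (values invented for illustration, not real measurements).
genes = ["P2RY12", "TMEM119", "IL1B", "TNF"]   # homeostatic + inflammatory markers
counts = np.array([
    [120, 95,   2,   1],
    [110, 88,   3,   2],
    [130, 91,   1,   0],
    [ 20, 15, 250, 180],   # one cell with a very different, "activated" profile
    [125, 99,   4,   1],
])

# Normalise each cell by its total counts so cells are comparable, then score
# how "inflammatory" each individual cell looks.
profiles = counts / counts.sum(axis=1, keepdims=True)
inflammatory_score = profiles[:, genes.index("IL1B")] + profiles[:, genes.index("TNF")]

# Flag cells whose score sits well above the population average - the kind of
# difference that would be invisible if you averaged over all cells at once.
outliers = np.where(inflammatory_score > inflammatory_score.mean() + inflammatory_score.std())[0]
print(outliers)  # -> [3]: the potential "trouble-maker" cell
```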

In the future, by looking in detail at how these individual microglia “guardians” behave, scientists can hopefully begin to unravel some of the mysteries surrounding these fascinating and hugely important cells in both health and across all disease stages.

Edited by Sam Berry & Chiara Casella

My reflections on studying Parkinson’s Disease

Dr. Lucia Fernandez Cardo | 18 JUN 2018

Parkinson’s disease (PD) owes its name to Doctor James Parkinson, who in 1817 described the disorder in his manuscript “An essay on the shaking palsy”. It has been over 200 years since we began to study this disease, and despite the advances in understanding, we are still far from finding a cure.

PD is the second most common neurodegenerative disorder, affecting 1-2% of the worldwide population. Its pathological hallmarks are the loss of dopaminergic neurons in a very small part of the brain called the substantia nigra, and the presence of protein deposits called Lewy bodies (Spillantini et al., 1997). The loss of dopamine leads to a number of motor symptoms: bradykinesia (slowness of movement), rigidity, resting tremor, and postural instability. For a clinical diagnosis of PD, the patient must present bradykinesia plus at least one of the other three signs. Along with these motor signs, some non-motor symptoms are often present, amongst them anxiety, depression, dementia, sleep disturbances, intestinal problems and hallucinations (Postuma et al., 2015).

PD can be due to both genetic and environmental risk factors. 10-15% of cases are described as familial PD with a clear genetic origin (mutations in the SNCA, LRRK2 or Parkin genes are the main cause). The remaining cases are considered ‘sporadic’ or ‘idiopathic’ PD and are thought to result from a combination of multiple genetic (risk polymorphisms) and environmental risk factors (toxins, exposure to pesticides, side effects of drugs, brain lesions, etc.) (Billingsley et al., 2018; Fleming, 2017; Lee & Gilbert, 2016).

In my experience, researching PD is something that I find challenging, but also motivating and rewarding. Every 11th of April (James Parkinson’s birthday), the regional and national associations of PD patients and their relatives have a day of celebration. They obviously do not celebrate having the disease; they celebrate being together in this battle and never giving up. On this day they put aside the pain and the struggle, and celebrate that they are alive by gathering together, laughing, eating, and dancing.

When I was working on my PhD dissertation, I was lucky to be invited to this celebration as part of a group of scientists working on PD. Our group studies the molecular genetics of PD, amongst other disorders, and my thesis focused on genetic risk variants in sporadic patients. I will never forget how nervous I was, having to deliver a very brief talk explaining the genetic component of PD and our projects at the time. Some years later, though they may not remember me, I still clearly remember the smiling faces in the audience and the cheering and kind words after I finished. Perhaps they did not grasp the most technical concepts, but for them, the mere fact of knowing that there were people researching their disease, working on understanding its mechanisms, and fighting to find a cure, was more than enough to prompt many thanks and smiles.

Sadly, there is not yet a cure for PD, but medication, surgery, and physical therapy can provide relief to patients and improve their symptoms. The most common treatments (e.g. levodopa, dopamine agonists, MAO-B inhibitors) all act to restore dopamine levels in the brain (Fox et al., 2018). Levodopa is usually the most successful treatment, but side effects, the appearance of dyskinesia (involuntary movements) and fluctuations in effectiveness can be an issue with long-term use. Some patients are candidates for a very successful surgical treatment called Deep Brain Stimulation (DBS), which involves implanting a neurostimulator, usually in the upper chest area, and a set of electrodes in specific parts of the brain. The electrical pulses stop the over-excitation in the brain, and the reduction in motor symptoms is astonishing. Follow the links below to see some videos of the effects.

https://www.youtube.com/watch?v=wZZ4Vf3HinA

https://www.youtube.com/watch?v=Uohp7luuwJI

Despite these treatments, patients can struggle daily with finding the right drug combinations, the 'on' and 'off' phases of the medication, or carrying an internal battery to control the electrode pulses in their brain, not to mention the possible non-motor symptoms of the disease. After spending a whole day with them I was struck by their energy and good sense of humour, and I definitely came away seeing things from a different perspective.

Finally, I just want to say that getting the chance to meet people who live with the disorder can be so important for us as scientists. It reminds us why we dedicate so much of our lives to researching a disorder. It is why, during the hardest moments of failed experiments, struggling with new techniques, and so many extra hours of work, we keep going and do not give up. We do it so that we can see patients smile and keep up their hope that we will one day find a cure for a disease as debilitating as PD.

Edited by Sam Berry & Chiara Casella

References:

If you are interested in learning a little more about PD and helping us to demystify this disease, here are some useful websites:

https://www.parkinsons.org.uk/

http://www.parkinson.org/understanding-parkinsons/

https://www.michaeljfox.org/understanding-parkinsons/index.html?navid=understanding-pd

The healing power of companionship

Shireene Kalbassi | 11 MAY 2018

When it comes to the recovery of wounds and other medical conditions, most people probably think of hospital beds, antibiotics, and maybe some stitches. What probably doesn’t come to mind is the role that companionship may play in speeding up the healing process.

And yet, studies in humans have shown a link between increased companionship and enhanced recovery prospects (Bae et al., 2001; Boden-Albala et al., 2005).

So why should social interaction influence the recovery process? Well, in social species, social interaction leads to the release of the hormone oxytocin, AKA the "love hormone", from the pituitary gland in the brain. Increased levels of oxytocin have been associated with lower levels of stress hormones, such as cortisol and corticosterone, and high levels of these stress hormones have been shown to impair healing (Padgett et al., 1998; DeVries, 2002; Heinrichs et al., 2003; Ebrecht et al., 2004).

This link between social interaction, oxytocin, stress hormones, and recovery has been explored in studies such as the work of Detillion et al. (2004). Here, the authors investigated how companionship affects wound healing in stressed and non-stressed hamsters. The role of companionship was explored by comparing socially isolated hamsters with 'socially housed' hamsters, which shared a home environment with another hamster. Stressed hamsters were physically restrained to induce a stress response, while non-stressed hamsters did not undergo physical restraint.

To understand how these factors relate, the authors compared four groups: hamsters that were socially isolated and stressed, socially housed and stressed, socially isolated and non-stressed, and socially housed and non-stressed.

The hamsters that were socially isolated and stressed showed slower wound healing and higher cortisol levels than either socially housed hamsters or non-stressed socially isolated hamsters. Furthermore, when an oxytocin blocker was given to socially housed hamsters, wound healing slowed, while supplementing stressed hamsters with oxytocin led to faster wound healing and lower cortisol levels.

So it seems that when social animals interact, oxytocin is released, which reduces the levels of stress hormones, leading to improved wound healing.

But what if there is more to the story than this? These studies, and others like them, demonstrate a relationship between companionship and wound healing, but which aspects of social interaction actually drive recovery?

Venna et al. (2014) explored the recovery of mice given a brain occlusion, in which part of the brain's blood supply is shut off to replicate the damage seen in stroke. In this study the mice were either socially isolated, paired with another stroke mouse, or paired with a healthy partner. When assessing recovery, the authors looked at multiple parameters, including death rates, recovery of movement, and new neuron growth. As expected, socially isolated stroke mice showed the lowest rates of recovery. Interestingly, stroke mice housed with other stroke mice also showed decreased recovery compared with stroke mice housed with a healthy partner.

So why should the health status of the partner influence the healing process? Venna et al. did not assess whether stroke mice housed with another stroke mouse received as much social contact as stroke mice housed with a healthy partner, which may explain the difference between the two groups. Exploring this would help clarify whether the reduced recovery in stroke mice housed with other stroke mice is driven by the quantity of social interaction, or by other factors.

Regardless, it appears that social interaction is not a simple box to tick when it comes to enhancing recovery, but is instead dynamic in nature. And while nothing can replace proper medical care and attention, companionship may well have a role in speeding up the recovery process.

If you want to know more about the use of animals in research, please click here.

Edited by Sophie Waldron & Monika Śledziowska

References:

  • Bae, S.C., Hashimoto, H., Karlson, E.W., Liang, M.H. and Daltroy, L.H., 2001. Variable effects of social support by race, economic status, and disease activity in systemic lupus erythematosus. The Journal of Rheumatology, 28(6), pp.1245-125.
  • Boden-Albala, B., Litwak, E., Elkind, M.S.V., Rundek, T. and Sacco, R.L., 2005. Social isolation and outcomes post stroke. Neurology, 64(11), pp.1888-1892.
  • Padgett, D.A., Marucha, P.T. and Sheridan, J.F., 1998. Restraint stress slows cutaneous wound healing in mice. Brain, Behavior, and Immunity, 12(1), pp.64-73.
  • DeVries, A.C., 2002. Interaction among social environment, the hypothalamic–pituitary–adrenal axis, and behavior. Hormones and Behavior, 41(4), pp.405-413.
  • Heinrichs, M., Baumgartner, T., Kirschbaum, C. and Ehlert, U., 2003. Social support and oxytocin interact to suppress cortisol and subjective responses to psychosocial stress. Biological Psychiatry, 54(12), pp.1389-1398.
  • Ebrecht, M., Hextall, J., Kirtley, L.G., Taylor, A., Dyson, M. and Weinman, J., 2004. Perceived stress and cortisol levels predict speed of wound healing in healthy male adults. Psychoneuroendocrinology, 29(6), pp.798-809.
  • Detillion, C.E., Craft, T.K., Glasper, E.R., Prendergast, B.J. and DeVries, A.C., 2004. Social facilitation of wound healing. Psychoneuroendocrinology, 29(8), pp.1004-1011.
  • Glasper, E.R. and DeVries, A.C., 2005. Social structure influences effects of pair-housing on wound healing. Brain, Behavior, and Immunity, 19(1), pp.61-68.
  • Venna, V.R., Xu, Y., Doran, S.J., Patrizz, A. and McCullough, L.D., 2014. Social interaction plays a critical role in neurogenesis and recovery after stroke. Translational Psychiatry, 4(1), p.e351.