Why do we Parent? Ancient Brain Circuits for Parental Care

Matt Higgs | 03 MAR 2021

I for one predicted that all the extra time couples spent indoors last year would result in a 2021 baby boom. In fact, PwC predicts that we might actually be facing a baby bust, as couples postpone their pregnancy plans. While the factors leading couples to delay or bring forward pregnancy are interesting, the question that interests me as a neuroscientist is perhaps more fundamental – I want to understand what happens in our brain to motivate us to care for our children. Essentially – what is parenting and why do we do it?

Now this may seem a silly question, but bear with me. It is currently estimated that raising a human child will cost £152,747 – £185,413 (Hirsch, 2020), consume an estimated 13 million calories, require a lot of attention, and cost you many hours of sleep over an 18-year period (Kohl, 2018). By the numbers, parenting is tough. It is obviously not without upsides: sociological research shows that being a parent “has a substantial and enduring positive effect on life satisfaction” (Pollmann‐Schult, 2014). Yet for many people the work of parenting outweighs the positives and, on average, parents tend to be similarly or less happy than their childless peers (Glass et al., 2016). Still, many willingly choose the burden of parenthood. What in our brain motivates us to do this?
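To put those totals in day-to-day terms, here is a quick back-of-the-envelope calculation using only the figures quoted above (the per-day breakdown is my own illustration, not from the cited sources):

```python
# Back-of-the-envelope breakdown of the totals quoted above
# (Hirsch, 2020 for cost; Kohl, 2018 for calories), spread over 18 years.
TOTAL_COST_LOW = 152_747      # £, lower estimate
TOTAL_COST_HIGH = 185_413     # £, upper estimate
TOTAL_CALORIES = 13_000_000   # kcal over the whole period
YEARS = 18
DAYS = YEARS * 365

cost_per_day_low = TOTAL_COST_LOW / DAYS
cost_per_day_high = TOTAL_COST_HIGH / DAYS
calories_per_day = TOTAL_CALORIES / DAYS

print(f"Cost per day: £{cost_per_day_low:.2f} – £{cost_per_day_high:.2f}")
print(f"Calories per day: {calories_per_day:.0f} kcal")
```

That works out at roughly £23–£28 and around 2,000 kcal per day – about an adult's daily energy intake, every day, for 18 years.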

When we look across the animal kingdom, we see that many animals direct a set of behaviours at their offspring to support their survival. This is particularly true of the class of animals that humans belong to – mammals. Mammalian offspring are particularly helpless, and mammals are the only animals that feed their young directly from the teat. Parent and offspring are therefore tied into an intimate relationship from birth, and seeing the offspring through to adulthood requires a lot of parental motivation. Since this intensive parenting is crucial to the continuation of a species, yet is poorly rewarding and distinctly sacrificial for the caregiver, parenting is considered an innate behaviour in most mammals (i.e. a neurally hardwired behaviour that an animal can perform, at least partially, in advance of experience). Tellingly, most mammals take up parenting with little training or experience: they suddenly become motivated to care for their young and know how to feed and protect them. This combination of motivation for and instinctual knowledge of parenting suggests to neuroscientists that the basis of this behaviour across mammals is likely evolutionarily shaped neural circuits, ready to motivate us when we first become parents.

And this is exactly what has been found.

By focusing on mice and rats, and utilising the abundance of behavioural neuroscience technology available to them, scientists have been able to identify what happens in rodent brains during parenting behaviour. Like humans and most other mammals, mice are heavily motivated to care for their offspring after birth. This involves a more modest 3-5 weeks of feeding, grooming and protection, but they perform these behaviours diligently in spite of the costs. So, what exactly is happening in their brains?

Several decades of research on rodents has shown that a structure within the hypothalamus (a region commonly associated with regulating core functions such as body temperature, sleep and appetite) called the medial preoptic area (MPOA) is of central importance for some of the most fundamental motivated behaviours, such as mating and parenting (Numan & Insel, 2003). When researchers specifically destroy this area of the brain, parenting behaviour is abolished, whilst other behaviours are left intact (Lee et al., 2000). This is all well and good, but scientists Johannes Kohl and Catherine Dulac recently went one step further and identified the specific neuronal populations within the MPOA in mice that are active during parenting behaviour – aka the parenting neurons.

These neurons were found to express the protein Galanin, which has become a useful marker for identifying them. They make up a comparatively small population (around 10,000 neurons), especially when set against the roughly 100 million neurons of the whole mouse brain (Wu et al., 2014). Are these neurons really the hub of such a crucial behaviour? Specifically inactivating them in mice disrupted parenting behaviour much as destroying the whole MPOA does, which was a good start. Taking it further, Kohl et al. (2018) used viruses that infect the Galanin neurons in the MPOA and then spread to, and allow us to visualise, the neurons providing input to and receiving output from the MPOA. This revealed the brain regions connected to the MPOA, which together form the basis of a parenting circuit in the brain. The Galanin neurons were the hub of this circuit and, when tested, were active during all forms of parenting behaviour, while other parts of the circuit were only active during distinct parts of the parenting repertoire. For example, MPOA neurons projecting to the ventral tegmental area and nucleus accumbens (key dopamine regions of the brain) are responsible for ‘motivating’ the parent to care for their offspring, while others projecting to the periaqueductal gray (a region associated with motor control) are involved in mechanical behaviours such as grooming pups.
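Just how small a hub this is becomes clear from a one-line calculation with the approximate counts quoted above:

```python
# Share of the mouse brain made up by the Galanin-expressing MPOA neurons,
# using the approximate figures quoted in the text.
galanin_neurons = 10_000
mouse_brain_neurons = 100_000_000
fraction = galanin_neurons / mouse_brain_neurons
print(f"{fraction:.2%} of all neurons")  # prints "0.01% of all neurons"
```

In other words, about one neuron in every ten thousand appears to sit at the centre of this behaviour.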

This parenting circuit appears to be present in both males and females, but in females the MPOA is heavily influenced by the rising hormone concentrations of late pregnancy (e.g. estrogen, prolactin). These hormones travel through the maternal bloodstream and are detected by receptors in the MPOA. This hormonal signal primes the MPOA, and the mother, for caregiving behaviour just prior to birth (Rilling & Young, 2014). Life experience and physiological state also go a long way towards changing these behaviours and circuits, despite their evolutionary origin (Kohl, 2018). Yet this core motivational circuit is a powerful driver and likely does much of the heavy lifting in convincing a mouse to neglect its own wellbeing in favour of its offspring.

One of the core goals of behavioural neuroscience is to understand the neural mechanisms and the evolution of complex social behaviours. The discovery of this neural network orchestrating parenting behaviour, with a central coordinating hub (the MPOA) and pools of neurons responsible for producing distinct aspects of this complex behaviour, gives an insight into how neural circuits for complex behaviours can be organised. This finding can therefore be drawn on when investigating other motivated behaviours such as mating and feeding (Kohl, 2020).

However, to return to our question – why do we parent? Humans are not mice, and the existence of this parenting hub in the human hypothalamus is not confirmed. But since the hypothalamus is deeply conserved across mammals, and is of central importance to parenting in all non-human mammals studied (Numan, 2017), it is highly likely that this circuit exists in humans and performs a similar function.


For humans, parenting was never going to be as simple as the output of a highly conserved hypothalamic circuit. For example, we know there is a strong sense of intentional, chosen effort in rising to the task of parenting. We also know from brain-imaging studies that parenting behaviour in humans additionally relies on cortical areas of the brain, likely infusing our parenting with feelings of love, devotion and care (Numan, 2017; Rilling, 2013). However, this work suggests that an instinctual urge to care for children is driven by an evolutionarily conserved brain structure, and that it is fundamental not only to most mammals but perhaps to humans as well.

This research is still some way off clinical application, but considering the impact that aberrant parenting can have on both parent and offspring (Joseph & John, 2008; Letourneau et al., 2012), understanding this behaviour is a crucial step in the right direction. As to why we parent: naturally, we have the capability to decide whether or not to become a parent, and many people choose to skip the ordeal completely. But it is highly likely that deep in your brain lies an ancient circuit that will contribute to the great reward and meaning that parenthood will likely bring you, should you choose that path in life.

Editors: Ian Fox and Uroosa Chughtai


  • Glass, J., Simon, R. W., & Andersson, M. A. (2016). Parenthood and happiness: Effects of work-family reconciliation policies in 22 OECD countries. American Journal of Sociology, 122(3), 886-929.
  • Hirsch, D. (2020). The Cost of a Child in 2020. Child Poverty Action Group.
  • Joseph, M. V., & John, J. (2008). Impact of parenting styles on child development. Global Academic Society Journal: Social Science Insight, 1(5), 16-25.
  • Kohl, J. (2018). Circuits for care. Science, 362(6411), 168-169.
  • Kohl, J., Babayan, B. M., Rubinstein, N. D., Autry, A. E., Marin-Rodriguez, B., Kapoor, V., Miyamichi, K., Zweifel, L. S., Luo, L., & Dulac, C. (2018). Functional circuit architecture underlying parental behaviour. Nature, 556(7701), 326-331.
  • Kohl, J. (2020). Parenting – a paradigm for investigating the neural circuit basis of behavior. Current Opinion in Neurobiology, 60, 84-91.
  • Lee, A., Clancy, S., & Fleming, A. S. (2000). Mother rats bar-press for pups: effects of lesions of the MPOA and limbic sites on maternal behavior and operant responding for pup-reinforcement. Behavioural Brain Research, 108(2), 215-231.
  • Letourneau, N. L., Dennis, C. L., Benzies, K., Duffett-Leger, L., Stewart, M., Tryphonopoulos, P. D., … & Watson, W. (2012). Postpartum depression is a family affair: addressing the impact on mothers, fathers, and children. Issues in Mental Health Nursing, 33(7), 445-457.
  • Numan, M. (2017). Parental Behavior. Reference Module in Neuroscience and Biobehavioral Psychology.
  • Numan, M., & Insel, T. R. (2003). The Neurobiology of Parental Behavior (Vol. 1). Springer Science & Business Media.
  • Pollmann‐Schult, M. (2014). Parenthood and life satisfaction: Why don’t children make people happy? Journal of Marriage and Family, 76(2), 319-336.
  • Rilling, J. K. (2013). The neural and hormonal bases of human parental care. Neuropsychologia, 51(4), 731-747.
  • Rilling, J. K., & Young, L. J. (2014). The biology of mammalian parenting and its effect on offspring social development. Science, 345(6198), 771-776.
  • Wu, Z., Autry, A. E., Bergan, J. F., Watabe-Uchida, M., & Dulac, C. G. (2014). Galanin neurons in the medial preoptic area govern parental behaviour. Nature, 509(7500), 325-330.

The Sixth Sense: How Your Brain Tells Time

Steliana Yanakieva | 17 FEB 2021

Every day we experience the world through our senses – we see colours, hear sounds, taste, and smell food, and feel the sun or the rain on our skin. But how do we sense time? It is certain that we do experience a sense of time, both consciously, for example when we look at a clock, and subconsciously (e.g. in the order in which we do things), and that our sense of time is highly integrated with our other senses. However, even if we lost our ability to see, smell or hear, we would still have a sense of time passing.

Sensing time (time perception) seems to be a product of evolution. As far as we know, humans are the only species consciously aware of the passage of time and our own mortality. Despite how it might seem, we do not perceive time itself, but rather we perceive changes in events occurring in time. Hence, unlike our other senses, time perception does not have a dedicated sensory system. Instead, time is a construction of the brain that enables us to perceive a unified sensory picture of the world, which underlies our conscious experience. This idiosyncratic sixth sense is fundamental to our understanding of sequential events, allowing us to perceive our lives as an uninterrupted stream of events.

We are only truly aware of a few seconds of time at any one moment, a phenomenon termed the “specious present” by E. R. Clay and later elaborated by William James (James, 1890). For example, whilst we can plan for events that have not yet occurred, we are incapable of perceiving durations in the future. In fact, durations of events (intervals) can only be perceived after they have ended, so technically the “specious present” moment you are aware of has already happened. David Eagleman (2009) explains this in his famous essay “Brain Time”. He argues that different types of information are not only processed by distinct neural pathways, but also at different speeds. In order to perceive a continuous, unified picture, our brain has to overcome this difference by waiting for the slowest sensory information to arrive before making us aware of what is happening ‘now’. This delay of around 100 milliseconds lets us watch TV unaware that our brain processes auditory stimuli faster than visual stimuli. So, if you have ever been frustrated by unsynchronised TV audio and video, the mismatch has exceeded the roughly 100 milliseconds your brain can quietly absorb.
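Eagleman’s “wait for the slowest sense” idea can be sketched in a few lines of Python. The latency values here are hypothetical placeholders, chosen only so that vision is the slowest stream at around 100 ms, in line with the delay described above:

```python
# Toy sketch: to bind several sensory streams into one "now",
# awareness is delayed until the slowest stream has arrived.
latencies_ms = {"audition": 30, "touch": 55, "vision": 100}  # hypothetical values

binding_delay = max(latencies_ms.values())       # wait for the slowest sense
slowest = max(latencies_ms, key=latencies_ms.get)

print(f"perceived 'now' lags the world by ~{binding_delay} ms (set by {slowest})")
```

Whatever the true numbers, the logic is the same: the conscious ‘now’ is pinned to the slowest arriving stream, so every other sense is held back to match it.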

In a cognitive sense, attention is a mental process that allows you to selectively attend to information relevant to completing a task. Hence, if you are in a boring class, thinking about how long is left until the end of it, you will be more aware of the passage of time and therefore overestimate its duration (i.e. time appears to pass more slowly). On the other hand, time will appear to pass much faster when you are having fun. Either way, an intact perception of small intervals of time is essential to our day-to-day functioning.

Mechanisms of Time Perception

In psychology, the timing of durations in the milliseconds-to-seconds range is referred to as interval timing. Impairments of interval timing have been observed in psychiatric disorders marked by disruptions of consciousness, such as schizophrenia (Allman & Meck, 2012) and the dissociative disorders (Simeon et al., 2007; Spiegel et al., 2013), as well as in Parkinson’s disease (te Woerd et al., 2014; Gulberti et al., 2015) and Huntington’s disease (Beste et al., 2007). Specifically, impairments in time perception are associated with symptoms such as tremor and hallucinations. Understanding the neural mechanisms of interval timing would therefore allow scientists to develop new therapies for these symptoms underpinned by timing deficits. Over the years, there have been several theories about the neural mechanism of time perception (Gibbon, 1977; Matell & Meck, 2004), and even though scientists cannot agree on a unified model of interval timing, one thing we know for sure is that time perception is a multifaceted process, dependent on other cognitive processes, particularly attention.

Both schizophrenia and Parkinson’s disease are associated with aberrant dopamine concentrations in the brain (Brisch et al., 2014; Davie, 2008), which, interestingly, have been linked to the speed of our “internal clock” (Cheng et al., 2007). Excess dopamine, as seen in schizophrenia, appears to lead to overestimation of time intervals, whilst dopamine depletion, as seen in Parkinson’s disease, appears to lead to underestimation (Hass & Durstewitz, 2016; Meck, 1996). We can see these effects without relying on neurological conditions: stimulant drugs, such as caffeine, cocaine and amphetamines, increase brain dopamine levels and can lead to overestimation of time intervals, while depressant drugs, such as ketamine, have the opposite effect, likely via the impact such psychoactive substances have on attention.

One way of understanding this phenomenon is that psychoactive drugs either excite or inhibit the firing of dopaminergic neurons in the brain. Whilst stimulants increase the rate of neuronal firing, allowing the brain to register more events within a given time interval and leading to the perception of time speeding up, inhibitory drugs decrease the firing rate of neurons, resulting in a slowing down of perceived time. However, since such drugs also affect attention, it is difficult to disentangle whether the observed effects on timing are due to the dopaminergic manipulation per se, or are caused indirectly by increased or decreased attention to time.
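One common textbook simplification of this clock-speed idea is the pacemaker-accumulator model (in the spirit of scalar expectancy theory; Gibbon, 1977): a pacemaker emits pulses, an accumulator counts them, and the count is read out against a baseline rate learned under normal conditions. The sketch below is illustrative only; the pulse rates are invented numbers, not empirical values:

```python
# Pacemaker-accumulator sketch of the "internal clock".
def perceived_duration(real_seconds, pulse_rate_hz, baseline_rate_hz=10.0):
    """Count pacemaker pulses over the interval, then convert the count
    back to seconds using the baseline rate the brain has calibrated to."""
    pulses = real_seconds * pulse_rate_hz  # accumulator total
    return pulses / baseline_rate_hz       # read-out in subjective seconds

interval = 10.0  # a real 10-second interval
print(perceived_duration(interval, 10.0))  # normal clock: 10.0 s, accurate
print(perceived_duration(interval, 12.0))  # faster clock (stimulant-like): 12.0 s, overestimated
print(perceived_duration(interval, 8.0))   # slower clock (depressant-like): 8.0 s, underestimated
```

The point of the toy model is simply that more pulses per real second means a longer subjective estimate, and fewer pulses a shorter one, matching the over- and underestimation described above.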

A promising solution to this problem appears to be microdosing of hallucinogens, such as LSD, which appear to alter interval time perception without marked disturbances to attention, concentration, and memory (Yanakieva et al., 2018).


Overall, our perception of time is one of our most fascinating sensory experiences. Despite a research literature on timing dating back 150 years, how our brains process time is still a mystery. Can understanding the neural basis of time perception help unravel the hard problem of consciousness? Can it explain the altered states of consciousness observed in schizophrenia and the dissociative disorders? Are animals aware of the passage of time, and does this make them conscious? These are just a few of the questions that remain to be answered. Each scientific experiment raises more questions than answers, all highly intriguing and deserving of attention.

Editors: Matt Higgs and Uroosa Chughtai


  • Allman, M. J., & Meck, W. H. (2012). Pathophysiological distortions in time perception and timed performance. Brain, 135(3), 656-677. doi: 10.1093/brain/awr210
  • Beste, C., Saft, C., Andrich, J., Müller, T., Gold, R., & Falkenstein, M. (2007). Time processing in Huntington’s disease: a group-control study. PLoS One, 2, e1263.
  • Brisch, R., Saniotis, A., Wolf, R., Bielau, H., Bernstein, H., Steiner, J., et al. (2014). The role of dopamine in schizophrenia from a neurobiological and evolutionary perspective: old fashioned, but still in vogue. Frontiers in Psychiatry, 5, 47. doi: 10.3389/fpsyt.2014.00047
  • Cheng, R. K., Ali, Y. M., & Meck, W. H. (2007). Ketamine “unlocks” the reduced clock-speed effect of cocaine following extended training: evidence for dopamine-glutamate interactions in timing and time perception. Neurobiology of Learning and Memory, 88, 149-159.
  • Davie, C. A. (2008). A review of Parkinson’s disease. British Medical Bulletin, 86(1), 109-127. doi: 10.1093/bmb/ldn013
  • Dormal, V., Javadi, A. H., Pesenti, M., Walsh, V., & Cappelletti, M. (2016). Enhancing duration processing with parietal brain stimulation. Neuropsychologia, 85, 272-277.
  • Eagleman, D. M. (2009). Brain Time. In M. Brockman (Ed.), What’s Next? Dispatches on the Future of Science. New York: Vintage.
  • Gibbon, J. (1977). Scalar expectancy theory and Weber’s law in animal timing. Psychological Review, 84, 279-325.
  • Gulberti, A., Moll, C. K. E., Hamel, W., Buhmann, C., Koeppen, J. A., Boelmans, K., Zittel, S., Gerloff, C., Westphal, M., Schneider, T. R., & Engel, A. K. (2015). Predictive timing functions of cortical beta oscillations are impaired in Parkinson’s disease and influenced by L-DOPA and deep brain stimulation of the subthalamic nucleus. NeuroImage: Clinical, 9, 436-449.
  • Hass, J., & Durstewitz, D. (2016). Time at the center, or time at the side? Assessing current models of time perception. Current Opinion in Behavioral Sciences, 8, 238-244. doi: 10.1016/j.cobeha.2016.02.030
  • James, W. (1890). The Principles of Psychology. New York: Henry Holt.
  • Matell, M. S., & Meck, W. H. (2004). Cortico-striatal circuits and interval timing: coincidence detection of oscillatory processes. Brain Research. Cognitive Brain Research, 21(2), 139-170.
  • Simeon, D., Hwu, R., & Knutelska, M. (2007). Temporal disintegration in depersonalization disorder. Journal of Trauma & Dissociation, 8(1), 11-24. doi: 10.1300/J229v08n01_02
  • Spiegel, D., Lewis-Fernandez, R., Lanius, R., Vermetten, E., Simeon, D., & Friedman, M. (2013). Dissociative disorders in DSM-5. Annual Review of Clinical Psychology, 9, 299-326. doi: 10.1146/annurev-clinpsy-050212-185531
  • te Woerd, E. S., Oostenveld, R., de Lange, F. P., & Praamstra, P. (2014). A shift from prospective to reactive modulation of beta-band oscillations in Parkinson’s disease. NeuroImage, 100, 507-519.
  • Yanakieva, S., Polychroni, N., Family, N., Williams, L. T. J., Luke, D. P., & Terhune, D. B. (2018). The effects of microdose LSD on time perception: a randomised, double-blind, placebo-controlled trial. Psychopharmacology, 236(4), 1159-1170. doi: 10.1007/s00213-018-5119-x

Life of Prion

Or What Links Cannibalism to Foot and Mouth Disease?

Simona Zahova | 3 APR 2019

A peculiar group of proteins, the prions, have earned a mythical status in sci-fi thanks to their unorthodox properties and unusual history. These deadly particles often play a villainous role in fiction, appearing in the Jurassic Park franchise and countless zombie stories. Even putting apocalyptic conspiracies aside, prions are one of the wackiest products of nature, with a history so remarkable it needs no embellishment. Fasten your seatbelts, we are going on a journey!

Our story begins in Papua New Guinea, with the Fore tribe. The Fore people engaged in ritualistic funerary cannibalism, consisting of cooking and eating deceased family members. This tradition was considered necessary for liberating the spirits of the dead. Unfortunately, around the middle of the 20th century, the tribe experienced a mysterious deadly epidemic that threatened to wipe them out of existence. A few thousand deaths were estimated to have occurred between the 50s and the 60s, with the diseased exhibiting tremors, mood swings, dementia and uncontrollable bursts of laughter. Collectively, these are symptoms indicative of neurodegeneration, the process of progressive death of nerve cells. Inevitably, all who contracted the disease died within a year (Lindenbaum 1980). The Fore people called the disease Kuru, after the local word for “tremble”, and believed it was the result of witchery.

Meanwhile, Australian medics sent to investigate reported that the disease was psychosomatic. In other words, the medics believed that the tribe’s fear of witchcraft had caused mass hysteria that had a real effect on health (Lindenbaum 2015). In the 60s, a team of Australian scientists proposed that the cannibalistic rituals might be spreading a bug that caused the disease. Once the Fore learned about the possible association between cannibalism and Kuru, they ceased the tradition and disease rates fell drastically (Collinge et al. 2006). However, the disease didn’t disappear completely, and the nature of the mysterious pathogen eluded scientific research.

Around the same time, on the other side of the globe, another epidemic was taking place. The neurodegenerative disease “scrapie” was killing flocks of sheep in the UK. The affected animals exhibited tremors and itchiness, along with unusual nervous behaviour. The disease appeared to be infectious, yet no microbe had been successfully extracted from any of the diseased cadavers. A member of the Agricultural Research Council tentatively noted parallels between “scrapie” and Kuru (Hadlow 1959). For one, they were the only known infectious neurodegenerative diseases. More importantly, both were caused by an unknown pathogen that eluded the normal methods of studying infectious diseases. However, owing to the distance in geography and species between the two epidemics, this suggestion didn’t make much of a splash at the time.

The identity of this puzzling pathogen remained unknown until 1982, when Stanley Prusiner published an extensive study on the brains of “scrapie”-infected sheep. It turned out that the culprit behind this grim disease wasn’t a virus, a bacterium, or any other known life form (Prusiner 1982). The pathogen consisted primarily of protein, but had no DNA or RNA, which are considered a basic requirement for life. To the dismay of the science community, Prusiner proposed that the scrapie “bug” was a new form of protein-based pathogen and coined the term “prion”, short for “proteinaceous infectious particle”. He also suggested that prions might be the cause not only of scrapie, but also of other diseases associated with neurodegeneration, like Alzheimer’s and Parkinson’s. Prusiner was wrong about the latter two, but was right to think the association with “scrapie” would not be the last we hear of prions. Eventually, the prion protein was confirmed to also be the cause of Kuru and a few similar diseases, like “mad cow” and Creutzfeldt-Jakob disease (Collins et al. 2004).

Even more curiously, susceptibility to prion diseases was observed to vary between individuals, leading to speculation that there might be a genetic component as well. The mechanism behind this property of the pathogen remained a mystery until the 90s. Once biotechnological advances allowed the genetic code to be studied in detail, scientists demonstrated that the prion protein is actually encoded in some animal genomes and is expressed in the brain. The normal function of prions is still unclear, but some studies suggest they may play a role in protecting neurons from damage in adverse situations (Westergard et al. 2007).

How does a protein encoded in our own DNA for a beneficial purpose act as an infectious pathogen? Most simply put, the toxicity and infectiousness only occur if the molecular structure of the prion changes shape (referred to as unfolding in biological terms). This is where heritability plays a part. Due to genetic variation, one protein can have multiple different versions within a population. The different versions of the prion protein have the same function, but their molecular architecture is slightly different.

Imagine that the different versions of prion proteins are like slightly different architectural designs of the same house. Some versions might have more weight-bearing columns than others. Now let’s say that an earthquake hits nearby. The houses with the extra weight-bearing columns are more likely to survive the disaster, while the other houses are more likely to collapse.

What can we take away from this analogy? A person’s susceptibility to prion diseases depends on whether they have inherited a more or less stable version of the prion protein from their parents. In this case, the weight-bearing column is a chemical bond that slightly changes the molecular architecture of the prion, making it more stable. Different prion diseases like Kuru and “scrapie” are caused by slightly different unstable versions of the prion protein, and their symptoms and methods of transmission also differ.

Remarkably, a study on the Fore people from 2015 discovered that some members of the tribe carry a novel variant of the prion protein that gives them complete resistance to Kuru (Asante et al. 2015). Think of it this way: if people inherit houses of differing stability, then some members of the Fore tribe have inherited indestructible bunkers. Evolution at its finest! It isn’t quite clear what the triggering event behind the “collapsing”, or unfolding, of prions is. But once a prion protein has unfolded, it sets off a domino effect, causing the other prions within the organism to also collapse. As a result, unfolded proteins accumulate in the brain, causing neurodegeneration and eventually death.

One explanation of why neurons die in response to prions “collapsing” is that cells sense and dislike unfolded proteins, triggering a chain of events called the unfolded protein response. This response stops all protein production in the affected cell until the problem is sorted out. However, the build-up of pathogenic prions is irreversible and happens quickly, so the problem is too big to be solved by halting protein production. In fact, the problem is so big that protein production remains switched off indefinitely, and consequently neurons starve to death (Hetz and Soto 2006).

We have established that prions are integral to some animal genomes and can turn toxic in certain cases, but how can they be infectious too? Parkinson’s and Alzheimer’s are also neurodegenerative diseases caused by the accumulation of an unfolded protein, but they aren’t infectious. The difference is that prions have a mechanism of spreading comparable to viruses or bacteria. One might wonder why one of our own proteins has a trait that allows it to turn into a deadly pathogen. Perhaps this trait allowed proteins to replicate themselves before the existence of DNA and RNA. In other words, it might be a remnant from before the existence of life itself (Ogayar and Sánchez-Pérez 1998).

To wrap things up, prion diseases are a group of deadly neurodegenerative diseases that occur when our very own prion proteins change their molecular structure and accumulate in the brain. What makes prions unique is that once they unfold, they become infectious and can be transmitted between individuals. The study of their biomolecular mechanism has not only equipped us with enough knowledge to prevent potential future epidemics, but also offers an exciting glimpse into some of the secrets of pathogenesis, neurodegenerative disease, evolution and life. Most importantly, we don’t need to worry about the zombies anymore. Let them come, we can take ‘em!

Edited by Jon Fagg & Sophie Waldron


  • Asante, E. A. et al. 2015. A naturally occurring variant of the human prion protein completely prevents prion disease. Nature, 522(7557), pp. 478-481.
  • Collinge, J. et al. 2006. Kuru in the 21st century—an acquired human prion disease with very long incubation periods. The Lancet, 367(9528), pp. 2068-2074. doi: 10.1016/S0140-6736(06)68930-7
  • Collins, S. J. et al. 2004. Transmissible spongiform encephalopathies. The Lancet, 363(9402), pp. 51-61.
  • Hadlow, W. J. 1959. Scrapie and kuru. The Lancet, pp. 289-290.
  • Hetz, C. A. and Soto, C. 2006. Stressing out the ER: a role of the unfolded protein response in prion-related disorders. Current Molecular Medicine, 6(1), pp. 37-43.
  • Lindenbaum, S. 1980. On Fore kinship and Kuru sorcery. American Anthropologist, 82(4), pp. 858-859.
  • Lindenbaum, S. 2015. Kuru Sorcery: Disease and Danger in the New Guinea Highlands. Routledge.
  • Ogayar, A. and Sánchez-Pérez, M. 1998. Prions: an evolutionary perspective. Springer-Verlag Ibérica.
  • Prusiner, S. B. 1982. Novel proteinaceous infectious particles cause scrapie. Science, 216(4542), pp. 136-144. doi: 10.1126/science.6801762
  • Westergard, L. et al. 2007. The cellular prion protein (PrPC): its physiological function and role in disease. Biochimica et Biophysica Acta (BBA) – Molecular Basis of Disease, 1772(6), pp. 629-644.

Can’t or Won’t – An Introduction to Apathy

Megan Jackson | 19 DEC 2018

Often, when a person hears the word apathy, an image comes to mind: a glassy-eyed teenager scrolling vacantly through their phone while their parent looks on in despair. While comical, this image does not reflect what apathy really is: a complex symptom with real clinical significance.

In 1956, a study was published describing a group of Americans released from Chinese prison camps following the Korean War1. As a reaction to the severe stress they had suffered during their imprisonment, the men were observed to be ‘listless’, ‘indifferent’ and ‘lacking emotion’. The scientists called this pattern of behaviours apathy. At this point, however, there was no formal way to measure apathy; it was acknowledged that it could manifest in varying degrees, but that was the extent of it. It was over 30 years before apathy was given a proper definition and recognised as a true clinical construct.

As time went on, scientists noticed that apathy doesn’t just arise in times of extreme stress, like time in a prison camp, but also appears in a variety of clinical disorders. A proper definition and a way of assessing apathy were needed. In 1990, Robert Marin defined apathy as ‘a loss of motivation not attributable to current emotional distress, cognitive impairment, or diminished level of consciousness’. As this is a bit of a mouthful, it was summarised as ‘a measurable reduction in goal-directed behaviour’. This definition makes it easy to imagine an individual who no longer cares about, likes, or wants anything, and therefore does nothing. However, this is not always the case. There are different subtypes of apathy, each involving different brain regions and thought processes. These are:

Cognitive – in which the individual does not have the cognitive ability to put a plan into action. This may be due to disruption to the dorsolateral prefrontal cortex.

Emotional-affective – in which the individual can’t link their behaviour or the behaviour of others with emotions. This may be due to disruption to the orbital-medial prefrontal cortex.

Auto-activation – in which the individual can no longer self-initiate actions. This may be due to disruption to parts of the globus pallidus.

It’s much easier to picture how the different types of apathy affect behaviour. Take Bob, for example. Bob has apathy, yet Bob likes cake. When somebody asks Bob whether he would like cake, he responds with a yes. However, Bob makes no move to go and get it. Bob still likes cake, but he can no longer process how to obtain it. He has cognitive apathy. In another example, Bob may want cake but makes no move to get up and get it. However, if someone told him to, he probably would. This is auto-activation apathy, the most severe and the most common kind. If Bob could no longer associate cake with the feeling of happiness or pleasure, he would have emotional-affective apathy.

So, whatever subtype of apathy Bob has, he doesn’t get his cake. A shame, but this seems a little trivial. Should we really care about apathy? Absolutely! Imagine not being able to get out of your chair and do the things you once loved. Imagine not being able to feel emotions the way you used to. Love, joy, interest, humour – all muted. Think of the impact it would have on your family and friends. It severely diminishes quality of life, and greatly increases caregiver burden. It is extremely common in people with neurodegenerative diseases like dementia2, psychiatric disorders like schizophrenia3, and in people who’ve had a stroke4. It can even occur in otherwise healthy individuals.

Elderly people are particularly at risk, though scientists haven’t yet figured out why. Could it be altered brain chemistry? Inevitable degeneration of important brain areas? One potential explanation is that apathy is caused by disruption to the body clock. Every person has a body clock, a tiny area of the brain called the suprachiasmatic nucleus, which controls the daily rhythms of our entire bodies, such as when we wake up and go to sleep, along with a load of other important physiological processes like hormone release. Disruption to the body clock can cause a whole host of health problems, from diabetes to psychiatric disorders like depression. Elderly people have disrupted daily rhythms compared to young, healthy people, and it is possible that the prevalence of apathy in the elderly is explained by this disrupted body clock. Much more research is needed to find out whether this is indeed the case, and why!

Figuring out how or why apathy develops is a vital step in developing a treatment for it, and it’s important that we do. While apathy is often a symptom rather than a disease by itself, there’s now a greater emphasis on treating neurological disorders symptom by symptom rather than as a whole, because the underlying disease mechanisms are so complex. So, developing a treatment for apathy will benefit a whole host of people, from the elderly population, to people suffering from a wide range of neurological disorders.

Edited by Sam Berry & Chiara Casella


  • Chase, T.N. (2011). Apathy in neuropsychiatric disease: diagnosis, pathophysiology, and treatment. Neurotox Res, 266-78.
  • Chow, T.W. (2009). Apathy symptom profile and behavioral associations in frontotemporal dementia vs. Alzheimer’s disease. Arch Neurol, 66(7), 888-893.
  • Gillette, M.U. (1999). Suprachiasmatic nucleus: the brain’s circadian clock. Recent Prog Horm Res, 54, 33-58.
  • Strassman, H.D. (1956). A prisoner of war syndrome: apathy as a reaction to severe stress. Am J Psychiatry, 112(12), 998-1003.
  • van Dalen, J.W. (2013). Poststroke apathy. Stroke, 44, 851-860.

The Story of Adult Human Neurogenesis or: How I learned to Stop Worrying and Love The Bomb

Dr Niels Haan | 5 DEC 2018

Recently, the debate about adult human neurogenesis seems to be just a dame short of a panto. Do adult humans form new neurons? Oh no, they don’t! Oh yes, they do! There are not many fields where people debate the very existence of the phenomenon they are studying. What do nuclear bombs have to do with it? We’ll come to that later.

What is the big deal?

For many decades, neuroscience dogma was that once the brain was formed after childhood, that was it. All you could do was lose cells. This was first challenged by Joseph Altman in the 1960s, when he showed that new neurons were formed in adult rodents, but his work was largely ignored at the time. A second wave of evidence came along in the late 1980s and 90s, starting in songbirds, followed by the confirmation that adult neurogenesis does take place in rodents.

In the years that followed, it was shown that rodent adult neurogenesis takes place in two main areas of the brain: the wall of the lateral ventricles, and the hippocampus. The real importance lies in the function of these new neurons. In rodents, these cells are involved in things like discriminating between similar memories, spatial navigation, and certain forms of fear and anxiety.

Obviously, the search for adult neurogenesis in humans started pretty much immediately, but decades later we still haven’t really reached a conclusion.

Why is there so much controversy?

To definitively show adult neurogenesis, you need to be able to show that any given neuron was born in the adult animal or human, rather than in the womb or during childhood. This means using a way to show cell division, as the birth of a neuron requires a stem cell to divide and produce at least one daughter cell that ends up becoming a neuron.

In animals, this is straightforward. Cell division requires the copying of a cell’s DNA. You inject a substance that gets built into new DNA, detect this later once the new neuron has matured, and say “this cell was born after the injection”. To test what these cells are used for, we tend to reduce the number of stem cells with chemicals or genetic tricks, and see what effect this has on the animal’s behaviour.

However, injecting chemicals into the brains of humans tends to be frowned upon. Similarly, killing off all their stem cells and doing behavioural tests doesn’t tend to go down well with volunteers. So, we can’t use our standard methods. What we’re left with then is to detect certain proteins that are only found in stem cells or newly born neurons, to show they are present in the adult brain. However, that’s easier said than done.

Although there are proteins that mainly mark things like dividing cells, stem cells, or newly born neurons, those proteins are not necessarily found only in those cells. All these markers have been found time and again in the human hippocampus. However, because they are not always unique to stem cells and newly born neurons, there is endless debate on which proteins – or indeed which combinations of proteins – to look at, and what it means when cells have them.

What is the evidence?

Dozens, if not hundreds of papers have tried to address this question, and I don’t have the space – or the energy – to discuss all of them. Let’s look at some influential and recent studies that made the headlines, to show how radically different some people are thinking about this subject.

One of the most influential studies in the field came from the lab of Jonas Frisen in 2013. They used a clever way to get around the problem of detecting dividing cells. When DNA is copied to make a new cell, a lot of carbon is used. Nuclear bomb testing in the 50s and 60s introduced small amounts of (harmless) radioactive carbon into the atmosphere, and so eventually into the DNA of cells born during that time. The end of atmospheric nuclear testing has led to a slow decline in that radioactive carbon. So, by measuring how much radioactive carbon is in a cell’s DNA, you can determine when the cell was born. Frisen and his group did just this, and showed that people had neurons born throughout their lives in the hippocampus, with about 700 new cells being born per day.
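The bomb-pulse logic amounts to matching a cell’s measured radiocarbon level against the known, slowly declining atmospheric curve. Here is a minimal sketch of that idea in Python; the curve values below are invented placeholders for illustration (real analyses use measured atmospheric Δ14C records), and the function name is hypothetical:

```python
# Bomb-pulse birthdating sketch: find the year whose atmospheric 14C level
# matches the level measured in a cell's genomic DNA.
# NOTE: these Delta-14C values (per mil) are illustrative placeholders,
# not real measurements. The curve falls monotonically after the 1963 test ban.
ATMOSPHERIC_14C = {
    1965: 700.0,
    1975: 350.0,
    1985: 200.0,
    1995: 110.0,
    2005: 60.0,
}

def estimate_birth_year(measured_delta14c: float) -> float:
    """Linearly interpolate the declining curve to estimate when the
    DNA (and hence the cell) was made."""
    years = sorted(ATMOSPHERIC_14C)
    for y0, y1 in zip(years, years[1:]):
        c0, c1 = ATMOSPHERIC_14C[y0], ATMOSPHERIC_14C[y1]
        if c1 <= measured_delta14c <= c0:  # falls within this segment
            frac = (c0 - measured_delta14c) / (c0 - c1)
            return y0 + frac * (y1 - y0)
    raise ValueError("measurement outside tabulated curve")

print(estimate_birth_year(275.0))  # halfway between 1975 and 1985 -> 1980.0
```

A cell whose DNA carries a 14C level halfway between the 1975 and 1985 values is dated to roughly 1980; the real method applies the same matching to sorted neuronal nuclei, which is how the “born throughout life” result was obtained.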

This didn’t convince everyone, though, as shown earlier this year when a widely publicised paper came out in Nature. This group did not do any birthdating of cells, but looked for the characteristic markers of stem cells and immature neurons in brains from people of a wide range of ages. According to them, the only way to reliably detect a newly born neuron is to look for two different markers on the same cell. They could only find one of the markers in adults, so by this measure, they found no new neurons after childhood.

The very next month, a very similar paper came out, using essentially identical methods, and showed the exact opposite. They did find new neurons in brains across a wide range of ages and, when counting them, found very similar rates of neurogenesis to those Frisen had found with his completely different methods.

So, who is right?

That depends on who you ask, and on the question you’re asking (isn’t science fun?). The majority of studies have shown evidence for some neurogenesis in one form or another. How convincing this evidence is comes down to seemingly petty technical arguments, and to the biases of whoever you ask. The biggest questions concern which markers to use to find the cells, as shown by the two studies mentioned above, and nobody agrees on this yet.

Barring some spectacular technical breakthrough that gives us the same sort of tools in humans as we have in animals, this debate will undoubtedly keep going for some years yet. The bigger question, which we haven’t addressed at all yet, is whether these adult-born cells actually do anything in humans. That’s the next big debate to have…

Edited by Lauren Revie, Chiara Casella, and Rae Pass

Reading Without Seeing

Melissa Wright | 13 NOV 2018

When the seeing brain goes blind

In the late 1990s, a blind 63-year-old woman was admitted to a university hospital emergency room. After complaining to co-workers of light-headedness that morning, she had collapsed and become unresponsive. Within 48 hours of her admission, after what was found to be a bilateral occipital stroke, she had recovered with no apparent motor or other neurological problems. It was only when she tried to read that an extraordinary impairment became apparent: despite the damage occurring only in the occipital lobe, which is typically devoted to vision, she had completely and specifically lost the ability to read Braille. Braille is a tactile substitution for written letters, consisting of raised dots that can be felt with the fingertips. Before this, she had been a proficient Braille reader with both hands, which she had used extensively during her university degree and career in radio (Hamilton, Keenan, Catala, & Pascual-Leone, 2000). So what happened?

The Visual Brain

It is estimated that around 50% of the primate cortex is devoted to visual functions (Van Essen, Anderson, & Felleman, 1992), with the primary visual areas located right at the back of the brain within the occipital lobe (also known as the visual cortex). Visual information from the retina first enters the cortex here, in an area named V1. Within V1, this information is organised to reflect the outside world, with neighbouring neurons responding to neighbouring parts of the visual field. This map (called a retinotopic map) is biased towards the central visual field (the most important part!) and is so precise that researchers have even managed to work out which letters a participant is reading, simply by looking at their brain activity (Polimeni, Fischl, Greve, & Wald, 2010). These retinotopic maps are found in most visual areas in some form. As information is passed forward in the brain, the roles of these visual areas become more complex, from motion processing, to face recognition, to visual attention. Even basic visual actions, like finding a friend in a crowd, require a hugely complex chain of processes. With so much of the cortex devoted to processing visual information, what happens when visual input from the retina never arrives? Cases such as the one above, where a person is blind, suggest that the visual cortex is put to use in a whole new way.

Cortical Changes

In sighted individuals, lexical and phonological reading processes activate frontal and parietal-temporal areas (e.g. Rumsey et al., 1997), while touch involves the somatosensory cortex. It was thought that Braille reading activated these areas, causing some reorganisation of the somatosensory cortex. However, as the case above suggests, this does not seem to be the whole story (Burton et al., 2002). Remember, in this instance, the damage was to the occipital lobe, which is normally involved in vision, but since the woman was blind from birth it had never received any visual information. Although you might expect that damage to this area would not be a problem for someone who is blind, it turned out instead to impair abilities associated with language and touch! This seriously went against what scientists had understood about brains and their specialised areas, and had to be investigated.

Neuroimaging, such as functional Magnetic Resonance Imaging (fMRI), allows us to look inside the brain and see what area is activated when a person performs a certain task. Using this technique, researchers have found that in early blind individuals, large portions of the visual cortex are recruited when reading Braille (H. Burton et al., 2002). This activity was less apparent for those who became blind in their later years, though was still present, and it wasn’t there at all for sighted subjects. That the late-blind individuals had less activity in this region seems to show that as we get older and as brain regions become more experienced, they become less adaptable to change. A point to note however – fMRI works by correlating increases in blood oxygen (which suggests an increase in energy demand and therefore neural activity) with a task, such as Braille reading. As any good scientist will tell you, correlation doesn’t equal causation! Perhaps those who cannot see are still somehow ‘visualising’ the characters?
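The correlational logic behind fMRI described above can be illustrated with a toy example: a block-design task regressor (Braille reading on/off) is correlated with a simulated signal that happens to follow the task. All numbers here are invented for illustration, and a real analysis involves far more (haemodynamic modelling, statistics across thousands of voxels):

```python
import numpy as np

# Toy fMRI logic: does this "voxel" track the task?
rng = np.random.default_rng(0)
task = np.tile([0] * 10 + [1] * 10, 5)             # off/on blocks, 100 timepoints
bold = 0.8 * task + rng.normal(0, 0.3, task.size)  # simulated noisy signal that follows the task

# Correlate the task regressor with the signal; a high r is read as
# "this voxel is active during Braille reading".
r = np.corrcoef(task, bold)[0, 1]
print(round(r, 2))  # strongly positive for this simulated voxel
```

A strong correlation like this is exactly what is meant by “activity” in these studies, and exactly why the caveat matters: the correlation alone cannot tell us whether the region causes the behaviour.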

So is there any other evidence that the visual areas can change their primary function? Researchers have found that temporarily disrupting the neural activity at the back of the brain (using a nifty technique called Transcranial Magnetic Stimulation) can impair Braille reading, or even induce tactile sensations on the reading fingertips (e.g. Kupers et al., 2007; Ptito et al., 2008)!

Other fMRI studies have investigated the recruitment of the occipital lobe in non-visual tasks and found it also occurs in a variety of other domains, such as hearing (e.g. Burton, 2003) and working memory (Harold Burton, Sinclair, & Dixit, 2010). This reorganisation seems to have a functional benefit, as researchers have found that the amount of reorganisation during a verbal working memory task is correlated with performance (Amedi, Raz, Pianka, Malach, & Zohary, 2003). It has also been reported that blind individuals can perform better on tasks such as sound localisation (though not quite as well as Marvel’s Daredevil!) (Nilsson & Schenkman, 2016).

But Is It Reorganisation?

This is an awesome example of the brain’s ability to change and adapt, even in areas seemingly devoted to a single modality. How exactly this happens is still unknown, and could fill several reviews on its own! One possibility is that neuronal inputs from other areas grow and invade the occipital lobe, although this is difficult to test non-invasively in humans because we can’t look at individual neurons with an MRI scan. The fact that much more occipital lobe activity is seen in early-blind than late-blind individuals (e.g. H. Burton et al., 2002) suggests that whatever is changing is much more accessible to a developing brain. However, findings show that some reorganisation can still occur in late-blind individuals, and even in sighted individuals who undergo prolonged blindfolding or sensory training (Merabet et al., 2008). This rapid adaptation suggests that the mechanism involved may make use of pre-existing multi-sensory connections that multiply and reinforce following sensory deprivation.

Cases of vision restoration in later life are rare, but one such example came from a humanitarian project in India, which found and helped a person called SK (Mandavilli, 2006). SK was born with aphakia, a rare condition in which the eye develops without a lens. He grew up near-blind, until the age of 29 when project workers gave him corrective lenses. 29 years with nearly no vision! Conventional wisdom said there was no way his visual cortex could have developed properly, having missed the often-cited critical period of early development. Indeed, his acuity (the ability to see detail, tested with those letter charts at the optometrist’s) showed initial improvement after correction, but did not improve further over time, suggesting his visual cortex was not adapting to the new input. However, the researchers also looked at other forms of vision, and there they found exciting improvements. For example, when shown a cow, SK was unable to integrate the patches of black and white into a whole until it moved. After 18 months, he was able to recognise such objects even without movement. While SK had not been completely without visual input (he had still been able to detect light and movement), this suggests that perhaps some parts of the visual cortex are more susceptible to vision restoration. Or perhaps multi-sensory areas, which seem able to reorganise during visual deprivation, are more flexible in regaining vision?

So Much Left to Find Out!

From this whistle-stop tour, the most obvious conclusion is that the brain is amazing and can show huge amounts of plasticity in the face of input deprivation (see the recent report of a boy missing the majority of his visual cortex who can still see well enough to play football and video games; https://tinyurl.com/yboqjzlx). The question of what exactly happens in the brain when it’s deprived of visual input is incredibly broad. Why do those blind in later life have visual hallucinations (see Charles Bonnet Syndrome)? Can we influence this plasticity? What of deaf or deaf-blind individuals? Within my PhD, I am currently investigating how the cortex reacts to another eye-related disease, glaucoma. If you want to read more on this fascinating and broad topic, check out these reviews by Merabet and Pascual (2010), Ricciardi et al. (2014) or Proulx (2013).

Edited by Chiara Casella & Sam Berry


  • Amedi, A., Raz, N., Pianka, P., Malach, R., & Zohary, E. (2003). Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nature Neuroscience, 6(7), 758–766. https://doi.org/10.1038/nn1072
  • Burton, H. (2003). Visual cortex activity in early and late blind people. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 23(10), 4005–4011.
  • Burton, H., Sinclair, R. J., & Dixit, S. (2010). Working memory for vibrotactile frequencies: Comparison of cortical activity in blind and sighted individuals. Human Brain Mapping. https://doi.org/10.1002/hbm.20966
  • Burton, H., Snyder, A. Z., Conturo, T. E., Akbudak, E., Ollinger, J. M., & Raichle, M. E. (2002). Adaptive Changes in Early and Late Blind: A fMRI Study of Braille Reading. Journal of Neurophysiology, 87(1), 589–607. https://doi.org/10.1152/jn.00285.2001
  • Fine, I., Wade, A. R., Brewer, A. A., May, M. G., Goodman, D. F., Boynton, G. M., … MacLeod, D. I. A. (2003). Long-term deprivation affects visual perception and cortex. Nature Neuroscience, 6(9), 915–916. https://doi.org/10.1038/nn1102
  • Hamilton, R., Keenan, J. P., Catala, M., & Pascual-Leone, A. (2000). Alexia for Braille following bilateral occipital stroke in an early blind woman. Neuroreport, 11(2), 237–240.
  • Kupers, R., Pappens, M., de Noordhout, A. M., Schoenen, J., Ptito, M., & Fumal, A. (2007). rTMS of the occipital cortex abolishes Braille reading and repetition priming in blind subjects. Neurology, 68(9), 691–693. https://doi.org/10.1212/01.wnl.0000255958.60530.11
  • Mandavilli, A. (2006). Look and learn: Visual neuroscience. Nature, 441(7091), 271–272. https://doi.org/10.1038/441271a
  • Merabet, L. B., Hamilton, R., Schlaug, G., Swisher, J. D., Kiriakopoulos, E. T., Pitskel, N. B., … Pascual-Leone, A. (2008). Rapid and Reversible Recruitment of Early Visual Cortex for Touch. PLoS ONE, 3(8), e3046. https://doi.org/10.1371/journal.pone.0003046
  • Merabet, L. B., & Pascual-Leone, A. (2010). Neural reorganization following sensory loss: the opportunity of change. Nature Reviews Neuroscience, 11(1), 44–52. https://doi.org/10.1038/nrn2758
  • Nilsson, M. E., & Schenkman, B. N. (2016). Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences. Hearing Research, 332, 223–232. https://doi.org/10.1016/j.heares.2015.09.012
  • Park, H.-J., Lee, J. D., Kim, E. Y., Park, B., Oh, M.-K., Lee, S., & Kim, J.-J. (2009). Morphological alterations in the congenital blind based on the analysis of cortical thickness and surface area. NeuroImage, 47(1), 98–106. https://doi.org/10.1016/j.neuroimage.2009.03.076
  • Polimeni, J. R., Fischl, B., Greve, D. N., & Wald, L. L. (2010). Laminar analysis of 7T BOLD using an imposed spatial activation pattern in human V1. NeuroImage, 52(4), 1334–1346. https://doi.org/10.1016/j.neuroimage.2010.05.005
  • Proulx, M. (2013, February). Blindness: remapping the brain and the restoration of vision. Retrieved 28 March 2018, from http://www.apa.org/science/about/psa/2013/02/blindness.aspx
  • Ptito, M., Fumal, A., de Noordhout, A. M., Schoenen, J., Gjedde, A., & Kupers, R. (2008). TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers. Experimental Brain Research, 184(2), 193–200. https://doi.org/10.1007/s00221-007-1091-0
  • Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neuroscience & Biobehavioral Reviews, 41, 64–77. https://doi.org/10.1016/j.neubiorev.2013.10.006
  • Rumsey, J. M., Horwitz, B., Donohue, B. C., Nace, K., Maisog, J. M., & Andreason, P. (1997). Phonological and orthographic components of word recognition. A PET-rCBF study. Brain: A Journal of Neurology, 120 ( Pt 5), 739–759.
  • Van Essen, D. C., Anderson, C. H., & Felleman, D. J. (1992). Information processing in the primate visual system: an integrated systems perspective. Science (New York, N.Y.), 255(5043), 419–423.

It’s Fright Night!

Lucy Lewis | 31 OCT 2018

Halloween is the time for all things scary, and the best thing for scaring people is to put on a good horror movie, preferably something with creepy dark figures appearing suddenly after a long, tense build up. But what is it about these films that gets us so freaked out?

Fear is an integral part of our survival mechanisms: it is part of the ‘fight or flight’ response that prepares us to combat or escape a potential threat. Deep down, you know the demons from Insidious aren’t real, and the clown from It isn’t going to appear behind the sofa, and yet we jump at their appearance on screen, hide behind pillows when the music goes tense, and even continue thinking about them once the film is finished.


Scene from ‘A Nightmare on Elm Street’ (1984).

This is because horror movies tap into this innate survival mechanism by presenting us with a situation or character that looks like a threat. When we see something potentially threatening, it activates a structure in the brain known as the amygdala, which associates the threat with something to be fearful of (Adolphs, 2013). The amygdala acts as a messenger, informing several other areas of the brain that a threat is present, resulting in multiple physiological changes, including the release of adrenaline (Steimer, 2002), which causes the typical bodily responses we experience during fear, e.g. an increased heart rate.

Even though we know the movie isn’t real, the amygdala takes over our logical reasoning to produce these responses, a phenomenon known as the “amygdala hijack” (Goleman, 1995). This is when the amygdala stimulates these physiological changes to prepare us against the fearful threat before the prefrontal cortex, responsible for executive functions, can assess the situation and regulate our reactions accordingly (Steimer, 2002).

In fact, studies of fear conditioning, in which associations are made between a neutral stimulus, like a clicking sound, and an aversive stimulus, like an electric shock (Gilmartin et al., 2015), have shown that inactivating the medial prefrontal cortex impairs the ability to break this association, suggesting that the prefrontal cortex is necessary for the extinction of a fear memory (Marek et al., 2013).


A schematic diagram of the fear pathway, adapted from a VideoBlocks image.

You’ve probably also noticed that you can remember something that scared you far better than what you had for breakfast. Forming memories of events that cause us fear is important for survival, because it helps us avoid threats in the future. Evidence suggests that the stress hormones released after fearful events contribute to the consolidation of memories. For example, Okuda et al. (2004) gave rats an injection of corticosterone (the major stress hormone in rodents) immediately after a single training session on a task requiring the animals to remember objects presented to them in an arena. Twenty-four hours later, the animals that received the injection showed significantly greater recognition of these objects than controls, suggesting that this stress hormone increased their ability to remember the objects around them. Therefore, the scarier the film, the more we remember it.

So, don’t worry if you are a big scaredy-cat when it comes to horror films: it’s actually natural! It’s just your brain’s response to a threatening situation, and you can’t control it no matter how hard you try. But if you are planning a film night full of ghosts and ghouls this Halloween, go and get yourself a big pillow to hide behind and prepare for that good old amygdala to get into action!

Edited By Lauren Revie & Sophie Waldron


  • Adolphs, R. 2013. The Biology of Fear. Current Biology, 23:R79-R93.
  • Gilmartin, M. R., Balderston, N. L., Helmstetter, F. J. 2015. Prefrontal cortical regulation of fear learning. Trends in Neuroscience, 37:455-464.
  • Goleman, D. 1995. Emotional Intelligence: Why It Can Matter More Than IQ. New York: Bantam Books.
  • Marek, R., Strobel, C., Bredy, T. W., Sah, P. 2013. The amygdala and medial prefrontal cortex: partners in the fear circuit. Journal of Physiology, 591: 2381-2391.
  • McIntyre, C. K. and Roozendaal, B. 2007. Adrenal Stress Hormones and Enhanced Memory for Emotionally Arousing Experiences. In: Bermudez-Rattoni, F. Neural Plasticity and Memory: From Genes to Brain Imaging. Boca Raton (FL):CRC Press/Taylor & Francis, Chapter 13.
  • Okuda, S., Roozendaal, B., McGaugh, J. L. 2004. Glucocorticoid effects on object recognition memory require training-associated emotional arousal. Proceedings of the National Academy of Science, USA, 101:853-858.
  • Steimer, T. 2002. The biology of fear- and anxiety-related behaviors. Dialogues in Clinical Neuroscience, 4:231-249.
  • VideoBlocks, AeMaster still photo from: 3D Brain Rotation with alpha. https://www.videoblocks.com/video/3d-brain-rotation-with-alpha-4u23dfufgik3slexf

Square eyes and a fried brain, or a secret cognitive enhancer – how do video games affect our brain?

Shireene Kalbassi | 3 SEP 2018

If, like me, you spent your childhood surrounded by Gameboys and computer games, you have probably heard warnings from your parents that your eyes will turn square, and that your brain will turn to mush. While we can safely say that we are not suffering from an epidemic of square-eyed youths, it is less clear what gaming is doing to our brain.

In support of worried parents all around the world, there is indeed a disorder associated with gaming. Internet gaming disorder is defined as an addictive behaviour, characterised by an uncontrollable urge to play video games. In 2013, internet gaming disorder was added to the Diagnostic and Statistical Manual of Mental Disorders (DSM), with a footnote saying that more research on the matter is needed (American Psychiatric Association, 2013). Similarly, in 2018, the World Health Organization (WHO) added internet gaming disorder to the section ‘disorders due to addictive behaviours’ (World Health Organization, 2018).

There is evidence to suggest that internet gaming does lead to changes in brain regions associated with addiction. Structurally, individuals diagnosed with internet gaming disorder show an increase in the size of a brain region known as the striatum, a region associated with pleasure, motivation, and drug addiction (Cai et al 2016, Robbins et al 2002). The brains of those with internet gaming disorder also show altered responses to stimuli related to gaming. In one study, two groups of participants were assessed: one with internet gaming addiction, and one without. All the participants with internet gaming disorder were addicted to the popular multiplayer online role-playing game World of Warcraft. The participants were shown a mixture of visual cues, some associated with World of Warcraft and others neutral, while their brains were scanned for activation using fMRI. When shown visual cues relating to gaming, the participants with internet gaming disorder showed increased activation of brain regions associated with drug addiction, including the striatum and the prefrontal cortex. The activation of these brain regions was positively correlated with self-reported ‘craving’: the higher the craving for the game, the higher the level of activation (Ko et al 2009). These studies, among others, suggest that gaming may deserve a place on the list of non-substance-related addictive disorders.

But don’t uninstall your games yet; it is important to note that not everyone who plays computer games will become addicted. And what if there is a brighter side to gaming? What if all those hours of grinding away on World of Warcraft, thrashing your friends on Mario Kart, or chilling on Minecraft might actually benefit you in some way? There is a small but growing body of research suggesting that gaming might be good for your brain.

What we have learnt about how the brain responds to the real world is being applied to how the brain responds to the virtual world. In the famous work of Maguire et al (2000), it was demonstrated that London taxi drivers showed an increased volume of the hippocampus, a region associated with spatial navigation and awareness. This increased volume was attributed to the acquisition of a spatial representation of London. Following on from this, some researchers asked how navigation through a virtual world might impact the hippocampus.

In one of these studies, the researchers investigated how playing Super Mario 64, a game in which you spend a large amount of time running and jumping around a virtual world (sometimes on a giant lizard), impacts the hippocampus. Compared to a group that did not train on Super Mario 64, the group that trained on it for 2 months showed increased volumes of the hippocampus and the prefrontal cortex. As reduced volumes of the hippocampus and the prefrontal cortex are associated with disorders such as post-traumatic stress disorder, schizophrenia and neurodegenerative diseases, the researchers speculate that video game training may have a future in their treatment (Kühn et al 2014). In another study, the impact of training on Super Mario 64 was assessed in older adults, who are particularly at risk of hippocampus-related pathology. The group that trained by playing Super Mario 64 for 6 months showed increased hippocampal volume and improved memory performance compared to participants who did not train (West et al 2017). So, it appears that navigating virtual worlds, as well as the real world, may increase hippocampal volume and may have positive effects on cognition.


A screenshot of Super Mario 64. This game involves exploration of a virtual world. Image taken from Kühn et al 2014.

Maybe it makes sense that the world being explored doesn’t have to be real to have an effect on the hippocampus, and games like Super Mario 64 have plenty to offer in terms of world exploration and navigation. But what about the most notorious of games, the first-person shooter action games? It has been suggested that first-person shooter games can lead to increased aggressive behaviour in those who play them, though researchers disagree about whether this effect actually exists (Markey et al 2015; Greitemeyer et al 2014). Nevertheless, can these action games also have positive effects on the brain’s cognitive abilities? Unlike Super Mario 64, these games require the player to respond quickly to stimuli and rapidly switch between different weapons and devices depending upon the given scenario. Some researchers have investigated how playing action games, such as Call of Duty, Red Dead Redemption, or Counter-Strike, impacts short-term memory. Participants who either did not play action games, casually played action games, or were experienced action gamers were tested on numerous visual attention tasks, involving recall and identification of cues flashed briefly on a screen. The researchers observed that those who played action games showed significantly better encoding of visual information into short-term memory, in proportion to their gaming experience, compared to those who did not (Wilms et al 2013).

In another study, the impact of playing action games on working memory was assessed. Working memory is a cognitive system involved in the active processing of information, unlike short-term memory, which involves the recall of information following a short delay (Baddeley et al 2003). In this study, the researchers compared groups of participants who either did or did not play action games, testing their working memory with a cognitive test known as the “n-back test”. This test involves watching a sequence of squares displayed on a screen in alternating positions. As the test progresses, participants have to remember the positions of the squares from previous trials whilst memorising the squares being shown to them at that moment. The researchers observed that people who played action games outperformed those who did not on this test: they were better able to remember the previous trials whilst simultaneously memorising the current ones (Colzato et al 2013). From these studies, it appears that action games may benefit the cognitive abilities of players, increasing short-term processing of information in those who play them.
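For the curious, the logic of an n-back test is simple enough to sketch in a few lines of code. This is only a toy illustration of the general idea, not the task Colzato et al actually used: the trial generator, the 30% match rate, and the scoring rule are all assumptions made for the example.

```python
import random

def make_nback_trials(n_positions, length, n=2, match_rate=0.3, seed=0):
    """Generate a sequence of square positions for an n-back test.

    Roughly match_rate of trials (after the first n) repeat the position
    shown n trials earlier -- those trials are the "targets".
    """
    rng = random.Random(seed)
    seq = []
    for i in range(length):
        if i >= n and rng.random() < match_rate:
            seq.append(seq[i - n])            # target: repeat position from n back
        else:
            pos = rng.randrange(n_positions)
            while i >= n and pos == seq[i - n]:
                pos = rng.randrange(n_positions)  # avoid accidental targets
            seq.append(pos)
    return seq

def score_nback(seq, responses, n=2):
    """Compare yes/no "match" responses against the true n-back targets.

    responses[i] is True if the participant said "match" on trial i.
    Returns the proportion of scoreable trials answered correctly.
    """
    correct = total = 0
    for i in range(n, len(seq)):
        is_target = seq[i] == seq[i - n]
        correct += (responses[i] == is_target)
        total += 1
    return correct / total

seq = make_nback_trials(n_positions=8, length=20, n=2)
perfect = [i >= 2 and seq[i] == seq[i - 2] for i in range(len(seq))]
print(score_nback(seq, perfect))  # a perfect responder scores 1.0
```

The difficulty comes from the human side, not the computer side: on every trial you must hold the last n positions in mind while updating them, which is exactly the juggling act working memory performs.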

Screen grabs from the first-person shooter games Call of Duty: WW2 (left) and Halo (right). These fast-paced games involve reacting quickly to stimuli and making rapid decisions to bypass enemies and progress in the game.

So, for the worried parents, and for those who enjoy indulging in video games: maybe it’s not all bad. As long as you are not suffering from a form of gaming addiction (and if you think you might be, please see a health expert), all those hours of gaming may not be as bad for your brain as they might seem. But ultimately, much more research is needed to understand how a broader range of games, played across childhood development and over years and decades, affects our brains and mental health.

If you think you may be suffering from a gaming addiction, see the NHS page for more information.

Edited by Lauren Revie & Monika Śledziowska


  • American Psychiatric Association, 2013. Diagnostic and statistical manual of mental disorders (DSM-5®). American Psychiatric Pub.
  • Baddeley, A., 2003. Working memory: looking back and looking forward. Nature Reviews Neuroscience, 4(10), p.829.
  • Cai, C., Yuan, K., Yin, J., Feng, D., Bi, Y., Li, Y., Yu, D., Jin, C., Qin, W. and Tian, J., 2016. Striatum morphometry is associated with cognitive control deficits and symptom severity in internet gaming disorder. Brain Imaging and Behavior, 10(1), pp.12-20.
  • Colzato, L.S., van den Wildenberg, W.P., Zmigrod, S. and Hommel, B., 2013. Action video gaming and cognitive control: playing first person shooter games is associated with improvement in working memory but not action inhibition. Psychological Research, 77(2), pp.234-239.
  • Greitemeyer, T. and Mügge, D.O., 2014. Video games do affect social outcomes: A meta-analytic review of the effects of violent and prosocial video game play. Personality and Social Psychology Bulletin, 40(5), pp.578-589.
  • Ko, C.H., Liu, G.C., Hsiao, S., Yen, J.Y., Yang, M.J., Lin, W.C., Yen, C.F. and Chen, C.S., 2009. Brain activities associated with gaming urge of online gaming addiction. Journal of Psychiatric Research, 43(7), pp.739-747.
  • Kühn, S., Gleich, T., Lorenz, R.C., Lindenberger, U. and Gallinat, J., 2014. Playing Super Mario induces structural brain plasticity: gray matter changes resulting from training with a commercial video game. Molecular Psychiatry, 19(2), p.265.
  • Maguire, E.A., Gadian, D.G., Johnsrude, I.S., Good, C.D., Ashburner, J., Frackowiak, R.S. and Frith, C.D., 2000. Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, 97(8), pp.4398-4403.
  • Markey, P.M., Markey, C.N. and French, J.E., 2015. Violent video games and real-world violence: Rhetoric versus data. Psychology of Popular Media Culture, 4(4), p.277.
  • Robbins, T.W. and Everitt, B.J., 2002. Limbic-striatal memory systems and drug addiction. Neurobiology of Learning and Memory, 78(3), pp.625-636.
  • West, G.L., Zendel, B.R., Konishi, K., Benady-Chorney, J., Bohbot, V.D., Peretz, I. and Belleville, S., 2017. Playing Super Mario 64 increases hippocampal grey matter in older adults. PLoS One, 12(12), p.e0187779.
  • Wilms, I.L., Petersen, A. and Vangkilde, S., 2013. Intensive video gaming improves encoding speed to visual short-term memory in young male adults. Acta Psychologica, 142(1), pp.108-118.
  • World Health Organization [WHO], 2018. ICD-11 beta draft – Mortality and morbidity statistics. Mental, behavioural or neurodevelopmental disorders.

Can’t hear someone at a loud party? Just turn your head!

Josh Stevenson-Hoare | 8 AUG 2018

At one point or another, we have all been in this uncomfortable situation. You are talking to someone at a party when the realisation hits – you can’t understand a word they are saying. You smile and nod along until they get to what sounds like a question. You ask them to repeat it, and you still can’t hear what they say. So you ask again. We all know how it goes from there, and how awkward it gets by the fifth time of asking.

Fortunately, there is a way to avoid this happening.
Researchers at Cardiff University have developed a way to maximise your chances of hearing someone at a party on the first try. You don’t need fancy equipment, or to start learning sign language. All you have to do is turn your head. 

If you are trying to listen to something that is difficult to hear, it makes sense to turn your head 90 degrees away from it. This is the same angle as when someone is whispering in your ear, or when you only have one working earbud. If you do this, one ear will get most of the useful sounds, and the other ear will get only un-useful sounds. You can then ignore anything from the un-useful ear, and focus on the sounds you want – the words.

However, for speech, the sounds made by the person speaking are not the only useful piece of information. Lip-reading is something we can all do a little bit, without training. You may not be able to follow a whole conversation, but what you can do is tell the difference between small sounds. These are called phonemes. 

A phoneme is the smallest bit of sound that has meaning. It’s the difference between bog and bag, or tend and mend. If you miss hearing part of a word you often use the shape of the speaker’s mouth to help you figure out what the word was. This can lead to some interesting illusions, such as the McGurk effect. This is when the same sound paired with different mouth shapes causes you to hear different syllables.

If you want to read someone’s lips while they talk, you have to be able to see their face. Unless you have a very wide field of vision, the best way to lip-read is to face someone.

So, to improve hearing you should face away from someone, and for best lip-reading you should face towards them. Not the easiest things to do at the same time.

To find out which is more useful, the researchers played video clips of Barack Obama speaking to some participants. Over this they played white noise, which sounds like a radio tuned between stations. The participants turned their heads to different angles as instructed by the researchers. The white noise was then made louder and quieter until participants could only just hear what Obama was saying.
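Adjusting the noise “louder and quieter until participants could only just hear” is a classic adaptive staircase procedure from psychophysics. Here is a toy sketch of the idea with a simulated listener; the one-up one-down rule, step size, stopping rule, and listener model are all invented for illustration, not details of the Cardiff study.

```python
def staircase(respond, start_db=0.0, step_db=2.0, n_reversals=6):
    """Toy one-up one-down staircase for a speech-in-noise threshold.

    respond(noise_db) simulates one trial, returning True if the listener
    understood the speech at that noise level. Noise gets louder after a
    correct trial and quieter after an incorrect one, so the level homes
    in on the point where the listener is right about half the time.
    """
    level, last_correct, reversals, track = start_db, None, 0, []
    while reversals < n_reversals:
        correct = respond(level)
        if last_correct is not None and correct != last_correct:
            reversals += 1          # direction flipped: record a reversal
            track.append(level)
        level += step_db if correct else -step_db
        last_correct = correct
    # threshold estimate: mean noise level across the recorded reversals
    return sum(track) / len(track)

# Simulated listener who fails once the noise exceeds 10 dB
print(staircase(lambda db: db < 10.0))  # → 9.0, close to the 10 dB limit
```

The appeal of this design is efficiency: rather than testing every noise level many times, the trials concentrate themselves around each participant’s own threshold.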

Participants found it easier to understand the speech when they pointed their heads away from the speech. The best angle, though, changed depending on where the white noise was coming from. 

You might expect that pointing away from the white noise would be the best strategy. But the researchers found that the best strategy was to point roughly halfway between the speech and the white noise – not so far that participants couldn’t see Obama’s face, but far enough to have one ear pointed at him.

The best angle to face when trying to hear someone in a loud environment. Picture Credit: Josh Stevenson-Hoare.  

In real life, noise comes from lots of different angles at once. It would be very difficult to work out which angle would be best all the time. But if there is one major source of noise, such as a pneumatic drill or a music speaker, it can be useful to remember this technique.

It might look a little silly at first, but it could save you from an embarrassing faux pas next time you go to a party. So now you can hear all the details of Uncle Jimmy’s fishing trip with ease. For the third time this year. You’re welcome.

Edited by Lucy Lewis & Sophie Waldron

Microglia: Guardians of the Brain

Ruth Jones | 9 JUL 2018

Our immune system works hard to protect us from unwanted bugs and to help repair damage to our bodies. In our brains, cells called microglia are the main immune players. Microglia belong to a group of immune cells called macrophages, which translates from Greek as “big eaters”. Much like students faced with free food, macrophages will eat just about anything, or at least anything out of the ordinary. Whether it’s cell debris, dying cells or an unknown entity, macrophages will eat and digest it. Microglia were first described in the 1920s by W. Ford Robertson and Pío del Río-Hortega, who saw that these unidentified cells seemed to act as a rubbish disposal system. Research into these mysterious cells was side-lined during WWII, but was eventually picked up again in the early 1990s.

When I think about how microglia exist in the brain, I am always reminded of the blockade scene in Marvel’s Guardians of the Galaxy. Each “ship” or cell body stays in its own territory while its branches extend out to touch other cells and the surrounding environment. The microglia ships can then use their branches to touch other cells or secrete chemicals to communicate, together monitoring the whole brain.

Nova Corps blockade scene © Guardians of the Galaxy. (2014). [DVD] Directed by J. Gunn. USA: Marvel Studios.

Microglia have evolved into brilliant multi-taskers. They squirt out chemicals called chemokines that affect the brain’s environment by quickly promoting inflammation to help remove damaged cells, before dampening inflammation to protect the brain from further damage. They also encourage the growth of new nerve cells in the brain and can remove old “unused” connections between neurones, just like clearing out your Facebook friend list. Microglia keep the brain functioning smoothly by taking care of and cleaning up after the other cells day in, day out.

Today, microglia are a trending topic, believed to play a part in multiple brain diseases. Genetic studies have linked microglia to both neuropsychiatric disorders like autism and neurodegenerative diseases such as Alzheimer’s disease.

Do microglia help or hurt in Alzheimer’s disease? This is a complicated question: scientists have found that microglia can both speed up and slow down Alzheimer’s disease. It has long been thought that in disease these microglial “ships” are often destroyed, or become too aggressive in their efforts to remove sticky clumps of amyloid-β protein (a big problem in Alzheimer’s disease).

What is going on in the Alzheimer’s disease brain? Recent advances in technology mean scientists are now starting to find answers. One problem when working with microglia is that they have a whole range of personalities, resulting in a spectrum of protein expression and behaviour. Therefore, if you only look at the microglial population as a whole, you may miss smaller, subtle differences between cells. It is likely that microglia don’t start out as “aggressive” or harmful, so how can we see what causes their behaviour to change?

Luckily, a new technique called “single-cell sequencing” has been able to overcome this. An individual cell is placed into a dish, where its mRNA – the instructions to make protein – can be extracted and measured. This means you can compare the variation across the entire microglial population, which could make it easier to find a “trouble-maker” microglia in disease. The process could also be used to see how individual cell types change across the course of Alzheimer’s disease.
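To give a flavour of what spotting a “trouble-maker” cell in expression data means, here is a toy sketch. The cell names, expression values, and the simple z-score rule are all invented for illustration – real single-cell analyses measure thousands of genes per cell and use clustering methods, not a single threshold.

```python
import statistics

# Hypothetical expression table: each row is one microglial cell,
# each column the measured level of one gene.
cells = {
    "cell_1": [5.0, 2.1, 0.9],
    "cell_2": [4.8, 2.0, 1.1],
    "cell_3": [5.2, 1.9, 1.0],
    "cell_4": [9.5, 0.2, 4.0],   # invented "trouble-maker" profile
}

def outlier_cells(expr, threshold=1.4):
    """Flag cells whose expression deviates strongly from the population.

    A cell is flagged if any gene's z-score (distance from the mean in
    standard deviations) exceeds the threshold. The low threshold suits
    this tiny toy dataset, where z-scores are mathematically bounded.
    """
    genes = list(zip(*expr.values()))             # per-gene values across cells
    means = [statistics.mean(g) for g in genes]
    sds = [statistics.stdev(g) for g in genes]
    flagged = []
    for name, profile in expr.items():
        z = [abs(x - m) / s for x, m, s in zip(profile, means, sds)]
        if max(z) > threshold:
            flagged.append(name)
    return flagged

print(outlier_cells(cells))  # → ['cell_4']
```

The point of the exercise is the one the article makes: averaging over the whole population (the column means above) hides cell_4 completely, while per-cell comparison picks it out.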

In the future, by looking in detail at how these individual microglia “guardians” behave, scientists can hopefully begin to unravel some of the mysteries surrounding these fascinating and hugely important cells in both health and across all disease stages.

Edited by Sam Berry & Chiara Casella