Life of Prion

Or, What Links Cannibalism to Mad Cow Disease?

By Simona Zahova

A peculiar group of proteins, prions, has earned a mythical status in sci-fi due to their unorthodox properties and unusual history. These deadly particles often play a villainous role in fiction, appearing in the Jurassic Park franchise and countless zombie stories. Even putting apocalyptic conspiracies aside, prions are one of the wackiest products of nature, with a history so remarkable it needs no embellishment. Fasten your seatbelts, we are going on a journey!

Our story begins in Papua New Guinea, with the Fore tribe. The Fore people engaged in ritualistic funerary cannibalism, which consisted of cooking and eating deceased family members. This tradition was considered necessary for liberating the spirits of the dead. Unfortunately, around the middle of the 20th century, the tribe experienced a mysterious deadly epidemic that threatened to wipe them out of existence. A few thousand deaths were estimated to have taken place between the 50s and the 60s, with those affected exhibiting tremors, mood swings, dementia and uncontrollable bursts of laughter. Collectively, these are symptoms indicative of neurodegeneration, the progressive death of nerve cells. Inevitably, all who contracted the disease died within a year (Lindenbaum 1980). The Fore people called the disease Kuru, after the local word for “tremble”, and believed it was the result of witchery.

Meanwhile, Australian medics sent to investigate the disease reported that it was psychosomatic. In other words, the medics believed that the tribe’s fear of witchcraft had caused mass hysteria that had a real effect on health (Lindenbaum 2015). In the 60s, a team of Australian scientists proposed that the cannibalistic rituals might be spreading a bug that caused the disease. Once the Fore learned about the possible association between cannibalism and Kuru, they abandoned the tradition and rates of the disease fell drastically (Collinge et al. 2006). However, the disease didn’t disappear completely, and the nature of the mysterious pathogen eluded scientific research.

Around the same time, on the other side of the globe, another epidemic was taking place. The neurodegenerative disease “scrapie” (often confused with foot and mouth disease, which is an unrelated viral infection) was killing flocks of sheep in the UK. The affected animals exhibited tremors and itchiness, along with unusual nervous behaviour. The disease appeared to be infectious, yet no microbe had been successfully extracted from any of the diseased cadavers. A member of the Agricultural Research Council tentatively noted that there were a few parallels between “scrapie” and Kuru (Hadlow 1959). For one, they were the only known infectious neurodegenerative diseases. More importantly, both were caused by an unknown pathogen that eluded the normal methods of studying infectious diseases. However, due to the distance in geography and species between the two epidemics, this suggestion didn’t make much of a splash at the time.

The identity of this puzzling pathogen remained unknown until 1982, when Stanley Prusiner published an extensive study on the brains of “scrapie”-infected animals. It turned out that the culprit behind this grim disease wasn’t a virus, a bacterium, or any other known life form (Prusiner 1982). The pathogen consisted primarily of protein, but didn’t have any DNA or RNA, which are considered a basic requirement for life. To the dismay of the science community, Prusiner proposed that the scrapie “bug” was a new form of protein-based pathogen and coined the term “prion”, short for “proteinaceous infectious particle”. He also suggested that prions might be the cause not only of scrapie, but also of other diseases associated with neurodegeneration, like Alzheimer’s and Parkinson’s. Prusiner was wrong about the latter two, but was right to think the association with “scrapie” would not be the last we heard of prions. Eventually, the prion protein was confirmed to also be the cause of Kuru and a few similar diseases, like “mad cow” and Creutzfeldt-Jakob disease (Collins et al. 2004).

Even more curiously, susceptibility to prion diseases was observed to vary between individuals, leading to speculation that there might be a genetic component as well. The mechanism behind this property of the pathogen remained a mystery until the 90s. Once advances in biotechnology allowed genomes to be studied in detail, scientists demonstrated that the prion protein is actually encoded in the genomes of many animals, including humans, and is expressed in the brain. The normal function of prions is still unclear, but some studies suggest they may play a role in protecting neurons from damage in adverse situations (Westergard et al. 2007).

How does a protein encoded in our own DNA for a beneficial purpose act as an infectious pathogen? Most simply put, the toxicity and infectiousness only occur if the molecular structure of the prion changes its shape (referred to as misfolding, or “unfolding”, in biological terms). This is where heritability plays a part. Due to genetic variation, one protein can have multiple different versions within a population. The different versions of the prion protein have the same function, but their molecular architecture is slightly different.

Imagine that the different versions of prion proteins are like slightly different architectural designs of the same house. Some versions might have more weight-bearing columns than others. Now let’s say that an earthquake hits nearby. The houses with the extra weight-bearing columns are more likely to survive the disaster, while the other houses are more likely to collapse.

What can we take away from this analogy? A person’s susceptibility to prion diseases depends on whether they have inherited a more or less stable version of the prion protein from their parents. In this case, the weight-bearing column is a chemical bond that slightly changes the molecular architecture of the prion, making it more stable. Different prion diseases like Kuru and “scrapie” are caused by slightly different unstable versions of the prion protein, and their symptoms and methods of transmission also differ.

Remarkably, a study on the Fore people from 2015 discovered that some members of the tribe carry a novel variant of the prion protein that gives them complete resistance to Kuru (Asante et al. 2015). Think of it this way: if people inherit houses of differing stability, then some members of the Fore tribe have inherited indestructible bunkers. Evolution at its finest! It isn’t quite clear what triggers the “collapse”, or unfolding, of prions. However, once a prion protein has unfolded, it sets off a domino effect, causing the other prions within the organism to also collapse. As a result, masses of unfolded protein accumulate in the brain, which causes neurodegeneration and eventually death.

One explanation of why neurons die in response to prions “collapsing” is that cells sense and dislike unfolded proteins, triggering a chain of events called the unfolded protein response. This response stops all protein production in the affected cells until the problem is sorted out. However, the build-up of pathogenic prions is irreversible and happens quite quickly, so the problem is too big to be solved by stopping protein production. In fact, it is a problem so big that protein production remains switched off indefinitely, and consequently the neurons starve to death (Hetz and Soto 2006).

We have established that prions are integral to some animal genomes and can turn toxic in certain cases, but how can they be infectious too? Parkinson’s and Alzheimer’s are also neurodegenerative diseases caused by the accumulation of a misfolded protein, but they aren’t infectious. The difference is that prions have a mechanism of spreading comparable to viruses or bacteria. One might wonder why one of our own proteins has a trait that allows it to turn into a deadly pathogen. Perhaps this trait allowed proteins to replicate themselves before the existence of DNA and RNA. In other words, it might be a remnant from before the existence of life itself (Ogayar and Sánchez-Pérez 1998).

To wrap things up, prion diseases are a group of deadly neurodegenerative diseases that occur when our very own prion proteins change their molecular structure and accumulate in the brain. What makes prions unique is that once they unfold, they become infectious and can be transmitted between individuals. The study of their biomolecular mechanism has not only equipped us with enough knowledge to prevent potential future epidemics, but also offers an exciting glimpse into some of the secrets of pathogenesis, neurodegenerative diseases, evolution and life. Most importantly, we don’t need to worry about the zombies anymore. Let them come, we can take ‘em!

Edited by Jon & Sophie

References:

Asante, E. A. et al. 2015. A naturally occurring variant of the human prion protein completely prevents prion disease. Nature 522(7557), pp. 478-481.

Collinge, J. et al. 2006. Kuru in the 21st century—an acquired human prion disease with very long incubation periods. The Lancet 367(9528), pp. 2068-2074. doi: https://doi.org/10.1016/S0140-6736(06)68930-7

Collins, S. J. et al. 2004. Transmissible spongiform encephalopathies. The Lancet 363(9402), pp. 51-61.

Hadlow, W. J. 1959. Scrapie and kuru. The Lancet, pp. 289-290.

Hetz, C. A. and Soto, C. 2006. Stressing out the ER: a role of the unfolded protein response in prion-related disorders. Current Molecular Medicine 6(1), pp. 37-43.

Lindenbaum, S. 1980. On Fore Kinship and Kuru Sorcery. American Anthropologist 82(4), pp. 858-859.

Lindenbaum, S. 2015. Kuru sorcery: disease and danger in the New Guinea highlands. Routledge.

Ogayar, A. and Sánchez-Pérez, M. 1998. Prions: an evolutionary perspective. Springer-Verlag Ibérica.

Prusiner, S. B. 1982. Novel proteinaceous infectious particles cause scrapie. Science 216(4542), pp. 136-144. doi: 10.1126/science.6801762

Westergard, L. et al. 2007. The cellular prion protein (PrPC): its physiological function and role in disease. Biochimica et Biophysica Acta (BBA) - Molecular Basis of Disease 1772(6), pp. 629-644.

The Story of Adult Human Neurogenesis or: How I learned to Stop Worrying and Love The Bomb

By Dr Niels Haan

Recently, the debate about adult human neurogenesis seems to be just a dame short of a panto. Do adult humans form new neurons? Oh no, they don’t! Oh yes, they do! There are not many fields where people debate the very existence of the phenomenon they are studying. What do nuclear bombs have to do with it? We’ll come to that later.

What is the big deal?

For many decades, neuroscience dogma held that once the brain was formed after childhood, that was it. All you could do was lose cells. This was first challenged by Joseph Altman in the 1960s, when he showed that new neurons were formed in adult rodents, but his work was largely ignored at the time. A second wave of evidence came along in the late 1980s and 90s, first in songbirds, and later with the confirmation that adult neurogenesis does take place in rodents.

In the years that followed, it was shown that rodent adult neurogenesis takes place in two main areas of the brain: the walls of the lateral ventricles, and the hippocampus. The real importance lies in the function of these new neurons. In rodents, these cells are involved in things like discrimination of similar memories, spatial navigation, and certain forms of fear and anxiety.

Obviously, the search for adult neurogenesis in humans started pretty much immediately, but decades later we still haven’t really reached a conclusion.

Why is there so much controversy?

To definitively show adult neurogenesis, you need to be able to show that any given neuron was born in the adult animal or human, rather than in the womb or during childhood. This means using a way to show cell division, as the birth of a neuron requires a stem cell to divide and produce at least one daughter cell that ends up becoming a neuron.

In animals, this is straightforward. Cell division requires the copying of a cell’s DNA. You inject a substance that gets built into new DNA, detect it later once the new neuron has matured, and say “this cell was born after the injection”. To test what these cells are used for, we tend to reduce the number of stem cells with chemicals or genetic tricks, and see what effect this has on the animal’s behaviour.

However, injecting chemicals into the brains of humans tends to be frowned upon. Similarly, killing off all their stem cells and doing behavioural tests doesn’t tend to go down well with volunteers. So, we can’t use our standard methods. What we’re left with then is to detect certain proteins that are only found in stem cells or newly born neurons, to show they are present in the adult brain. However, that’s easier said than done.

Although there are proteins that mainly mark things like dividing cells, stem cells, or newly born neurons, they are not necessarily found only in those cells. All these markers have been found time and again in the human hippocampus. However, because they are not always unique to stem cells and newly born neurons, there is endless debate on which proteins – or indeed which combinations of proteins – to look at, and what it means when cells have them.

What is the evidence?

Dozens, if not hundreds, of papers have tried to address this question, and I don’t have the space – or the energy – to discuss all of them. Let’s look at some influential and recent studies that made the headlines, to show how radically differently some people think about this subject.

One of the most influential studies in the field came from the lab of Jonas Frisen in 2013. They used a clever way to get around the problem of detecting dividing cells. When DNA is copied to make a new cell, a lot of carbon is used. Nuclear bomb testing in the 50s and 60s introduced small amounts of (harmless) radioactive carbon into the atmosphere, and so eventually into the DNA of cells born during that time. The end of nuclear testing has led to a slow decline of that radioactive carbon. So, by measuring how much radioactive carbon is in a cell’s DNA, you can determine when it was born. Frisen and his group did just this, and showed that people have neurons born in their hippocampus throughout their lives, at a rate of about 700 new cells per day.
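
To make the logic of this “bomb pulse” birthdating concrete, here is a minimal sketch in Python. The atmospheric carbon-14 values, the measured level and the function name are illustrative placeholders of my own, not figures from the Frisen study; real analyses use calibrated atmospheric records and correct for the lag between atmosphere, diet and DNA.

```python
# Illustrative sketch of "bomb pulse" birthdating, the idea behind the
# Frisen lab's approach. All numbers below are rough placeholders.
import numpy as np

# Assumed atmospheric carbon-14 levels (per mil) after the 1963 peak,
# declining once above-ground nuclear testing stopped.
years = np.array([1963, 1970, 1980, 1990, 2000, 2010])
d14c  = np.array([ 800,  550,  270,  150,   90,   40])

def estimate_birth_year(measured_d14c):
    """Invert the declining limb of the curve: genomic DNA locks in the
    atmospheric carbon-14 level of the year the cell was born."""
    # np.interp needs an increasing x-axis, so flip the declining curve.
    return float(np.interp(measured_d14c, d14c[::-1], years[::-1]))

# A neuron whose DNA shows a carbon-14 level of ~200 per mil would, under
# these toy numbers, have been born in the mid-1980s, well after the
# donor's childhood: the signature of adult neurogenesis.
print(round(estimate_birth_year(200)))  # -> 1986
```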

This didn’t convince everyone though, as was shown earlier this year when a widely publicised paper came out in Nature. This group did not do any birthdating of cells, but looked for the characteristic markers of stem cells and immature neurons in brains from people of a wide range of ages. According to them, the only way to reliably detect a newly born neuron is to look for two different markers on the same cell. They could only find one of the markers in adults, so by this measure, they found no new neurons after childhood.

The very next month, a very similar paper came out, using essentially identical methods, and showed the exact opposite. They did find new neurons in brains across a wide range of ages, and when counting them, found rates of neurogenesis very similar to those Frisen had found with his completely different methods.

So, who is right?

That depends on who you ask, and it depends on the question you’re asking (isn’t science fun?). The majority of studies have shown evidence for some neurogenesis in one form or another. How convincing this evidence is comes down to seemingly petty technical arguments, and to the biases of whoever you’re asking. The biggest questions are about which markers to use to find the cells, as shown by the two studies mentioned above, and nobody agrees on this yet.

Barring some spectacular technical breakthrough that gives us the same sorts of tools in humans as we have in animals, this debate will undoubtedly keep going for some years yet. The bigger question, which we haven’t addressed at all yet, is whether these adult-born cells actually do anything in humans. That’s the next big debate to have…

Edited by Lauren, Chiara, and Rae

Reading Without Seeing

By Melissa Wright

When the seeing brain goes blind

In the late 90s, a blind 63-year-old woman was admitted to a university hospital emergency room. After complaining to co-workers of light-headedness that morning, she had collapsed and become unresponsive. Within the 48 hours following her admission, after what was found to be a bilateral occipital stroke, she recovered with no apparent motor or neurological problems. It was only when she tried to read that an extraordinary impairment became apparent: despite the damage occurring only in the occipital lobe, which is typically devoted to vision, she had completely and specifically lost the ability to read Braille. Braille is a tactile substitution for written letters, consisting of raised dots that can be felt with the fingertips. Before this, she had been a proficient Braille reader with both hands, a skill she had used extensively during her university degree and career in radio (Hamilton, Keenan, Catala, & Pascual-Leone, 2000). So what happened?

The Visual Brain

It is estimated that around 50% of the primate cortex is devoted to visual functions (Van Essen, Anderson, & Felleman, 1992), with the primary visual areas located right at the back of the brain within the occipital lobe (also known as the visual cortex). Visual information from the retina first enters the cortex here, in an area named V1. Within V1, this information is organised to reflect the outside world, with neighbouring neurons responding to neighbouring parts of the visual field. This map (called a retinotopic map) is biased towards the central visual field (the most important part!) and is so accurate that researchers have even managed to work out which letters a participant is reading, simply by looking at their brain activity (Polimeni, Fischl, Greve, & Wald, 2010). These retinotopic maps are found in most visual areas in some form. As information is passed forward in the brain, the role of these visual areas becomes more complex, from motion processing, to face recognition, to visual attention. Even basic visual actions, like finding a friend in a crowd, require a hugely complex chain of processes. With so much of the cortex devoted to processing visual information, what happens when visual input from the retina never occurs? Cases such as the one above, where a person is blind, suggest that the visual cortex is put to use in a whole new way.

Cortical Changes

In sighted individuals, lexical and phonological reading processes activate frontal and parietal-temporal areas (e.g. Rumsey et al., 1997), while touch involves the somatosensory cortex. It was thought that Braille reading activated these areas, causing some reorganisation of the somatosensory cortex. However, as the case above suggests, this does not seem to be the whole story (Burton et al., 2002). Remember, in this instance, the unfortunate lady had damage to the occipital lobe, which is normally involved in vision, but as she was blind from birth it had never received any visual information. Although you might expect that damage to this area would not be a problem for someone who is blind, it turned out instead to impair abilities associated with language and touch! This seriously went against what scientists had understood about brains and their specialised areas, and had to be investigated.

Neuroimaging, such as functional Magnetic Resonance Imaging (fMRI), allows us to look inside the brain and see which areas are activated when a person performs a certain task. Using this technique, researchers have found that in early blind individuals, large portions of the visual cortex are recruited when reading Braille (H. Burton et al., 2002). This activity was less apparent, though still present, in those who became blind in their later years, and it wasn’t there at all in sighted subjects. That late-blind individuals had less activity in this region seems to show that as we get older and brain regions become more experienced, they become less adaptable to change. A point to note, however – fMRI works by correlating increases in blood oxygen (which suggest an increase in energy demand and therefore neural activity) with a task, such as Braille reading. As any good scientist will tell you, correlation doesn’t equal causation! Perhaps those who cannot see are still somehow ‘visualising’ the characters?

So is there any other evidence that the visual areas can change their primary function? Researchers have found that temporarily disrupting the neural activity at the back of the brain (using a nifty technique called Transcranial Magnetic Stimulation) can impair Braille reading, or even induce tactile sensations on the reading fingertips (e.g. Kupers et al., 2007; Ptito et al., 2008)!

Other fMRI studies have investigated the recruitment of the occipital lobe in non-visual tasks and found it also occurs in a variety of other domains, such as hearing (e.g. Burton, 2003) and working memory (H. Burton, Sinclair, & Dixit, 2010). This reorganisation seems to have a functional benefit, as researchers have found that the amount of reorganisation during a verbal working memory task is correlated with performance (Amedi, Raz, Pianka, Malach, & Zohary, 2003). It has also been reported that blind individuals can perform better on tasks such as sound localisation (though not quite as well as Marvel’s Daredevil!) (Nilsson & Schenkman, 2016).

But Is It Reorganisation?

This is an awesome example of the ability of the brain to change and adapt, even in areas that seem so devoted to one modality. How exactly this happens is still unknown, and could fill several reviews on its own! One possibility is that neuronal inputs from other areas grow and invade the occipital lobe, although this is difficult to test non-invasively in humans because we can’t look at individual neurons with an MRI scan. The fact that much more occipital lobe activity is seen in early-blind than late-blind individuals (e.g. H. Burton et al., 2002) suggests that whatever is changing is much more accessible to a developing brain. However, findings show that some reorganisation can still occur in late-blind individuals, and even in sighted individuals who undergo prolonged blindfolding or sensory training (Merabet et al., 2008). This rapid adaptation suggests that the mechanism involved may make use of pre-existing multi-sensory connections that multiply and strengthen following sensory deprivation.

Cases of vision restoration in later life are rare, but one such example came from a humanitarian project in India, which found and helped a person called SK (Mandavilli, 2006). SK was born with aphakia, a rare condition in which the eye develops without a lens. He grew up near blind, until the age of 29, when project workers gave him corrective lenses. 29 years with nearly no vision! Conventional wisdom said there was no way his visual cortex could have developed properly, having missed the often-cited critical period that occurs during early development. Indeed, his acuity (the ability to see detail, tested with those letter charts at the optometrist’s) showed initial improvement after correction, but this did not improve over time, suggesting his visual cortex was not adapting to the new input. However, the researchers also looked at other forms of vision, and there they found exciting improvements. For example, when shown a cow, he was unable to integrate the patches of black and white into a whole until it moved. After 18 months, he was able to recognise such objects even without movement. While SK had not been completely without visual input (he had still been able to detect light and movement), this suggests that perhaps some parts of the visual cortex are more susceptible to vision restoration. Or perhaps multi-sensory areas, which seem able to reorganise during visual deprivation, are more flexible in regaining vision?

So Much Left to Find Out!

From this whistle-stop tour, the most obvious conclusion is that the brain is amazing and can show huge amounts of plasticity in the face of input deprivation (see the recent report of a boy missing the majority of his visual cortex who can still see well enough to play football and video games; https://tinyurl.com/yboqjzlx). The question of what exactly happens in the brain when it’s deprived of visual input is incredibly broad. Why do those blind in later life have visual hallucinations (see Charles Bonnet Syndrome)? Can we influence this plasticity? What of deaf or deaf-blind individuals? Within my PhD, I am currently investigating how the cortex reacts to another eye-related disease, glaucoma. If you want to read more on this fascinating and broad topic, check out these reviews by Merabet and Pascual (2010), Ricciardi et al. (2014) or Proulx (2013).

Edited by Chiara & Sam

References:

Amedi, A., Raz, N., Pianka, P., Malach, R., & Zohary, E. (2003). Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nature Neuroscience, 6(7), 758–766. https://doi.org/10.1038/nn1072

Burton, H. (2003). Visual cortex activity in early and late blind people. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 23(10), 4005–4011.

Burton, H., Sinclair, R. J., & Dixit, S. (2010). Working memory for vibrotactile frequencies: Comparison of cortical activity in blind and sighted individuals. Human Brain Mapping. https://doi.org/10.1002/hbm.20966

Burton, H., Snyder, A. Z., Conturo, T. E., Akbudak, E., Ollinger, J. M., & Raichle, M. E. (2002). Adaptive Changes in Early and Late Blind: A fMRI Study of Braille Reading. Journal of Neurophysiology, 87(1), 589–607. https://doi.org/10.1152/jn.00285.2001

Fine, I., Wade, A. R., Brewer, A. A., May, M. G., Goodman, D. F., Boynton, G. M., … MacLeod, D. I. A. (2003). Long-term deprivation affects visual perception and cortex. Nature Neuroscience, 6(9), 915–916. https://doi.org/10.1038/nn1102

Hamilton, R., Keenan, J. P., Catala, M., & Pascual-Leone, A. (2000). Alexia for Braille following bilateral occipital stroke in an early blind woman. Neuroreport, 11(2), 237–240.

Kupers, R., Pappens, M., de Noordhout, A. M., Schoenen, J., Ptito, M., & Fumal, A. (2007). rTMS of the occipital cortex abolishes Braille reading and repetition priming in blind subjects. Neurology, 68(9), 691–693. https://doi.org/10.1212/01.wnl.0000255958.60530.11

Mandavilli, A. (2006). Look and learn: Visual neuroscience. Nature, 441(7091), 271–272. https://doi.org/10.1038/441271a

Merabet, L. B., Hamilton, R., Schlaug, G., Swisher, J. D., Kiriakopoulos, E. T., Pitskel, N. B., … Pascual-Leone, A. (2008). Rapid and Reversible Recruitment of Early Visual Cortex for Touch. PLoS ONE, 3(8), e3046. https://doi.org/10.1371/journal.pone.0003046

Merabet, L. B., & Pascual-Leone, A. (2010). Neural reorganization following sensory loss: the opportunity of change. Nature Reviews Neuroscience, 11(1), 44–52. https://doi.org/10.1038/nrn2758

Nilsson, M. E., & Schenkman, B. N. (2016). Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences. Hearing Research, 332, 223–232. https://doi.org/10.1016/j.heares.2015.09.012

Park, H.-J., Lee, J. D., Kim, E. Y., Park, B., Oh, M.-K., Lee, S., & Kim, J.-J. (2009). Morphological alterations in the congenital blind based on the analysis of cortical thickness and surface area. NeuroImage, 47(1), 98–106. https://doi.org/10.1016/j.neuroimage.2009.03.076

Polimeni, J. R., Fischl, B., Greve, D. N., & Wald, L. L. (2010). Laminar analysis of 7T BOLD using an imposed spatial activation pattern in human V1. NeuroImage, 52(4), 1334–1346. https://doi.org/10.1016/j.neuroimage.2010.05.005

Proulx, M. (2013, February). Blindness: remapping the brain and the restoration of vision. Retrieved 28 March 2018, from http://www.apa.org/science/about/psa/2013/02/blindness.aspx

Ptito, M., Fumal, A., de Noordhout, A. M., Schoenen, J., Gjedde, A., & Kupers, R. (2008). TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers. Experimental Brain Research, 184(2), 193–200. https://doi.org/10.1007/s00221-007-1091-0

Ricciardi, E., Bonino, D., Pellegrini, S., & Pietrini, P. (2014). Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neuroscience & Biobehavioral Reviews, 41, 64–77. https://doi.org/10.1016/j.neubiorev.2013.10.006

Rumsey, J. M., Horwitz, B., Donohue, B. C., Nace, K., Maisog, J. M., & Andreason, P. (1997). Phonological and orthographic components of word recognition. A PET-rCBF study. Brain: A Journal of Neurology, 120 ( Pt 5), 739–759.

Van Essen, D. C., Anderson, C. H., & Felleman, D. J. (1992). Information processing in the primate visual system: an integrated systems perspective. Science (New York, N.Y.), 255(5043), 419–423.

Can we solve problems in our sleep?

By Sam Berry

Have you heard the song “Scrambled Eggs”? You know:

“Scrambled eggs. Oh my baby how I love your legs.”

No? Perhaps you would recognize the tune.

A young Paul McCartney woke up one morning with an amazing melody in his head. He sat at the piano by his bed and played it out, and he liked it so much he couldn’t quite believe it had come to him in a dream. The tune was there, but he just couldn’t find the right words to fit. For several months he tried, but he couldn’t get past “Scrambled Eggs” as a working title.

So how did the famous Beatle complete his masterpiece? He did some more sleeping. Another fateful day, he woke up and the song was there, fully formed with lyrics and the now famous title “Yesterday.”

“Yesterday, all my troubles seemed so far away.”

Recognise it now? A critically acclaimed worldwide hit had formed itself in his sleep. Boom. A chart smashing phenomenon.

—— —– —– —– —— ——

It may seem obvious, but not sleeping is extremely bad for you. Symptoms of sleep deprivation include a marked decline in the ability to concentrate, learn, and retain new information. It can also affect your emotions and self-control, and cause visual and auditory hallucinations.

Whether not sleeping at all would actually kill you has not yet been established. The record time for someone staying awake is 11 days and 25 minutes, set during a science experiment in 1964. The subject was kept awake by two ‘friends’ as they observed him become a drooling, delusional mess. Yet there are plenty of studies that demonstrate serious detrimental health effects of both short- and long-term sleep deprivation.

Being mentally and physically alert will certainly help you to solve problems, but many scientists think something much more interesting is going on during sleep. Your brain is still learning whilst you are snoring.  

You are only coming through in waves…

Okay, so do we know how sleep can help us to learn? We’re getting there. Using brain imaging technology like fMRI scanners (giant magnets that use blood flow changes to see how different parts of the brain react to things) and EEG (funky hats with electrodes that measure how our neurons are firing in real time), we can have a look at what the brain is doing while we’re dozing off.

Our brains remain active while we sleep. Sleep can be split into different stages, and what happens during these stages is important for memory and learning. Broadly speaking, your sleep is split into non-REM (Stage 1, Stage 2, and Slow Wave) and REM (Rapid Eye Movement) stages. These are traditionally separated based on the pattern of electrical activity the EEG is showing. I’ll briefly take you through what these different stages are and how our neural activity changes as we go through them:

Stage One sleep is when we start to doze off and have our eyes closed. Have you ever noticed a grandparent falling asleep in their chair, but when you ask them to stop snoring they wake up insisting they were never asleep in the first place? That’s stage one sleep; you can be in it without even knowing.

Stage Two is still a light sleep, but when brain activity is viewed using EEG you can see short bursts of activity known as sleep spindles.

Slow Wave Sleep is so called because in this stage neurons across the brain activate together in unison, creating a slow, large, coordinated electrical pattern that makes the EEG output look like a wave. Slow wave sleep also contains some of Stage Two’s sleep spindles, as well as something called sharp wave ripples. This is where a brain area called the Hippocampus (involved in memory and navigation) sends bursts of information to the Neocortex (involved in our senses, motor movement, language, and planning, to name a few).

REM sleep is when our bodies are paralysed but our eyes dart around. Our blood pressure fluctuates and blood flow to the brain increases. While we dream throughout sleep, our dreams during REM become vivid and our brain activity looks similar to when we’re awake.

We cycle through these stages in 90-120 minute intervals throughout the night, with REM periods growing longer as the night progresses. Disruptions to the sleep cycle are associated with decreases in problem-solving ability, as well as with psychiatric and neurodegenerative disorders like Alzheimer’s.

Spikey learning

Problem solving requires memory: you need to use information you already have and apply it to the problem at hand. You also need to remember what you tried before so that you don’t keep making the same mistakes (like singing “Scrambled Eggs” over the same tune forever). The stages of sleep most relevant to helping us keep hold of our memories are the non-REM ones, and in particular Slow Wave Sleep.

Recent research reveals that sleep spindles, slow waves, and sharp wave ripples work together: when a slow wave is at its peak, the brain cells are all excited, creating the perfect environment for sleep spindles to occur, and when the wave is crashing down, the sharp wave ripples from the Hippocampus are more likely to fire off to the Neocortex. This coupling of spindles and slow waves is associated with how well you retain memories overnight. Interestingly, in older adults spindles can fire prematurely, before the wave reaches its peak, suggesting a possible reason why memory gets worse with age.

Researchers say this pattern of brain activity is a sign of the brain consolidating, or crystallising, what was learned or experienced whilst awake. This process strengthens the neural connections of the brain. Studies show that the pattern of neurons that gets excited when we learn something is reactivated during sleep. This could mean that during sleep our brains replay experiences and strengthen newly formed connections.

Getting freaky

So what do our dreams mean? We’ve all had bizarre ones—how about that common dream where all your teeth fall out?

During REM sleep, our brain activity looks similar to when we’re awake. Scientist Deirdre Barrett has suggested we think of REM sleep as merely a different kind of thinking, one that uses less input from the outside world or from the frontal parts of our brain in charge of logical thinking. REM is thought to be involved in consolidating our emotional memories, but it is also when we tend to have the vivid visual dreams that may defy logic. This combination enables REM “thinking” to be creative or even weird. REM sleep may allow us to form connections between ideas that are only distantly related.

Recently, a team in Germany suggested that Non-REM sleep helps put together what we know while REM breaks it up and puts it back together in new ways.

Thoughts before bed

So “sleeping on it” really can help solve problems. It strengthens the memories you make during the day and helps you learn and see things more clearly when you wake up. REM sleep may also allow thinking to be unconstrained by logic, dividing and reshaping ideas in new ways. If reading this article made you sleepy, go ahead and take a nap. You might learn something.

Edited by Becca Loux. Becca is a guest editor for Brain Domain and an avid fan of science, technology, literature, art and sunshine–something she appreciates more than ever now living in Wales. She is studying data journalism and digital visualisation techniques and building a career in unbiased, direct journalism.

References:

Barrett, D. (2017). Dreams and creative problem-solving: Dreams and creative problem-solving. Annals of the New York Academy of Sciences, 1406(1), 64–67. https://doi.org/10.1111/nyas.13412

Carskadon, M. A., & Dement, W. C. (2005). Normal human sleep: an overview. Principles and Practice of Sleep Medicine, 4, 13–23.

Chambers, A. M. (2017). The role of sleep in cognitive processing: focusing on memory consolidation: The role of sleep in cognitive processing. Wiley Interdisciplinary Reviews: Cognitive Science, 8(3), e1433. https://doi.org/10.1002/wcs.1433

Haus, E. L., & Smolensky, M. H. (2013). Shift work and cancer risk: Potential mechanistic roles of circadian disruption, light at night, and sleep deprivation. Sleep Medicine Reviews, 17(4), 273–284. https://doi.org/10.1016/j.smrv.2012.08.003

Helfrich, R. F., Mander, B. A., Jagust, W. J., Knight, R. T., & Walker, M. P. (2018). Old Brains Come Uncoupled in Sleep: Slow Wave-Spindle Synchrony, Brain Atrophy, and Forgetting. Neuron, 97(1), 221–230.e4. https://doi.org/10.1016/j.neuron.2017.11.020

Klinzing, J. G., Mölle, M., Weber, F., Supp, G., Hipp, J. F., Engel, A. K., & Born, J. (2016). Spindle activity phase-locked to sleep slow oscillations. NeuroImage, 134, 607–616. https://doi.org/10.1016/j.neuroimage.2016.04.031

Landmann, N., Kuhn, M., Maier, J.-G., Spiegelhalder, K., Baglioni, C., Frase, L., … Nissen, C. (2015). REM sleep and memory reorganization: Potential relevance for psychiatry and psychotherapy. Neurobiology of Learning and Memory, 122, 28–40. https://doi.org/10.1016/j.nlm.2015.01.004

Lewis, P. A., & Durrant, S. J. (2011). Overlapping memory replay during sleep builds cognitive schemata. Trends in Cognitive Sciences, 15(8), 343–351. https://doi.org/10.1016/j.tics.2011.06.004

Ólafsdóttir, H. F., Bush, D., & Barry, C. (2018). The Role of Hippocampal Replay in Memory and Planning. Current Biology, 28(1), R37–R50. https://doi.org/10.1016/j.cub.2017.10.073

Sio, U. N., Monaghan, P., & Ormerod, T. (2013). Sleep on it, but only if it is difficult: Effects of sleep on problem solving. Memory & Cognition, 41(2), 159–166. https://doi.org/10.3758/s13421-012-0256-7

Staresina, B. P., Bergmann, T. O., Bonnefond, M., van der Meij, R., Jensen, O., Deuker, L., … Fell, J. (2015). Hierarchical nesting of slow oscillations, spindles and ripples in the human hippocampus during sleep. Nature Neuroscience, 18(11), 1679–1686. https://doi.org/10.1038/nn.4119

The Neuroscience of Mindfulness: What Happens When We Meditate?

By Joseph Holloway

Joe is a guest writer for The Brain Domain, and is currently pursuing an MSc in Mindfulness-based Cognitive Therapies and Approaches, as well as an MA in 18th Century Literary Studies, at the University of Exeter.

‘Mindfulness’ is a word that has gathered momentum over the last decade. It has grown beyond associations with yoga and alternative therapies and moved into the realms of corporate culture, education, and mental health. Mindfulness has become such a prevalent aspect of our culture that there was even a Ladybird Book for Grown-Ups dedicated to it. When a phenomenon becomes this prominent, and when it enters such fundamental spheres of our lives, it is good to review its evidence base. What is Mindfulness meditation? How is it employed in a therapy context? What happens in the brain when we meditate? What evidence do we have that it is effective? This article attempts to answer these questions.

A Brief History of Mindfulness and Therapy

Firstly, what is Mindfulness? The term has an interesting history of development (Analayo, 2006, pp. 15-41) that is beyond the scope of this article, but a commonly accepted contemporary definition is “moment-to-moment awareness” (Kabat-Zinn, 1990, p.2). Participants deliberately pay attention to thoughts, feelings, and sensations in the body, bringing their mind back to the task at hand when it wanders. This form of meditation is entrenched in many of the oldest religions and can be traced back to early canonical Buddhist texts such as the Satipaṭṭhāna-sutta and the Mahāsatipatṭhāna Sutta. Contemporary Western understandings of Mindfulness meditation are a repackaging of the teachings of these texts in a secular context. They focus on the insights about the workings of the mind and the teachings on how to reduce the amount of distress that we cause ourselves.

A key example of such repackaging was Jon Kabat-Zinn’s Mindfulness Based Stress Reduction (MBSR) course, originally developed at the University of Massachusetts Medical Center in the 1970s. This is an 8-week group course teaching participants how to engage with Mindfulness meditation, and it is open to all who feel (i) that they have too much stress in their lives, or (ii) that they are not relating to their stress healthily. In the 1990s, Mark Williams, John Teasdale and Zindel Segal combined Kabat-Zinn’s successful model with Beck’s Cognitive Behavior Therapy (CBT) to create a more specialised programme called Mindfulness-based Cognitive Therapy (MBCT). This programme is specifically designed to treat recurrent depression, and is largely open only to those referred by their primary medical consultant. These two arms, the general MBSR and the specific MBCT, are the constituents of the Mindfulness-based interventions available on the NHS in the UK and through other providers around the world. They are widely used both as complementary and sole treatments for a variety of mental and physical health diagnoses including depression, generalised anxiety disorder, post-traumatic stress disorder, insomnia and eating disorders.

What evidence is there that Mindfulness is effective?

The effectiveness of Mindfulness-based interventions has been demonstrated through longitudinal studies, which track the same people over time. An important early example found that depressive participants in the MBCT programme had half the number of relapses one year after treatment compared to depressive participants who had treatment as usual (Teasdale et al, 2000). This finding was reinforced by a replication trial (Ma and Teasdale, 2004) concluding that there is ‘further evidence that MBCT is a cost-efficient and efficacious intervention to reduce relapse/recurrence in patients with recurrent major depressive disorder’ (ibid, p. 39). In these studies, the pool of participants in recovery from depression was randomly allocated into either the experimental or the control group. This was done by an external statistician, and participants were matched for ‘age, gender, date of assessment, number of previous episodes of depression, and severity of last episode’ (ibid, p. 32). The results were important confirmation of the effectiveness of Mindfulness-based interventions as therapy.

Whilst this was great news, it wasn’t until 2008 that Mindfulness-based interventions were compared to the gold standard for treatment of recurrent depression (Kuyken et al, 2008). This is maintenance antidepressant medication (m-ADM), which requires the participant to take antidepressant medication even when there are no indications of a relapse. Importantly, the 2008 study found that patients treated with MBCT were less likely to relapse than those treated with the gold standard after 15 months (47% compared to 60% of the m-ADM group). This was also replicated in a follow-up study (Segal et al, 2010) where MBCT was compared against m-ADM and also against a placebo. Once participants were in remission, they either received MBCT, stayed on m-ADM, or discontinued their active medication and were given a placebo. Participants in all groups were randomly distributed by an external statistician, ensuring close control of factors not being investigated. The MBCT and m-ADM groups here showed the same levels of prevention of recurrence (73%), both much higher than the placebo group. Over the short term (15 months), Mindfulness-based interventions were thus shown to be better than m-ADM, and equally effective over an even longer period. In addition, it is arguably cheaper to administer Mindfulness-based interventions than m-ADM, there are no issues with drug tolerance, and unlike many antidepressants, Mindfulness meditation can be utilised whilst pregnant or breastfeeding.

How does Mindfulness work?

When the brain is not responding to any particular task and is ‘at rest’, areas collectively known as the Default Mode Network (DMN) are activated (Berger, 1929; Ingvar, 1974; Andreasen et al, 1995). This network was found to be closely associated with mind wandering (Mason et al, 2007). It was also found to be consistent with “internally focused tasks including autobiographical memory retrieval, envisioning the future, and conceiving the perspectives of others” (Buckner, 2008, p. 1). When our mind is wandering and not focused on a task, we are normally either lost in personal memories or running through a scenario in our head, predicting, anticipating or worrying.

More frequent and more automatic activation of this network is associated with depression (Greicius et al, 2007; Zhang et al, 2010; Berman et al, 2011). Regularly wallowing in old memories or worrying about the future are perfect foundations for conditions that may lead to depression. These two functions, conducive to ‘living on autopilot’, are the exact opposite of the definition of Mindfulness meditation given above: “moment-to-moment awareness.” Indeed, studies have shown that activation of the DMN can be regulated by Mindfulness meditation (Hasenkamp et al, 2012). Participants were observed meditating, and whenever they noticed their mind wandering they had to press a button. Immediately before this action the participants were unconsciously mind wandering; when they noticed that their mind had wandered (indicated by the button press), the researchers regularly observed a deactivation of the DMN. The act of practising mindfulness meditation was thus regularly associated with deactivation of the DMN. A correlation between self-reported meditation experience and lower levels of DMN activation has also been observed (Way et al, 2010).

Of course, the brain is never ‘doing nothing’, and a counter-network was regularly activated when participants weren’t mind-wandering: when they were paying attention to a task. This network in part consists of the anterior cingulate cortex (ACC), which is known to be instrumental in task monitoring (Carter et al, 1998). Activation of the ACC is closely associated with ‘executive control’ (Van Veen & Carter, 2002, p. 593), which detects incompatibilities or conflicts between a predicted outcome and the observed reality. In this way the ACC functions as error-reporting or quality management. The ACC does not attempt to remedy the situation, but instead highlights it to other areas of the brain. This all happens before the subject is cognitively aware that there is a conflict.

Crucially, an association has been shown between meditation and activation of the ACC. A positive correlation has been demonstrated between ACC thickness and meditation experience (Grant et al, 2010), and between mindfulness meditation and activation of the ACC (Zeidan et al, 2013). Mindfulness meditation is reliably shown to activate the ACC and to improve the relative ease and likelihood of it being activated. Activation of the ACC prevents the mind from wandering, and prevents activation of the DMN. Mind wandering and activation of the DMN are related to depressive symptoms either developing or recurring. This is how Mindfulness-based interventions are thought to help at the neurological level.

Conclusions

Mindfulness meditation has been around for 3500 years. It has been utilised in the West for nearly 40 years. We have had good evidence that it works for nearly 20 years, but we are only just starting to explore how it works. The recent findings above help outline the process of change that the brain goes through whilst a regular Mindfulness meditation practice is established, but they are by no means the full picture. We are also investigating how Mindfulness meditation helps people to respond more often instead of instinctively reacting. We are investigating how Mindfulness meditation enables decentering, and how it reduces connectivity to the emotional areas of the brain. Research into the nuts and bolts of Mindfulness has never been so intense, and exciting results just like those depicted in this article are sure to arise soon.

Joe teaches a 10 week course devised by the Mindfulness in Schools Project (see details here). He teaches all levels and abilities, from College to University, and finds that it has had an overwhelmingly positive impact on the well-being, achievement, and attendance of his students. If this is something that interests you, he can be contacted at joseph.c.holloway@gmail.com and is now taking bookings for autumn term 2017 and for 2018.

Edited by Jonathan and Rachael

References:

  • Analayo (2003). Satipaṭṭhāna: The Direct Path to Realisation. Birmingham: Windhorse Publishing.
  • Andreasen, N. et al. (1995). Remembering the past: two facets of episodic memory explored with positron emission tomography. American Journal of Psychiatry, 152, (11), pp 1576-1585.
  • Berger, H. (1929). Über das elektrenkephalogramm des menschen. Archiv für Psychiatrie und Nervenkrankheiten, 87, (1), pp 527-570.
  • Berman, M. et al. (2011). Depression, rumination and the default network. Social Cognitive & Affective Neuroscience, 6, (1), pp 548-555.
  • Buckner, R. (2008). The brain’s default network: anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences, 1124, (1), pp 1-38.
  • Carter, C. et al. (1998). Anterior cingulate cortex, error detection, and the online monitoring of performance. Science, 280, (1), pp 748-749.
  • Greicius, M. (2007). Resting-state functional connectivity in major depression: abnormally increased contributions from subgenual cingulate cortex and thalamus. Biological Psychiatry, 62, (5), pp 429-437.
  • Grant, J. et al. (2010). Cortical thickness and pain sensitivity in zen meditators. Emotion, 10, (1), pp 43-53.
  • Hasenkamp, W. (2012). Mind wandering and attention during focused meditation: a fine-grained temporal analysis of fluctuating cognitive states. NeuroImage, 59, (1,) pp 750-760.
  • Holzel, B. et al. (2011). Mindfulness practise leads to increases in regional brain grey matter density. Psychiatry Research, 191, (1), pp 36-43.
  • Ingvar, D. (1974). Patterns of brain activity revealed by measurements of regional cerebral blood flow. Copenhagen: Alfred Benzon Symposium.
  • Kabat-Zinn, J. (1990). Full Catastrophe Living. New York: Dell Publishing.
  • Kuyken, W. et al. (2008). Mindfulness-Based Cognitive Therapy to prevent relapse in recurrent depression. Journal of Consulting and Clinical Psychology, 76, (6), pp 966-978.
  • Ma, S. & Teasdale, J. (2004). Mindfulness-Based Cognitive Therapy for depression: replication and exploration of differential relapse prevention effects. Journal of Consulting and Clinical Psychology, 72, (1), pp 31-40.
  • Mason, M. et al (2007). Wandering mind: the default network and stimulus-independent thought. Science, 315, (19), pp 393-395.
  • Teasdale, J. et al. (2000). Prevention of relapse/recurrence in major depression by Mindfulness-Based Cognitive Therapy. Journal of Consulting and Clinical Psychology, 68 (4), pp 615-623.
  • Segal, Z. et al. (2010). Antidepressant monotherapy vs sequential pharmacotherapy and Mindfulness-Based Cognitive Therapy, or placebo, for relapse prophylaxis in recurrent depression. Archives of General Psychiatry, 67, (12), pp 1256-1264.
  • Van Veen, V. & Carter, C. (2002). The timing of action-monitoring processes in the anterior cingulate cortex. Journal of Cognitive Neuroscience, 14, (4), pp 593-602.
  • Way, B. et al (2010). Dispositional mindfulness and depressive symptomatology. Correlations with limbic and self-referential neural activity during rest. Emotion, 10, (1), pp 12-24.
  • Zeidan, F. et al. (2013). Neural correlates of mindfulness meditation-related anxiety relief. Social Cognitive and Affective Neuroscience, 9, (6), pp 751-759.
  • Zhang, D. et al. (2010). Noninvasive functional and structural connectivity of the human thalamocortical system. Cerebral Cortex, 20, (1), pp 1187-1194.

Perceptions of mental illness: Do biological explanations reduce stigma?

If you haven’t already, read my related article ‘Perceptions of mental illness: The media and mental health’.

By Rae 

Over the last few years there has been a drive in mental health research to find biological explanations for mental illnesses, both to better understand the disorders themselves and to counteract the associated stigma. The hope is that if we can demonstrate that these conditions arise from faulty biology, people will be more understanding and compassionate, and the associated stigma will diminish. Logically, why would you blame someone for something they cannot control?

At first glance, this approach seems promising. A meta-analysis of studies into the beliefs and attitudes of the general population, conducted over the last 20 years, found that increased public understanding of biological explanations led to greater acceptance of those seeking professional treatment[1]. When mental health disorders are framed as ‘brain diseases’, due to faulty genetics and biology, people tend to blame the sufferer less[2].

Unfortunately, these positive findings are in the minority; surprisingly, it appears that biological explanations do not reduce stigma, and may even increase it. Although the public appeared more accepting of the need for professional treatment, overall stigma endured. The social rejection of sufferers was persistent and attitudes towards them remained negative, including stereotyping them as dangerous[1]. However, this research was conducted in Western cultures, so the conclusions cannot be applied to all countries, which have different societal norms. For example, in some African tribes the symptoms of mental illness are misinterpreted as witchcraft. Additionally, the studies included in the analysis examined long-term impacts at a national level, not the short-term impacts of anti-stigma campaigns.


Anti-stigma campaign poster from Time to Change.

In 2014, a study explored the impact of the chemical imbalance hypothesis on sufferers’ self-stigma. This dominant, but controversial, hypothesis of depression states that it is the result of an imbalance of neurotransmitters. Participants currently suffering, or who had previously suffered, a depressive episode were told the cause of their depression using a bogus test. Some were told their illness was caused by a chemical imbalance. Those given this biological explanation showed no reduction in self-blame (self-stigma), along with increased prognostic pessimism and worsened perceived self-efficacy[3]. This study demonstrates a surprising example where providing a biological explanation actually increased stigma, even if that stigma emanates from the sufferers themselves and not from others. The study also found that participants given the chemical imbalance theory viewed pharmaceutical intervention as more appropriate than therapy.

Biological explanations of mental illness seem to exacerbate the ‘us v them’ mentality, increasing the distinction between ‘normal’ people and ‘abnormal’ sufferers[4]. They also increase avoidance of sufferers, who are portrayed as dangerous and not in control. A genetic cause may dehumanise sufferers by implying they are defective and distinct from others. It can also lead to stigmatisation of the entire family[5], as family members are labelled as at risk or as carriers, and potential partners may not want to pass on a genetic predisposition to their children. A Canadian survey in 2008 found that 55% of people asked wouldn’t marry someone suffering from a mental illness. Even clinicians, the very people trying to help sufferers, appear to display decreased empathy for those suffering from a mental disorder when the patient’s disorder is described in biological terms[4].

Overall, a greater understanding of the biological causes of mental health conditions did lead people to blame the sufferer less for their condition, but reactions towards sufferers remained negative. Additionally, the sufferers themselves were more pessimistic about their recovery. It also increased deterministic thinking, which is extremely unhelpful, and untrue. Certain mutations guarantee you will develop a disease, as in Huntington’s disease, but this is rare. Other mutations do not always result in disease, but do significantly increase your risk: those who inherit two copies of the APOe4 allele are around 10 times more likely to develop Alzheimer’s disease, whilst those inheriting one copy have a 3-fold increased risk.
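
To see why an increased genetic risk is not a guarantee, here is a small illustrative calculation in Python. The baseline lifetime risk used below is a made-up placeholder rather than a real epidemiological estimate; the point is simply how relative risks like those above translate into absolute chances.

```python
# Illustrative only: a raised relative risk is not a diagnosis.
# The baseline figure is a hypothetical placeholder, not real data.
baseline_risk = 0.05  # assumed lifetime risk with no risk allele

genotypes = [
    ("no APOe4 copies", 1),    # baseline
    ("one APOe4 copy", 3),     # roughly 3-fold risk, per the figures above
    ("two APOe4 copies", 10),  # roughly 10-fold risk, per the figures above
]

for label, relative_risk in genotypes:
    absolute_risk = baseline_risk * relative_risk
    print(f"{label}: {absolute_risk:.0%} chance of developing the disease, "
          f"{1 - absolute_risk:.0%} chance of never developing it")
```

Even for the highest-risk genotype in this toy example, the outcome is closer to a coin flip than a certainty, which is exactly why deterministic thinking about risk genes is misleading.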

Genes do not act in isolation, and you will not develop schizophrenia because you have the ‘schizophrenia gene’: there is no such thing. Instead, it is the interaction of different risk factors, both biological and environmental, that may result in disease. The interplay between different genes and your environment shapes your responses to life events. A leading hypothesis in depression research focuses on serotonin, the so-called ‘happy chemical’, which the chemical imbalance theory mentioned earlier holds to be at abnormal levels. The gene SERT encodes the serotonin transporter, which controls how much serotonin remains available at the synapse, but its role is more complicated than simply ‘not enough serotonin’. A study published in 2015 found that variation in SERT moderated the development of depression in people abused as children[6]. Only those who carried a specific version of SERT and had suffered abuse developed depression, whilst those with the same version who had not been subjected to abuse were reported to be the happiest participants.
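
A toy sketch of that kind of gene-by-environment interaction is shown below. Every number in it is invented; the point is only the shape of the pattern, in which neither the variant nor the environment sets the risk on its own:

```python
# A toy gene-by-environment interaction. All numbers are invented; the shape
# of the pattern (risk climbs only when variant and adversity co-occur) is
# the point, echoing the SERT finding described above.

def depression_risk(carries_risk_variant: bool, abused_as_child: bool) -> float:
    """Illustrative probability of developing depression (not real estimates)."""
    if carries_risk_variant and abused_as_child:
        return 0.40   # both factors present: risk climbs
    if carries_risk_variant and not abused_as_child:
        return 0.05   # same variant, supportive environment: lowest risk in the study
    return 0.10       # hypothetical baseline for everyone else

for variant in (True, False):
    for abuse in (True, False):
        print(f"variant={variant}, abuse={abuse} -> {depression_risk(variant, abuse):.0%}")
```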

This interaction highlights how combinations of factors collude to cause psychiatric diseases, and so the ideal treatment combines medication and therapy. Medication alleviates symptoms and allows patients to benefit from psychotherapy, which helps them learn healthier ways of coping. Unfortunately, this is not always a viable option, due to the cost of services and difficulties accessing them. Patients given a biological explanation for their illness are more likely to view drugs as their best treatment option, and may not seek therapeutic help, despite the fact that pharmacological treatment can have a limited impact on their condition. No psychiatric drug works for all sufferers, potentially because of individual variation in diagnosis and symptoms, and thus in response to treatment. Around 40% of depression is considered drug resistant, and the negative symptoms of schizophrenia (e.g. social withdrawal, apathy) are not currently treatable with drugs. Indeed, medication is not a cure but a symptomatic treatment: patients relapse if they stop taking it, and the side effects are often debilitating.


A campaign poster from an American mental health association. Image source.

Another consideration, easily overlooked by well-meaning scientists and clinicians, is that not everyone with a condition considers themselves ‘diseased’, and not everyone wants to be ‘cured’. These beliefs vary between individuals, so it is important to take people’s own views of their conditions into account. Defining someone by their disease is akin to defining a disabled person by their disability; defining them by what they cannot do. When it comes to mental health, clinicians and researchers must avoid thinking only in pathological terms and failing to consider the whole person. If not, we risk perpetuating an unconscious ‘us v them’ stigma between those studying the disease and those living with it. Someone who has fully embraced her condition and sought to change how people think of it is Touretteshero. She is informative, delightfully hilarious, and her website should definitely be checked out.

Clearly, emphasising the biological causes above all else is not the way to reduce stigma. Focusing only on these causes may actually increase stigma, and it ignores the fact that the environment is also crucial in mental health. That is not to say biology is not involved: it is! These conditions would not run in families if it were not. But the environment you grow up and live in is also hugely influential.


Classic graph depicting the % risk of developing schizophrenia, first published in Gottesman, 1991. Image source.

A good example to end on is schizophrenia, which is often held up as a largely genetic mental health condition. The classic illustration above shows how the likelihood of developing schizophrenia rises with genetic similarity to an affected person. If your identical twin has schizophrenia, your risk of also developing it is almost 50%. Clearly, however, this genetic risk is not 100%. Environmental factors also strongly influence your risk, such as viral infection during the second trimester of pregnancy or suffering abuse as a child. To understand psychiatric diseases we need to consider the interaction of our environment and our biology. Only with a better understanding of all the interacting factors that result in these diseases, rather than a focus on specific contributions, will we have a solid basis from which to combat mental health stigma.
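
The same point can be made numerically. Below is a small sketch using approximate figures of the kind shown on the classic chart; the general population and identical twin values are the well-known anchors, and the intermediate ones are rough readings:

```python
# Approximate lifetime risks as usually read off a Gottesman-style chart;
# ~1% (general population) and ~50% (identical twin) are the well-known
# anchors, the middle values are rough.

risk = {
    "general population": 0.01,
    "sibling of a sufferer": 0.09,
    "non-identical twin of a sufferer": 0.17,
    "identical twin of a sufferer": 0.48,
}

for group, r in risk.items():
    print(f"{group}: ~{r:.0%}")

# Sharing an identical genome multiplies the risk enormously, yet still
# leaves it far short of 100%; the remainder is environment (and chance).
fold = risk["identical twin of a sufferer"] / risk["general population"]
print(f"identical twin vs general population: ~{fold:.0f}-fold")
```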

Edited by Jonathan

References:

  1. Schomerus, G., et al. (2012). Acta Psychiatrica Scandinavica, 125 (6), 440-452.
  2. Kvaale, E., Gottdiener, W., & Haslam, N. (2013). Soc Sci Med, 96, 95-103. 
  3. Kemp, J., Lickel, J., & Deacon, B. (2014). Behav Res Ther, 56, 47-52.
  4. Lebowitz, M., & Ahn, W. (2014). PNAS, 111 (50), 17786-17790.
  5. Phelan, J. (2002). Trends Neurosci, 25 (8), 430-1.
  6. Nguyen, T., et al. (2015). British Journal of Psychiatry, 1 (1), i104-109.

A Step-by-Step Guide to convincing Mom it’s Dad’s fault–with Science!

(You do want to seem like the reasonable one here, right?)

Link to Image Source

Let’s say it’s a hot summer day (to this American, the chances of such a day in Britain seem slim, but let’s roll with the hypothetical), and you have in your hand an exquisite ice cream sandwich: cool and sweet, with the mist of a sub-zero freezer still rising off it.

Now let’s say a pesky younger sibling didn’t think to get his own, and you now stand (ahem–fairly) accused of not sharing, under threat of dire punishment.

How do you convince Mom your behaviour is Dad’s fault?

(Preferably before this lovely ice cream sandwich melts away!)

STEP 1: Argue behaviour has some genetic roots

Behaviour is a difficult trait to pin to a genetic origin. It is a complex, emergent property of the brain, influenced by many confounding factors such as culture, experience, and social context. However, we do have experimental models demonstrating that behaviour has some genetic roots. For example, some knock-out models, in which a gene is deleted from a model organism such as a rodent, show altered fundamental behaviours.

The behaviours we can measure in model organisms are simple compared to the behaviours Mom is paying that child psychologist to sort out. We need to focus on measures we can easily quantify. Research in behavioural genetics includes measures of dominance versus submissiveness, ease of movement and levels of activity, time spent exploring novel environments, anxiety, sexual behaviour, satiety, impulsivity, and compulsivity, among others.

We also know that some genes or gene clusters, when missing in humans, cause neurodevelopmental disorders with characteristic behavioural changes. For instance, Prader-Willi Syndrome (PWS), in which the gene-rich chromosome region 15q11-q13 of paternal origin is disrupted, is associated with a distinct behavioural profile. This includes mild cognitive deficits, insensitivity to pain, tantrums, obsessive tendencies, a compulsive desire to eat, and in some cases psychosis [Davies 2007, Perez 2010].

But missing (or indeed extra) bits of chromosomes aren’t the only reason we might see behavioural variation in humans. Natural genetic variation may explain some (but not all) of the statistically normal range of human behaviours. Someone may have a higher or lower IQ, be more or less impulsive, seek more or less novelty, or be more or less anxious and still be within a range considered ‘typical’, and not pathological [Nuffield 2002, Plomin 2016]. Some might, conceivably, be more or less likely to share their ice cream with their younger sibling…

STEP 2: Point out that Mom’s & Dad’s genomes contribute differently to the brain.

Some behaviours, notably those which relate to mothering behaviour, and altruism (how likely you are to share your ice cream sandwich), may be more Dad’s fault than Mom’s.

But wait! Both Mom and Dad give you a copy of each gene… shouldn’t they contribute equally to how you turn out?

As it turns out, a subset of genes, called imprinted genes, selectively express (use) only the copy from one parent! Remember Prader-Willi Syndrome (PWS)? That particular set of symptoms appears only if the disrupted chromosome region was inherited from Dad. If the same disrupted region is instead inherited from Mom, it manifests as Angelman Syndrome (AS), which has a very different character. AS is characterized by intellectual disability, ataxia, epilepsy, a ‘happy’ disposition and repetitive or stereotyped behaviours [Davies 2007, Perez 2010, Bird 2014].

These diseases each result from disruption of the same DNA region, but the outcome differs depending on whether the disrupted copy came from Mom or Dad. That is because each imprinted gene in the region normally uses only one parent’s copy to make what it encodes. If the missing copy is the one the gene never used anyway, no big deal! You weren’t using it. BUT, if it is the copy the gene exclusively uses, BIG DEAL: the other copy won’t get the cue to come up to bat, and you’ll use neither version.

Because some genes in the PWS/AS region of the DNA are only expressed from Mom’s copy and others are only expressed from Dad’s copy, problems with the region inherited from one parent will cause a different set of symptoms than problems with the region inherited from the other [McNamara 2013, Cassidy 2000].
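
To make the parent-of-origin logic concrete, here is a minimal sketch with the region boiled down to two made-up genes (real examples often cited for this region are SNRPN, which uses Dad’s copy, and UBE3A, which uses Mom’s copy in neurons):

```python
# A minimal sketch of parent-of-origin expression in an imprinted region.
# The two-gene region and its gene names are simplified for illustration.
from dataclasses import dataclass

@dataclass
class ImprintedGene:
    name: str
    expressed_parent: str  # "mom" or "dad": the only copy this gene uses

def functional_genes(region, deleted_parent):
    """Return which genes still work if one parent's copy of the region is lost."""
    working = []
    for gene in region:
        # If the deleted copy is the one the gene never used anyway, no harm done.
        # If it is the copy the gene exclusively uses, the gene is effectively off.
        if gene.expressed_parent != deleted_parent:
            working.append(gene.name)
    return working

region_15q11_q13 = [
    ImprintedGene("paternally_expressed_gene", "dad"),
    ImprintedGene("maternally_expressed_gene", "mom"),
]

print(functional_genes(region_15q11_q13, deleted_parent="dad"))  # Dad-only genes lost: PWS-like
print(functional_genes(region_15q11_q13, deleted_parent="mom"))  # Mom-only genes lost: AS-like
```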

Theoretically, then, this could distinguish the impact Mom and Dad each have on your failure to share that ice cream sandwich.

Evolutionarily speaking, imprinting would seem to be a disadvantage. Why would you limit yourself to using only one copy (haploid) when you could use two (diploid) and have a backup? That imprinted genes exist in many species implies that strong natural selection for this mechanism is at work, overcoming the disadvantage of functional haploidy (when, for functional purposes, you appear to have only one copy) [Wilkins 2016]. If expression of a particular gene benefits the survival and reproductive success of those using Dad’s copy more than those using Mom’s, natural selection will favor silencing Mom’s and using Dad’s.

But remember, they’re the same gene! Why would Dad’s help you out more than Mom’s (or vice versa)? Often, this is because the same gene can help out the propagation of Dad’s genetic line more than Mom’s (or vice versa).

First of all, let’s look at the arms race going on in the placenta. Mom’s genome and Dad’s genome both want this kid to survive, but Mom has to use the same machinery to produce as many of her kids (with her genetics) as she can, whereas Dad can piggyback on the machinery of multiple women to produce kids with his genetics (as much as Mom may disapprove). Therefore, Mom’s and Dad’s genomes have very different strategies during pregnancy. It’s in the best interests of Dad’s genes to suck as many resources out of Mom as possible, to ensure the survival and success of his kid during pregnancy. Mom’s genes, on the other hand, need to carefully parcel out what resources she has, so she doesn’t spend them all on just one kid. If she only has one ice cream sandwich, she wants both kids to be happy, so she’s forcing you to share.

What results from the placental arms race is a form of imprinting referred to as intra-locus conflict, where one copy of a gene is active and the other silent. For example, the gene Igf2 increases the ability of nutrients to diffuse passively across the placenta. The more nutrients that pass from Mom, through the placenta, to the kid, the more the kid can grow [Sibley 2004]. This is great for Dad’s genes, but potentially damaging for Mom’s! Dad’s genes will be propagated best, under the pressures of natural selection, if this gene is kept ‘on’, producing more protein product and getting as much out of Mom for this kid as possible. Mom’s genes, however, will be propagated best (by limiting the resources she doles out) if two copies of Igf2 are not running in the placenta at the same time. Dad’s is already on, so Mom shuts her copy down.
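
If it helps, the tug-of-war can be caricatured in a couple of lines of code; the numbers are invented, and only the direction of the effect comes from the text:

```python
# A caricature of the Igf2 tug-of-war: more active copies means more nutrient
# flow across the placenta. Numbers are invented; only the direction of the
# effect comes from the text above.

def nutrient_transfer(active_igf2_copies: int) -> float:
    """Nutrient flow to the fetus, in arbitrary units."""
    return 1.0 + 0.5 * active_igf2_copies

print(nutrient_transfer(2))  # both copies on: best for Dad's genes, costliest for Mom
print(nutrient_transfer(1))  # Mom silences hers: the imprinted arrangement described above
```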

So Mom and Dad contribute copies of the same genes, but these contributions aren’t functionally equivalent; they are complementary. They also contribute differently to different tissues. Relevant to your ice-cream behavioural argument, Dad’s genes seem to contribute more in the brain! In the adult mouse brain, only 37% of imprinted genes (those whose use is biased towards one parent’s copy) use Mom’s copy exclusively [Wilkins 2016]. The distribution of parental contributions within regions of the brain exaggerates this difference even more: Mom’s genome appears to contribute more to the cerebral cortex (important for planning, executive decisions, and higher brain function), whereas Dad’s contributes more to the hypothalamus and other deep brain structures (important for more ‘primitive’ behaviours such as reward-response, motivation, and homeostasis) [Keverne 1996].

Let’s say Mom invites you to proceed with your argument.

STEP 3: Illustrate which behaviours are Dad’s fault.

Let’s also consider sex-biased dispersal patterns in populations [Wilkins 2016, Ubeda 2010]. Mom’s and Dad’s genomes might also fare differently if members of a population are more likely to share one parent than the other. A pride of lions, for example, is made up of many females and one male. The cubs in the pride all share genes through Dad’s side, so most of the genetic differences between them come from Mom’s side. In this case, once the cubs are born (after Dad’s genes have demanded as much of Mom’s resources as possible), Dad’s genes will be propagated best if the cubs behave cooperatively. Cooperation tends to increase the group’s overall survival rate, and because the group shares Dad’s genes, Dad’s genes do well if the group does well. Mom’s genes, on the other hand, are competing with those of all the other moms in the group. This competition means Mom’s genes have the best chance of being passed on to the next generation if they give the individual cub an advantage over the other cubs in the pride.

Sex-biased dispersal in a pride of lions means you are more likely to share Dad than Mom. Link to Source.

This difference between group and individual success creates a battle between Mom’s and Dad’s genomes. One way they can battle is through aspects of behaviour. In this pride of lions, Dad’s genes will promote altruism (the sharing of resources among the group, promoting the survival of paternal siblings) and Mom’s genes will promote more selfish behaviour, benefiting the individual [Wilkins 2016].


This is where you can see your argument starting to fall apart in Mom’s eyes…

STEP 4: While grounded, sans-ice cream sandwich, consider where you went wrong.

First, consider that humans do not display the particularly female-biased dispersal pattern of a pride of lions, and this train of thought may have been influenced by your recent viewing of “The Lion King”.

Not only might this comparison to a patriarchal system offend your mother’s feminist sensibilities (female lions do most of the work in the pride anyway), but it also implies you are arguing that your selfish behaviour is really her fault (though Úbeda and Gardner 2010 do predict this is the case for hominids).

Alternatively, you could have tried the reverse argument: that Dad’s genes cause your selfish behaviour and Mom’s your altruistic behaviour. In the case of multiple paternity, it is in the interest of Mom’s genes to keep the siblings working together while Dad’s genes help them compete [Wilkins 2016, Haig 1992]. Unfortunately, this could also have landed you in trouble, because you would then have suggested that you and your sibling don’t share the same Dad, which may or may not disturb your family dynamic.

Secondly, while these genes appear to contribute to some of the basic fundamentals of behaviour, human behaviour is ultimately complex, and we are unlikely to be able to use biology to predict its intricacies at the social level.

Teasing out the genetic contribution to behaviour is tough: one gene may contribute to many behaviours, and multiple genes may contribute to the same behaviour (making it polygenic). It is highly unlikely we will find “a gene for X”, where “X” is criminality, mothering, hyperactivity, etc. Even where different variants of a gene (alleles) can be shown to affect behaviour, factors such as environmental context, including early life stress, training, social environment, and culture, can mediate this impact. Your genes may predispose you to a certain range within the spectrum of normal behaviours, but your outcome is alterable, and this predisposition will not dictate your fate [Nuffield 2002].

Imprinted genes introduce even more complexity. Sometimes the imprinting mark doesn’t simply switch the whole gene to maternal-only or paternal-only expression. Genes can produce several different messages, called transcripts, from the same sequence of code. Imagine this as a recipe for ice cream with optional ingredients (chocolate syrup, strawberries, peppermint dust). Including or excluding different combinations of those ingredients generates slightly different products from the same recipe. Transcripts can have their own imprinting sub-status, which adjusts the relative abundance of the different output versions. Depending on how the marks themselves change, the ratios of these different messages can shift dynamically throughout development and between tissue types [Wilkins 2016].

While the original recipe calls for four ingredients, like a gene lays out all of its exons, you can get different products at the end by leaving out some items. Original Image.
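
By way of analogy, here is a toy sketch of how one recipe yields several products when optional ingredients can be included or skipped; the ingredient names are just the ones from the analogy above:

```python
# One recipe, several products: a toy version of a gene whose transcripts
# include or skip optional parts. Ingredient names are just for the analogy.
from itertools import combinations

base = ["ice cream base"]                                          # always included
optional = ["chocolate syrup", "strawberries", "peppermint dust"]  # optional extras

products = []
for n in range(len(optional) + 1):
    for extras in combinations(optional, n):
        products.append(base + list(extras))

print(f"{len(products)} distinct products from one recipe")        # 8
for p in products:
    print(" + ".join(p))
```

An imprinting mark with transcript-level sub-status would then, by analogy, bias how often each of these versions gets made from Mom’s copy versus Dad’s.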

Finally, the human genome and the human brain are quite robust systems.

The law treats one’s actions as autonomous and willed. Any criminal defense (including one against charges of failing to share ice cream) relying upon behavioural genetics must demonstrate that the force of a genetic deficiency or variant outweighs one’s autonomy [Nuffield 2002]. Your genome (the collection of all the genes in your DNA) and your brain have many redundant systems in place to compensate for things that may go wrong; they are robust to minor variations and disruptions. Unless you have a clear neurodevelopmental or mental health disorder with distinct behavioural differences demonstrated across other patients, you are likely within the normal range of human behaviour, and therefore can’t use this argument as an excuse. Under the fundamental assumptions of the criminal justice system, you have adequate knowledge of right and wrong as well as control of your own actions, and thus responsibility for your decisions. Your melted ice cream is the result of your long-winded argument; Mom’s not going to clean up this sticky mess you’ve found yourself in, as it is entirely your own doing.

Regardless of your inability to use this science to argue your way out of trouble, the field of Behavioural Genetics is invaluable. It helps us further our understanding of the brain and contributes to the wealth of knowledge we draw on to address issues such as neurodevelopmental disorders. With this research, we can get closer to genetic, pharmaceutical, and environmental interventions for diseases affecting behaviour outside the statistically normal range–an area of medicine with a history of murky understanding, social stigma, and emotional turmoil. This field is sensitive and important because it helps us connect our biology to our understanding of our identity as humans. We should take care to use this field to nurture a healthy identity and social sphere, rather than distort it to subvert responsibility.

STEP 5: Realize that arguing to Mom that she is responsible for your selfish behaviours is clearly not a way to win the argument and keep your ice cream. Try this argument on Dad next time.

Image Source

References:

Nuffield Council on Bioethics. Genetics and human behaviour: the ethical context. London: Nuffield Council on Bioethics; 2002.

Bird LM. Angelman syndrome: review of clinical and molecular aspects. Application of Clinical Genetics. 2014 Jan 1;7.

Cassidy SB, Dykens E, Williams CA. Prader‐Willi and Angelman syndromes: Sister imprinted disorders. American journal of medical genetics. 2000 Jun 1;97(2):136-46.

Haig D. Genomic imprinting and the theory of parent-offspring conflict. Semin. Dev. Biol. 1992 Jan;3:153-60.

Keverne EB, Martel FL, Nevison CM. Primate brain evolution: genetic and functional considerations. Proceedings of the Royal Society of London B: Biological Sciences. 1996 Jun 22;263(1371):689-96.

Sibley CP, Coan PM, Ferguson-Smith AC, Dean W, Hughes J, Smith P, Reik W, Burton GJ, Fowden AL, Constancia M. Placental-specific insulin-like growth factor 2 (Igf2) regulates the diffusional exchange characteristics of the mouse placenta. Proceedings of the National Academy of Sciences of the United States of America. 2004 May 25;101(21):8204-8.

Úbeda F, Gardner A. A model for genomic imprinting in the social brain: juveniles. Evolution. 2010 Sep 1;64(9):2587-600.

Wilkins JF, Ubeda F, Van Cleve J. The evolving landscape of imprinted genes in humans and mice: Conflict among alleles, genes, tissues, and kin. Bioessays. 2016 May 1;38(5):482-9.