Life of Prion

Or What Links Cannibalism to Scrapie?

By Simona Zahova

A peculiar group of proteins, prions, have earned a mythical status in sci-fi due to their unorthodox properties and unusual history. These deadly particles often play a villainous role in fiction, appearing in the Jurassic Park franchise and countless zombie stories. Even putting apocalyptic conspiracies aside, prions are one of the wackiest products of nature, with a history so remarkable it needs no embellishment. Fasten your seatbelts, we are going on a journey!

Our story begins in Papua New Guinea, with the Fore tribe. The Fore people engaged in ritualistic funerary cannibalism, which consisted of cooking and eating deceased family members. This tradition was considered necessary for liberating the spirits of the dead. Unfortunately, around the middle of the 20th century, the tribe experienced a mysterious deadly epidemic that threatened to wipe them out of existence. An estimated few thousand deaths took place between the 50s and the 60s, with the afflicted exhibiting tremors, mood swings, dementia and uncontrollable bursts of laughter. Collectively, these symptoms are indicative of neurodegeneration, the progressive death of nerve cells. Inevitably, all who contracted the disease died within a year (Lindenbaum 1980). The Fore people called the disease Kuru, after the local word for “tremble”, and believed it was the result of witchery.

Meanwhile, Australian medics sent to investigate the disease reported that it was psychosomatic. In other words, the medics believed that the tribe’s fear of witchcraft had caused mass hysteria that had a real effect on health (Lindenbaum 2015). In the 60s, a team of Australian scientists proposed that the cannibalistic rituals might be spreading a disease-causing bug. Once the Fore learned about the possible association between cannibalism and Kuru, they abandoned the tradition, and disease rates dropped drastically (Collinge et al. 2006). However, the disease didn’t disappear completely, and the nature of the mysterious pathogen continued to elude scientific research.

Around the same time, on the other side of the globe, another epidemic was taking place. The neurodegenerative disease “scrapie” was killing flocks of sheep in the UK. The affected animals exhibited tremors and itchiness, along with unusual nervous behaviour. The disease appeared to be infectious, yet no microbe had been successfully extracted from any of the diseased cadavers. A member of the Agricultural Research Council tentatively noted that there were a few parallels between “scrapie” and Kuru (Hadlow 1959). For one, they were the only known infectious neurodegenerative diseases. More importantly, both were caused by an unknown pathogen that eluded the normal methods of studying infectious diseases. However, due to the distance in geography and species between the two epidemics, this suggestion didn’t make much of a splash at the time.

The identity of this puzzling pathogen remained unknown until 1982, when Stanley Prusiner published an extensive study of the scrapie agent. It turned out that the culprit behind this grim disease wasn’t a virus, a bacterium, or any other known life form (Prusiner 1982). The pathogen consisted primarily of protein, but didn’t have any DNA or RNA, which are considered a basic requirement for life. To the dismay of the scientific community, Prusiner proposed that the scrapie “bug” was a new form of protein-based pathogen and coined the term “prion”, short for “proteinaceous infectious particle”. He also suggested that prions might be the cause not only of scrapie, but also of other diseases associated with neurodegeneration, like Alzheimer’s and Parkinson’s. Prusiner was wrong about the latter two, but was right to think the association with “scrapie” would not be the last we heard of prions. Eventually, the prion protein was confirmed to also be the cause of Kuru and a few similar diseases, like “mad cow” and Creutzfeldt-Jakob disease (Collins et al. 2004).

Even more curiously, susceptibility to prion diseases was observed to vary between individuals, leading to speculation that there might be a genetic component as well. The mechanism behind this property of the pathogen remained a mystery until the 90s. Once biotechnological advances allowed the genetic code of life to be studied in detail, scientists demonstrated that the prion protein is actually encoded in many animal genomes and is expressed in the brain. The normal function of prions is still unclear, but some studies suggest they may play a role in protecting neurons from damage in adverse situations (Westergard et al. 2007).

How does a protein encoded in our own DNA for a beneficial purpose act as an infectious pathogen? Put simply, the toxicity and infectiousness only occur if the molecular structure of the prion changes its shape (referred to as unfolding, or misfolding, in biological terms). This is where heritability plays a part. Due to genetic variation, one protein can have multiple different versions within a population. The different versions of the prion protein have the same function, but their molecular architecture is slightly different.

Imagine that the different versions of prion proteins are like slightly different architectural designs of the same house. Some versions might have more weight-bearing columns than others. Now let’s say that an earthquake hits nearby. The houses with the extra weight-bearing columns are more likely to survive the disaster, while the other houses are more likely to collapse.

What can we take away from this analogy? A person’s susceptibility to prion diseases depends on whether they have inherited a more or less stable version of the prion protein from their parents. In this case, the weight-bearing column is a chemical bond that slightly changes the molecular architecture of the prion, making it more stable. Different prion diseases like Kuru and “scrapie” are caused by slightly different unstable versions of the prion protein, and their symptoms and methods of transmission also differ.

Remarkably, a study of the Fore people from 2015 discovered that some members of the tribe carry a novel variant of the prion protein that gives them complete resistance to Kuru (Asante et al. 2015). Think of it this way: if people inherit houses of differing stability, then some members of the Fore tribe have inherited indestructible bunkers. Evolution at its finest! It isn’t quite clear what triggers the “collapsing”, or unfolding, of prions. But once a prion protein has unfolded, it sets off a domino effect, causing the other prions within the organism to collapse as well. As a result, unfolded proteins accumulate in the brain, causing neurodegeneration and eventually death.
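For the programmatically inclined, the domino effect can be sketched in a few lines of code. This is a toy autocatalytic model, not real biochemistry (every number in it is made up for illustration), but it captures why a single misfolded seed is so dangerous: conversion creeps along imperceptibly at first, then explodes.

```python
# Toy model of the prion "domino effect": one misfolded protein templates
# the misfolding of healthy copies, so conversion is autocatalytic:
#     dM/dt = k * M * (N - M)
# All numbers below are illustrative, not measured biochemical rates.

N = 10_000.0   # total prion proteins in a cell (made-up count)
M = 1.0        # misfolded copies; a single seed starts the cascade
k = 1e-5       # conversion rate constant (made up)
dt = 1.0       # time step, arbitrary units

t = 0
while M < 0.99 * N:
    M += k * M * (N - M) * dt   # each misfolded copy recruits healthy ones
    t += 1
    if t % 25 == 0:
        print(f"t={t:4d}  misfolded: {100 * M / N:5.1f}%")
```

Run it and nothing much happens for the first fifty time steps; within a few dozen more, nearly every protein has collapsed. That slow, silent start is one intuition for why prion diseases can incubate unnoticed for years.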

One explanation of why neurons die in response to prions “collapsing” is that cells sense and dislike unfolded proteins, triggering a chain of events called the unfolded protein response. This response halts all protein production in the affected cell until the problem is sorted out. However, the build-up of pathogenic prions is irreversible and happens quickly, so the problem is too big to be solved by pausing protein production. In fact, the problem is so big that protein production remains switched off indefinitely, and consequently the neurons starve to death (Hetz and Soto 2006).

We have established that prions are integral to some animal genomes and can turn toxic in certain cases, but how can they be infectious too? Parkinson’s and Alzheimer’s are also neurodegenerative diseases caused by the accumulation of a misfolded protein, but they aren’t infectious. The difference is that prions have a mechanism of spreading comparable to viruses or bacteria. One might wonder why one of our own proteins has a trait that allows it to turn into a deadly pathogen. Perhaps this trait allowed proteins to replicate themselves before the existence of DNA and RNA. In other words, it might be a remnant from before the existence of life itself (Ogayar and Sánchez-Pérez 1998).

To wrap things up, prion diseases are a group of deadly neurodegenerative diseases that occur when our very own prion proteins change their molecular structure and accumulate in the brain. What makes prions unique is that once they unfold, they become infectious and can be transmitted between individuals. The study of their biomolecular mechanism has not only equipped us with enough knowledge to prevent potential future epidemics, but also offers an exciting glimpse into some of the secrets of pathogenesis, neurodegenerative disease, evolution and life. Most importantly, we don’t need to worry about the zombies anymore. Let them come, we can take ‘em!

Edited by Jon & Sophie

References:

Asante, E. A. et al. 2015. A naturally occurring variant of the human prion protein completely prevents prion disease. Nature 522(7557), pp. 478-481.

Collinge, J. et al. 2006. Kuru in the 21st century—an acquired human prion disease with very long incubation periods. The Lancet 367(9528), pp. 2068-2074. doi: https://doi.org/10.1016/S0140-6736(06)68930-7

Collins, S. J. et al. 2004. Transmissible spongiform encephalopathies. The Lancet 363(9402), pp. 51-61.

Hadlow, W. J. 1959. Scrapie and kuru. The Lancet 274, pp. 289-290.

Hetz, C. A. and Soto, C. 2006. Stressing out the ER: a role of the unfolded protein response in prion-related disorders. Current Molecular Medicine 6(1), pp. 37-43.

Lindenbaum, S. 1980. On Fore Kinship and Kuru Sorcery. American Anthropologist 82(4), pp. 858-859.

Lindenbaum, S. 2015. Kuru sorcery: disease and danger in the New Guinea highlands. Routledge.

Ogayar, A. and Sánchez-Pérez, M. 1998. Prions: an evolutionary perspective. Springer-Verlag Ibérica.

Prusiner, S. B. 1982. Novel proteinaceous infectious particles cause scrapie. Science 216(4542), pp. 136-144. doi: 10.1126/science.6801762

Westergard, L. et al. 2007. The cellular prion protein (PrPC): its physiological function and role in disease. Biochimica et Biophysica Acta (BBA) - Molecular Basis of Disease 1772(6), pp. 629-644.

Learning to make healthier food choices

By Sophie Waldron

If you haven’t already, read Sophie’s first article ‘Learn, eat, repeat: how food advertising works’!

You are sitting down at a desk and a huge burger comes floating towards you. It gets bigger and bigger as it advances, faster and faster. Luckily, you know what you have to do. Don’t press anything on your computer, and the burger will go away.

This isn’t some dystopian reality where burgers are our new overlords. It’s a computer task that can help people make healthier food choices. In the task, which people can also engage with on their smartphones as an app, participants have to prevent a learnt response to unhealthy food items.

People are first trained to press certain computer keys every time certain images of healthy and unhealthy food come on screen. Then, in a subsequent phase, participants have to press keys for every picture on screen except pictures of unhealthy food. Helpfully, a prompt appears alongside the pictures of unhealthy food, warning people not to respond.
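The core trial logic is simple enough to sketch in a few lines of Python. This console version is only a schematic of the task described above: the food names and trial count are placeholders, and real versions such as Lawrence et al.’s use images and strict response deadlines. But it shows the go/no-go structure:

```python
# Minimal sketch of a food go/no-go task: respond to healthy items,
# withhold the response to unhealthy ones. Items and trial count are
# placeholders; real tasks use images and timed responses.
import random

foods = {
    "apple": "healthy", "salad": "healthy", "banana": "healthy",
    "burger": "unhealthy", "crisps": "unhealthy", "chocolate": "unhealthy",
}

score = 0
trials = random.sample(sorted(foods), k=6)
for food in trials:
    go = foods[food] == "healthy"          # respond only to healthy items
    answer = input(f"{food.upper()} - press Enter to respond, 'n' + Enter to hold back: ")
    responded = answer.strip() == ""
    if responded == go:
        score += 1                         # correct response or correct inhibition
    else:
        print(f"  (incorrect: {food} is {foods[food]})")

print(f"Correct on {score} of {len(trials)} trials.")
```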

This task may seem simple, but it is designed to help people mentally tackle automatically activated learnt responses to obtain unhealthy food. We live in what scientists call an ‘obesogenic’ environment, where high calorie food is abundant and pushed on us through advertising. Research has shown that learnt associations between pictures (such as a brand logo) and tasty food can make us reach for that food even when we are full (Watson et al., 2014)! In such a world we learn to respond to the attractively coloured, logo-dripping packets of fast food and eat them, rather than thinking carefully about which foods benefit us. Inhibiting learnt key-press responses to food might give us the cognitive skills to think twice about automatically reaching for a chocolate bar.

The unhealthy food inhibition task was designed by Lawrence and her team in 2015. They investigated whether preventing an automatic key-press response to unhealthy food pictures would reduce consumption of unhealthy foods in people’s everyday lives. They found that the task reduced self-reported snacking for up to 6 weeks. This opens up many possibilities: if people can make healthier choices after a single lab session, they might stay healthier for much longer by carrying on with the task regularly as part of a smartphone app.

Another avenue for treating overeating is mindfulness. Mindfulness practice cultivates awareness of the present external and internal environment, including sensory input and the thoughts and feelings we have. Mindfulness eating practice focuses on the experiential qualities of food: taste, texture, and our feelings of satiety. There is evidence that incorporating mindfulness eating into one’s life reduces self-reported measures of binge and emotional eating (Alberts et al 2012), consumption of sweets (Mason et al., 2015), and BMI (Tapper et al., 2009).

Currently the mechanism by which mindfulness eating leads to healthier food consumption is unknown. Mindfulness has been found to decrease self-reported body image concern in healthy women with disordered eating (Alberts et al 2012), which may in turn lead these women to eat more healthily. However, there is a problem that runs through this research: self-report.

Self-report is practical: experimenters could not follow participants around every day for 6 weeks prior to testing and write down exactly what they ate, so instead they ask them to keep a food diary. However, self-report studies give an indirect measurement of the dimension experimenters are focusing on, and thus conflate actual changes in behaviour with changes in reporting about that behaviour. For example, in the study on body image and mindfulness eating, it could be that reduced body image concern actually results in healthier and more natural eating. Yet it could also be that the women eat the same but interpret this eating as healthier because of their more positive self-image. Self-report cannot distinguish these possibilities.

Another way in which mindfulness eating may trigger healthier choices is by increasing the flexibility of learning about reward and punishment. It has been claimed that obesity might be due to an inflexibility in this kind of learning: once people have learned that a food is tasty (and thus rewarding), they may eat too much of it despite the undesirable consequences overeating brings, such as feeling too full or being overweight. In other words, they fail to change their overeating behaviour even when it leads to something unpleasant, a punishment. Janssen and colleagues (2018) found that time invested in mindfulness eating correlated positively with good performance on a task where participants had to quickly learn that a previously rewarded item was now punished, and vice versa. This hints that mindfulness eating could arm people with the cognitive flexibility required to overcome compulsive, automatic eating patterns.
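To see what that reversal task demands, here is a toy version in code. It is not the exact paradigm Janssen and colleagues used, just a minimal ‘delta rule’ learner with made-up parameters; the point is that an agent that updates its value estimates more readily recovers faster when reward and punishment swap.

```python
# Toy reversal-learning task: option A is rewarded and B punished for the
# first half of the trials, then the contingencies flip. A simple
# delta-rule learner tracks each option's value; all parameters are
# illustrative, not taken from the study.
import random

def run_learner(learning_rate, trials=100, reversal_at=50, seed=1):
    random.seed(seed)
    value = {"A": 0.0, "B": 0.0}
    correct = 0
    for t in range(trials):
        good = "A" if t < reversal_at else "B"       # contingencies flip here
        explore = random.random() < 0.1              # occasional random choice
        choice = random.choice("AB") if explore else max(value, key=value.get)
        reward = 1.0 if choice == good else -1.0
        value[choice] += learning_rate * (reward - value[choice])  # delta rule
        correct += choice == good
    return correct

for lr in (0.05, 0.5):
    print(f"learning rate {lr}: {run_learner(lr)}/100 choices correct")
```

The fast learner loses a few trials right after the flip and then adapts; the slow learner keeps choosing the old favourite long after it has started to hurt, which is exactly the inflexibility the account above describes.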

Further research will have to look at the long-term health consequences of mindfulness eating practice, and apps that train us to inhibit automatically reaching for unhealthy food. Yet studies so far are promising! Both these tools, and others, will be essential in catching up with the explosion of accessible high calorie food and food advertisement in the modern world.

Interested in the science of obesity and how we can tackle it? A recent BBC documentary ‘The Truth About Obesity’ covers some strategies, including a study looking at the effectiveness of using apps to train better eating behaviours, filmed at CUBRIC.

Interested in the effect of mindfulness on the brain? Check out our previous article: ‘The Neuroscience of Mindfulness: What Happens When We Meditate?’

Edited by Jonathan

References:

Watson, P., Wiers, R. W., Hommel, B., & de Wit, S. (2014). Working for food you don’t desire. Cues interfere with goal-directed food-seeking. Appetite, 139-148.

Lawrence, N. S., O’Sullivan, J., Parslow, D., Javaid, M., Adams, R. C., Chambers, C. D., Kos, K., Verbruggen, F. (2015). Training response inhibition to food is associated with weight loss and reduced energy intake. Appetite, 17-28.

Janssen, L. K., Duif, I., van Loon, I., de Vries, J. H. M., Speckens, A. E. M., Cools, R., & Aarts, E. (2018). Greater mindful eating practice is associated with better reversal learning. Scientific Reports, 5702.

Alberts, H. J. E. M., Thewissen, R. & Raes, L. (2012). Dealing with problematic eating behaviour. The effects of a mindfulness-based intervention on eating behaviour, food cravings, dichotomous thinking and body image concern. Appetite 58, 847–851.

Tapper, K. et al. (2009). Exploratory randomised controlled trial of a mindfulness-based weight loss intervention for women. Appetite 52, 396–404.

Mason, A. E. et al. (2015). Effects of a mindfulness-based intervention on mindful eating, sweets consumption, and fasting glucose levels in obese adults: data from the SHINE randomized controlled trial. J. Behav. Med. 1–13.

Learn, eat, repeat: how food advertising works

By Sophie Waldron

You woke up late and ate breakfast late. Thing is, now it’s noon and you are hungry again. How can you be hungry when you only ate an hour ago?

When interacting with our environment, we form associations between items that occur together. This happens with food too: for example, if we always eat lunch at 12pm, we will associate that time with food.

What is more, feelings and responses associated with one item can be linked to another by association. The feeling of hunger that is linked to food can become associated with 12pm, so that the time seems to be making you hungry independent of whether you are about to eat or not!

This kind of learning by association is Pavlovian conditioning, discovered by the Russian physiologist Ivan Pavlov (hence the name)! He was attempting to study dogs’ digestion by measuring their saliva when he discovered that they would salivate not only to food, but to other stimuli associated with food, such as the experimenter’s footsteps.

Pavlovian conditioning is a gift to advertisers. It provides an avenue to create thoughts or feelings towards a product by associating it with other relevant things. It’s what Coca-Cola are trying to do by sponsoring sporting events – they are hoping that by associating Coke with sports, people will think of it as healthy. Predominantly advertisers attempt to create good feeling towards their products by associating them with stuff their audience likes. They sell Walker’s crisps by associating them with football, new chocolate bars by associating them with old chocolate bars, and even try to sell Pepsi by linking it to Kendall Jenner and social justice movements!

…perhaps think twice about that last one!

It is likely that these associations take charge of our preferences from an early age. We live in an environment teeming with food advertising! One study asked 4-6 year old children which of two identical food items they would rather eat: one in plain packaging and one in packaging with a fast food logo. Children overwhelmingly chose the packet with the logo on, despite no difference in how the food inside looked or smelled (Robinson et al 2007).

This likely reflects the fact that because of associations between the logo and other pleasant things, established by advertising, children see the food inside as inherently better. The age of the children points to how associations like these form early on in life, and likely determine food choices for many years to come. This learning could contribute to why changing eating behaviour is so hard, as learnt associations have to be revised.

How do these associations influence our behaviour? Pavlovian conditioning only describes the formation of associations between two items, so we must invoke another form of learning if we are to understand how advertising can change which foods we act to obtain. This is instrumental learning, which describes how if a behaviour leads to something pleasant (like tasty food) it is likely to be repeated. My mum used to employ this tactic to stop me being naughty as a child. If I didn’t throw a tantrum the whole way round the supermarket, I got a chocolate cookie.

Pavlovian and instrumental learning interact to influence our actions. Specifically, Pavlovian associations can override and shape learnt behaviours. Watson and colleagues (2014) demonstrated this using a laboratory-based computer task. Participants first learned instrumentally that pressing one key got them a piece of chocolate, and another key delivered popcorn. They also learned associations between a striped pattern and chocolate, and between a checked pattern and popcorn.

Participants then ate lots of chocolate or popcorn. This wasn’t just for fun: the idea was that if a participant eats lots of chocolate, they will want it less in future, and so won’t act to obtain it. This was borne out: participants who had gorged on chocolate pressed the key associated with chocolate less in the next round of the experiment. These participants were then presented with one of the pictures Pavlovian-associated with chocolate. The experimenters found that even if a participant did not otherwise act to obtain chocolate, when presented with an image associated with chocolate they would press the key to get it!
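The logic of the design is easier to see laid out as code. The snippet below is only a schematic of the three phases as described above (the key and pattern names are illustrative placeholders), but it reproduces the headline result: devaluation switches a response off, and the Pavlovian cue switches it back on.

```python
# Schematic of the Watson et al. (2014) design described above.
# Key and pattern names are illustrative placeholders.

instrumental = {"left key": "chocolate", "right key": "popcorn"}  # phase 1: key -> snack
pavlovian = {"striped": "chocolate", "checked": "popcorn"}        # phase 2: pattern -> snack
devalued = "chocolate"                                            # phase 3: eaten to satiety

def keys_pressed(cue=None):
    """Keys a sated participant still works on, with or without a cue."""
    for key, snack in instrumental.items():
        worth_working_for = snack != devalued        # devalued snack isn't wanted...
        if cue is not None and pavlovian[cue] == snack:
            worth_working_for = True                 # ...unless its Pavlovian cue is present
        if worth_working_for:
            yield key

print("no cue:     ", list(keys_pressed()))           # popcorn key only
print("striped cue:", list(keys_pressed("striped")))  # the chocolate key is pressed again
```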

Think about this in a real-world setting. It means that even if we are completely full and don’t want to eat, seeing an advert for pizza on TV may quickly land us on the Deliveroo website. Furthermore, our behaviour may be swayed even by seeing things which are not food, but which are associated with food through advertising. Seeing Gary Lineker’s face might make us reach for a packet of crisps even if we don’t really fancy them. This means that food adverts can control not only what we like to eat, but can also tip us towards eating when we do not want to.

Association formation around food is not specific to advertisement. When we interact with an environment in which food is only presented in certain situations, associations (both Pavlovian and instrumental) will form. Going back to the kitchen in the house you grew up in can make you crave cherished childhood foods – fish fingers and smiley faces, anyone? It also works the other way round: how often has a certain flavour evoked a memory in you?

This fundamental mechanism just transfers particularly well to advertising. Associations can make us like certain foods in the first place, and also control when and how often we eat them.

Related article: Learning to make healthier food choices.

Edited by Jonathan

References

Robinson, T. N., Borzekowski, D. L. G., Matheson, D. M., & Kraemer, H. C. (2007). Effects of fast food branding on young children’s taste preferences. Archives of Pediatrics & Adolescent Medicine, 161(8), 792-797.

Watson, P., Wiers, R. W., Hommel, B., & de Wit, S. (2014). Working for food you don’t desire. Cues interfere with goal-directed food-seeking. Appetite, 139-148.

How to read a baby’s mind

By Priya Silverstein 

Priya, a guest writer for The Brain Domain, is a second-year PhD student at Lancaster University. She spends half her time playing with babies and the other half banging her head against her computer screen.

Okay, I’ll admit that was a bit of a clickbait-y title. But would you have started reading if I’d called it ‘Functional Near Infrared Spectroscopy and its use in studies on infant cognition’? I thought not. So, now that I’ve got your attention…

Before I tell you how to read a baby’s mind, first I have some explaining to do. There’s this cool method for studying brain activity but, as one of the lesser used technologies, it’s a bit underground. It’s called fNIRS (functional Near Infrared Spectroscopy). Think of fNIRS as fMRI’s cooler, edgier sister. Visually, the two couldn’t look more different – with an MRI scanner being a human-sized tube housing a massive magnet that you might have seen on popular hospital dramas, and NIRS simply looking like a strange hat.

Left: MRI scanner. Right: fNIRS cap.
Picture credit: Aston Brain Centre (left), Lancaster Babylab (right)

What these two methods do have in common is that they both measure the BOLD (Blood Oxygen Level Dependent) response from the brain. Neurons can’t store excess oxygen, so when they are active, they need more of it to be delivered. Blood does this by ferrying oxygen to the active neurons faster than to their lazy friends. When this happens, you get a higher ratio of oxygenated to deoxygenated blood in the more active areas of the brain.

Now, to the difference between fMRI and fNIRS. fMRI infers brain activity from the fact that oxygenated and deoxygenated blood have different magnetic properties. When the head is put inside a strong magnetic field (the MRI scanner), changes in blood oxygenation, due to changes in brain activity, alter the magnetic field in that area of the brain. fNIRS, on the other hand, uses the fact that oxygenated and deoxygenated blood absorb different amounts of light, as deoxygenated blood is darker than oxygenated blood. Conveniently, near-infrared light goes straight through the skin and skull of a human head (don’t worry, this is not at all dangerous and a participant would not feel a thing). So, shining near-infrared light into the head at a source location, and measuring how much light you get back at a nearby detector, gives a measurement of how much light has been absorbed by the blood in that area of the brain. From this, you get a measure of the relative change in oxygenated and deoxygenated blood in that area. All of this without the need for a person to lie motionless in a massive cacophonous magnet, with greater portability, and for about a hundredth of the price of an MRI scanner (roughly £25,000 compared to £2,500,000).

The source and detector are placed on the scalp, so that the light received at the detector is reflected light following banana-shaped pathways through the head.

Picture credit: Tessari et al., 2015
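For readers who want the maths: fNIRS analyses typically convert those light measurements into concentration changes using the modified Beer-Lambert law. Measuring at two wavelengths gives two equations in two unknowns, the changes in oxygenated (HbO) and deoxygenated (HbR) haemoglobin. The sketch below uses rough illustrative extinction coefficients and pathlength values rather than calibrated ones, but the structure is the standard one.

```python
# Modified Beer-Lambert law: a change in optical density at wavelength w is
#   dOD(w) = (eps_HbO(w) * dHbO + eps_HbR(w) * dHbR) * distance * DPF
# Two wavelengths give two equations, so we can solve for dHbO and dHbR.
# Extinction coefficients and example numbers are rough illustrations.

eps = {                              # extinction coefficients [1/(mM*cm)]
    760: {"HbO": 0.6, "HbR": 1.5},   # HbR absorbs more below ~800 nm
    850: {"HbO": 1.2, "HbR": 0.8},   # HbO absorbs more above ~800 nm
}
distance = 3.0   # source-detector separation (cm)
dpf = 6.0        # differential pathlength factor, a typical adult value

def hb_changes(dOD_760, dOD_850):
    """Solve the 2x2 linear system for concentration changes (mM)."""
    a, b = eps[760]["HbO"] * distance * dpf, eps[760]["HbR"] * distance * dpf
    c, d = eps[850]["HbO"] * distance * dpf, eps[850]["HbR"] * distance * dpf
    det = a * d - b * c
    dHbO = (d * dOD_760 - b * dOD_850) / det
    dHbR = (a * dOD_850 - c * dOD_760) / det
    return dHbO, dHbR

# More light absorbed at 850 nm than at 760 nm -> oxygenation has risen:
dHbO, dHbR = hb_changes(0.001, 0.004)
print(f"HbO change: {dHbO:+.5f} mM, HbR change: {dHbR:+.5f} mM")
```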

“That sounds amazing! Sign me up!” I hear you say. However, I must add a little disclaimer. There are reasons why fMRI is still the gold standard for functional brain imaging. As fNIRS relies on measuring light that gets back to the surface of the scalp after being in the brain, it can’t be used to measure activity in brain areas more than about 3 cm deep. This is being worked on, using cool ways of organising sources and detectors on the scalp, but it is not thought that fNIRS will ever be able to produce a whole-brain map of activity. Also, as fNIRS works at the centimetre level rather than the millimetre level, its spatial resolution and localisation accuracy are limited in comparison to fMRI. Despite this, if the brain areas you’re interested in investigating are closer to the surface of the head, and not too teensy tiny, then fNIRS is a great technology to use.

So, what has this all got to do with babies? Well, fNIRS has one vice, one Achilles heel. Hair. Yes, this amazingly intelligent technology has such a primitive enemy. If your participants are blonde or bald, you’ll probably be fine. But anything deviating from this can block light from entering the head, and therefore weaken the light reaching the brain and eventually getting back to the detectors. However, do you know who has little to no hair? Babies. Plus, babies aren’t very good at lying still, particularly in a cacophonous magnet. This is why fNIRS is especially good for measuring brain activity in infants.

fNIRS is used to study a variety of topics related to infant development.  One of the most studied areas of infant psychology is language development. Minagawa-Kawai et al (2007) investigated how infants learn phonemes (the sound chunks that words are made up of). They used fNIRS to measure brain activation in Japanese 3 to 28-month-olds while they listened to different sounds. Infants listened to blocks of sounds that alternated between two phonemes (e.g. da and ba), and then other blocks that alternated between two different versions of the same phoneme (e.g. da and dha). In 3 to 11-month-olds, they found higher activation in a brain area responsible for handling language for both of these contrasts. So, this means that infants were treating ‘da’ and ‘ba’ and ‘dha’ as three different phonemes. However, 13 to 28-month-olds only had this higher activation when listening to the block of alternating ‘ba’ and ‘da’. This means that the older infants were treating ‘da’ and ‘dha’ as the same phoneme. This is consistent with behavioural studies showing that infants undergo ‘perceptual narrowing’, whereby over time they stop being able to discriminate between perceptual differences that are irrelevant for them. This has been related to why it’s much easier to be bilingual from birth if you have input from both languages, than it is to try to learn a second language later in life.

Another popular area of infant psychology is how infants perceive and understand objects. Wilcox et al (2012) used fNIRS to study the age at which infants began to understand shapes and colours of objects. They measured brain activation while infants saw objects move behind a screen and emerge at the other side. This study used a live presentation, made possible by the fact that fNIRS has no prerequisites for a testing environment except to turn the lights down a bit.

The shape change (left), colour change (middle), and no change (right) conditions of Wilcox et al. (2012). Each trial lasted 20 seconds, consisting of two 10-second cycles of the object moving from one side to the other (behind the occluder) and back again.

These objects were either the same when they appeared from behind the screen, or they had changed in shape or colour. They found heightened activation in the same area found in adult fMRI studies for only the shape change in 3 to 9-month olds, but for both shape and colour changes in the 11 to 12-month-olds. This confirms behavioural evidence that infants are surprised when the features of objects have changed, and that babies understand shape as an unchanging feature of an object before they understand colour in this way. This study shows how you can use findings from adult fMRI and infant behavioural studies to inform an infant fNIRS study, helping us learn how the brain’s complex visual and perceptual systems develop from infancy to adulthood.

There’s a lot more to learn if you wish to venture into the world of infant fNIRS research; it’s a fascinating area filled with untapped potential. fNIRS can help us to measure the brain activity of a hard-to-reach population (those pesky babies), enabling us to ask and answer questions about the development of language, vision, social understanding, and more! Questions being investigated in the Lancaster Babylab (where I am doing my PhD) include:

  • Do babies understand what pointing means?
  • Are bilingual babies better at discriminating between sounds?
  • Why do babies look at their parents when they are surprised?

And beyond this, the possibilities are endless!

If you are intrigued by fNIRS and want to learn more, I’d recommend review papers such as the one by Wilcox and Biondi (2015), and workshops such as the 3-day Birkbeck-UCL NIRS training course.

Edited by Jonathan and Rachael

References:

Minagawa-Kawai, Y., Mori, K., Naoi, N., & Kojima, S. (2007). Neural Attunement Processes in Infants during the Acquisition of a Language-Specific Phonemic Contrast. Journal of Neuroscience, 27(2), 315-321.

Otsuka, Y., Nakato, E., Kanazawa, S., Yamaguchi, M., Watanabe, S., & Kakigi, R. (2007). Neural activation to upright and inverted faces in infants measured by near infrared spectroscopy. NeuroImage, 34(1), 399-406.

Tessari, M., Malagoni, A., Vannini, M., & Zamboni, P. (2015). A novel device for non-invasive cerebral perfusion assessment. Veins and Lymphatics, 4(1).

Wilcox, T., Stubbs, J., Hirshkowitz, A., & Boas, D. (2012). Functional activation of the infant cortex during object processing. NeuroImage, 62(3), 1833-1840.

The Neuroscience of Mindfulness: What Happens When We Meditate?

By Joseph Holloway

Joe is a guest writer for The Brain Domain, and is currently pursuing an MSc in Mindfulness-based Cognitive Therapies and Approaches, as well as an MA in 18th Century Literary Studies, at the University of Exeter.

‘Mindfulness’ is a word that has gathered momentum over the last decade. It has grown beyond associations of yoga and alternative therapies and moved into the realms of corporate culture, education, and mental health. Mindfulness has become such a prevalent aspect of our culture that there was even a Ladybird Book for Grown-Ups dedicated to it. When a phenomenon becomes this prominent, and when it enters such fundamental spheres of our lives, it is good to review its evidence base. What is Mindfulness meditation? How is it employed in a therapy context? What happens in the brain when we meditate? What evidence do we have that it is effective? This article attempts to answer these questions.

A Brief History of Mindfulness and Therapy

Firstly, what is Mindfulness? The term has an interesting history of development (Analayo, 2003, pp. 15-41) that is beyond the scope of this article, but a commonly accepted contemporary definition is: “moment-to-moment awareness” (Kabat-Zinn, 1990, p.2). Participants deliberately pay attention to thoughts, feelings, and sensations in the body, bringing the mind back to the task at hand when it wanders. This form of meditation is entrenched in many of the oldest religions and can be traced back to early canonical Buddhist texts such as the Satipaṭṭhāna-sutta and the Mahāsatipaṭṭhāna-sutta. Contemporary Western understandings of Mindfulness meditation are a repackaging of the teachings of these texts in a secular context. They focus on the insights about the workings of the mind and the teachings on how to reduce the amount of distress that we cause ourselves.

A key example of such repackaging was Jon Kabat-Zinn’s Mindfulness-Based Stress Reduction (MBSR) course, originally developed at the University of Massachusetts Medical School in the 1970s. This is an 8-week group course teaching participants how to engage with Mindfulness meditation, and is open to all who feel (i) that they have too much stress in their lives, or (ii) that they are not relating to their stress healthily. In the 1990s, Mark Williams, John Teasdale and Zindel Segal combined Kabat-Zinn’s successful model with Beck’s Cognitive Behaviour Therapy (CBT) to create a more specialised programme called Mindfulness-Based Cognitive Therapy (MBCT). This programme is specifically designed to treat recurrent depression, and is largely only open to those referred by their primary medical consultant. These two arms, the general MBSR and the specific MBCT, are the constituents of the Mindfulness-based interventions available on the NHS in the UK and through other providers around the world. They are widely used both as complementary and sole treatments for a variety of mental and physical health diagnoses including depression, generalised anxiety disorder, post-traumatic stress disorder, insomnia and eating disorders.

What evidence is there that Mindfulness is effective?

The effectiveness of Mindfulness-based interventions has been demonstrated through longitudinal studies, which track the same people over time. An important early example found that depressive participants in the MBCT programme had half as many relapses in the year after treatment as depressive participants who received treatment as usual (Teasdale et al, 2000). This finding was reinforced by a replication trial (Ma and Teasdale, 2004) concluding that there is ‘further evidence that MBCT is a cost-efficient and efficacious intervention to reduce relapse/recurrence in patients with recurrent major depressive disorder’ (ibid, p. 39). In these studies, the pool of participants in recovery from depression was randomly allocated into either the experimental or the control group. This was done by an external statistician, and participants were matched for ‘age, gender, date of assessment, number of previous episodes of depression, and severity of last episode’ (ibid, p. 32). The results were important confirmation of the effectiveness of Mindfulness-based interventions as therapy.

Whilst this was great news, it wasn’t until 2008 that Mindfulness-based interventions were compared to the gold standard for treatment of recurrent depression (Kuyken et al, 2008): maintenance antidepressant medication (m-ADM), which requires the patient to keep taking antidepressants even when there are no signs of relapse. Importantly, the 2008 study found that patients treated with MBCT were less likely to relapse over 15 months than those receiving the gold standard (47% compared to 60% of the m-ADM group). This was also replicated in a follow-up study (Segal et al, 2010) where MBCT was compared against m-ADM and against a placebo. Once participants were in remission they were given either MBCT, m-ADM, or had their active medication discontinued and replaced with a placebo. Participants were randomly allocated to groups by an external statistician, ensuring close control of factors not being investigated. The MBCT and m-ADM groups showed the same level of protection against recurrence (73%), both much higher than the placebo group. Over the short term (15 months), Mindfulness-based interventions were thus shown to be better than m-ADM, and equally effective over an even longer period. In addition, it is arguably cheaper to administer Mindfulness-based interventions than m-ADM, there are no issues with drug tolerance, and unlike many antidepressants, Mindfulness meditation can be utilised whilst pregnant or breastfeeding.

How does Mindfulness work?

When the brain is not responding to any particular task and is ‘at rest’, areas collectively known as the Default Mode Network (DMN) are activated (Berger, 1929; Ingvar, 1974; Andreasen et al, 1995). This network was found to be closely associated with mind wandering (Mason et al, 2007). It was also found to be consistent with “internally focused tasks including autobiographical memory retrieval, envisioning the future, and conceiving the perspectives of others” (Buckner, 2008, p. 1). When our mind is wandering and not focused on a task, we are normally either lost in personal memories or running through a scenario in our head: predicting, anticipating or worrying.

More frequent and more automatic activation of this network is associated with depression (Greicius et al, 2007; Zhang et al, 2010; Berman et al, 2011). Regularly wallowing in old memories or worrying about the future are perfect foundations for conditions that may lead to depression. These two functions, conducive to ‘living on autopilot’, are the exact opposite of the definition of Mindfulness meditation given above: “moment-to-moment awareness.” Indeed, studies have shown that activation of the DMN can be regulated by Mindfulness meditation (Hasenkamp et al, 2012). Participants were observed meditating, and whenever they noticed their mind wandering they had to press a button. Immediately before this action, the participants were unconsciously mind wandering. When the participants noticed that their mind had wandered (indicated by the button press), the researchers regularly observed a deactivation of the DMN. The act of practising mindfulness meditation was thus regularly associated with a deactivation of the DMN. A correlation between self-reported meditation experience and lower levels of DMN activation has also been observed (Way et al, 2010).

Of course, the brain is never ‘doing nothing’, and a counter-network was regularly activated when participants weren’t mind-wandering: when they were paying attention to a task. This network in part consists of the anterior cingulate cortex (ACC), which is known to be instrumental in task monitoring (Carter et al, 1998). Activation of the ACC is closely associated with ‘executive control’ (Van Veen & Carter, 2002, p. 593), which detects incompatibilities or conflicts between a predicted outcome and the observed reality. In this way the ACC functions as error-reporting or quality management. The ACC does not attempt to remedy the situation, but instead highlights it to other areas of the brain. This all happens before the subject is cognitively aware that there is a conflict.

Crucially, an association has been shown between meditation and activation of the ACC. A positive correlation between ACC thickness and meditation experience (Grant et al, 2010), and between mindfulness meditation and activation of the ACC (Zeidan et al, 2013), has been demonstrated. Mindfulness meditation is reliably shown to activate the ACC and to improve the relative ease and likelihood of it being activated. Activation of the ACC prevents the mind from wandering, and prevents activation of the DMN. Mind wandering and activation of the DMN are related to depressive symptoms either developing or recurring. This is how Mindfulness-based interventions are thought to help at a neurological level.

Conclusions

Mindfulness meditation has been around for 3500 years. It has been utilised in the West for nearly 40 years. We have had good evidence that it works for nearly 20 years, but we are only just starting to explore how it works. The recent findings above help outline the process of change that the brain goes through whilst a regular Mindfulness meditation practice is established, but they are by no means the full picture. We are also investigating how Mindfulness meditation helps people respond deliberately rather than react instinctively. We are investigating how Mindfulness meditation enables decentering, and how it reduces connectivity to the emotional areas of the brain. Research into the nuts and bolts of Mindfulness has never been so intense, and exciting results just like those depicted in this article are sure to arise soon.

Joe teaches a 10-week course devised by the Mindfulness in Schools Project (see details here). He teaches all levels and abilities, from college to university, and finds that it has had an overwhelmingly positive impact on the well-being, achievement, and attendance of his students. If this is something that interests you, he can be contacted at joseph.c.holloway@gmail.com and is now taking bookings for autumn term 2017, and for 2018.

Edited by Jonathan and Rachael

References:

  • Analayo (2003). Satipaṭṭhāna: The Direct Path to Realisation. Birmingham: Windhorse Publishing
  • Andreasen, N. et al. (1995). Remembering the past: two facets of episodic memory explored with positron emission tomography. American Journal of Psychiatry, 152, pp 1576-1585.
  • Berger, H. (1929). Über das elektrenkephalogramm des menschen. Archiv für Psychiatrie und Nervenkrankheiten, 87, (1), pp 527-570.
  • Berman, M. et al. (2011). Depression, rumination and the default network. Social Cognitive & Affective Neuroscience, 6, (1), pp 548-555.
  • Buckner, R. (2008). The brain’s default network: anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences, 1124, (1), pp 1-38.
  • Carter, C. et al. (1998). Anterior cingulate cortex, error detection, and the online monitoring of performance. Science, 280, (1), pp 748-749.
  • Greicius, M. (2007). Resting-state functional connectivity in major depression: abnormally increased contributions from subgenual cingulate cortex and thalamus. Biological Psychiatry, 62, (5), pp 429-437.
  • Grant, J. et al. (2010). Cortical thickness and pain sensitivity in zen meditators. American Psychological Association, 10, (1) pp 43-53.
  • Hasenkamp, W. (2012). Mind wandering and attention during focused meditation: a fine-grained temporal analysis of fluctuating cognitive states. NeuroImage, 59, (1), pp 750-760.
  • Holzel, B. et al. (2011). Mindfulness practice leads to increases in regional brain gray matter density. Psychiatry Research, 191, (1), pp 36-43.
  • Ingvar, D. (1974). Patterns of brain activity revealed by measurements of regional cerebral blood flow. Copenhagen: Alfred Benzon Symposium.
  • Kabat-Zinn, J. (1990). Full Catastrophe Living. New York: Dell Publishing.
  • Kuyken, W. et al. (2008). Mindfulness-Based Cognitive Therapy to prevent relapse in recurrent depression. Journal of Consulting and Clinical Psychology, 76, (6), pp 966-978.
  • Ma, S. & Teasdale, J. (2004). Mindfulness-Based Cognitive Therapy for depression: replication and exploration of differential relapse prevention effects. Journal of Consulting and Clinical Psychology, 72, (1), pp 31-40.
  • Mason, M. et al (2007). Wandering mind: the default network and stimulus-independent thought. Science, 315, (19), pp 393-395.
  • Teasdale, J. et al. (2000). Prevention of relapse/recurrence in major depression by Mindfulness-Based Cognitive Therapy. Journal of Consulting and Clinical Psychology, 68 (4), pp 615-623.
  • Segal, Z. et al (2010). Antidepressant monotherapy vs sequential pharmacotherapy and Mindfulness-Based Cognitive Therapy, or placebo, for relapse prophylaxis in recurrent depression. Archives of General Psychiatry, 67, (12), pp 1256-1264.
  • Van Veen, V. & Carter, C. (2002). The timing of action-monitoring processes in the anterior cingulate cortex. Journal of Cognitive Neuroscience, 14, (4), pp 593-602.
  • Way, B. et al (2010). Dispositional mindfulness and depressive symptomatology. Correlations with limbic and self-referential neural activity during rest. Emotion, 10, (1), pp 12-24.
  • Zeidan, F. et al. (2013). Neural correlates of mindfulness meditation-related anxiety relief. Social Cognitive and Affective Neuroscience, 9, (6), pp 751-759.
  • Zhang, D. et al. (2010). Noninvasive functional and structural connectivity of the human thalamocortical system. Cerebral Cortex, 20, (1), pp 1187-1194.

Things that look like your brain but actually aren’t

When you’re drunk and merry celebrating Christmas today, remember to watch out for the brain imposters! The Brain Domain team have put together a list of some of the most convincing brain imposters – things that look like your brain but actually aren’t!

A WALNUT

A CAULIFLOWER


BRAIN CORAL IN GREAT BARRIER REEF


A PRIMATE’S BRAIN


A folded cerebral cortex allows a much greater surface area of brain to be packed into the skull. This makes primate brains similar to human brains in appearance. Watch out for your peanuts this Christmas!

SOMEONE ELSE’S BRAIN


Though it may be difficult, never confuse someone else’s brain with your own! This can cause all kinds of awkward conversations, particularly at Christmas.

…. Do come back in January to read some neuroscience content that’s a lot more informative!

Walnut, Coral, and Cauliflower images sourced from Dreamstime
Coral: © Surpasspro | Dreamstime.com – Brain Coral In Barrier Reef Photo
Cauliflower: © Jultud | Dreamstime.com – Cauliflower Photo

Animal Brain image adapted from the Mammalian Brain Collection

Sport on the brain: when do the risks outweigh the benefits?

By Catherine Foster

You’re likely to have come across articles citing evidence that physical fitness and sport participation benefit cardiovascular and brain health [1]. Active lifestyles reduce the risk of high blood pressure, cardiovascular disease, Type 2 Diabetes, and stroke. Physical activity has also been shown to reduce inflammation, depression, cerebral metabolic and cognitive decline, and brain atrophy. The evidence is clear: sport is undeniably a good thing for your brain as well as your ability to fit in your jeans.

Yet, certain sports carry a relatively high risk of concussion, also misleadingly called mild traumatic brain injury (MTBI). American football, wrestling, soccer, ice hockey, boxing and rugby are among the sports with the highest risk of brain injury, though many others involve some risk. The point of this article is not to warn people off playing sport but to highlight the danger of mismanagement of concussion and the long-term effects of repeated impacts. Most research focuses on American Football but concussion is a frequent hazard in all contact and combat sports as well as equestrian and snow-sports.

In the US alone, the estimated annual incidence of sports-related concussion is around 3.8 million, with approximately 50% going unreported [2]. Research and media attention have focused on concussion in American Football, partly due to the popularity of the sport. Another important factor was Dr. Bennet Omalu’s 2002 diagnosis of chronic traumatic encephalopathy (CTE) in Mike Webster, a former star centre with a 14-year professional career. A form of dementia, CTE causes progressive brain degeneration resulting from an accumulation of the tau protein, which kills brain cells. CTE is now known to be caused by repeated brain trauma such as that suffered by athletes playing contact or collision sports [3].

Dr. Omalu’s discovery, and the ensuing battle with the National Football League (NFL) to have this disease risk acknowledged, was chronicled in the movie Concussion, starring Will Smith as Dr. Omalu. The movie followed the billion-dollar settlement the NFL made to retired players and families of ex-athletes for concealing the effects of repeated concussions. Around $10m of this was dedicated to research into concussion and CTE. Knowledge of the effects of concussion is much improved thanks to Dr. Omalu and others, and it is promising that a small portion of NFL profits will go towards further research. Ideally this would have been the NFL’s own initiative, but that is a separate argument. Increased awareness of concussion has increased reporting. However, without a solution, the increasing size of players and the aggression with which the sport is played are contributing to the rising number of concussions each year [4].

BENNET OMALU (LEFT) & WILL SMITH, WHO PORTRAYED HIM IN THE 2016 FILM CONCUSSION. IMAGE SOURCE


How regularly do concussions occur?

The NFL reported 271 concussions in the 2015 season; add college and school football to that, and you have a lot of young adults with brain injuries. It is estimated that a professional football player receives up to 1500 head impacts a season, and a Virginia Tech study revealed that the G-force of these hits can reach 150 Gs [5], with helmets only absorbing a fraction of the impact. By comparison, fighter pilots experience a G-force of about 9 during a jet roll. The seriousness of concussion is not limited to the NFL: consider helmetless sports such as soccer, where a headed ball can have an impact speed of 70mph (40mph in children’s games), and rugby, where a player’s average weight is around 114kg in the UK and higher internationally. In 2013-14 there were 86 reported concussions in English rugby alone. So what actually happens to the brain following concussion, and can anything be done to reduce the incidence?
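Where do numbers like 150 Gs come from? A rough kinematic estimate is enough to get a feel for it: if the head arrives at speed v and is brought to rest over a stopping distance d (padding compression plus tissue give), the average deceleration is v²/2d. The speeds and stopping distances below are illustrative assumptions, not measurements from the studies cited.

```python
# Back-of-the-envelope G-forces for a head impact. Average deceleration
# for a stop from speed v over distance d is v**2 / (2 * d). All speeds
# and stopping distances here are illustrative assumptions.

G = 9.81  # m/s^2, one "g" of acceleration

def average_g(speed_mph, stop_distance_cm):
    v = speed_mph * 0.44704              # mph -> m/s
    d = stop_distance_cm / 100.0         # cm -> m
    return v**2 / (2 * d) / G

for mph, cm in [(20, 5.0), (20, 1.0), (10, 1.0)]:
    print(f"{mph} mph stopping over {cm} cm: ~{average_g(mph, cm):.0f} g")
```

Note how much the stopping distance matters: a few extra centimetres of deformation cut the G-load several-fold, which is precisely what helmet padding is for.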


What is happening to the brain?

Concussion’s principal causes are the acceleration and deceleration forces produced by a blow to the head or neck [6]. These forces cause the brain to rotate rapidly and the tissue to stretch and tear, resulting in the death of neuronal cell bodies, dendrites and axons, glia, and blood vessels in both grey matter and white matter connective tissue. Cells downstream of the site of injury are affected due to reduced energy supply or communication from other regions.

In addition, a metabolic cascade characterised by a huge release of neurotransmitters (chemical messengers) attempts to maintain balance following injury. This increase in activity increases glucose requirements and therefore hypermetabolism occurs, even though blood flow in the brain (which carries glucose) is reduced. This “energy crisis” in the brain typically results in a range of symptoms including dizziness or loss of consciousness, confusion, memory loss, headache and mood disturbances.

The good news is that with the correct care, symptoms of concussion typically subside within days or weeks thanks to the brain’s remarkable ability to repair itself, known as ‘brain plasticity’. Unfortunately, concussion is often not dealt with appropriately, and individuals go on to develop post-concussion syndrome (PCS), a complex disorder involving physical disability, cognitive problems and mood disturbances. A second major risk is second impact syndrome (SIS), whereby a second injury occurs before the first has resolved. This injury may appear extremely minor at first but rapidly advances to cerebral oedema and coma, and is ultimately fatal.


What can be done to reduce the incidence and consequences of concussion in sport?

As David Camarillo illustrates in his TED talk, current helmets do not adequately protect the brain, and research is aimed at developing a more effective alternative. Senior sports clinicians, including Dr. James Robson, Chief Medical Officer for Scottish Rugby Union, have called for changes in the way the game is played to reduce head injuries; it is estimated there is at least one concussion per game in the Six Nations alone. Changing the rules of a sport in a way that dramatically alters the game has huge financial and cultural implications, but neuropathology and brain imaging studies have shown the impact of sports concussion on blood flow, metabolism and neurodegeneration as these sports are currently played [7]. Right now, the most effective way to reduce the risk of complications following concussion is to ensure that athletes refrain from playing for an appropriate time following any impact to the head. While it is unclear how many concussions it takes to cause irreversible damage, players and coaches should accept that if an individual continues to play after multiple concussions, they are likely to develop neurological problems. One could argue that players should be educated on the risks and allowed to make their own decision on whether to participate in the sport. But with high incidences of sports-related concussion in children and teenagers, this argument becomes more complex, and again boils down to: do the many benefits of sport, physical and cultural, outweigh the risks?

Header image source: Getty Images.

Edited by Jonathan and Rachael 

References:

  1. Cotman, C. W., Berchtold, N. C., & Christie, L. A. (2007). Exercise builds brain health: key roles of growth factor cascades and inflammation. Trends in Neurosciences, 30(9), 464-472.
  2. Harmon, K. G., Drezner, J. A., Gammons, M., Guskiewicz, K. M., Halstead, M., Herring, S. A., … & Roberts, W. O. (2013). American Medical Society for Sports Medicine position statement: concussion in sport. British Journal of Sports Medicine, 47(1), 15-26.
  3. Omalu, B., Bailes, J., Hamilton, R. L., Kamboh, M. I., Hammers, J., Case, M., & Fitzsimmons, R. (2011). Emerging histomorphologic phenotypes of chronic traumatic encephalopathy in American athletes. Neurosurgery, 69(1), 173-183.
  4. Haring, R. S., Canner, J. K., Asemota, A. O., George, B. P., Selvarajah, S., Haider, A. H., & Schneider, E. B. (2015). Trends in incidence and severity of sports-related traumatic brain injury (TBI) in the emergency department, 2006–2011. Brain Injury, 29(7-8), 989-992.
  5. Jenkins, T. J. (2013). Sports Science Part II: Anatomy of a Hit in Football. Retrieved from: http://www.personal.psu.edu/afr3/blogs/siowfa13/2013/10/sports-science-part-ii-anatomy-of-a-hit-in-football.html
  6. McKee, A. C., Daneshvar, D. H., Alvarez, V. E., & Stein, T. D. (2014). The neuropathology of sport. Acta Neuropathologica, 127(1), 29-51.
  7. Henry, L. C., Tremblay, S., & De Beaumont, L. (2016). Long-Term Effects of Sports Concussions Bridging the Neurocognitive Repercussions of the Injury with the Newest Neuroimaging Data. The Neuroscientist, 1073858416651034.

Who the hell is MEG, and how can she help us understand the brain?

Let me tell you about a MEG who doesn’t get her fair share of the limelight. MEG uses her SQUIDs to catch your brain activity, after it has left your head. She’s quite a fast mover, and can do this at a millisecond rate! Strangely though, she’s kept locked in a room with really thick walls. Poor MEG.

Still confused? Of course you are. I guess it is time for me to admit MEG isn’t a woman. Similar to MRI, MEG is a technique researchers use to learn about the brain. MEG is short for Magnetoencephalography (magneto refers to magnetic fields, encephalon means the brain, and –graphy indicates the process of recording information). Nothing to do with the guy in the purple cape.

This is a MEG scanner:

Source of this image: Magnetoencephalography Wikipedia

I’d like to say this brain imaging technique was inspired by a woman getting a perm in the 80s, as that’s what it has always reminded me of.  I’m afraid that’s not the case.

So how does MEG measure these magnetic fields? Any electrical current will produce a magnetic field, even the electrical currents in your brain. If a big group of neurons (brain cells) face the same direction and send electrical impulses to each other, they induce a weak magnetic field with a particular direction and strength. These magnetic fields leave the brain, and can still be measured outside the skull.

Source of this image: Magnetoencephalography Wikipedia

Since the magnetic fields that leave the head are so weak (around a billion times weaker than the magnetic field of a typical fridge magnet!), a MEG scanner measures them using really sensitive instruments called SQUIDs (Superconducting Quantum Interference Devices). SQUIDs are quite high maintenance though; they only work at temperatures below about -269°C! Bathing them in liquid helium keeps them this cold. As SQUIDs are so sensitive, they also pick up stronger magnetic fields from the environment, which can mask the ones we want to measure from the brain. Because of this, a MEG scanner has to be kept in a magnetically shielded room, with a door like this:

Source of this image: Magnetoencephalography Wikipedia
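If you fancy putting rough numbers on just how weak these brain fields are, here is a minimal back-of-the-envelope sketch in Python. The dipole moment, sensor distance and fridge-magnet strength below are typical textbook values I have assumed for illustration, not measurements from a real scanner:

```python
# A small patch of synchronised cortex can be approximated as a "current
# dipole", whose field magnitude falls off roughly as
#     B ≈ (mu0 / 4π) * Q / r²
# where Q is the dipole moment and r is the distance to the sensor.

MU0_OVER_4PI = 1e-7   # magnetic constant over 4π, in T·m/A

Q = 10e-9   # assumed dipole moment of ~1 cm² of active cortex: ~10 nA·m
r = 0.05    # assumed distance from the cortex to a MEG sensor: ~5 cm

b_brain = MU0_OVER_4PI * Q / r**2   # field at the sensor, in tesla
b_fridge = 1e-3                     # assumed fridge magnet at its surface: ~1 mT

print(f"Brain field at the sensor: {b_brain * 1e15:.0f} femtotesla")
print(f"Fridge magnet vs. brain: ~{b_fridge / b_brain:.1e} times stronger")
```

Run it and you get a few hundred femtotesla at the sensor, with the fridge magnet billions of times stronger; hence the SQUIDs, the liquid helium, and that very serious door.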

Why is it useful to measure these magnetic fields anyway? MEG allows us to measure brain activity in a non-invasive way; there is no discomfort for the person being scanned, and no side effects. MEG helps us to learn about how and where the brain responds to certain tasks, improving knowledge of the link between brain function and human behaviour. Brain function measured with MEG has been shown to be different in many neurological and psychiatric diseases.  MEG has a key role to play in helping localise regions of the brain that are faulty, and that might need to be surgically removed, for example in epilepsy.

I hope you’ve enjoyed being introduced to a new MEG. Watch this space for more articles on what she gets up to.

Organ Donation: A No-Brainer, Right?

Organ donation. It’s an unusual topic for neuroscience (unless you’re talking about this), but the brain might just present the biggest issue preventing the advancement of this essential field. Why? Because a recent innovation relies on growing human organs inside pigs, and the initial studies show that we risk human cells entering the pig brains. What if their brains become too human? What if they start to think like us? Could we end up with a new race of Pig-men? It’s an intriguing idea (that might make you feel a little weird inside), but before we think about the ethical implications of an interspecies brain, let’s think about how this all came about.

The ‘Organ Deficit’

Anyone who has ever tuned into a dodgy medical drama on daytime telly will know that sometimes organs need to be replaced, and that without a replacement organ the patient will die. The problem is that getting hold of donor organs is difficult! You need it to be fresh, you need it to be compatible with the patient, and you need it before the patient’s time runs out. What people often forget is that you also need to hope someone else tragically dies before their time, but also before you do.

If that wasn’t distressing enough, a look at the numbers shows the current system is not only morbid, but also failing us. In the USA alone, twenty-two people die every day waiting for an organ1, and those who are lucky enough to receive one often have to suffer for many years beforehand2. We are suffering from an Organ Deficit, and this one won’t be fixed with a little austerity.


Wouldn’t it be better if a doctor could simply say “Your kidney is failing, but don’t worry, we’ll just grow you a new one and you’ll be right as rain again!”? Nobody would have to die to get that organ, or suffer for years on a waiting list. The only people who might actually suffer are the scriptwriters of those medical dramas (and mildly at that). Growing new tissue for patients is a core aim of scientists working in regenerative medicine. But there is a list of problems that have to be overcome first. Unfortunately, that ‘list of problems’ isn’t a short one, and despite decades of research, several big leaps, and even a few Nobel prizes, the end still isn’t in sight. Meanwhile, another day passes and another twenty-two people have missed their window.

What we need is a temporary fix; some way to increase the number of organs available to help the people in need now, and allow scientists to continue researching in the background for the ideal solution. Something with a fancy long name (and an unnecessary number of g’s) – something like Xenogeneic Organogenesis.

Xeno-whatnow?

Ok, I admit, I made that term up. But it makes sense! Let me translate. ‘Xenogeneic’ means working with cells of two different species, and ‘Organogenesis’ means growing organs. Simply put, we’re talking about growing new human organs inside host animals. How? Well, in a breakthrough paper, Kobayashi et al.3 demonstrated a way to grow fully formed rat organs inside a mouse.

This was achieved through a technique called Blastocyst Complementation, in which an early embryo is injected with stem cells from a second animal. The resulting animal is called a ‘chimeric animal’, because it is made of cells from the two different animals, and those cells remain genetically independent of each other. This had been successfully achieved in animals of the same species before (e.g. putting mouse stem cells into a mouse embryo), but here they crossed species by creating a mouse-rat chimera and a rat-mouse chimera (See image A, below). The chimeras were morphologically similar to the species of the host embryo (and mother), but crucially, the chimeras were composed of cells derived from both species, randomly distributed across the animals. In other words, whilst the mouse-rat chimera looked like a mouse, if you looked closely at any body part you would see it was in fact built of both mouse and rat cells. The researchers believe this happened because the introduced stem cells don’t alter the ‘blueprints’ the embryo already has. Instead, they grow just like the other embryonic stem cells, following the chemical directions given to them and gradually building the animal according to those instructions.

[Image A: the mouse-rat and rat-mouse chimeras. Image B: the Pdx1-/- pancreas experiment.]

Satisfied they could use blastocyst complementation to create inter-species chimeric animals, the researchers went one step further. They genetically modified a mouse embryo to prevent it from growing a pancreas (in image B this is called the ‘Pdx1-/-’ strain: a name that refers to the gene that was removed) and injected unmodified rat stem cells into the embryo. They were hoping that by preventing the mouse embryo’s stem cells from being able to form a pancreas, the chemical directions to build a pancreas would only be followed by the newly introduced rat stem cells. And guess what? It worked! They reported the pancreas inside the Pdx1-/- strain was built entirely of rat cells (See image B, above).

This got a lot of people very excited! Would it be possible to do this with human organs? Could we farm human organs like we do food? Could we even use iPS (induced pluripotent stem cell) technology to grow autologous, patient-specific organs, built from a patient’s own cells, to improve the transplantation process? The lab has now begun tackling these sorts of questions, starting by testing the viability of pig-human chimeric embryos (pig embryos with human stem cells, to make pigs built with human cells), to see if the two cell types will contribute to the animal in the same way the mouse and rat cells did.

Freaky! Is this why we’re worried about creating a race of Pig-men?

Yep. What I didn’t explain above is that the brains of those chimeric animals, like the hearts, were also composed of both mouse and rat cells. Considering that the scientific community generally accepts it is our incredible brain that separates us as a species, what would happen if human brain cells made their way into the pig brain? How human could they become? Would they begin to look like us? Act like us? Talk like us? Even think like us? How human would they have to become before we gave them human rights? Where is the legal, moral and ethical line between animal and human when one creature is a mixture of the two?

Most people (not all) would agree that sacrificing a farm animal’s life to save a human’s is an acceptable cause. After all, we already do that for bacon… a cause that even I (an avid carnivore) cannot claim as exactly necessary. But sacrificing a half pig, half human? That sounds like something you’d find in a horror story!

Even though many scientists believe an intelligent pig-human chimera is biologically implausible (let alone a speaking one), no-one is willing to say it’s impossible. Concerns over a potentially altered cognitive state have led the US-based NIH (a.k.a. USA government funding central) to announce that it will not support any research that involves introducing human cells into non-human embryos5. The fact is we don’t know enough about how the human brain develops or works. We don’t understand how the biological structures, electrical signals, and chemical balances translate into the gestalt experience of mind. We don’t have one easy answer that makes “being a human” and “being a pig” distinct enough for us to know how to interact morally with a pig-human chimera. And that makes me (and probably you) rather uncomfortable.

It’s also giving me all kinds of flashbacks to my school theatre group’s rendition of Animal Farm (I played Boy. Sounds like a rubbish part, right? You’re wrong. He’s the narrator!). I can’t help but wonder if Orwell ever imagined his metaphorical work could have literal connotations too…

“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.”

Fortunately, nobody wants to create pig-men (at least I don’t think there is a Dr. Moreau in the research team?), and the pig-human embryos being generated are being terminated long before they grow into anything substantial. The lab wants to be careful, to understand the potential consequences before even considering letting a pig-human foetus go to full term. Naturally this means there’s a heck of a lot of work to be done, but xenogeneic organogenesis (copyright: Me) is still decades ahead of other organ replacement models. Continued work could mean viable results a lot sooner, saving countless lives. At the very least, it would enable us to study natural organ growth directly, fast-tracking dish-driven stem cell models.

Is this really the best solution available to solve the Organ Deficit?

Good question, reader! Let’s bring this discussion back to a simpler solution. Late last year (1st Dec 2015), Wales (home of The Brain Domain) became the first country in the UK to change the law to make organ donation ‘opt-out’ instead of ‘opt-in’.

This isn’t a new idea. Many other countries have implemented an opt-out system before, and generally the statistics look good6. Yet there is ongoing debate about whether this change alone will be sufficient7. Cultural variations and infrastructural differences in health care systems have a large impact on the effectiveness of such legislation, but generally speaking we should see some improvement. If that improvement is sufficient, then the policy will likely be rolled out across the rest of the UK (fingers crossed we beat the four years it took to get the plastic bag charge across the River Severn!). But if that is still not enough, then we’ll just have to hope those at Stanford can find a way to make xenogeneic organogenesis a real no-brainer.

References:

1) http://www.organdonor.gov/about/data.html

2) https://www.organdonation.nhs.uk/real-life-stories/people-who-are-waiting-for-a-transplant/

3) Kobayashi et al. http://www.sciencedirect.com/science/article/pii/S0092867410008433

4) Example image taken from: http://www.sciencedirect.com/science/article/pii/S0092867410009529

5) https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-158.html

6) http://webs.wofford.edu/pechwj/Do%20Defaults%20Save%20Lives.pdf

7) http://www.bbc.co.uk/news/uk-wales-34932951

Reflections on Dementia

This is not our usual type of Brain Domain article, but personal stories are important reminders of why neuroscience research is so crucial. Here are some touching thoughts from a friend of someone with Dementia.

Dementia: Please no!

By Andy Stickland

As I look upon my frail friend Bob, lying there in his chair, blanket all wrapped around him as the nurse says “he feels the cold more these days”, I realise I have never been in his room before, despite my visits over the years. I asked another nurse about his routine and she said, “Robert does not sit in the front room too much these days and generally comes down just for meals.” The last time Bob saw me, although he wouldn’t remember this, he was happily sitting downstairs with others, displaying his usual smile and lighting up the room, actively enjoying the music playing in the background.

Bob has not known who I am, or anyone for that matter, for all the years that I have (too infrequently) called in at his Care Home, set amidst the Cornish countryside in the Looe valley. Although not as aware or as attentive as on my last visit, Bob did seem ‘happy’, to use a somewhat overused and oversimplified term (yet, this is the only way I have been able to express the situation to others and myself). He may seem happy in his world, whatever happy is, but it’s those who knew him, liked him, loved him, that suffer: from the daughter he knows no more, to his former colleagues who used to call on the telephone when he lived in town. Bob did not know them, and so had little he could say, so they stopped calling. Memories are what make us, are they not?

We are often reminded of the value of face-to-face communication. Dear old Bob is nearly 90 years old, and uses a lot of non-verbal communication. However, over recent years, I normally have no idea what on earth he is trying to communicate. His fingers move around while he explains in depth a concept that seems so clear to him but which I struggle to understand. I play along in a desire not to distress him, saying “yes” or “oh I see”. Some sentences click into place and I strain to find some meaning in the riddles Bob speaks, which seem to make sense to him. I remember the gist of some of what he says. Out of the frail man in front of me comes the lovely soothing voice that I first knew. I met Bob in the 80s, when I was a young trainee farmer, approaching 21. He was then in his 50s (as I am now!), and was the Estate Manager. Today, the voice, nearly as powerful, and the eyes, nearly as bright and welcoming, relay such statements: “some people are loved a lot by family but too much sometimes” – Bob smiles as he speaks – “as they are not always nice”, and “our parents sometimes warn us about some people that we may go here or there with but are not really good for us”, and “we shall grow old together, some mates (chaps) are good some not so good” – another grin – “and best not to get too close to those who are not good for us!”.

I often tell Bob of what a great blessing he was to many people, and tell him of things he did, and he responds with “Oh, did I?” I do not say, “Oh, do you not remember?” as I know he does not, and I do not want to worry him. What pain he might feel in being told things of which he has no recollection, by people whom he no longer knows. The man of intellect, of reasoning, and with memories, is all gone. Does Bob know of the present distress of those who knew him and loved him? Blissfully, it seems not, as all his memories are gone. Five years ago he at least had his long-term memory, but not today.

I dare to say, why then, what is the point? Some are not ‘happy’ like Bob, but instead fearful, angry and confused, with their loved ones saying “this is not what they were like”, despairing at the stranger in front of them. Oh what pain befalls a husband, a wife, a father, a mother, a sibling, or a dear friend, when the person they love, even if ‘happy’, does not know who they are to them, or who they used to be.

I gaze around the room and see a picture of Bob standing in a field overlooking Lee Bay. This is the beach on the Lee Abbey estate, in North Devon, at the end of the Valley of Rocks near Lynton, and he, as Estate Manager, had come to see how the haymaking was coming on. The old Massey (not so old then though) pulling a trailer laden with bales was the one I think I used a couple of years later for scraping the yard after I had milked the cows. So sad that he does not now have any memories at all of these happy days. Many of us do, and we remember the giant of a man with a big heart and lovely smile who blessed so many of us along our journeys in life. I take his hand and say a short prayer of thanks to a higher power.

I do so, so hope, that the brain-power of some gifted souls (maybe even my own flesh and blood) can come up with a cure to stop this whole horrid state of affairs. We have the memories but sadly they do not.

I go to leave and I hear the words “toodle ooh”, the very words he so often uses when one leaves, said in his usual confident and caring way, and I take some comfort.

Of course, the experience of dementia is not the same for everyone, or every family. Click here to read a daughter’s personal account of her father’s dementia, written with honesty, charm and a welcome pinch of humour.

What is dementia, and how can you get help? Click here to find out more.

Edited by Rachael