Can’t or Won’t – An Introduction To Apathy.

 By Megan Jackson

Often, when a person hears the word apathy, an image comes to mind: a glassy-eyed teenager scrolling vacantly through their phone while their parent looks on in despair. While comical, this image does not reflect what apathy really is: a complex symptom with real clinical significance.

In 1956, a study was published describing a group of Americans released from Chinese prison camps following the Korean War1. As a reaction to the severe stress they had suffered during their time in prison, the men were observed to be ‘listless’, ‘indifferent’ and ‘lacking emotion’. The scientists decided to call this pattern of behaviours apathy. However, at this point there was no formal way to measure apathy. It was acknowledged that it could manifest in varying degrees, but that was the extent of it. It was over 30 years before apathy was given a proper definition and recognised as a true clinical construct.

As time went on, scientists noticed that apathy doesn’t just arise from times of extreme stress, like time in a prison camp, but also appears in a variety of clinical disorders. A proper definition and a way of assessing apathy were needed. In 1990, Robert Marin defined apathy as ‘a loss of motivation not attributable to current emotional distress, cognitive impairment, or diminished level of consciousness’. As this is a bit of a mouthful, it was summarised as ‘a measurable reduction in goal-directed behaviour’. This definition makes it easy to imagine an individual who no longer cares about, likes, or wants anything and therefore does nothing. However, this is not always the case. There are different subtypes of apathy, each involving different brain regions and thought processes. These are:

Cognitive – in which the individual does not have the cognitive ability to put a plan into action. This may be due to disruption to the dorsolateral prefrontal cortex.

Emotional-affective – in which the individual can’t link their behaviour or the behaviour of others with emotions. This may be due to disruption to the orbital-medial prefrontal cortex.

Auto-activation – in which the individual can no longer self-initiate actions. This may be due to disruption to parts of the globus pallidus.

It’s much easier to picture how the different types of apathy affect behaviour with an example. Take Bob. Bob has apathy, yet Bob likes cake. When somebody asks Bob whether he would like cake, he says yes. However, Bob makes no move to go and get it. Bob still likes cake, but he can no longer process how to obtain it. He has cognitive apathy. Alternatively, Bob may want cake but not want to get up and get it; if someone told him to, he probably would. This is auto-activation apathy, the most severe and the most common kind. And if Bob could no longer associate cake with the feeling of happiness or pleasure, he would have emotional-affective apathy.

So, whatever subtype of apathy Bob has, he doesn’t get his cake. A shame, but this seems a little trivial. Should we really care about apathy? Absolutely! Imagine not being able to get out of your chair and do the things you once loved. Imagine not being able to feel emotions the way you used to. Love, joy, interest, humour – all muted. Think of the impact it would have on your family and friends. It severely diminishes quality of life, and greatly increases caregiver burden. It is extremely common in people with neurodegenerative diseases like dementia2, psychiatric disorders like schizophrenia3, and in people who’ve had a stroke4. It can even occur in otherwise healthy individuals.

Elderly people are particularly at risk, though scientists haven’t yet figured out why. Could it be altered brain chemistry? Inevitable degeneration of important brain areas? One potential explanation is that apathy is caused by disruption to the body clock. Every person has a body clock: a tiny area of the brain called the suprachiasmatic nucleus, which controls the daily rhythms of our entire bodies, such as when we wake up and go to sleep, along with a host of other important physiological processes like hormone release5. Disruption to the body clock can cause a whole host of health problems, from diabetes to psychiatric disorders like depression. Elderly people have disrupted daily rhythms compared to young, healthy people, and it is possible that the prevalence of apathy in the elderly is explained by this disrupted body clock. Much more research is needed to find out if this is indeed the case, and why!

Figuring out how or why apathy develops is a vital step in developing a treatment for it, and it’s important that we do. While apathy is often a symptom rather than a disease by itself, there’s now a greater emphasis on treating neurological disorders symptom by symptom rather than as a whole, because the underlying disease mechanisms are so complex. So, developing a treatment for apathy will benefit a whole host of people, from the elderly population, to people suffering from a wide range of neurological disorders.

Edited by Sam & Chiara

References

1. Strassman, H.D. (1956) A Prisoner of War Syndrome: Apathy as a Reaction to Severe Stress. American Journal of Psychiatry, 112(12):998-1003.

2. Chow, T.W. (2009) Apathy Symptom Profile and Behavioral Associations in Frontotemporal Dementia vs. Alzheimer’s Disease. Archives of Neurology, 66(7):888-893.

3. Chase, T.N. (2011) Apathy in neuropsychiatric disease: diagnosis, pathophysiology, and treatment. Neurotoxicity Research, 19(2):266-278.

4. van Dalen, J.W. (2013) Poststroke apathy. Stroke, 44:851-860.

5. Gillette, M.U. (1999) Suprachiasmatic nucleus: the brain’s circadian clock. Recent Progress in Hormone Research, 54:33-58.

Square eyes and a fried brain, or a secret cognitive enhancer – how do video games affect our brain?

By Shireene Kalbassi

If, like me, you spent your childhood surrounded by Gameboys and computer games, you have probably heard warnings from your parents that your eyes will turn square, and that your brain will turn to mush. While we can safely say that we are not suffering from an epidemic of square-eyed youths, it is less clear what gaming is doing to our brain.

In support of worried parents all around the world, there is a disorder associated with gaming. Internet gaming disorder is defined as an addictive behaviour characterised by an uncontrollable urge to play video games. In 2013, internet gaming disorder was added to the Diagnostic and Statistical Manual of Mental Disorders (DSM), with a footnote saying that more research on the matter is needed[i]. Similarly, in 2018, the World Health Organization (WHO) included internet gaming disorder in the section ‘disorders due to addictive behaviours’[ii].

There is evidence to suggest that internet gaming does lead to changes in brain regions associated with addiction. Structurally, individuals diagnosed with internet gaming disorder show an increase in the size of a brain region known as the striatum, a region associated with pleasure, motivation, and drug addiction (Cai et al 2016[iii], Robbins et al 2002[iv]). The brains of those with internet gaming disorder also show altered responses to stimuli related to gaming. In one study, two groups of participants were assessed: one with internet gaming addiction, and one without. All the participants with internet gaming disorder were addicted to the popular multiplayer online role-playing game World of Warcraft. The participants were shown a mixture of visual cues, some associated with World of Warcraft and others neutral, while their brains were scanned for activation using an fMRI machine. When shown visual cues relating to gaming, the participants with internet gaming disorder showed increased activation of brain regions associated with drug addiction, including the striatum and the prefrontal cortex. The activation of these brain regions was positively correlated with self-reported ‘craving’ for the game; the higher the craving, the higher the level of activation (Ko et al 2009[v]). These studies, among others, suggest that gaming may deserve a place on the list of non-substance-related addictive disorders.

But don’t uninstall your games yet; it is important to note that not everyone who plays computer games will become addicted. And what if there is a brighter side to gaming? What if all those hours of grinding away on World of Warcraft, thrashing your friends on Mario Kart, or chilling on Minecraft might actually benefit you in some way? There is a small but growing body of research suggesting that gaming might be good for your brain.

What we have learnt about how the brain responds to the real world is now being applied to how it responds to virtual worlds. In the famous work of Maguire et al (2000[vi]), it was demonstrated that London taxi drivers showed an increased volume of the hippocampus, a region associated with spatial navigation and awareness. This increased volume was attributed to the acquisition of a detailed spatial representation of London. Following on from this, some researchers asked how navigating a virtual world might affect the hippocampus.

In one of these studies, the researchers investigated how playing Super Mario 64, a game in which you spend a large amount of time running and jumping around a virtual world (sometimes on a giant lizard), impacts the hippocampus. Compared with a group that did not train on Super Mario 64, the group that trained on the game for 2 months showed increased volumes of the hippocampus and the prefrontal cortex. As reduced volumes of the hippocampus and the prefrontal cortex are associated with disorders such as post-traumatic stress disorder, schizophrenia and neurodegenerative diseases, the researchers speculate that video game training may have a future in their treatment (Kühn et al 2014[vii]). In another study, the impact of training on Super Mario 64 was assessed in the hippocampus of older adults, who are particularly at risk of hippocampus-related pathology. The group that trained by playing Super Mario 64 for 6 months showed increased hippocampal volume and improved memory performance compared with participants who did not train (West et al 2017[viii]). So it appears that navigating virtual worlds, as well as the real world, may increase hippocampal volume and have positive effects on cognition.


A screenshot of Super Mario 64. This game involves exploration of a virtual world. Image taken from Kühn et al 2014[vii]

Maybe it makes sense that the world being explored doesn’t have to be real to have an effect on the hippocampus, and games like Super Mario 64 have plenty to offer in terms of world exploration and navigation. But what about the most notorious of games, the first-person shooter action games? It has been suggested that first-person shooters can lead to increased aggressive behaviour in those who play them, although researchers do not agree on whether this effect exists (Markey et al 2015[ix], Greitemeyer et al 2014[x]). Nevertheless, can these action games also have more positive effects on the brain’s cognitive abilities? Unlike Super Mario 64, these games require the player to respond quickly to stimuli and rapidly switch between different weapons and devices depending on the given scenario. Some researchers have investigated how playing action games, such as Call of Duty, Red Dead Redemption, or Counterstrike, affects short-term memory. Participants who either did not play action games, casually played action games, or were experienced action gamers were tested on numerous visual attention tests, involving recall and identification of cues flashed briefly on a screen. The researchers observed that those who played action games showed significantly better encoding of visual information into short-term memory, dependent on their gaming experience, compared to those who did not (Wilms et al 2013[xi]).

In another study, the impact of playing action games on working memory was assessed. Working memory is a cognitive system involved in the active processing of information, unlike short-term memory, which involves the recall of information following a short delay (Baddeley et al 2003[xii]). In this study, the researchers tested groups of participants who either did or did not play action games, using a cognitive test known as the “n-back test”. This test involves watching a sequence of squares displayed on a screen in alternating positions. As the test progresses, participants have to remember the positions of the squares from previous trials whilst memorising the squares being shown to them at that moment. The researchers observed that people who played action games outperformed those who did not: they were better able to remember the previous trials whilst simultaneously memorising the current ones (Colzato et al 2013[xiii]). From these studies, it appears that action games may have some benefit for the cognitive abilities of players, leading to increased short-term processing of information in those who play them.
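If the n-back procedure is hard to picture, the sketch below shows one way a simple 2-back version could be generated and scored. This is purely illustrative: the function names, number of positions, trial count and match rate are my own assumptions, not the parameters used by Colzato et al.

```python
# Illustrative 2-back sketch (not the actual task from Colzato et al 2013):
# the participant answers "match" whenever the square's position is the same
# as the position shown two trials earlier.
import random

def generate_sequence(n_trials=20, n_positions=4, n_back=2, match_rate=0.3):
    """Build a list of screen positions with roughly `match_rate` n-back matches."""
    positions = []
    for i in range(n_trials):
        if i >= n_back and random.random() < match_rate:
            positions.append(positions[i - n_back])      # deliberate match
        else:
            options = [p for p in range(n_positions)
                       if i < n_back or p != positions[i - n_back]]
            positions.append(random.choice(options))     # deliberate non-match
    return positions

def score_responses(positions, responses, n_back=2):
    """Proportion of trials where the participant correctly judged match/non-match."""
    correct = 0
    for i in range(n_back, len(positions)):
        is_match = positions[i] == positions[i - n_back]
        if responses[i] == is_match:
            correct += 1
    return correct / (len(positions) - n_back)

if __name__ == "__main__":
    seq = generate_sequence()
    # A perfect participant responds "match" exactly on the true 2-back repeats.
    perfect = [False, False] + [seq[i] == seq[i - 2] for i in range(2, len(seq))]
    print(f"Perfect responder accuracy: {score_responses(seq, perfect):.0%}")
```

The difficulty comes from doing both jobs at once: holding the last two positions in mind while also encoding the current one, which is exactly the kind of active processing working memory describes.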

Screen grabs from first-person shooter games: Call of Duty: WW2 (left) and Halo (right). These fast-paced games involve reacting quickly to stimuli and making rapid decisions to bypass enemies and progress in the game.

So, for the worried parents, and for the individuals who enjoy indulging in video games, maybe it’s not all bad. As long as you are not suffering from a form of gaming addiction (and if you think you might be, please see a health professional), all those hours of gaming may not be as bad for your brain as they seem. Ultimately, though, much more research is needed to understand how a broader range of games, played across childhood development and over years and decades, affects our brains and mental health.

If you think you may be suffering from a gaming addiction, see the NHS page for more information.

Edited by Lauren & Monika

References:

[i] American Psychiatric Association, 2013. Diagnostic and statistical manual of mental disorders (DSM-5®). American Psychiatric Publishing.

[ii] World Health Organization [WHO], 2018. ICD-11 beta draft – Mortality and morbidity statistics. Mental, behavioural or neurodevelopmental disorders.

[iii] Cai, C., Yuan, K., Yin, J., Feng, D., Bi, Y., Li, Y., Yu, D., Jin, C., Qin, W. and Tian, J., 2016. Striatum morphometry is associated with cognitive control deficits and symptom severity in internet gaming disorder. Brain Imaging and Behavior, 10(1), pp.12-20.

[iv] Robbins, T.W. and Everitt, B.J., 2002. Limbic-striatal memory systems and drug addiction. Neurobiology of Learning and Memory, 78(3), pp.625-636.

[v] Ko, C.H., Liu, G.C., Hsiao, S., Yen, J.Y., Yang, M.J., Lin, W.C., Yen, C.F. and Chen, C.S., 2009. Brain activities associated with gaming urge of online gaming addiction. Journal of Psychiatric Research, 43(7), pp.739-747.

[vi] Maguire, E.A., Gadian, D.G., Johnsrude, I.S., Good, C.D., Ashburner, J., Frackowiak, R.S. and Frith, C.D., 2000. Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, 97(8), pp.4398-4403.

[vii] Kühn, S., Gleich, T., Lorenz, R.C., Lindenberger, U. and Gallinat, J., 2014. Playing Super Mario induces structural brain plasticity: gray matter changes resulting from training with a commercial video game. Molecular Psychiatry, 19(2), p.265.

[viii] West, G.L., Zendel, B.R., Konishi, K., Benady-Chorney, J., Bohbot, V.D., Peretz, I. and Belleville, S., 2017. Playing Super Mario 64 increases hippocampal grey matter in older adults. PLoS ONE, 12(12), p.e0187779.

[ix] Markey, P.M., Markey, C.N. and French, J.E., 2015. Violent video games and real-world violence: Rhetoric versus data. Psychology of Popular Media Culture, 4(4), p.277.

[x] Greitemeyer, T. and Mügge, D.O., 2014. Video games do affect social outcomes: A meta-analytic review of the effects of violent and prosocial video game play. Personality and Social Psychology Bulletin, 40(5), pp.578-589.

[xi] Wilms, I.L., Petersen, A. and Vangkilde, S., 2013. Intensive video gaming improves encoding speed to visual short-term memory in young male adults. Acta Psychologica, 142(1), pp.108-118.

[xii] Baddeley, A., 2003. Working memory: looking back and looking forward. Nature Reviews Neuroscience, 4(10), p.829.

[xiii] Colzato, L.S., van den Wildenberg, W.P., Zmigrod, S. and Hommel, B., 2013. Action video gaming and cognitive control: playing first person shooter games is associated with improvement in working memory but not action inhibition. Psychological Research, 77(2), pp.234-239.

The healing power of companionship

By Shireene Kalbassi

When it comes to the recovery of wounds and other medical conditions, most people probably think of hospital beds, antibiotics, and maybe some stitches. What probably doesn’t come to mind is the role that companionship may play in speeding up the healing process.

And yet, studies in humans have shown a link between increased companionship and enhanced recovery prospects (Bae et al 2001, Boden-Albala et al 2005).

So why should it be that social interaction influences the recovery process? Well, in social species, social interaction leads to the release of the hormone known as oxytocin, AKA the “love hormone”. This hormone is released from the pituitary gland, located in the brain. Increased levels of oxytocin have been associated with lower levels of stress response hormones, such as cortisol and corticosterone, and high levels of these stress response hormones have been shown to lead to impaired healing (Padgett et al 1998, DeVries  et al 2002, Heinrichs et al 2003, Ebrecht et al 2004).

This link between social interaction, oxytocin, stress hormones and recovery has been explored in studies such as the work of Detillion et al (2004). Here, the authors investigated how companionship affects wound healing in stressed and non-stressed hamsters. The role of companionship was explored by comparing socially isolated hamsters with ‘socially housed’ hamsters, which shared a home environment with another hamster. Stressed hamsters were physically restrained to induce a stress response, while non-stressed hamsters did not undergo physical restraint.

To understand how these factors relate, the authors compared four different groups: hamsters that were socially isolated and stressed, hamsters that were socially housed and stressed, hamsters that were socially isolated and non-stressed, and hamsters that were socially housed and non-stressed.

The hamsters that were socially isolated and stressed showed decreased wound healing and increased cortisol levels compared with socially housed hamsters or non-stressed socially isolated hamsters. Furthermore, when an oxytocin blocker was given to socially housed hamsters, decreased wound healing was observed, while supplementing stressed hamsters with oxytocin led to increased wound healing and lower levels of cortisol.

So it seems that when social animals interact, oxytocin is released, which reduces the levels of stress hormones, leading to improved wound healing.

But what if there is more to the story than this? These studies, and others like it, demonstrate a relationship between companionship and wound healing, but how might factors relating to social interaction impact recovery?

Venna et al (2014) explored the recovery of mice given a brain occlusion, in which part of the brain’s blood supply is shut off to replicate the damage seen in stroke. In this study, the mice were either socially isolated, paired with another stroke mouse, or paired with a healthy partner. When assessing recovery, the authors looked at multiple parameters, including death rates, recovery of movement, and new neuron growth. As expected, socially isolated stroke mice showed the lowest rates of recovery. Interestingly, stroke mice housed with other stroke mice showed decreased recovery compared with stroke mice housed with a healthy partner.

So why should the health status of the partner influence the healing process? The work of Venna et al did not assess whether the amount of social contact received by stroke mice housed with another stroke mouse was equal to that received by stroke mice housed with a healthy partner, which may explain the discrepancy between the two groups. Exploring this could help establish whether the reduced recovery in stroke mice housed with other stroke mice is due to a lower quantity of social interaction, or to other factors.

Regardless, it appears that social interaction may not be a simple box to tick when it comes to enhancing the recovery process, but is instead dynamic in nature. And while nothing can replace proper medical care and attention, companionship may have a role in speeding up recovery.

If you want to know more about the use of animals in research, please click here.

Edited By Sophie & Monika

References:

Bae, S.C., Hashimoto, H., Karlson, E.W., Liang, M.H. and Daltroy, L.H., 2001. Variable effects of social support by race, economic status, and disease activity in systemic lupus erythematosus. The Journal of Rheumatology, 28(6), pp.1245-1251.

Boden-Albala, B., Litwak, E., Elkind, M.S.V., Rundek, T. and Sacco, R.L., 2005. Social isolation and outcomes post stroke. Neurology, 64(11), pp.1888-1892

Padgett, D.A., Marucha, P.T. and Sheridan, J.F., 1998. Restraint stress slows cutaneous wound healing in mice. Brain, behavior, and immunity, 12(1), pp.64-73.

DeVries, A.C., 2002. Interaction among social environment, the hypothalamic–pituitary–adrenal axis, and behavior. Hormones and Behavior, 41(4), pp.405-413.

Heinrichs, M., Baumgartner, T., Kirschbaum, C. and Ehlert, U., 2003. Social support and oxytocin interact to suppress cortisol and subjective responses to psychosocial stress. Biological psychiatry, 54(12), pp.1389-1398.

Ebrecht, M., Hextall, J., Kirtley, L.G., Taylor, A., Dyson, M. and Weinman, J., 2004. Perceived stress and cortisol levels predict speed of wound healing in healthy male adults. Psychoneuroendocrinology, 29(6), pp.798-809.

Detillion, C.E., Craft, T.K., Glasper, E.R., Prendergast, B.J. and DeVries, A.C., 2004. Social facilitation of wound healing. Psychoneuroendocrinology, 29(8), pp.1004-1011.

Glasper, E.R. and DeVries, A.C., 2005. Social structure influences effects of pair-housing on wound healing. Brain, behavior, and immunity, 19(1), pp.61-68

Venna, V.R., Xu, Y., Doran, S.J., Patrizz, A. and McCullough, L.D., 2014. Social interaction plays a critical role in neurogenesis and recovery after stroke. Translational psychiatry, 4(1), p.e351

How to read a baby’s mind

By Priya Silverstein 

Priya, a guest writer for The Brain Domain, is a second-year PhD student at Lancaster University. She spends half her time playing with babies and the other half banging her head against her computer screen.

Okay, I’ll admit that was a bit of a clickbait-y title. But would you have started reading if I’d called it ‘Functional Near Infrared Spectroscopy and its use in studies on infant cognition’? I thought not. So, now that I’ve got your attention…

Before I tell you how to read a baby’s mind, first I have some explaining to do. There’s this cool method for studying brain activity but, as one of the lesser used technologies, it’s a bit underground. It’s called fNIRS (functional Near Infrared Spectroscopy). Think of fNIRS as fMRI’s cooler, edgier sister. Visually, the two couldn’t look more different – with an MRI scanner being a human-sized tube housing a massive magnet that you might have seen on popular hospital dramas, and NIRS simply looking like a strange hat.

Left: MRI scanner, Right: NIRS cap
Picture credit left: Aston Brain Centre, right: Lancaster Babylab

What these two methods do have in common is that they both measure the BOLD (Blood Oxygen Level Dependent) response from the brain. Neurons can’t store excess oxygen, so when they are active, they need more of it to be delivered. Blood does this by ferrying oxygen to the active neurons faster than to their lazy friends. When this happens, you get a higher ratio of oxygenated to deoxygenated blood in the more active areas of the brain.

Now, to the difference between fMRI and fNIRS. fMRI infers brain activity from the fact that oxygenated and deoxygenated blood have different magnetic properties. When the head is placed inside a strong magnetic field (the MRI scanner), changes in blood oxygenation, due to changes in brain activity, alter the magnetic field in that area of the brain. fNIRS, on the other hand, uses the fact that oxygenated and deoxygenated blood absorb different amounts of light, as deoxygenated blood is darker than oxygenated blood. Conveniently, near-infrared light goes straight through the skin and skull of a human head (don’t worry, this is not at all dangerous and a participant would not feel a thing). So, shining near-infrared light into the head at a source location, and measuring how much light you get back at a nearby detector, tells you how much light has been absorbed by the blood in that area of the brain. From this, you get a measure of the relative change in oxygenated and deoxygenated blood in that area. All of this without the need for a person to lie motionless in a massive cacophonous magnet, with greater portability, and for about a hundredth of the price of an MRI scanner (roughly £25,000 compared to £2,500,000).
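For readers who like to see the numbers, here is a rough sketch of the kind of calculation that turns those light measurements into concentration changes. The article doesn’t go into this, so treat the framing (the so-called modified Beer-Lambert law, which fNIRS analyses commonly use), the wavelengths and every coefficient below as illustrative assumptions rather than calibrated values.

```python
# Minimal sketch: converting detected light intensities at two near-infrared
# wavelengths into relative changes in oxygenated (HbO) and deoxygenated (HbR)
# haemoglobin via the modified Beer-Lambert law. All numbers are placeholders.
import numpy as np

# Extinction coefficients [HbO, HbR], one row per wavelength (illustrative values).
EXTINCTION = np.array([[0.15, 0.35],   # ~760 nm: HbR absorbs more
                       [0.30, 0.18]])  # ~850 nm: HbO absorbs more

def concentration_change(intensity_baseline, intensity_now,
                         source_detector_distance_cm=3.0, dpf=6.0):
    """Return [delta_HbO, delta_HbR] from light intensities at the two wavelengths."""
    # Change in optical density: more absorption -> less light reaches the detector.
    delta_od = np.log10(np.asarray(intensity_baseline) /
                        np.asarray(intensity_now))
    # Effective path length: source-detector distance scaled by a "differential
    # pathlength factor" to account for the banana-shaped path through the head.
    path = source_detector_distance_cm * dpf
    # Solve the two-wavelength system: delta_od = (EXTINCTION * path) @ delta_conc
    return np.linalg.solve(EXTINCTION * path, delta_od)

if __name__ == "__main__":
    # During activation, slightly less light returns at 850 nm (more HbO around)
    # and slightly more at 760 nm (less HbR around) than at baseline.
    d_hbo, d_hbr = concentration_change([1.00, 1.00], [1.01, 0.97])
    print(f"ΔHbO = {d_hbo:+.4f}, ΔHbR = {d_hbr:+.4f} (arbitrary units)")
```

Run on those toy numbers, the sketch recovers exactly the pattern the article describes for an active brain area: oxygenated blood up, deoxygenated blood down.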

The source and detector are placed on the scalp, so that the light received at the detector is reflected light following banana-shaped pathways

Picture credit: Tessari et al., 2015

“That sounds amazing! Sign me up!” I hear you say. However, I must add a little disclaimer. There are reasons why fMRI is still the gold standard for functional brain imaging. As fNIRS relies on measuring light that gets back to the surface of the scalp after passing through the brain, it can’t be used to measure activity from brain areas more than about 3 cm deep. This is being worked on by using clever ways of arranging sources and detectors on the scalp; however, it is not thought that fNIRS will ever be able to produce a whole-brain map of activity. Also, as fNIRS works at the centimetre level rather than the millimetre level, its spatial resolution and localisation accuracy are limited in comparison to fMRI. Despite this, if the brain areas you’re interested in investigating are closer to the surface of the head, and not too teensy tiny, then fNIRS is a great technology to use.

So, what has this all got to do with babies? Well, fNIRS has one vice, one Achilles heel. Hair. Yes, this amazingly intelligent technology has such a primitive enemy. If your participants are blonde or bald, you’ll probably be fine. But anything deviating from this can block light from entering the head, and therefore weaken the light reaching the brain and eventually getting back to the detectors. However, do you know who has little to no hair? Babies. Plus, babies aren’t very good at lying still, particularly in a cacophonous magnet. This is why fNIRS is especially good for measuring brain activity in infants.

fNIRS is used to study a variety of topics related to infant development.  One of the most studied areas of infant psychology is language development. Minagawa-Kawai et al (2007) investigated how infants learn phonemes (the sound chunks that words are made up of). They used fNIRS to measure brain activation in Japanese 3 to 28-month-olds while they listened to different sounds. Infants listened to blocks of sounds that alternated between two phonemes (e.g. da and ba), and then other blocks that alternated between two different versions of the same phoneme (e.g. da and dha). In 3 to 11-month-olds, they found higher activation in a brain area responsible for handling language for both of these contrasts. So, this means that infants were treating ‘da’ and ‘ba’ and ‘dha’ as three different phonemes. However, 13 to 28-month-olds only had this higher activation when listening to the block of alternating ‘ba’ and ‘da’. This means that the older infants were treating ‘da’ and ‘dha’ as the same phoneme. This is consistent with behavioural studies showing that infants undergo ‘perceptual narrowing’, whereby over time they stop being able to discriminate between perceptual differences that are irrelevant for them. This has been related to why it’s much easier to be bilingual from birth if you have input from both languages, than it is to try to learn a second language later in life.

Another popular area of infant psychology is how infants perceive and understand objects. Wilcox et al (2012) used fNIRS to study the age at which infants began to understand shapes and colours of objects. They measured brain activation while infants saw objects move behind a screen and emerge at the other side. This study used a live presentation, made possible by the fact that fNIRS has no prerequisites for a testing environment except to turn the lights down a bit.


The shape change (left), colour change (middle), and no change (right) conditions of Wilcox et al. (2012). Each trial lasted 20 seconds, consisting of two 10 second cycles of the object moving from one side to the other (behind the occluder) and back again.

These objects were either the same when they appeared from behind the screen, or they had changed in shape or colour. They found heightened activation, in the same area identified in adult fMRI studies, for only the shape change in 3 to 9-month-olds, but for both shape and colour changes in 11 to 12-month-olds. This confirms behavioural evidence that infants are surprised when the features of objects have changed, and that babies understand shape as an unchanging feature of an object before they understand colour in this way. This study shows how you can use findings from adult fMRI and infant behavioural studies to inform an infant fNIRS study, helping us learn how the brain’s complex visual and perceptual systems develop from infancy to adulthood.

There’s a lot more to learn if you wish to venture into the world of infant fNIRS research; it’s a fascinating area filled with untapped potential. fNIRS can help us to measure the brain activity of a hard-to-reach population (those pesky babies), enabling us to ask and answer questions about the development of language, vision, social understanding, and more! Questions being investigated in the Lancaster Babylab (where I am doing my PhD) include:

  • Do babies understand what pointing means?
  • Are bilingual babies better at discriminating between sounds?
  • Why do babies look at their parents when they are surprised?

And beyond this, the possibilities are endless!

If you are intrigued by fNIRS and want to learn more, I’d recommend review papers such as the one by Wilcox and Biondi (2015), and workshops such as the 3-day Birkbeck-UCL NIRS training course.

Edited by Jonathan and Rachael

References:

Minagawa-Kawai, Y., Mori, K., Naoi, N., & Kojima, S. (2007). Neural Attunement Processes in Infants during the Acquisition of a Language-Specific Phonemic Contrast. Journal of Neuroscience, 27(2), 315-321.

Otsuka, Y., Nakato, E., Kanazawa, S., Yamaguchi, M., Watanabe, S., & Kakigi, R. (2007). Neural activation to upright and inverted faces in infants measured by near infrared spectroscopy. NeuroImage, 34(1), 399-406.

Tessari, M., Malagoni, A., Vannini, M., & Zamboni, P. (2015). A novel device for non-invasive cerebral perfusion assessment. Veins and Lymphatics, 4(1).

Wilcox, T., Stubbs, J., Hirshkowitz, A., & Boas, D. (2012). Functional activation of the infant cortex during object processing. NeuroImage, 62(3), 1833-1840.

Organ Donation: A No-Brainer, Right?

Organ donation. It’s an unusual topic for neuroscience (unless you’re talking about this), but the brain might just present the biggest issue preventing the advancement of this essential field. Why? Because a recent innovation relies on growing human organs inside pigs, and the initial studies show that we risk human cells entering the pig brains. What if their brains become too human? What if they start to think like us? Could we end up with a new race of Pig-men? It’s an intriguing idea (that might make you feel a little weird inside), but before we think about the ethical implications of an interspecies brain, let’s think about how this all came about.

The ‘Organ Deficit’

Anyone who has ever tuned into a dodgy medical drama on daytime telly will know that sometimes organs need to be replaced, and without that replacement organ the patient will die. The problem is that getting hold of donor organs is difficult! You need the organ to be fresh, you need it to be compatible with the patient, and you need it before the patient’s time runs out. What people often forget is that you also need to hope someone else tragically dies before their time, but also before you do.

If that wasn’t distressing enough, a look at the numbers shows the current system is not only morbid but also failing us. In the USA alone, twenty-two people die every day waiting for an organ1, and those who are lucky enough to receive one often have to suffer for many years beforehand2. We are suffering from an Organ Deficit, and this one won’t be fixed with a little austerity.


Wouldn’t it be better if a doctor could simply say, “Your kidney is failing, but don’t worry, we’ll just grow you a new one and you’ll be right as rain again!”? Nobody would have to die to provide that organ, or suffer for years on a waiting list. The only people who might actually suffer are the scriptwriters of those medical dramas (and mildly at that). Growing new tissue for patients is a core aim of scientists working in regenerative medicine, but there is a list of problems that has to be overcome first. Unfortunately, that list isn’t a short one, and despite decades of research, several big leaps, and even a few Nobel prizes, the end still isn’t in sight. Meanwhile, another day passes and another twenty-two people have missed their window.

What we need is a temporary fix; some way to increase the number of organs available to help the people in need now, and allow scientists to continue researching in the background for the ideal solution. Something with a fancy long name (and an unnecessary number of g’s) – something like Xenogeneic Organogenesis.

Xeno-whatnow?

Ok, I admit, I made that term up. But it makes sense! Let me translate. ‘Xenogeneic’ means involving cells of two different species, and ‘organogenesis’ means growing organs. Simply put, we’re talking about growing new human organs inside host animals. How? Well, in a breakthrough paper, Kobayashi et al.3 demonstrated a way to grow fully formed rat organs inside a mouse.

This was achieved through a technique called blastocyst complementation, in which an embryo is injected with stem cells from a second animal. The resulting animal is called a ‘chimeric animal’, because it is made of cells from the two different animals, and those cells remain genetically independent of each other. This had been successfully achieved in animals of the same species before (e.g. putting mouse stem cells into a mouse embryo), but here they crossed species by creating a mouse-rat chimera and a rat-mouse chimera (see image A, below). The chimeras were morphologically similar to the species of the host embryo (and mother), but crucially, they were composed of cells derived from both species, randomly distributed across the animal. In other words, whilst the mouse-rat chimera looked like a mouse, if you looked closely at any body part you would see it was in fact built of both mouse and rat cells. The researchers believe this happened because the stem cells don’t alter the ‘blueprints’ that the embryo already has. Instead, they grow just like other embryonic stem cells, following the chemical directions given to them and gradually building the animal according to those instructions.

Images A and B (example image taken from reference 4).

Satisfied they could use blastocyst complementation to create inter-species chimeric animals, the researchers went one step further. They genetically modified a mouse embryo to prevent it from growing a pancreas (in image B this is called the ‘Pdx1-/-’ strain: a name that refers to the gene that was removed) and injected unmodified rat stem cells into the embryo. They were hoping that by preventing the mouse embryo’s stem cells from being able to form a pancreas, the chemical directions to build a pancreas would only be followed by the newly introduced rat stem cells. And guess what? It worked! They reported the pancreas inside the Pdx1-/- strain was built entirely of rat cells (See image B, above).

This got a lot of people very excited! Would it be possible to do this with human organs? Could we farm human organs like we farm food? Could we even use induced pluripotent stem (iPS) cell technology to grow autologous, patient-specific organs and improve the transplantation process? The lab has now begun tackling these sorts of questions, starting by testing the viability of pig-human chimeric embryos (pig embryos containing human stem cells, to make pigs built partly of human cells), to see if the two cell types will contribute to the animal in the same way the mouse and rat cells did.

Freaky! Is this why we’re worried about creating a race of Pig-men?

Yep. What I didn’t explain above is that the brains of those chimeric animals, like their hearts, were also composed of both mouse and rat cells. Considering that the scientific community generally accepts that it is our incredible brain that separates us as a species, what would happen if human brain cells made their way into the pig brain? How human could they become? Would they begin to look like us? Act like us? Talk like us? Even think like us? How human would they have to become before we gave them human rights? Where is the legal, moral and ethical line between animal and human when one creature is a mixture of the two?

Most people (not all) would agree that sacrificing a farm animal’s life to save a human’s is an acceptable cause. After all, we already do that for bacon… a cause that even I (an avid carnivore) cannot claim as exactly necessary. But sacrificing a half pig, half human? That sounds like something you’d find in a horror story!

Even though many scientists believe an intelligent pig-human chimera is biologically implausible (let alone a speaking one), no one is willing to say it’s impossible. Concerns over a potentially altered cognitive state have led the US-based NIH (a.k.a. USA government funding central) to announce that it will not support any research that involves introducing human cells into non-human embryos5. The fact is we don’t know enough about how the human brain develops or works. We don’t understand how the biological structures, electrical signals, and chemical balances translate into the gestalt experience of a mind. We don’t have one easy answer that makes “being a human” and “being a pig” distinct enough to know how to interact morally with a pig-human chimera. And that makes me (and probably you) rather uncomfortable.

It’s also giving me all kinds of flashbacks to my school theatre group’s rendition of Animal Farm (I played Boy. Sounds like a rubbish part right? You’re wrong. He’s the narrator!). I can’t help but wonder if Orwell ever imagined his metaphorical work could have literal connotations too…

“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.” – George Orwell, Animal Farm

Fortunately, nobody wants to create pig-men (at least I don’t think there is a Dr. Moreau in the research team?), and the pig-human embryos being generated are being terminated long before they grow into anything substantial. The lab wants to be careful, to understand the potential consequences before even considering letting a pig-human foetus go to full term. Naturally this means there’s a heck of a lot of work to be done, but xenogeneic organogenesis (copyright: Me) is still decades ahead of other organ replacement models. Continued work could mean viable results a lot sooner, saving countless lives. At the very least, it would enable us to study natural organ growth directly, fast-tracking dish-driven stem cell models.

Is this really the best solution available to solve the Organ Deficit?

Good question reader! Let’s bring this discussion back to a simpler solution. Late last year (1st Dec 2015) Wales (home of The Brain Domain) became the first country in the UK to change the law to make organ donation ‘opt-out’ instead of ‘opt-in’.

This isn’t a new idea. Many other countries have implemented an opt-out system before, and generally statistics look good6. Yet there is ongoing debate about whether this change will be sufficient alone7. Cultural variations and infrastructural differences in health care systems have a large impact on the effectiveness of such legislation, but generally speaking we should see some improvement. If that improvement is sufficient, then the policy will likely be rolled out across the rest of the UK (fingers crossed we beat the four years it took to get the plastic bag charge across the river Severn!). But if that is still not enough, then we’ll just have to hope those at Stanford can find a way to make xenogeneic organogenesis a real no-brainer.

References:

1) http://www.organdonor.gov/about/data.html

2) https://www.organdonation.nhs.uk/real-life-stories/people-who-are-waiting-for-a-transplant/

3) Kobayashi et al. http://www.sciencedirect.com/science/article/pii/S0092867410008433

4) Example image taken from: http://www.sciencedirect.com/science/article/pii/S0092867410009529

5) https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-158.html

6) http://webs.wofford.edu/pechwj/Do%20Defaults%20Save%20Lives.pdf

7) http://www.bbc.co.uk/news/uk-wales-34932951