It’s Fright Night!

By Lucy Lewis

Halloween is the time for all things scary, and the best way to scare people is to put on a good horror movie, preferably something with creepy dark figures appearing suddenly after a long, tense build-up. But what is it about these films that gets us so freaked out?

Fear is an integral part of our survival mechanisms: it is part of the ‘fight or flight’ response that prepares us to combat or escape from a potential threat. Deep down, you know the demons from Insidious aren’t real and the clown from It isn’t going to appear behind the sofa, and yet we jump at their appearance on screen, hide behind pillows when the music turns tense and even keep thinking about them once the film is finished.


Scene from ‘A Nightmare on Elm Street’ (1984).

This is because horror movies tap into this innate survival mechanism by presenting us with a situation or character that looks like a threat. When we see something potentially threatening, it activates a structure in the brain known as the amygdala, which flags the threat as something to be fearful of (Adolphs, 2013). The amygdala acts as a messenger, informing several other areas of the brain that a threat is present. This results in multiple physiological changes, including the release of adrenaline (Steimer, 2002), which causes the typical bodily responses we experience during fear, e.g. an increased heart rate.

Even though we know the movie isn’t real, the amygdala overrides our logical reasoning to produce these responses, a phenomenon known as the “amygdala hijack” (Goleman, 1995). The amygdala triggers these physiological changes to prepare us for the threat before the prefrontal cortex, the part of the brain responsible for executive functions, can assess the situation and regulate our reactions accordingly (Steimer, 2002).

In fact, studies of fear conditioning, in which an association is formed between a neutral stimulus like a clicking sound and an aversive stimulus like an electric shock (Gilmartin et al., 2015), support this. Inactivating the medial prefrontal cortex leaves animals unable to break this association, suggesting that the prefrontal cortex is necessary for the extinction of a fear memory (Marek et al., 2013).


A schematic diagram of the fear pathway, adapted from a VideoBlocks image.

You’ve probably noticed that you can remember something that scared you far better than what you had for breakfast. Forming memories of the events that cause us to feel fear is important for our survival, because it helps us avoid threats in the future. Evidence suggests that the stress hormones released after fearful events contribute to the consolidation of memories. For example, Okuda et al. (2004) gave rats an injection of corticosterone (the major stress hormone in rodents) immediately after a single training session of a task requiring the animals to remember objects presented to them in an arena. Twenty-four hours later, the animals that received this injection showed significantly greater recognition of these objects than controls, suggesting that this stress hormone boosted their memory of the objects around them. In short, the scarier the film, the better we remember it.

So, don’t worry if you are a big scaredy-cat when it comes to horror films; it’s actually natural! It’s just your brain’s response to a threatening situation, and you can’t control it no matter how hard you try. But if you are planning a film night full of ghosts and ghouls this Halloween, go and get yourself a big pillow to hide behind and prepare for that good old amygdala to leap into action!

Edited By Lauren & Sophie


Adolphs, R. 2013. The Biology of Fear. Current Biology, 23:R79-R93.

Gilmartin, M. R., Balderston, N. L., Helmstetter, F. J. 2015. Prefrontal cortical regulation of fear learning. Trends in Neuroscience, 37:455-464.

Goleman, D. 1995. Emotional Intelligence: Why It Can Matter More Than IQ. New York: Bantam Books.

Marek, R., Strobel, C., Bredy, T. W., Sah, P. 2013. The amygdala and medial prefrontal cortex: partners in the fear circuit. Journal of Physiology, 591: 2381-2391.

McIntyre, C. K. and Roozendaal, B. 2007. Adrenal Stress Hormones and Enhanced Memory for Emotionally Arousing Experiences. In: Bermudez-Rattoni, F. Neural Plasticity and Memory: From Genes to Brain Imaging. Boca Raton (FL):CRC Press/Taylor & Francis, Chapter 13.

Okuda, S., Roozendaal, B., McGaugh, J. L. 2004. Glucocorticoid effects on object recognition memory require training-associated emotional arousal. Proceedings of the National Academy of Science, USA, 101:853-858.

Steimer, T. 2002. The biology of fear- and anxiety-related behaviors. Dialogues in Clinical Neuroscience, 4:231-249.

VideoBlocks, AeMaster still photo from: 3D Brain Rotation with alpha.

Can’t hear someone at a loud party? Just turn your head!

By Josh Stevenson-Hoare

At one point or another, we have all been in this uncomfortable situation. You are talking to someone at a party when the realisation hits – you can’t understand a word they are saying. You smile and nod along until they get to what sounds like a question. You ask them to repeat it, and you still can’t hear what they say. So you ask again. We all know how it goes from there, and how awkward it gets by the fifth time of asking.

Fortunately, there is a way to avoid this happening. Researchers at Cardiff University have developed a way to maximise your chances of hearing someone at a party on the first try. You don’t need fancy equipment, or to start learning sign language. All you have to do is turn your head. 

If you are trying to listen to something difficult to hear, it makes sense to turn your head 90 degrees from it. This is the same angle as when someone is whispering in your ear, or when you only have one working earbud. If you do this, then one ear will get most of the useful sounds, and the other ear will only get un-useful sounds. You can then ignore any sounds from the un-useful ear, and focus on the sounds you want – the words.

However, for speech, the sounds made by the person speaking are not the only useful piece of information. Lip-reading is something we can all do a little bit, without training. You may not be able to follow a whole conversation, but what you can do is tell the difference between small sounds. These are called phonemes. 

A phoneme is the smallest bit of sound that has meaning. It’s the difference between bog and bag, or tend and mend. If you miss hearing part of a word you often use the shape of the speaker’s mouth to help you figure out what the word was. This can lead to some interesting illusions, such as the McGurk effect. This is when the same sound paired with different mouth shapes causes you to hear different syllables.

If you want to read someone’s lips while they talk, you have to be able to see their face. Unless you have a very wide field of vision, the best way to lip-read is to face someone.

So, to improve hearing you should face away from someone, and for best lip-reading you should face towards them. Not the easiest things to do at the same time.

To find out which is more useful, the researchers played participants video clips of Barack Obama speaking. Over this they played white noise, which sounds like a radio tuned between stations. The participants turned their heads to different angles as instructed by the researchers, and the white noise was made louder and quieter until participants could only just hear what Obama was saying.
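That louder/quieter adjustment is a classic adaptive staircase procedure. Here is a minimal sketch in Python of how such a staircase homes in on a listener’s threshold – the toy “listener” and all numbers are invented for illustration, not taken from the study:

```python
def simulated_listener(noise_level, threshold):
    # Toy stand-in for a participant: they understand the speech
    # whenever the noise is at or below their personal threshold.
    return noise_level <= threshold

def staircase(threshold, start=60.0, step=2.0, trials=40):
    """1-up/1-down staircase: make the noise louder after every
    correct response and quieter after every error, so the level
    oscillates around the point where speech is only just audible."""
    level = start
    direction = 0        # +1 = noise rising, -1 = noise falling
    reversals = []       # levels at which the direction flipped
    for _ in range(trials):
        correct = simulated_listener(level, threshold)
        new_direction = 1 if correct else -1
        if direction != 0 and new_direction != direction:
            reversals.append(level)
        direction = new_direction
        level += step * direction
    # Estimate the threshold as the average of the reversal levels
    return sum(reversals) / len(reversals)

estimate = staircase(threshold=69.0)
```

With this deterministic toy listener the estimate settles within one step size of the true threshold; real psychophysics uses a noisy observer and more trials, but the logic is the same.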

Participants found it easier to understand the speech when they pointed their heads away from its source. The best angle, though, changed depending on where the white noise was coming from.

You might expect that pointing away from the white noise would be the best strategy. But the researchers found that the best strategy was to point around halfway between the speech and the white noise: not so far that participants couldn’t see Obama’s face, but far enough to have one ear pointed at him.

The best angle to face when trying to hear someone in a loud environment. Picture Credit: Josh Stevenson-Hoare.  

In real life, noise comes from lots of different angles at once. It would be very difficult to work out the best angle all the time. But if there is one major source of noise, such as a pneumatic drill or a music speaker, this technique is worth remembering.

It might look a little silly at first, but it could save you from an embarrassing faux pas next time you go to a party. So now you can hear all the details of Uncle Jimmy’s fishing trip with ease. For the third time this year. You’re welcome.

Edited by Lucy & Sophie

Microglia: Guardians of the Brain

By Ruth Jones

Our immune system works hard to protect us from unwanted bugs and to repair damage to our bodies. In the brain, cells called microglia are the main immune players. Microglia belong to a group of immune cells called macrophages, which translates from the Greek as “big eaters”. Much like students faced with free food, macrophages will eat just about anything, or at least anything out of the ordinary. Whether it’s cell debris, dying cells or an unknown entity, macrophages will eat and digest it. Microglia were first discovered in the 1920s by W. Ford Robertson and Pio del Rio-Hortega, who saw that these unidentified cells seemed to act as a rubbish-disposal system. Research into these mysterious cells was side-lined during WWII, but was eventually picked up again in the early 1990s.

When I think about how microglia exist in the brain, I am always reminded of the blockade scene in Marvel’s Guardians of the Galaxy. Each “ship” or cell body stays in its own territory while its branches extend out to touch other cells and the surrounding environment. The microglia ships can then use their branches to touch other cells or secrete chemicals to communicate, together monitoring the whole brain.

Nova Core Blockade Scene © Guardians of the Galaxy. (2014). [DVD] Directed by J. Gunn. USA: Marvel Studios.

Microglia have evolved into brilliant multi-taskers. They squirt out chemicals called chemokines that affect the brain’s environment by quickly promoting inflammation to help remove damaged cells, before dampening inflammation to protect the brain from further damage. They also encourage the growth of new nerve cells in the brain and can remove old “unused” connections between neurones, just like clearing out your Facebook friend list. Microglia keep the brain functioning smoothly by taking care of and cleaning up after the other cells day in, day out.

Today, microglia are a trending topic, believed to play a part in multiple brain diseases. Genetic studies have linked microglia to both neuropsychiatric disorders like autism and neurodegenerative diseases such as Alzheimer’s disease.

Do microglia help or hurt in Alzheimer’s disease? This is a complicated question: scientists have found that microglia can both speed up and slow down the disease. For a while, scientists have thought that in disease these microglia “ships” are often destroyed, or become too aggressive in their efforts to remove the sticky clumps of amyloid-β protein that are a big problem in Alzheimer’s disease.

What is going on in the Alzheimer’s disease brain? Recent advances in technology mean scientists are now starting to find answers. One problem when working with microglia is that they have a whole range of personalities, resulting in a spectrum of protein expression and behaviour. Looking only at the entire microglial population, you may therefore miss smaller, subtle differences between cells. It is likely that microglia don’t start out “aggressive” or harmful, so how can we see what causes their behaviour to change?

Luckily, a new technique called “single-cell sequencing” can overcome this. An individual cell is placed into a dish and its mRNA, the instructions for making protein, is extracted and measured. This lets you compare variation across the entire microglial population, which could make it easier to find a “trouble-maker” microglia in disease. The same approach could also be used to see how individual cell types change across the course of Alzheimer’s disease.
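As a toy illustration of why per-cell measurements matter (all numbers invented): imagine one gene measured across eight microglia, two of which express it heavily. A bulk average blurs the troublemakers into the crowd, while the single-cell view picks them out:

```python
from statistics import mean, median

# Invented expression counts for one gene across eight microglia.
cells = {
    "cell_1": 10, "cell_2": 12, "cell_3": 9,  "cell_4": 11,
    "cell_5": 10, "cell_6": 11, "cell_7": 55, "cell_8": 60,
}

# Bulk view: one number for the whole population
bulk_average = mean(cells.values())   # 22.25 - hides the two outliers

# Single-cell view: flag cells expressing far above the typical level
typical = median(cells.values())      # 11.0
trouble_makers = [name for name, count in cells.items() if count > 2 * typical]
print(trouble_makers)                 # ['cell_7', 'cell_8']
```

Real single-cell studies measure thousands of genes per cell and use far more sophisticated statistics, but the principle is the same: averages hide subpopulations.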

In the future, by looking in detail at how these individual microglia “guardians” behave, scientists can hopefully begin to unravel some of the mysteries surrounding these fascinating and hugely important cells, in both health and disease.

Edited by Sam & Chiara

Brain-controlled mice face robobugs


By Monika Sledziowska

Have you ever wondered how realistic the mind-control technology presented in Kingsman: The Secret Service was? I can’t claim to go to many Hollywood parties, but I may have an idea.

In a lesser-known series of experiments in the 1960s, a Yale University professor, Jose Delgado, demonstrated that direct electrical stimulation of the brain can elicit various emotional states and reactions from a range of animals, including humans (1). The stimulation and recording of electrical activity was even achieved using radio waves. Doesn’t it sound just like a recent film?

However cool (and perhaps disturbing) this line of enquiry was, it was soon abandoned. One reason for this may be that the technology of the time wasn’t quite advanced enough to establish which specific neural pathways were responsible for which state or reaction. However, in an interesting turn of events, a new technique has come along, which has the capacity to do precisely that.

The technique, known as optogenetics, doesn’t use radio waves but instead controls cells in the brain using light. Before you get too worried about the effects of light on your brain, I should explain that only cells that have been genetically modified to carry light-sensitive proteins are influenced by light of a specific frequency (2). These light-sensitive proteins can be limited to a particular part of the brain and a particular cell type using genetic manipulation.

So, what can optogenetics tell us about the neural basis of specific behaviours? A recent paper by Han et al. (2017) focused on defining the neural pathways that are responsible for different aspects of hunting behaviour.

Step one: the authors injected a virus carrying the genes necessary to make light-sensitive proteins into the amygdala of mice. This is the part of the brain known to be involved in reward and fear processing (3). The genetic background of the mice and the genes in the virus interacted in such a way that only inhibitory neurons (neurons that restrict or ‘inhibit’ activity) were affected. The researchers then implanted the mice with small optical fibres that could shine light of a certain frequency onto the amygdala.

Step two: the mice’s amygdala was exposed to the light, activating the inhibitory neurons. As a result, the mice showed increased jaw-muscle activity (one could argue that jaws are quite important for capturing prey) and they were faster to chase and capture crickets (their natural prey).

What’s interesting is that they also bit and ate inedible objects such as wooden sticks and attacked artificial robotic bugs (you can purchase these, like anything else, from Amazon), which rarely happens in the real mouse world. And if you don’t believe me, you can see for yourself in the video below:

Using this technique, the researchers were also able to uncover the projection pathways from the amygdala to a part of the brainstem called the reticular formation (responsible for biting the prey), and to part of the midbrain called the periaqueductal grey (responsible for pursuing prey).

Even though we may still be quite far away from mind-controlling techniques like those used by the villain in Kingsman, current technology offers interesting opportunities for uncovering the workings of mouse and human brains.

If you want to know more about the use of animals in research, please click here.

Edited by Oly & Jonathan 


  1. Delgado JMR. Free Behavior and Brain Stimulation. In: Pfeiffer CC, Smythies JR, editors. International Review of Neurobiology. Academic Press; 1964. p. 349–449.
  2. Deisseroth K, Feng G, Majewska AK, Miesenbock G, Ting A, Schnitzer MJ. Next-Generation Optical Technologies for Illuminating Genetically Targeted Brain Circuits. J Neurosci [Internet]. 2006;26(41):10380–6.
  3. Han W, Tellez LA, Rangel MJ, Motta SC, Zhang X, Perez IO, et al. Integrated Control of Predatory Hunting by the Central Nucleus of the Amygdala. Cell. 2017;168(1–2):311–324.e18.

The indirect applications of leisure technology

As grant applications within science become increasingly competitive, the pressure grows to highlight the direct benefits of one’s research to human health and prosperity. These are the impact statements – is your research going to directly contribute to the “cure”?

Unfortunately, this attitude obscures a very important source of new knowledge and tools – simple curiosity and ‘play’. It is important to remember that we don’t always know what we are doing when we dive into research. In fact, simple exploration of interesting concepts can have very important knock-on benefits!

For example, as we improve technology for leisure, developing more powerful smartphones and more realistic video games, we are also creating tools which can feed back into health and medicine applications. Smartphone apps are a very common example of this feedback. CloudUPDRS, an Android app designed by George Roussos and colleagues at Birkbeck, University of London, takes advantage of the smartphone’s gyroscopic sensors to conduct frequent physical tests for Parkinson’s patients. Tying these physical tests and the associated self-assessment questionnaires to this constant companion device helps researchers track symptoms and disease progression regularly and over an extended period of time.

Developments in smartphones for leisure made this tool possible, but the feedback loop between leisure technology, health research, and medicine extends beyond our phones. Virtual reality, used for everything from gaming to drone flying, can be used to help train surgeons.

It is very important to keep investing in science and technology as a whole, even when the benefits to us aren’t immediately apparent. Encouraging play and building tools for play can help us creatively solve important problems. Restricting funding to “the most relevant” research angles may be an important investment strategy, but it may also risk restricting our creativity. Curiosity beyond ourselves helps us develop new knowledge – while our questions may not directly apply to a “cure”, they may incidentally equip us with tools we didn’t know we needed.

CloudUPDRS is explained in more detail in a New Scientist article.


♥ Achy-breaky heart? Try touchy-feely brain! ♥

Written by Laura Smith

As today is Valentine’s Day, let’s get a bit touchy-feely. Whether you’re looking forward to a date with your significant other, planning to profess your feelings to a special someone, or hoping your soulmate will sweep you off your feet, you’d probably like to share a romantic caress. But what happens in the brain when we anticipate touching the one we desire? Using functional magnetic resonance imaging (fMRI), scientists in Italy set out to answer just that question. Isn’t that convenient!

fMRI uses the same principle as standard MRI: a large, very powerful electromagnet detects differences in the magnetic properties of different bodily tissues, and some fancy maths turns these signals into pictures. In fMRI, people in the scanner perform tasks, and scientists can locate brain areas where activity levels change in response to this.

In their fMRI study, published in the journal Frontiers in Behavioural Neuroscience, Ebisch and colleagues (1) wanted to find out whether how much someone loved their partner was reflected in their brain activity when they anticipated caressing them. Participants in the MRI scanner were instructed to affectionately touch either a ball or their partner’s hand, both placed close to them. They received a “touch” or “do not touch” instruction 3 seconds after being told which item to touch (the hand or the ball), so they would anticipate performing the touch each time, which would involve a change in brain activity. The task was performed many times but, 67% of the time, participants were asked not to perform the touch.
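To make the design concrete, here is a sketch of how such a trial schedule could be generated. The function and variable names and the trial count are my own invention; only the 3-second delay and the roughly 67% no-touch proportion come from the study:

```python
import random

def make_schedule(n_trials=60, p_no_touch=0.67, seed=42):
    """Randomised trial list: each trial names a target, waits a fixed
    3 s anticipation delay, then delivers the touch/no-touch instruction."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_trials):
        schedule.append({
            "target": rng.choice(["partner's hand", "ball"]),
            "delay_s": 3,  # anticipation period before the instruction
            "instruction": "do not touch" if rng.random() < p_no_touch
                           else "touch",
        })
    return schedule

trials = make_schedule()
no_touch = sum(t["instruction"] == "do not touch" for t in trials)
```

Because most trials end in “do not touch”, the scan captures plenty of pure anticipation, uncontaminated by the movement itself – the signal the researchers were after.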

Participants also completed the Passionate Love Scale (PLS) (2): a 15-item questionnaire measuring the intensity of their desire for their partner, from “extremely cool” to “extremely passionate”, so that the researchers could see whether it was related to the changes in brain activity. There was such a relationship in the right posterior insula: an area of cortex believed to act as a processing hub for information about the body’s current physiological state (3). Insula activity decreased during the anticipation of touching, but the more passionate the love, the less this deactivation was for the anticipation of romantic – but not non-romantic – touching. In other words, when participants’ desire for their partners was higher, there was a greater neural response to anticipating touching the partner versus the ball. Additionally, insula activity increased when touches were actually performed, and significantly more so for romantic versus non-romantic touches.

Location of the right posterior insula. Retrieved from Ebisch et al. (2014)

The insula interacts with brain areas involved in bodily sensation (4), in particular the somatosensory cortex. This area’s activity in response to touch was previously shown to be influenced by the anticipation of a reward (5). Taking this into account, the researchers suggest that the posterior insula, via its connection with the somatosensory cortex, may influence how we actually experience touches. As such, because desire for the partner was associated with less insula deactivation during the anticipation of touching them, it may be that wanting to touch someone actually makes the experience of doing so all the more pleasant.

So spare a thought for your clever insula today, and have a happy Valentine’s Day.


  1. Ebisch SJ, Ferri F & Gallese V. (2014). Touching moments: desire modulates the neural anticipation of active romantic caress. Frontiers in Behavioural Neuroscience.
  2. Hatfield E. & Sprecher S. (1986). Measuring passionate love in intimate relations. Journal of Adolescence, 9, 383-410.
  3. Augustine, JR. (1996). Circuitry and functional aspects of the insular lobe in primates including humans. Brain Res. Rev., 22, 229-244.
  4. Zweynert S, Pade JP, Wustenberg T et al. (2011). Motivational salience modulates hippocampal repetition suppression and functional connectivity in humans. Front. Hum. Neurosci., 5, 144.
  5. Pleger B, Blankenburg F, Ruff C, Driver J & Dolan R. (2008). Reward facilitates tactile judgments and modulates hemodynamic responses in human primary somatosensory cortex. Neurophysiol., 39, A9.

Featured image by Alex Van

Understanding Social Behaviour in Research – Why we should remember RAT PARK

One of the major aims of scientific investigation is to improve human lives, particularly our health. One of the best current tools for improving human health is the animal model, though there is a lot of ongoing work into alternative methods. An animal model is a representative living system that can be manipulated to reflect a particular condition or illness. Rodents are popular due to many factors, including their size, breeding rates, and the ease with which they can be altered in genetically meaningful ways. However, it is important to remember that rodents cannot be considered a tool in the same manner as a microscope: they are living organisms with their own set of behaviours. This may seem obvious or even silly to mention, but experiments that have not considered this important aspect of the animal model have hampered our ability to improve human health.

    An excellent example comes from the field of drug-addiction research, in studies from the 1950s and ’60s that allowed animals to self-administer recreational drugs. These studies socially isolated each animal, to prevent the animals from damaging equipment and hindering each other’s surgical recovery. Their results led investigators to conclude that drug addiction was not only instant but impossible to stop. A research group from Simon Fraser University reconsidered such experiments in the context of the human condition. Knowing that rats, like humans, are social animals, they considered the impact that social isolation may have played in producing these results. After all, how would you respond to an endless supply of drugs and nothing else?

    To this end they built a free-range environment they named ‘Rat Park’ and split the rats into two groups: one group lived in the free-range Rat Park environment and the second lived in classical single-cage environments. Their results showed that not only were the Rat Park animals better at coping with addiction, but they were also more resilient to becoming addicted in the first place. This is very important to understanding human addiction, as it is clear that there is more to addiction than simply a supply of drugs. This example illustrates how animal models can be much more informative tools for medicine when their social behaviour is considered an important factor.

For more information on Rat Park have a look at Prof Bruce Alexander’s website.

Or this comic by Stuart McMillan.


Who the hell is MEG, and how can she help us understand the brain?

Let me tell you about a MEG who doesn’t get her fair share of the limelight. MEG uses her SQUIDs to catch your brain activity, after it has left your head. She’s quite a fast mover, and can do this at a millisecond rate! Strangely though, she’s kept locked in a room with really thick walls. Poor MEG.

Still confused? Of course you are.  I guess it is time for me to admit MEG isn’t a woman. Similar to an MRI scanner, MEG is a technique researchers use to learn about the brain.  MEG is short for Magnetoencephalography (magneto refers to magnetic fields, encephalon means the brain, and –graphy indicates the process of recording information). Nothing to do with the guy in the purple cape.

This is a MEG scanner:

Source of this image: Magnetoencephalography, Wikipedia

I’d like to say this brain imaging technique was inspired by a woman getting a perm in the 80s, as that’s what it has always reminded me of.  I’m afraid that’s not the case.

So how does MEG measure these magnetic fields? Any electrical current produces a magnetic field, even the electrical currents in your brain. If a big group of neurons (brain cells) face the same direction and send electrical impulses to each other, they induce a weak magnetic field with a certain direction and strength. These magnetic fields leave the brain and can still be measured outside the skull.

Source of this image: Magnetoencephalography, Wikipedia

Since the magnetic fields that leave the head are so weak (around a billion times weaker than the magnetic field of a typical fridge magnet!), a MEG scanner measures them using really sensitive instruments called SQUIDs (Superconducting Quantum Interference Devices). SQUIDs are quite high maintenance though; they only work at temperatures below about -269°C! Bathing them in liquid helium keeps them this cold. As SQUIDs are so sensitive, they also pick up stronger magnetic fields from the environment, which can mask the ones we want to measure from the brain. Because of this, a MEG scanner has to be kept in a magnetically shielded room, with a door like this:

Source of this image: Magnetoencephalography, Wikipedia
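To get a feel for the numbers, here is a quick back-of-the-envelope comparison (rough order-of-magnitude values for illustration, not precise measurements):

```python
# Approximate magnetic field strengths, in tesla (illustrative values)
fridge_magnet = 5e-3    # ~5 mT at the surface of a fridge magnet
earth_field   = 5e-5    # ~50 uT, the Earth's background field
brain_signal  = 1e-12   # ~1 pT, a strong MEG signal outside the skull

print(f"Fridge magnet vs brain: {fridge_magnet / brain_signal:.0e}x")  # 5e+09x
print(f"Earth's field vs brain: {earth_field / brain_signal:.0e}x")    # 5e+07x
```

That factor of billions is why the scanner needs both SQUIDs and a shielded room: even the Earth’s steady background field is tens of millions of times stronger than the signal MEG is trying to catch.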

Why is it useful to measure these magnetic fields anyway? MEG allows us to measure brain activity in a non-invasive way; there is no discomfort for the person being scanned, and no side effects. MEG helps us to learn about how and where the brain responds to certain tasks, improving knowledge of the link between brain function and human behaviour. Brain function measured with MEG has been shown to be different in many neurological and psychiatric diseases.  MEG has a key role to play in helping localise regions of the brain that are faulty, and that might need to be surgically removed, for example in epilepsy.

I hope you’ve enjoyed being introduced to a new MEG. Watch this space for more articles on what she gets up to.

Happy Halloween!

I know you were wondering this today, and yes, there is a subset of genes named “Halloween genes”. In line with a long-standing tradition of naming biological units after silly things, the Halloween gene family includes spook, spookier, phantom (phm), disembodied (dib), shadow (sad), and shade. These genes were first identified in Drosophila melanogaster (fruit flies, a wonderful genetic model) by Wieschaus and Nüsslein-Volhard.

But what do these nightmares do? The Halloween genes encode enzymes necessary for the biosynthesis of 20-hydroxyecdysone (20E), a steroid hormone which serves more animals and biomass on the planet than any other steroid hormone. Mutation of any single gene of the Halloween family is embryonic lethal: these genes are critical for development.

As it turns out, the 20E synthesis and signalling pathways are also critical for Drosophila adult social and conditioned behaviour. DopEcR, a receptor for 20E, is involved in activity- and experience-dependent plasticity in the adult fly’s central nervous system. Drosophila rely on the mushroom body – a brain region central to Drosophila learning and memory – for DopEcR-dependent processing of courtship memory.

So–be careful about flirting at those halloween parties… If your moves are too shade-y, you may end up only a fly on the wall.

Gilbert, L. I. (2008). Drosophila is an inclusive model for human diseases, growth and development. Molecular and cellular endocrinology, 293(1), 25-31.


What is neuroimmunology and why should I care?

By Dr Niels Haan

Everybody knows about the immune system, right? You get an infection and your body mobilises its army of immune cells to fight off the invaders. You get a fever, feel awful for a while, then things get back to normal. In the brain, however, it doesn’t work that way. In immunology, the brain is called “privileged”. This doesn’t mean its parents had money and it went to a better school than you; it means it is excluded from the normal immune response. In this post, I introduce you to the wondrous world of neuroimmunology, the study of the immune system in the brain. Hopefully this will whet your appetite for a longer article exploring it in more depth, which will come out later.

The brain is separated from the rest of the body and the blood circulation by the blood-brain barrier, allowing the brain to be selective with what it lets in, and protecting it from infection. However, this also means the usual immune cells normally can’t get in. Luckily for us, the brain has its own immune cells, microglia. These cells are continuously patrolling your brain for anything that shouldn’t be there, and will attack and clear out any infections. Although this is an important role, unless you’re unlucky enough to contract something like meningitis, most neuroimmunological processes actually do not involve infection.

So why should you care about this? You should care because neuroimmunology is involved in pretty much every brain process and disease investigated so far. The most obvious examples are autoimmune diseases of the central nervous system, such as multiple sclerosis, where the immune system incorrectly recognises your own cells and proteins as something foreign to your body and attacks them. However, there are many more diseases with neuroimmunological involvement.

In neurodegenerative diseases such as Alzheimer’s or Parkinson’s, microglia are activated and clear out the many dying cells. However, the signalling factors microglia secrete can cause more neuronal dysfunction and cell death, creating a vicious circle. Activation of immune cells has also been found in many psychiatric diseases such as chronic depression and schizophrenia, but we don’t yet have a clear idea of their roles in these conditions. However, the immune response in some of these diseases is now starting to be studied as a target for potential new treatments.

It is easy to view the immune system as something that only kicks into action during infections or disease. It is, in fact, working all the time and is part of the brain’s normal development and maintenance. Neurons continuously make new connections and lose old ones. Microglia, the immune cells of the brain, are responsible for clearing out old and faulty connections, ensuring your neurons connect the way they should.  The immune system is also closely involved with the formation of new neurons. These roles can again be affected by disease. For instance, my own research has shown that neuroinflammation in chronic epilepsy may be responsible for lower numbers of new neurons and the memory problems patients suffer from.

Neuroimmunology is still a rapidly expanding field, and we are continuously finding new roles for immune cells and factors in the brain. In fact, only last month, research suggested that immune factors are involved in the regulation of social behaviour in a range of animals. (Note: you could try this as an excuse for being antisocial next time you have a cold. Don’t blame me if it doesn’t work though.) Having now hopefully convinced you that you should care about neuroimmunology, stay tuned for a longer post exploring its roles in detail!