Happy Halloween!

Kira Rienecker | 31 OCT 2016

I know you were wondering this today, and yes, there is a subset of genes named “Halloween genes”. In line with a long-standing tradition of naming biological units after silly things, the Halloween gene family includes spook, spookier, phantom (phm), disembodied (dib), shadow (sad), and shade. These genes were first identified in Drosophila melanogaster (fruit flies, a wonderful genetic model) by Wieschaus and Nüsslein-Volhard.

But what do these nightmares do? These genes encode enzymes necessary for the biosynthesis of 20-hydroxyecdysone (20E), a steroid hormone that serves more animals and biomass on the planet than any other steroid hormone. Mutation of any single Halloween gene is embryonic lethal; these genes are critical for development.

As it turns out, the 20E synthesis and signalling pathways are also critical for adult social and conditioned behaviour in Drosophila. DopEcR, a receptor for 20E, is involved in activity- and experience-dependent plasticity in the adult fly’s central nervous system. Flies rely on the mushroom body–a brain region central to learning and memory–for DopEcR-dependent processing of courtship memory.

So–be careful about flirting at those Halloween parties… If your moves are too shade-y, you may end up only a fly on the wall.


Header image source: https://upload.wikimedia.org/wikipedia/commons/0/01/Comus_1873_Fly.jpg

A Step-by-Step Guide to convincing Mom it’s Dad’s fault–with Science!

(You do want to seem like the reasonable one here, right?)

Kira Rienecker | 30 AUG 2016

Let’s say it’s a hot summer day (as an American, the chances of such a day seem slim in Britain, but let’s roll with the hypothetical here), and you have in your hand an exquisite ice cream sandwich–cool and sweet, with the mist of a sub-zero freezer still rising off it.

Now let’s say a pesky younger sibling didn’t think to get his own, and you now stand (ahem–fairly) accused of not sharing, under threat of dire punishment.

How do you convince Mom your behaviour is Dad’s fault?

(Preferably before this lovely ice cream sandwich melts away!)

STEP 1: Argue behaviour has some genetic roots

Behaviour is a difficult trait to pin to a genetic origin. It is a complex, emergent property of the brain, influenced by many other confounding factors, like culture, experience, and social context. However, we do have experimental models that demonstrate behaviour does have some genetic roots. For example, some knock-out models, in which we delete a gene from a model organism such as a rodent, demonstrate altered fundamental behaviours.

The behaviours we can measure in model organisms are simple compared to the behaviours Mom is paying that child psychologist to sort out. We need to focus on measures we can easily quantify. Research in behavioural genetics includes measures of dominance vs subordinance, ease of movement and levels of activity, time exploring novel environments, anxiety, sexual behaviour, satiety, impulsivity, and compulsivity, among others.

We also know that some genes, or clusters of genes, cause neurodevelopmental disorders with characteristic behavioural changes when missing in humans. For instance, Prader-Willi Syndrome (PWS), in which the paternally inherited copy of the gene-rich chromosome region 15q11-q13 is disrupted, is associated with a distinct behavioural profile. This includes mild cognitive deficits, insensitivity to pain, tantrums, obsessive tendencies, a compulsive desire to eat, and in some cases psychosis [Davies 2007, Perez 2010].

But missing (or indeed extra) bits of chromosomes aren’t the only reason we might see behavioural variation in humans. Natural genetic variation may explain some (but not all) of the statistically normal range of human behaviours. Someone may have a higher or lower IQ, be more or less impulsive, seek more or less novelty, or be more or less anxious and still be within a range considered ‘typical’, and not pathological [Nuffield 2002, Plomin 2016]. Some might, conceivably, be more or less likely to share their ice cream with their younger sibling…

STEP 2: Point out that Mom’s & Dad’s genomes contribute differently to the brain.

Some behaviours, notably mothering behaviour and altruism (how likely you are to share your ice cream sandwich), may be more Dad’s fault than Mom’s.

But wait! Both Mom and Dad give you a copy of each gene… shouldn’t they contribute equally to how you turn out?

As it turns out, a subset of genes, called imprinted genes, will selectively express (use) only the copy from one parent! Remember Prader-Willi Syndrome (PWS)? The particular set of symptoms for this disease only appears if the disrupted chromosome region was inherited from Dad. If this same disrupted region is instead inherited from Mom, it manifests as Angelman Syndrome (AS), which has a very different character. AS is characterized by intellectual disability, ataxia, epilepsy, a ‘happy’ disposition and repetitive or stereotyped behaviours [Davies 2007, Perez 2010, Bird 2014].

These diseases each result from disruptions in the same DNA region, but the outcome differs depending on whether the disruption is in the copy inherited from Mom or from Dad. This is because some genes in the region normally express only one parent’s version to make what the gene encodes. If the copy a gene normally doesn’t use is missing, no big deal! You weren’t using it anyway. BUT, if the copy the gene exclusively uses is missing, BIG DEAL. The other copy won’t get the cue to come up to bat, and you’ll use neither version.
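If it helps to see that logic laid out, here is a minimal toy sketch in Python (the function name and the True/False scenarios are purely illustrative, not a real genetics model): an imprinted gene makes its product only if the copy its imprint points to is still present.

```python
# Toy model of imprinted gene expression -- purely illustrative.
# An imprinted gene uses only one parent's copy, so what matters
# is WHICH copy goes missing, not how many copies you have.

def makes_product(has_mom_copy: bool, has_dad_copy: bool,
                  expressed_from: str) -> bool:
    """True if the gene still makes its product.

    expressed_from: 'mom' or 'dad' -- the parent whose copy the
    imprint allows to be used; the other copy stays silent.
    """
    if expressed_from == "mom":
        return has_mom_copy
    return has_dad_copy

# A paternally expressed gene, like those disrupted in PWS:
print(makes_product(True, True, "dad"))   # both copies present -> True
print(makes_product(False, True, "dad"))  # silent (Mom's) copy lost -> still True
print(makes_product(True, False, "dad"))  # active (Dad's) copy lost -> False
```

The asymmetry in the last two lines is the whole story of PWS versus AS: the same missing region, very different consequences depending on which parent it came from.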

Because some genes in the PWS/AS region of the DNA are only expressed from Mom’s copy and others are only expressed from Dad’s copy, problems with the region inherited from one parent will cause a different set of symptoms than problems with the region inherited from the other [McNamara 2013, Cassidy 2000].

Theoretically, then, this could distinguish the impact Mom and Dad each have on your failure to share that ice cream sandwich.

Evolutionarily speaking, imprinting would seem to be a disadvantage. Why limit yourself to using only one copy (haploid) when you could use two (diploid) and keep a backup? That imprinted genes exist across many species implies strong natural selection for this mechanism, strong enough to overcome the disadvantage of functional haploidy (when, for functional purposes, you appear to have only one copy) [Wilkins 2016]. If expression of a particular gene benefits the survival and reproductive success of those using Dad’s copy more than those using Mom’s, natural selection will favour silencing Mom’s and using Dad’s.

But remember, they’re the same gene! Why would Dad’s help you out more than Mom’s (or vice versa)? Often, this is because the same gene can help out the propagation of Dad’s genetic line more than Mom’s (or vice versa).

First of all, let’s look at the arms race going on in the placenta. Mom’s genome and Dad’s genome both want this kid to survive, but Mom has to use the same machinery to produce as many of her kids (with her genetics) as she can, whereas Dad can piggyback the machinery of multiple women to produce kids with his genetics (as much as Mom may disapprove). Therefore, Mom’s and Dad’s genomes have very different strategies during pregnancy. It’s in the best interests of Dad’s genes to suck as many resources out of Mom as possible, to ensure the survival and success of his kid during pregnancy. Mom’s genes, on the other hand, need to carefully parse out what resources she has, so she doesn’t spend it all on just one kid. If she only has one ice cream sandwich, she wants both kids to be happy, so she’s forcing you to share.

What results from the placental arms race is a form of imprinting driven by intra-locus conflict, where one copy of a gene is active and the other silent. For example, the gene Igf2 increases the ability of nutrients to diffuse passively across the placenta. The more nutrients that pass from Mom, through the placenta, to the kid, the more the kid can grow [Sibley 2004]. This is great for Dad’s genes, but potentially damaging for Mom’s! Dad’s genes will be propagated best, according to the restrictions of natural selection, if his copy stays ‘on’, producing more protein product and getting as much out of Mom for this kid as possible. Mom’s genes, however, will be propagated best (by limiting the resources she doles out) if she doesn’t allow two copies of Igf2 to run in the placenta at the same time. Dad’s is already on, so Mom shuts her copy down.

So Mom & Dad contribute copies of the same genes, but these contributions aren’t functionally equivalent–they are complementary. They also contribute differently to different tissues. Relevant to your ice-cream behavioural argument, Dad’s genes seem to contribute more in the brain! In the adult mouse brain, only 37% of the sum total of imprinted genes (whose use is biased towards only one parent’s copy) use Mom’s copy exclusively [Wilkins 2016]. The distribution of parental contribution within regions of the brain exaggerates this difference even more: Mom’s genome appears to contribute more towards the cerebral cortex (important for planning, executive decisions, and higher brain function), whereas Dad’s contributes more towards the hypothalamus and other deep midbrain structures (important for more ‘primitive’ behaviours such as reward-response, motivation, and homeostasis) [Keverne 1996].

Let’s say Mom invites you to proceed with your argument.

STEP 3: Illustrate which behaviours are Dad’s fault.

Let’s also consider sex-biased dispersal patterns in populations [Wilkins 2016, Ubeda 2010]. Mom’s and Dad’s genomes might also have different success rates if a population is more likely to share one parent than the other. A pride of lions, for example, is made up of many females and one male. The cubs in the pride all share genes through their Dad’s side, so most of the genetic differences between them come from Mom’s side. In this case, once the cubs are born (after Dad’s genes demand as much from Mom’s resources as possible), Dad’s genes will be propagated best if the cubs behave cooperatively. Cooperation tends to increase the group’s overall survival rate, and because the group shares Dad’s genes, Dad’s genes do well if the group does well. Mom’s genes, on the other hand, are competing with those of all the other Moms in the group. This competition means Mom’s genes have the best chance of being passed on to the next generation if they give the individual cub an advantage over the other cubs in the pride.

Sex-biased dispersal in a pride of lions means you are more likely to share Dad than Mom.

This difference between group and individual success creates a battle between Mom’s and Dad’s genomes. One way they can battle is through aspects of behaviour. In this pride of lions, Dad’s genes will promote altruism–the sharing of resources among the group to promote the survival of paternal siblings–and Mom’s genes will promote more selfish behaviour–benefiting the individual [Wilkins 2016].

This is where you can see your argument starting to fall apart in Mom’s eyes…

STEP 4: While grounded, sans-ice cream sandwich, consider where you went wrong.

First, consider that humans do not display the sex-biased dispersal and breeding pattern of a pride of lions, and this train of thought may have been influenced by your recent viewing of “The Lion King”.

Not only might this comparison to a patriarchal system offend your mother’s feminist sensibilities (female lions do most of the work in the pride anyway), but such a suggestion implies you are arguing that your selfish behaviour is really her fault (though Úbeda and Gardner 2010 do predict this is the case for hominids).

Alternatively, you could have tried the reverse argument: that Dad’s genes cause your selfish behaviour and Mom’s your altruistic behaviour. In the case of multiple paternity, it is in the interest of Mom’s genes to keep the siblings working together while Dad’s genes help them compete [Wilkins 2016, Haig 1992]. Unfortunately, this could also have landed you in trouble, because you would then have suggested that you and your sibling don’t share the same Dad, which may or may not disturb your family dynamic.

Secondly, while these genes appear to contribute to some of the basic fundamentals of behaviour, human behaviour is ultimately complex, and we are unlikely to be able to use biology to predict its intricacies at the social level.

Piecing out the genetic contribution to behaviour is tough: one gene may contribute to many behaviours, and multiple genes may contribute to the same behaviour (a trait called polygenicity). It is highly unlikely we will find “a gene for X”, where “X” is criminality, mothering, hyperactivity, or the like. Even where different variants of a gene (alleles) can be shown to impact behaviour, factors such as environmental context, including early-life stress, training, social environment, and culture, can mediate this impact. Your genes may predispose you to a certain range in the spectrum of normal behaviours, but your outcome is alterable, and this predisposition will not dictate your fate [Nuffield 2002].

Imprinted genes introduce even more complexity. Sometimes the imprinting mark doesn’t simply make the whole gene maternally or paternally expressed. Genes can produce several different messages, called transcripts, from the same sequence of code. Imagine a recipe for ice cream with optional ingredients (chocolate syrup, strawberries, peppermint dust): including or excluding different combinations of those ingredients generates slightly different products from the same recipe. Transcripts can have their own imprinting sub-status, which adjusts the relative abundance of the different output versions. Depending on how the marks themselves change, the ratios of these different messages can shift dynamically throughout development and between tissue types [Wilkins 2016].
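To stretch the recipe analogy once more, a quick Python sketch (the ingredient names just echo the analogy; they are not biological data) shows how a handful of optional ingredients, like optional exons, multiplies the number of distinct products one recipe can yield:

```python
# One recipe (gene), many products (transcripts): each optional
# ingredient can be included or skipped independently.
from itertools import combinations

base = ["cream", "sugar"]  # the parts every version shares
optional = ["chocolate syrup", "strawberries", "peppermint dust"]

transcripts = [
    base + list(extras)
    for n in range(len(optional) + 1)
    for extras in combinations(optional, n)
]

# 3 optional ingredients -> 2**3 = 8 distinct products
print(len(transcripts))  # 8
```

Imprinting sub-status then tips the relative abundance of these versions, rather than flipping the whole gene on or off.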

While the original recipe calls for four ingredients, just as a gene lays out all of its exons, you can get different products at the end by leaving out some items. Original Image.

Finally, the human genome and the human brain are quite robust systems.

The law treats one’s actions as autonomous and willed. Any criminal defence (including charges of failure to share ice cream) relying upon behavioural genetics must demonstrate the force of a genetic deficiency or variant to be greater than one’s autonomy [Nuffield 2002]. Your genome (the collection of all genes in your DNA) and your brain have many redundant systems in place to compensate for things that may go wrong–they are robust to minor variations and disruptions. Unless you have a clear neurodevelopmental or mental health disorder with distinct behavioural differences demonstrated across other patients, you are likely within the normal range of human behaviour, and therefore can’t use this argument as an excuse. Under the fundamental assumptions of the criminal justice system, you have adequate knowledge of right and wrong as well as control of your own actions, and thus responsibility for your decisions. Your melted ice cream is the result of your long-winded argument; Mom’s not going to clean up this sticky mess, as it is entirely your own doing.

Regardless of your inability to use this science to argue your way out of trouble, the field of Behavioural Genetics is invaluable. It helps us further our understanding of the brain and contributes to the wealth of knowledge we draw on to address issues such as neurodevelopmental disorders. With this research, we can get closer to genetic, pharmaceutical, and environmental interventions for diseases affecting behaviour outside the statistically normal range–an area of medicine with a history of murky understanding, social stigma, and emotional turmoil. This field is sensitive and important because it helps us connect our biology to our understanding of our identity as humans. We should take care to use this field to nurture a healthy identity and social sphere, rather than distort it to subvert responsibility.

STEP 5: Realize arguing to Mom that she is responsible for your selfish behaviours is clearly not a way to win the argument and keep your ice cream. Try this argument on Dad next time.


  • Nuffield Council on Bioethics. Genetics and human behaviour: the ethical context. London: Nuffield Council on Bioethics; 2002.
  • Bird LM. Angelman syndrome: review of clinical and molecular aspects. Application of Clinical Genetics. 2014 Jan 1;7.
  • Cassidy SB, Dykens E, Williams CA. Prader‐Willi and Angelman syndromes: Sister imprinted disorders. American journal of medical genetics. 2000 Jun 1;97(2):136-46.
  • Haig D. Genomic imprinting and the theory of parent-offspring conflict. Semin. Dev. Biol. 1992 Jan;3:153-60.
  • Keverne EB, Martel FL, Nevison CM. Primate brain evolution: genetic and functional considerations. Proceedings of the Royal Society of London B: Biological Sciences. 1996 Jun 22;263(1371):689-96.
  • Sibley CP, Coan PM, Ferguson-Smith AC, Dean W, Hughes J, Smith P, Reik W, Burton GJ, Fowden AL, Constancia M. Placental-specific insulin-like growth factor 2 (Igf2) regulates the diffusional exchange characteristics of the mouse placenta. Proceedings of the National Academy of Sciences of the United States of America. 2004 May 25;101(21):8204-8.
  • Úbeda F, Gardner A. A model for genomic imprinting in the social brain: juveniles. Evolution. 2010 Sep 1;64(9):2587-600.
  • Wilkins JF, Ubeda F, Van Cleve J. The evolving landscape of imprinted genes in humans and mice: Conflict among alleles, genes, tissues, and kin. Bioessays. 2016 May 1;38(5):482-9.

What is neuroimmunology and why should I care?

Dr Niels Haan | 15 AUG 2016

Everybody knows about the immune system, right? You get an infection and your body mobilises its army of immune cells and fights off the invaders. You get a fever, feel awful for a while, then things get back to normal. However, in the brain, it doesn’t work that way. In immunology, the brain is called privileged. This doesn’t mean its parents had money and it went to a better school than you; it means it’s excluded from the normal immune response. In this post, I introduce you to the wondrous world of neuroimmunology, the study of the immune system in the brain. Hopefully this will whet your appetite for a longer article exploring this more in depth, which will be coming out later.

The brain is separated from the rest of the body and the blood circulation by the blood-brain barrier, allowing the brain to be selective with what it lets in, and protecting it from infection. However, this also means the usual immune cells normally can’t get in. Luckily for us, the brain has its own immune cells, microglia. These cells are continuously patrolling your brain for anything that shouldn’t be there, and will attack and clear out any infections. Although this is an important role, unless you’re unlucky enough to contract something like meningitis, most neuroimmunological processes actually do not involve infection.


So why should you care about this? You should care because neuroimmunology is involved in pretty much every brain process and disease investigated so far. The most obvious ones are autoimmune diseases of the central nervous system, such as multiple sclerosis, where the immune system incorrectly recognises your own cells and proteins as something foreign to your body and attacks them. However, there are many more diseases with neuroimmunological involvement.

In neurodegenerative diseases such as Alzheimer’s or Parkinson’s, microglia are activated and clear out the many dying cells. However, the signalling factors microglia secrete can cause more neuronal dysfunction and cell death, creating a vicious circle. Activation of immune cells has also been found in many psychiatric diseases such as chronic depression and schizophrenia, but we don’t yet have a clear idea of their roles in these conditions. However, the immune response in some of these diseases is now starting to be studied as a target for potential new treatments.

It is easy to view the immune system as something that only kicks into action during infections or disease. It is, in fact, working all the time and is part of the brain’s normal development and maintenance. Neurons continuously make new connections and lose old ones. Microglia are responsible for clearing out old and faulty connections, ensuring your neurons connect the way they should. The immune system is also closely involved in the formation of new neurons. These roles can again be affected by disease. For instance, my own research has shown that neuroinflammation in chronic epilepsy may be responsible for lower numbers of new neurons and the memory problems patients suffer from.

Neuroimmunology is still a rapidly expanding field, and we are continuously finding new roles for immune cells and factors in the brain. In fact, only last month, research suggested that immune factors are involved in the regulation of social behaviour in a range of animals. (Note: you could try this as an excuse for being antisocial next time you have a cold. Don’t blame me if it doesn’t work though.) Having now hopefully convinced you that you should care about neuroimmunology, stay tuned for a longer post exploring its roles in detail!

Your cat is trying to control your mind! (Inadvertently).

Rae Pass | 19 JUL 2016

Cats or dogs? An eternal debate, with passionate advocates on both sides. Those in the pro-dog camp have an unusual argument they could deploy: cats can infect you with mind-controlling parasites.

Toxoplasma gondii is a single-celled obligate parasite, meaning it is unable to complete its life cycle without a suitable host: in this case, members of the family Felidae (cats). Other warm-blooded animals (including humans) can also be infected, but in them the parasite is unable to mature and reproduce. Eggs are excreted in cat faeces, but Toxoplasma can also be transmitted through undercooked meat, soil, and from mother to baby if contracted during pregnancy (hence the advice that pregnant women avoid cat litter). Worldwide infection rates vary, but in some countries up to 95% of the population may be infected (CDC, 2015). In the UK it is estimated up to a third of people will be infected at some point in their life. Someone you know may well be infected, and wouldn’t that just explain their odd behaviour?

Toxoplasma gondii has been dubbed the ‘zombie parasite’. Infection can persist throughout life in the central nervous system, modifying the structure and function of neurons from within, for example by silencing neurons and hindering apoptosis (Flegr, 2013). Over time this can lead to behavioural changes in the host, and gender differences have been shown on self-reported personality factor questionnaires (Lindova et al., 2006). Whilst infected people generally do not display obvious symptoms, they are 2.5 times more likely to have a car accident (Flegr et al., 2002), potentially explained by long-term infection prolonging reaction times to stimuli (Flegr, 2013).


(Yes this is me. Why do you think I hide in my lab all day?!)

Various studies have been conducted to unpick how this infection alters the host’s behaviour. In rodents, decreases have been seen in neophobia (fear of novelty), learning capability and motor performance (Flegr, 2013). One study found that rats infected with Toxoplasma gondii lost their innate fear of cats, even over the longer term, suggesting the parasite may cause permanent structural changes in the brain (Berdoy, Webster & Macdonald, 2000). Although infection occurs throughout the brain, high concentrations have been found in the amygdala, a region that plays a key role in emotional processing, memory and decision making.

Additionally, infected rodents appear to display a “fatal attraction phenomenon”: essentially, the conversion of the fear response to feline urine into something akin to attraction. Ordinarily rodents may fixate on the odour, the associated danger focusing their attention, but in the absence of fear that fixation may cause them to react in an abnormal way. This fear reduction may result from the parasite’s need to find its natural host, a cat, to complete its life cycle. Prey that lose fear of their predators are more likely to be eaten, transmitting the parasite from rat to cat. This demonstrates the “manipulation hypothesis”, which postulates that some parasites alter their host’s behaviour in a manner beneficial to themselves. The phenomenon has been studied in infected humans, again finding a gender difference: infected men rated cat urine as more ‘attractive’, whilst infected women rated it lower (Flegr et al., 2011).

There is also a proposed link between Toxoplasma gondii infection and schizophrenia. Those diagnosed with schizophrenia are more likely than the general population to have been infected with Toxoplasma gondii (Torrey et al., 2007), although it is important to stress that schizophrenia is an incredibly complicated disease with many factors increasing its risk. A proposed mechanism is the slow growth of microscopic cysts in neurons, as persistent infection can increase the cell’s production of dopamine, potentially altering neuronal function (Prandovsky et al., 2011; Stanley Medical Research Institute, 2014). Alongside several other neurotransmitters, dopamine functioning appears to be abnormal in schizophrenia.

If you want to read more about ‘scientific zombies’, check out How to Make a Zombie, which covers Toxoplasma gondii, re-animation and all sorts of other interesting and disturbing scientific examples of ‘zombies’. If you’d rather read a more fictional version of a mind-controlling parasite, the Parasitology series by Mira Grant is also fantastic.

Undoubtedly cat lovers have not changed their minds after reading this, but then you wouldn’t would you? Your cat would not approve.


  • Berdoy, M., Webster, J., & Macdonald, D. (2000). Fatal attraction in rats infected with Toxoplasma gondii. Proc Biol Sci, 267.
  • CDC (2015). Toxoplasma gondii Epidemiology & Risk Factors
  • Flegr, J. (2013).  Influence of latent Toxoplasma infection on human personality, physiology and morphology: pros and cons of the Toxoplasma–human model in studying the manipulation hypothesis. Journal of Experimental Biology, 216. 
  • Flegr, J., et al (2002). Increased risk of traffic accidents in subjects with latent toxoplasmosis: a retrospective case-control study. BMC Infect Dis, 2.
  • Flegr, J., et al (2011). Fatal attraction phenomenon in humans: cat odour attractiveness increased for toxoplasma-infected men while decreased for infected women. PLoS Negl Trop Dis, 5. 
  • Lindova, J., et al (2006). Gender differences in behavioural changes induced by latent toxoplasmosis. International Journal for Parasitology, 36.  
  • Prandovsky, E., et al (2011). The Neurotropic Parasite Toxoplasma Gondii Increases Dopamine Metabolism. PLoS ONE, 6. 
  • Stanley Medical Research Institute (2014). http://www.stanleyresearch.org/patient-and-provider-resources/toxoplasmosis-schizophrenia-research/neurotransmitters-and-t-gondii/
  • Torrey, E, et al (2007). Antibodies to Toxoplasma gondii in patients with schizophrenia: a meta-analysis. Schizophr. Bull. 33.

Is spider venom the future of pain relief?

Aurelien Bunga | 11 JUL 2016

An estimated 8 million people in the UK suffer from moderate to severe chronic pain (Fayaz et al., 2016). Although it may arise from an initial injury or an ongoing illness, chronic pain can also occur when feedback mechanisms in the brain malfunction, leaving sufferers in constant pain, at times with no apparent cause. Treatments frequently leave patients feeling tired, can lead to addiction, or are simply ineffective.

Scientists at the University of Queensland are investigating whether spider venom could be used to treat chronic pain. Venom paralyses prey and interferes with their ability to control their bodies, making it likely to contain toxins that disrupt neural activity.

The team first screened a variety of spider toxins to identify any with potential painkilling effects in humans. The ProTx-II peptide, isolated from the venom of the Peruvian green velvet tarantula, was found to reduce activity in the Nav1.7 channel, a pathway particularly important in chronic pain. Previous studies have revealed that individuals lacking a working Nav1.7 channel due to a genetic mutation are unable to experience pain (a condition known as ‘congenital analgesia’) (Cox et al., 2006). Blocking this channel could therefore be a potential avenue for treating people with normal pain pathways.

Using biophysical techniques to study the 3D structure of the ProTx-II peptide, the researchers showed that the toxin specifically interacts with and inhibits the Nav1.7 channel, which is mostly found in pain-sensing nerves of the dorsal root ganglion (a small bundle of nerve cells that carries sensory information from the rest of the body to the spinal cord). They went on to show that the nerve cell membrane plays a vital role in the toxin’s ability to inhibit the channel: it attracts the peptide, increasing its concentration close to the Nav1.7 channel and locking it in the right orientation for maximum interaction with the channel.

Although we are years away from the development of a successful treatment, and unlikely to be making any Peter Parkers anytime soon, this study may have uncovered a new generation of pain therapeutics.      

* This study was presented at the 60th Annual Meeting of the Biophysical Society in Los Angeles in February 2016.


  • Cox et al., 2006- An SCN9A channelopathy causes congenital inability to experience pain. Nature 444, 894-898
  • Fayaz et al., 2016 – Prevalence of chronic pain in the UK: a systematic review and meta-analysis of population studies. BMJ Open 2016; 6:e010364

Organ Donation: A No-Brainer, Right?

Oly Bartley | 6 JUN 2016

Organ donation. It’s an unusual topic for neuroscience (unless you’re talking about this), but the brain might just present the biggest issue preventing the advancement of this essential field. Why? Because a recent innovation relies on growing human organs inside pigs, and the initial studies show that we risk human cells entering the pig brains. What if their brains become too human? What if they start to think like us? Could we end up with a new race of Pig-men? It’s an intriguing idea (that might make you feel a little weird inside), but before we think about the ethical implications of an interspecies brain, let’s think about how this all came about.

The ‘Organ Deficit’

Anyone who has ever tuned into a dodgy medical drama on daytime telly will know that sometimes organs need to be replaced, and that without a replacement organ the patient will die. The problem is that getting hold of donor organs is difficult! You need the organ to be fresh, you need it to be compatible with the patient, and you need it before the patient’s time runs out. What people often forget is that you also need to hope someone else tragically died before their time, but also before you do.

If that wasn’t distressing enough, the numbers show the current system is not only morbid but also failing us. In the USA alone, twenty-two people die every day waiting for an organ (http://www.organdonor.gov/about/data.html), and those lucky enough to receive one often suffer for years beforehand (https://www.organdonation.nhs.uk/real-life-stories/people-who-are-waiting-for-a-transplant/). We are suffering from an Organ Deficit, and this one won’t be fixed with a little austerity.


Wouldn't it be better if a doctor could simply say "Your kidney is failing, but don't worry, we'll just grow you a new one and you'll be right as rain again!"? Nobody would have to die to provide that organ, or suffer for years on a waiting list. The only people who might actually suffer are the scriptwriters of those medical dramas (and mildly at that). Growing new tissue for patients is a core aim of scientists working in regenerative medicine. But there is a list of problems that have to be overcome first. Unfortunately, that 'list of problems' isn't a short one, and despite decades of research, several big leaps, and even a few Nobel prizes, the end still isn't in sight. Meanwhile, another day passes and another twenty-two people have missed their window.

What we need is a temporary fix; some way to increase the number of organs available to help the people in need now, and allow scientists to continue researching in the background for the ideal solution. Something with a fancy long name (and an unnecessary number of g’s) – something like Xenogeneic Organogenesis.


Ok, I admit, I made that term up. But it makes sense! Let me translate. ‘Xenogeneic’ means working with cells of two different species, and ‘Organogenesis’ means growing organs. Simply put, we’re talking about growing new human organs inside of host animals. How? Well, in a breakthrough paper Kobayashi et al. (2010) demonstrated a way to grow fully formed rat organs inside a mouse.

This was achieved through a technique called Blastocyst Complementation; when an embryo is injected with stem cells from a second animal. The resulting animal is called a ‘chimeric animal’, because it is made of cells from the two different animals, and those cells are genetically independent of each other. This has been successfully achieved in animals of the same species before (e.g. putting mouse stem cells into a mouse embryo), but here they crossed species by creating a mouse-rat chimera and a rat-mouse chimera (See image A, below, taken from Solter, 2010). The chimeras were morphologically similar to the animal species of the host embryo (and mother), but crucially, the chimeras were composed of cells derived from both species, randomly placed across the animals. In other words, whilst the mouse-rat chimera looked like a mouse, if you looked closely at any body part then you would see it was in fact built of both mouse and rat cells. The researchers believe this happened because the stem cells don’t alter the ‘blueprints’ that the embryo already has. Instead, they grow just like other embryonic stem cells, following chemical directions given to them and gradually building the animal according to those instructions.


Satisfied they could use blastocyst complementation to create inter-species chimeric animals, the researchers went one step further. They genetically modified a mouse embryo to prevent it from growing a pancreas (in image B this is called the ‘Pdx1-/-’ strain: a name that refers to the gene that was removed) and injected unmodified rat stem cells into the embryo. They were hoping that by preventing the mouse embryo’s stem cells from being able to form a pancreas, the chemical directions to build a pancreas would only be followed by the newly introduced rat stem cells. And guess what? It worked! They reported the pancreas inside the Pdx1-/- strain was built entirely of rat cells (See image B, above, taken from Solter, 2010).

This got a lot of people very excited! Would it be possible to do this with human organs? Could we farm human organs like we do food? Could we even use iPS technology to grow autologous patient-specific organs to improve the transplantation process? The lab has now begun tackling these sorts of questions, starting by testing the viability of pig-human chimeric embryos (pig embryos with human stem cells, to make pigs built with human cells), to see if the two cell types will contribute to the animal in the same way the mouse and rat cells did.

Freaky! Is this why we’re worried about creating a race of Pig-men?

Yep. What I didn't explain above is that the brains of those chimeric animals, like the hearts, were also composed of both mouse and rat cells. Considering that the scientific community generally accepts that it is our incredible brain that separates us as a species, what would happen if human brain cells made their way into the pig brain? How human could they become? Would they begin to look like us? Act like us? Talk like us? Even think like us? How human would they have to become before we gave them human rights? Where is the legal, moral and ethical line between animal and human when one creature is a mixture of the two?

Most people (not all) would agree that sacrificing a farm animal’s life to save a human’s is an acceptable cause. After all, we already do that for bacon… a cause that even I (an avid carnivore) cannot claim as exactly necessary. But sacrificing a half pig, half human? That sounds like something you’d find in a horror story!

Even though many scientists believe an intelligent pig-human chimera is biologically implausible (let alone a speaking one), no-one is willing to say it's impossible. Concerns over a potentially altered cognitive state have led the US-based NIH (a.k.a. USA government funding central) to announce that it will not be supporting any research that involves introducing human cells into non-human embryos (https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-158.html). The fact is we don't know enough about how the human brain develops or works. We don't understand how the biological structures, electrical signals, and chemical balances translate to the gestalt experience of mind. We don't have one easy answer that makes "being a human" and "being a pig" distinct enough to know how to interact morally with a pig-human chimera. And that makes me (and probably you) rather uncomfortable.

It’s also giving me all kinds of flashbacks to my school theatre group’s rendition of Animal Farm (I played Boy. Sounds like a rubbish part right? You’re wrong. He’s the narrator!). I can’t help but wonder if Orwell ever imagined his metaphorical work could have literal connotations too…


“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.”

Fortunately, nobody wants to create pig-men (at least I don’t think there is a Dr. Moreau in the research team?), and the pig-human embryos being generated are being terminated long before they grow into anything substantial. The lab wants to be careful, to understand the potential consequences before even considering letting a pig-human foetus go to full term. Naturally this means there’s a heck of a lot of work to be done, but xenogeneic organogenesis (copyright: Me) is still decades ahead of other organ replacement models. Continued work could mean viable results a lot sooner, saving countless lives. At the very least, it would enable us to study natural organ growth directly, fast-tracking dish-driven stem cell models.

Is this really the best solution available to solve the Organ Deficit?

Good question, reader! Let's bring this discussion back to a simpler solution. Late last year (1st Dec 2015) Wales (home of The Brain Domain) became the first country in the UK to change the law to make organ donation 'opt-out' instead of 'opt-in'.

This isn't a new idea. Many other countries have implemented an opt-out system before, and generally the statistics look good (http://webs.wofford.edu/pechwj/Do%20Defaults%20Save%20Lives.pdf). Yet there is ongoing debate about whether this change will be sufficient alone (http://www.bbc.co.uk/news/uk-wales-34932951). Cultural variations and infrastructural differences in health care systems have a large impact on the effectiveness of such legislation, but generally speaking we should see some improvement. If that improvement is sufficient, then the policy will likely be rolled out across the rest of the UK (fingers crossed we beat the four years it took to get the plastic bag charge across the river Severn!). But if that is still not enough, then we'll just have to hope those at Stanford can find a way to make xenogeneic organogenesis a real no-brainer.


  • Kobayashi, T., Yamaguchi, T., Hamanaka, S., Kato-Itoh, M., Yamazaki, Y., Ibata, M., Sato, H., Lee, Y.S., Usui, J.I., Knisely, A.S. and Hirabayashi, M., 2010. Generation of rat pancreas in mouse by interspecific blastocyst injection of pluripotent stem cells. Cell, 142(5), pp.787-799.
  • Solter, D., 2010. Viable rat-mouse chimeras: where do we go from here? Cell, 142(5), pp.676-678.

Graphene and the Cyborg Circuit Board

Kira Rienecker | 17 MAY 2016

How much of a human do you need to replace to create a cyborg? How far can you go before a cyborg becomes a biological computer? Does Inspector Gadget from the 1983 TV series still retain the autonomy, dignity, and essential rights he had before becoming a cyborg?

Is the cyborg Inspector Gadget still human? Or a charismatic computer with biological systems? Image source

Some of the common characteristics that define the science fiction cyborg include mechanical limbs, enhanced senses, and computer interfaces linked directly to the consciousness. At present, we are getting better at making robotic limbs to replace lost arms and legs, but what if we could distribute an electrical system throughout the brain and make that SciFi step? This paper suggests a graphene-based neural interface might get us closer to electronic neural prostheses. Graphene has very interesting properties, which we will not go into here, but which make it a promising material for building neuro-interfaces.

Neuro-interfaces are systems that allow us to send signals (usually electrical) to neurons or brain tissue. Think of it as a very basic computer output into a cyborg brain, allowing an external computer to direct what the brain does. If we send signals telling neurons to fire or to repress firing, we can hijack the basic "what fires together, wires together" mechanism of synaptic pruning. Consequently, an extended neuro-interface could tell brain regions when to be active or silent, and even coordinate multiple regions.

But how would we distribute such an electrical system throughout the brain? While the particular graphene-based interface described in the paper had the advantage of being a good substance for the neurons to grow on without change to their physiological properties, it would be difficult to distribute the interface to every neuron in a fully developed brain. The interface investigated is a structure on which the neurons adhere as they grow, and is less compatible with integration into an already existing structure.

However, this doesn't preclude its use in developed brains for cyborg prosthesis and enhancement! The paper points out the potential of such an interface for brain-damage repair and sensory restoration therapies. Targeting a small brain region with such an interface may be much more feasible. Imagine injecting a graphene interface onto a damaged or even epileptic area, and using that interface to direct neural activity. One might even grow brain-repair tissue on a graphene structure to coordinate its synaptic connections. This could allow clinicians to grow patient-specific brain tissue in a dish with a useful, pre-loaded synaptic structure. (Of course, we're not sure how we'd know what synaptic connections or structure we might need yet. This is where we venture further into SciFi.)

For the near future, implanting this programmed tissue could immediately restore some brain functions (more sensory perceptions and motor skills than personality). The graphene interface might even be programmed to assist synaptic plasticity and repair mechanisms to encourage innervation and long-term recovery. The bumbling Inspector could probably have benefited from some brain repair and cognitive enhancements…

But that’s just a small section of the brain. And it’s mostly cortical surface repair–near the outer edges of the brain.

If we wanted a brain for which every neuron was connected to a graphene interface, for a really complex cyborg, we’d probably have to grow it from scratch.

But Ethics!

The programmable nature of an electronic system might mean we could program how that brain developed, and what synaptic connections it kept, through the hijacking of the “what fires together, wires together” rule I mentioned earlier. This graphene interface seems to be equipped to stimulate firing or potentially depress firing, rather than directly stimulate the outgrowth of new neuronal processes for further dendritic and axonal connections. Thus, complementing a binary “fire/depress” control with normal developmental pathways, we might eventually grow and program a brain with total specification (again, total SciFi here) …

But is the brain we’ve grown a biological computer at this point? Or does some element of human autonomy, dignity, and independence still exist? The answer would likely depend on how good we were at programming a developing brain, and what we chose to specify or leave to environmental or genetic influence without direct choice on the clinician’s part.

What happens if someone hacks into this grown and programmed brain? Are they hacking a biological computer or a human being?

Regardless of how far into the realm of science fiction we Go-Go-Gadget, the potential of graphene for building neuro-interfaces is exciting, and it further blurs the line between brain and computer, human and electronic prosthesis. We should use this to its full advantage for brain repair and therapy, but we should also consider how much we wish to exert direct control over how our brains, or new brains we bring into being, work. At what point are we human, cyborg, or biological machine, and should we assign value to such distinctions at all?


  • Fabbro, Alessandra, et al. “Graphene-Based Interfaces do not Alter Target Nerve Cells.” ACS nano (2015).

Killing cancer with your brain!

Oly Bartley | 9 MAY 2016

It is predicted that in the UK over one thousand people will be diagnosed with cancer every day this year (Statistics Fact Sheet, 2015). Those unlucky enough to develop the most common form of brain cancer (glioblastoma) will typically only survive 12 to 15 months (World Cancer Report, 2014). But what if you could kill the cancer with your brain? Unfortunately, I don't mean cancer-fighting psychic powers (I know, the picture of Jean Grey is misleading… it was a cruel hook!). Instead, I'm referring to a new use for neural stem cells (NSCs): doing the job of tracking down and killing cancerous cells for us. Believe it or not, there are scientists working on such an intervention, and a new paper (Bagó et al., 2016) published last month in Nature Communications describes an exciting advancement that could help bring this strange therapy to a cancer clinic near you.


An example of a Glioblastoma

NSCs are unusual for various reasons but, of particular interest here, they're able to migrate through the brain towards tumorous cancer cells. By engineering these cells to also secrete anti-cancer molecules, they become natural cancer hunters capable of both finding and killing tumorous cells. This has been demonstrated as an effective therapy in various pre-clinical models, but the difficulty has been finding a good cell source to move this concept into clinics. Ideally we need something autologous (to stop our immune systems killing the NSCs) and readily available. Unfortunately, the naturally occurring NSCs we all have exist deep within our brains (making them hard to obtain), and haven't been genetically modified to secrete those important anti-cancer molecules (they're not natural cancer killers).

Previously, scientists have wondered if we could make our own NSCs for this treatment by using induced Pluripotent Stem cells (iPSC). To learn more about iPSCs watch this short video:

iPSCs seem like they would be ideal because they are easy to get hold of (we can grow them from a skin sample), genetic modifications can be made during the initial generation process, and they can be autologous to the patient. However, labs that have transplanted NSCs grown from iPSCs into the brain frequently report that iPSCs also have a really REALLY frustrating tendency to become cancerous tumour cells… and you're not going to fix brain cancer by sticking more in there!

So what's different about this paper? Simple: they used transdifferentiation. What the heck is transdifferentiation? I'm glad you asked. It's a method similar to iPSC generation, except that instead of reprogramming cells to a pluripotent state, you reprogram them directly into the cell type you want. In this case they took fibroblasts (skin cells) and transdifferentiated them into NSCs, naming these new cells induced neural stem cells (iNSCs). Crucially, these iNSCs don't seem to have the same tendency to become cancerous, yet they retain all those benefits of iPSCs outlined above! The researchers found that by injecting these iNSCs into mice with glioblastomas, survival increased by 160% to 220%!

The iNSCs aren’t ready for clinics yet. For instance, one problem the paper outlines is that their iNSCs need a structure to help target the right parts of the brain, because otherwise they wander off before they’ve killed all the cancer. But, because these cells work so well, the scientists are confident the work only needs refinements*, so maybe one day in the not too distant future we can expect to be killing brain cancers with our brains!**


  • Bagó, J.R., Alfonso-Pecchio, A., Okolie, O., Dumitru, R., Rinkenbaugh, A., Baldwin, A.S., Miller, C.R., Magness, S.T. and Hingtgen, S.D. (2016) Therapeutically engineered induced neural stem cells are tumour-homing and inhibit progression of glioblastoma. Nature Communications, 7.
  • Statistics Fact Sheet, 2015, Macmillan Cancer Support
  • World Cancer Report, 2014, World Health Organisation

*Of course, ‘refinements’ usually translates to another decade or two of work.

**Disclaimer: This statement is technically true, though your ‘cancer killing brain’ might actually be skin cells that have been turned into genetically modified brain cells. 😉 TECHNICALITY!

There is more than one scientist

Rachael Stickland | 14 MAR 2016

The word 'scientist'…
Why do we use this umbrella term?

What do the words scientist, physicist, consilience, catastrophism, uniformitarian, ion, anode and cathode all have in common? Well, the late William Whewell, a wordsmith and polymath, created them, often suggesting them to scientists when they had made a discovery. As well as the many scientific disciplines on which he published, he also found time to compose poetry. What a babe. In the 19th century, people we now call 'scientists' were 'Natural Philosophers' or 'Men of Science'. Whewell first proposed the word scientist anonymously in 1834, and then more seriously in 1840 in 'The Philosophy of the Inductive Sciences':

Image source

“As we cannot use physician for a cultivator of physics, I have called him a physicist. We need very much a name to describe a cultivator of science in general. I should incline to call him a Scientist. Thus we might say, that as an Artist is a Musician, Painter, or Poet, a Scientist is a Mathematician, Physicist, or Naturalist.”

The scientific community initially objected to this term, and it wasn't until the late 19th/early 20th century that it became established in the United States and Great Britain. (There's a nice little blog post about it here if you are interested in finding out more about the history of the word.) Moving from 'Man of Science' to 'Scientist' better acknowledges that women actually are capable of scientific pursuit, yet there's still room for improvement here. Then again, we still have many labels and titles that hark back to older times. Take 'PhD', which stands for Doctor of Philosophy. The word philosophy has its roots in the Greek philo- meaning 'love' and sophia meaning 'wisdom'.

It is difficult to define what science is. In simple terms, science is a process whereby you collect enough data in a valid and repeatable way, using the scientific method. The biggest commonality between all scientists is simply that they are studying something in great depth. There are certainly many commonalities between scientists, but there are even more differences. However, in news headlines we more frequently read 'Scientists say' than 'Physicists say' or 'Geneticists say', which can contribute to a vague picture of what a specific scientist actually does, and a lack of appreciation for the diversity of people this term represents.

Delving into the diversity

Let’s examine the study of Parkinson’s disease, a neurodegenerative condition that predominantly affects motor function (tremor, rigidity, difficulty with initiating movement), as well as cognitive and emotional functioning.
  • A geneticist might spend their day in a lab, analysing large genetic samples, trying to understand why some people get Parkinson's disease and others don't.
  • A neurologist might spend their morning in a clinic seeing patients, and the afternoon in a lab carrying out a clinical trial to investigate the effectiveness of a new drug.
  • A psychologist might be trying to develop a non-drug based therapy, such as exercise or diet modification, to help alleviate the symptoms of Parkinson's.
  • A radiologist might carry out an MRI scan to investigate changes in activity in specific brain regions, and relate this to behavioural symptoms.

Whilst each one is a neuroscientist, in that they study the brain in one form or another, they have vastly different daily routines, skill sets, and areas of expertise. Yet they are all working towards the same goal: to understand and tackle Parkinson’s disease. This example shows how even a specific scientific title, neuroscientist, can mean many different things.

Diversity of roles needs diversity of people. Science benefits from a diverse group of people, with a diverse set of skills. We shouldn't limit the list of people who think they can participate in science, or limit how they can participate.

Perceptions of a 'Scientist'

The public image of the 'scientist' has been a concern for many years, and systematic research into this topic goes back as far as Mead & Métraux's seminal work (1957). In this study, 35,000 US high school students wrote an essay describing their view of a scientist. Analysis of these essays revealed a consistent image: an elderly or middle-aged man, in a white coat, with glasses, working in a laboratory, performing dangerous experiments.

Image source 
Another prominent study, Chambers (1983), asked 4,807 children aged 5-11 years to draw scientists. By 7 or 8 years old, this stereotype was starting to emerge: the older the child, the more similar the drawing was to the description above. Only 28 female scientists were drawn, and only by girls. This instrument, known as the 'Draw-A-Scientist Test' (DAST), has been widely used in research since. Admittedly, these two studies were carried out quite some years ago, and you could argue that societal views have changed for the better. A full discussion of that is beyond the scope of this blog post, and I struggled to find studies in the past ten years that had sample sizes as big as these two. Yet some more recent research and commentaries on this stereotype suggest it unfortunately still exists, to some extent. You can find the references for the above two studies, and a few more recent ones, at the end of the article.

Does this restrictive perception impact on science itself?

So far we've been discussing societal attitudes towards the term scientist. The restrictive view of what it is to be a scientist has practical consequences, contributing to the demographics of the scientific workplace today. If people interested in science don't think they fit this restrictive view, and don't see working scientists they can identify with, they are less likely to feel they are needed in, or capable of, a career in science.

In 2014, The Royal Society “set out to analyse and understand the composition of the scientific workforce in terms of gender, disability, ethnicity and socio-economic status and background.” They used big samples from three different sources.  Though their results paint a complex picture, clear trends do emerge, highlighting there is a lack of diversity in science. You can read a short summary of the report, or the full report itself, here.

Education and public engagement can go a long way towards improving this lack of diversity. In order for people to understand the choices available to them, we must actively teach that there is no particular gender, race, background, or rigid set of skills attached to being a scientist. As well as this, enough scientists should be vocal not only about what they do, but also about who they are and where they've come from. This is by no means the magic solution to the lack of diversity in science, and much more funding and opportunity is needed to make a scientific career even an option for some individuals. Nonetheless, more communication and transparency will help enable people with different skill sets, and from a wider spread of backgrounds, to apply themselves to a worthwhile and satisfying pursuit, and help society get the maximum benefit from their doing so.

Edited by Jonathan Fagg 


The two main studies I discussed about the ‘Draw A Scientist Test’:

  • Chambers, D.W. (1983). "Stereotypic Images of the Scientist: The Draw a Scientist Test". Science Education 67 (2): 255–265.
  • Mead, M. & Metraux, R. (1957). "The Image of the Scientist Among High School Students: a Pilot Study". Science 126 (3270): 384–390.

More recent commentary on the view of scientists:

  • The Wikipedia page provides a well-sourced background on the DAST, and the study of it over the years!
  • Frazzetto, G. (2004). The changing identity of the scientist. EMBO reports, 5(1), 18-20.
  • Losh, S. C., Wilke, R., & Pop, M. (2008). Some methodological issues with “Draw a Scientist Tests” among young children. International Journal of Science Education, 30(6), 773-792.
  • Schibeci, R. (2006) Student images of scientists : What are they? Do they matter? Teaching Science, 52 (2). pp. 12-16.
  • Steinke et al (2007). Assessing media influences on middle school–aged children’s perceptions of women in science using the Draw-A-Scientist Test (DAST). Science Communication, 29(1), 35-64.