Do you really like it? Social media on the brain.

By Lauren Revie

Social media is something that has become commonplace in most of our lives – we wake up, we scroll the feed, we post throughout the day, like, comment, tweet and share. Most of us are familiar with the concept, despite social media sites such as Instagram and Facebook only coming into popular mainstream use in the last 15 years.

The concept of social media is simple – create an account which allows you to share and connect with friends, family and colleagues across the globe. Many modern-day relationships would cease to exist if it weren’t for the advent of social media, with around 74% of adults connecting on a daily (if not hourly) basis (Meshi, Tamir & Heekeren, 2015). Social media allows us to feel connected, less lonely and can even lead us to feel happier (Mauri et al., 2011).

According to a recent study, the number of friends or followers we acquire on social media is also associated with the size of brain structures involved in emotional regulation, such as the amygdala, whose volume scales with both online and offline social network size (Kanai, Bahrami, Roylance & Rees, 2012). This could mean that interaction on social media is linked to our social perception – if we are more social online, we may also be more socially aware offline.


However, with the power to connect and reach thousands, if not millions, of people also comes a darker side to social media. Numerous instances of online bullying, fake news, and negative impacts on mental health have been reported in recent years. Despite this, we continue to use these platforms – so what is it about these sites that make them so hard to resist?

Social media provides our brains with positive reinforcement in the form of social approval, which can trigger the same kind of neural reaction as behaviours such as smoking or gambling. This pathway – the dopamine reward pathway – is associated with behaviours that make us feel good, such as food or exercise, and leaves us seeking out that positive reinforcement or reward again. So, just as eating chocolate may release dopamine and lead you to seek more of it, so does social media. Neuroscientists have reported that social media-related ‘addictions’ share similar neural activity with substance and gambling addictions (Turel et al., 2014). However, individuals who used social media sites heavily also showed differences in their brain’s inhibitory control system, which could result in poorer focus and attention (Turel et al., 2014).

Cognitive neuroscientists have also shown that the rewarding behaviours we engage in online, such as sharing images or receiving likes, stimulate activity in an area of the brain called the ventral striatum, which is responsible for processing reward. Activity in this area in response to positive social media feedback may reflect the processing of gains in our own reputation (Meshi, Morawetz & Heekeren, 2013). This could mean that we use social media less as a means to communicate and share with one another, and more as a way to gain social reputation in an attempt to boost our egos.

With around 5% of adolescents considered to have significant levels of addiction-like symptoms (Bányai et al., 2017), it is clear that social media use can be detrimental to our well-being as well as socially beneficial. Moving forward, users can only be mindful of how powerful connecting with contacts can be, because there is a dark, addictive side to the likes and shares we interact with every day.

References

Bányai, F., Zsila, Á., Király, O., Maraz, A., Elekes, Z., Griffiths, M. D., … & Demetrovics, Z. (2017). Problematic social media use: Results from a large-scale nationally representative adolescent sample. PLoS One, 12(1).

Kanai, R., Bahrami, B., Roylance, R., & Rees, G. (2012). Online social network size is reflected in human brain structure. Proceedings of the Royal Society B: Biological Sciences, 279(1732), 1327-1334.

Mauri, M., Cipresso, P., Balgera, A., Villamira, M., & Riva, G. (2011). Why is Facebook so successful? Psychophysiological measures describe a core flow state while using Facebook. Cyberpsychology, Behavior, and Social Networking, 14(12), 723-731.

Meshi, D., Morawetz, C., & Heekeren, H. R. (2013). Nucleus accumbens response to gains in reputation for the self relative to gains for others predicts social media use. Frontiers in Human Neuroscience, 7, 439.

Meshi, D., Tamir, D. I., & Heekeren, H. R. (2015). The emerging neuroscience of social media. Trends in Cognitive Sciences, 19(12), 771-782.

Turel, O., He, Q., Xue, G., Xiao, L., & Bechara, A. (2014). Examination of neural systems sub-serving Facebook “addiction”. Psychological Reports, 115(3), 675-695.

Frauds, fear of failure and finances: The mental health problem in academia

By Lauren Revie

Mental health is a hot topic at the moment – and it is about time. Around 1 in 6 adults will experience anxiety or depression (Mental Health Foundation, 2016), and the number of people reporting suicidal thoughts has risen drastically (McManus et al., 2016). In response to this growing problem, however, we have also seen a rise in the formation of and support for mental health charities, and in research into different mental health and psychiatric conditions. More and more people are beginning to talk about their mental health openly: how we feel, what is affecting our mental health, and how to seek support for problems we might be experiencing. Mental health awareness and advocacy is gaining momentum, and everything *seems* to be heading in the right direction, towards normalization of sharing our feelings, emotions and mental state.

But what about the researchers behind the mental health statistics and the breakthroughs? There is growing evidence of a mental health epidemic that is often hidden behind academic success, with almost half of PhD students and graduates in academia struggling with their mental health. Approximately 41% of PhD students demonstrate moderate to severe symptoms of anxiety and depression – almost three times the rate in the general public – meaning mental health issues are rife among researchers (Evans et al., 2018).

Perhaps, then, we may attribute this to the ‘type’ of person who is attracted to pursuing a career in academia – highly motivated, a perfectionist, and maybe a little hard on themselves. However, research by Levecque et al. (2017) compared the incidence of these problems in PhD students with that in their highly educated counterparts in industry. The findings indicate that one in two PhD students experiences psychological distress, and one in three is at risk of developing a common psychiatric disorder – rates significantly higher than those in the comparison group.

But why might this be? Why would seemingly driven, motivated and highly successful young individuals be battling such staggeringly high rates of mental health problems? Levecque and colleagues (2017) attribute these statistics to the impact of research on work-family life, and found that job demands, lack of job control, and a supervisor’s leadership style were strong predictors of poor mental health. Others have attributed these rates to workplace ‘bullying’ of doctoral students (English, Flaherty & English, 2018) and a feeling of disconnection from the research community due to unfamiliar topics or long periods of isolated work (Reeve & Partridge, 2017).

Academics and postgraduate students alike attribute mental health problems and feelings of being overwhelmed to lack of support and isolation. Further research by Belkhir et al. (2018) followed a group of young academics and early career researchers over four years. Feelings of loneliness were reported to stem from social isolation rooted in workplace culture, meaning individuals weren’t able to form meaningful relationships with those in their immediate groups. In addition, participants reported feeling unable to take part in conversations with their peers and others in their field, as they felt they lacked both cultural and technical knowledge.

This leads us on to an issue that many postgraduates and early career researchers can relate to, known as ‘imposter syndrome’. Clance and Imes (1978) first coined the term in a bid to collectively define the traits of high achievers who struggle to accept and internalize their own success. Often, someone struggling with imposter syndrome will claim to be a fraud, or underestimate their own knowledge, attributing their success to luck or circumstance. ‘Imposters’ will often compare themselves to others and reject praise, leading to anxiety, stress and, in some cases, depression. Positive correlations have been observed between imposter syndrome and academic success, neuroticism and perfectionism – all strong traits of a postgraduate student or early career researcher. And it isn’t just them! Many senior faculty members wake up believing they will one day be ‘found out’. Whilst the syndrome is not exclusive to academics, it is rife amongst university staff and students, and is a huge contributor to declining mental health in postgraduate education.

Watson and Betts (2010) attribute feelings of imposter syndrome to three main themes in an early career researcher’s experience: fear, family and fellowship. The researchers assessed email conversations between graduate researchers, in which a fear of being discovered as a fraud appeared to be one of the main factors driving feelings of imposter syndrome. This was further exacerbated by feelings of being drawn away from family responsibilities, and by a lack of peer support or fellowship during study.

There are a number of reasons why researchers and students may feel like imposters. Firstly, academia is a competitive world. Postgraduate study attracts the best of the best, and fairly often those around you are just as intelligent and just as high-achieving. Paired with the constant pressure to ‘publish or perish’ and the need to justify your project and area of expertise, this can result in stress, anxiety and, often, burnout (Bothello & Roulet, 2018).

Other factors which may contribute to poor mental health in academia include difficulty with time management, organizational freedom (van Rijsingen, 2018), uncertainty about career prospects and funding opportunities, and financial problems. The struggle to manage your own work, produce innovative research and make progress whilst being largely self-taught can often come at the price of mental health. Stress is suggested to stem from insecurity within this sphere – be it financial insecurity, or insecurity concerning ‘unwritten rules’ within the lab or school – as well as from frequent evaluation and a seemingly unmanageable workload (Pyhältö et al., 2012).

All in all, the consensus seems to be that postgraduate researchers and academics alike are struggling in the university environment. The issue is beginning to be addressed more readily; however, the phenomenon is not new. McAlpine and Norton (2006) note that calls for action to rectify this growing problem have generally been ad hoc rather than theory-driven (ironically!). As such, the research that has been conducted has not been broad enough to integrate the factors that could influence outcomes in a university context. And so the cycle continues.

If you have been affected by anything in this article, please talk to a trusted friend or family member, or access help on www.nhs.uk/conditions/stress-anxiety-depression/mental-health-helplines/

www.mind.org.uk

http://www.samaritans.org

References:

Bothello, J., & Roulet, T. J. (2018). The imposter syndrome, or the mis-representation of self in academic life. Journal of Management Studies, 56(4), 854-861.

Clance, P.R., & Imes, S. A. (1978). The impostor phenomenon in high achieving women: Dynamics and therapeutic intervention. Psychotherapy: Theory, Research, and Practice, 15(3), 241-247. 

English, S., Flaherty, A., & English, A. (2018). Gaslit! An examination of bullying on doctoral students. Perspectives on Social Work, 20.

Evans, T. M., Bira, L., Gastelum, J. B., Weiss, L. T., & Vanderford, N. L. (2018). Evidence for a mental health crisis in graduate education. Nature Biotechnology, 36(3), 282.

Levecque, K., Anseel, F., De Beuckelaer, A., Van der Heyden, J., & Gisle, L. (2017). Work organization and mental health problems in PhD students. Research Policy, 46(4), 868-879.

McAlpine, L., & Norton, J. (2006). Reframing our approach to doctoral programs: An integrative framework for action and research. Higher Education Research & Development, 25(1), 3-17.

McManus, S., Bebbington, P., Jenkins, R., & Brugha, T. (2016). Mental Health and Wellbeing in England: Adult Psychiatric Morbidity Survey 2014: a Survey Carried Out for NHS Digital by NatCen Social Research and the Department of Health Sciences, University of Leicester. NHS Digital.

Mental Health Foundation. (2016). Fundamental Facts about Mental Health 2015. Mental Health Foundation.

Pyhältö, K., Toom, A., Stubb, J., & Lonka, K. (2012). Challenges of becoming a scholar: A study of doctoral students’ problems and well-being. ISRN Education, 2012.

Reeve, M. A., & Partridge, M. (2017). The use of social media to combat research-isolation. Annals of the Entomological Society of America, 110(5), 449-456.

van Rijsingen, E. (2018). Mind Your Head #1: Let’s talk about mental health in academia.

Watson, G., & Betts, A. S. (2010). Confronting otherness: An e-conversation between doctoral students living with the Imposter Syndrome. Canadian Journal for New Scholars in Education/Revue canadienne des jeunes chercheures et chercheurs en éducation, 3(1).

Can’t or Won’t – An Introduction To Apathy.

 By Megan Jackson

Often, when a person hears the word apathy, an image comes to mind: a glassy-eyed teenager scrolling vacantly through their phone while their parent looks on in despair. While comical, this image does not reflect what apathy really is: a complex symptom with important clinical significance.

In 1956, a study was published describing a group of Americans released from Chinese prison camps following the Korean War [1]. As a reaction to the severe stress they had suffered during their time in prison, the men were observed to be ‘listless’, ‘indifferent’ and ‘lacking emotion’. The scientists decided to call this pattern of behaviours apathy. However, at this point, there was no formal way to measure apathy. It was acknowledged that it could manifest in varying degrees, but that was the extent of it. It was over 30 years before apathy was given a proper definition and recognised as a true clinical construct.

As time went on, scientists noticed that apathy doesn’t just arise in times of extreme stress, like time in a prison camp, but also appears in a variety of clinical disorders. A proper definition and a way of assessing apathy were needed. In 1990, Robert Marin defined apathy as ‘a loss of motivation not attributable to current emotional distress, cognitive impairment, or diminished level of consciousness’. As this is a bit of a mouthful, it was summarised as ‘a measurable reduction in goal-directed behaviour’. This definition makes it easy to imagine an individual who no longer cares about, likes, or wants anything and therefore does nothing. However, this is not always the case. There are different subtypes of apathy, each involving different brain regions and thought processes. These are:

Cognitive – in which the individual does not have the cognitive ability to put a plan into action. This may be due to disruption to the dorsolateral prefrontal cortex.

Emotional-affective – in which the individual can’t link their behaviour or the behaviour of others with emotions. This may be due to disruption to the orbital-medial prefrontal cortex.

Auto-activation – in which the individual can no longer self-initiate actions. This may be due to disruption to parts of the globus pallidus.

It’s much easier to picture how the different types of apathy affect behaviour with an example. Take Bob. Bob has apathy, yet Bob likes cake. When somebody asks Bob whether he would like cake, he responds with a yes. However, Bob makes no move to go and get it. Bob still likes cake, but he can no longer process how to obtain that cake: he has cognitive apathy. Alternatively, Bob may want cake but make no move to get up and get it, even though, if someone told him to, he probably would. This is auto-activation apathy, the most severe and most common kind. If Bob could no longer associate cake with the feeling of happiness or pleasure, he would have emotional-affective apathy.

So, whatever subtype of apathy Bob has, he doesn’t get his cake. A shame, but this seems a little trivial. Should we really care about apathy? Absolutely! Imagine not being able to get out of your chair and do the things you once loved. Imagine not being able to feel emotions the way you used to. Love, joy, interest, humour – all muted. Think of the impact it would have on your family and friends. Apathy severely diminishes quality of life and greatly increases caregiver burden. It is extremely common in people with neurodegenerative diseases like dementia [2], in psychiatric disorders like schizophrenia [3], and in people who’ve had a stroke [4]. It can even occur in otherwise healthy individuals.

Elderly people are particularly at risk, though scientists haven’t yet figured out why. Could it be altered brain chemistry? Inevitable degeneration of important brain areas? One potential explanation is that apathy is caused by disruption to the body clock. Every person has a body clock – a tiny area of the brain called the suprachiasmatic nucleus – which controls the daily rhythms of our entire bodies: when we wake up, when we go to sleep, and a load of other really important physiological processes like hormone release [5]. Disruption to the body clock can cause a whole host of health problems, from diabetes to psychiatric disorders like depression. Elderly people have disrupted daily rhythms compared to young, healthy people, and it is possible that the prevalence of apathy in the elderly is explained by this disrupted body clock. Much more research is needed to find out whether this is indeed the case, and why!

Figuring out how or why apathy develops is a vital step in developing a treatment for it, and it’s important that we do. While apathy is often a symptom rather than a disease by itself, there’s now a greater emphasis on treating neurological disorders symptom by symptom rather than as a whole, because the underlying disease mechanisms are so complex. So, developing a treatment for apathy will benefit a whole host of people, from the elderly population, to people suffering from a wide range of neurological disorders.

Edited by Sam & Chiara

References

Stassman, H.D. (1956) A Prisoner of War Syndrome: Apathy as a Reaction to Severe Stress, Am.J.Psyc., 112(12):998-1003

Chow, T.W., (2009) Apathy Symptom Profile and Behavioral Associations in Frontotemporal Dementia vs. Alzheimer’s Disease, Arch Neurol, 66(7); 88-83

Chase, T.N., (2011) Apathy in neuropsychiatric disease: diagnosis, pathophysiology, and treatment Neurotox Res 266-78.

Willem van Dalen, J., (2013) Poststroke apathy, Stroke, 44:851-860.

Gillette M.U., (1999) Suprachiasmatic nucleus: the brain’s circadian clock, Recent Prog Horm Res. 54:33-58

The Story of Adult Human Neurogenesis or: How I learned to Stop Worrying and Love The Bomb

By Dr Niels Haan

Recently, the debate about adult human neurogenesis seems to be just a dame short of a panto. Do adult humans form new neurons? Oh no, they don’t! Oh yes, they do! There are not many fields where people debate the very existence of the phenomenon they are studying. What do nuclear bombs have to do with it? We’ll come to that later.

What is the big deal?

For many decades, neuroscience dogma was that once the brain had formed after childhood, that was it. All you could do was lose cells. This was first challenged by Joseph Altman in the 1960s, when he showed that new neurons were formed in adult rodents, but his work was largely ignored at the time. A second wave of evidence came along in the late 1980s and 90s, starting in songbirds and followed by the confirmation that adult neurogenesis does take place in rodents.

In the years that followed, it was shown that rodent adult neurogenesis takes place in two main areas of the brain: the wall of the lateral ventricles, and the hippocampus. The real importance lies in the function of these new neurons. In rodents, these cells are involved in things like discrimination of similar memories, spatial navigation, and certain forms of fear and anxiety.

Obviously, the search for adult neurogenesis in humans started pretty much immediately, but decades later we still haven’t really reached a conclusion.

Why is there so much controversy?

To definitively show adult neurogenesis, you need to be able to show that any given neuron was born in the adult animal or human, rather than in the womb or during childhood. This means using a way to show cell division, as the birth of a neuron requires a stem cell to divide and produce at least one daughter cell that ends up becoming a neuron.

In animals, this is straightforward. Cell division requires the copying of a cell’s DNA. You inject a substance that gets built into new DNA, detect it later once the new neuron has matured, and say “this cell was born after the injection”. To test what these cells are used for, we tend to reduce the number of stem cells with chemicals or genetic tricks, and see what effect this has on the animal’s behaviour.

However, injecting chemicals into the brains of humans tends to be frowned upon. Similarly, killing off all their stem cells and doing behavioural tests doesn’t tend to go down well with volunteers. So, we can’t use our standard methods. What we’re left with then is to detect certain proteins that are only found in stem cells or newly born neurons, to show they are present in the adult brain. However, that’s easier said than done.

There are proteins that mainly mark things like dividing cells, stem cells, or newly born neurons, but they are not necessarily found only in those cells. All of these markers have been found time and again in the human hippocampus. However, because they are not always unique to stem cells and newly born neurons, there is endless debate about which proteins – or indeed which combinations of proteins – to look at, and what it means when cells have them.

What is the evidence?

Dozens, if not hundreds of papers have tried to address this question, and I don’t have the space – or the energy – to discuss all of them. Let’s look at some influential and recent studies that made the headlines, to show how radically different some people are thinking about this subject.

One of the most influential studies in the field came from the lab of Jonas Frisen in 2015. They used a clever way to get around the problem of detecting dividing cells. When DNA is copied to make a new cell, a lot of carbon is used. Nuclear bomb testing in the 50s and 60s introduced small amounts of (harmless) radioactive carbon into the atmosphere, and so eventually into the DNA of cells born during that time. The end of nuclear testing has led to a slow decline of that radioactive carbon. So, by measuring how much radioactive carbon is in a cell’s DNA, you can determine when it was born. Frisen and his group did just this, and showed that people have neurons born throughout their lives in their hippocampus, with about 700 cells being born per day.
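For readers who like to see the logic spelled out, here is a minimal, purely illustrative sketch of the birthdating idea in Python. The atmospheric values below are made up, and the real studies fit mathematical models of cell turnover to calibrated atmospheric radiocarbon records rather than reading a single year off a curve, but the core matching step looks something like this:

```python
# Minimal sketch of bomb-pulse birthdating with made-up numbers.
# Real analyses fit turnover models to calibrated atmospheric 14C records;
# this only illustrates the core "match DNA 14C to the atmosphere" step.
import numpy as np

# Hypothetical atmospheric 14C levels (relative to a pre-bomb baseline of 1.0),
# rising to the mid-1960s bomb peak and declining after the test-ban treaty.
years = np.array([1950, 1955, 1960, 1963, 1970, 1980, 1990, 2000, 2010])
atmospheric_c14 = np.array([1.00, 1.05, 1.25, 1.90, 1.55, 1.25, 1.15, 1.08, 1.04])

def estimate_birth_year(dna_c14: float) -> float:
    """Match the 14C level measured in neuronal DNA to the declining limb of
    the atmospheric curve to estimate when the cells were born, on average."""
    post_peak = years >= 1963
    # np.interp needs increasing x values, so flip the declining post-peak curve.
    return float(np.interp(dna_c14,
                           atmospheric_c14[post_peak][::-1],
                           years[post_peak][::-1]))

# A person born in 1950 whose hippocampal neurons carry more 14C than the
# atmosphere held at their birth must have made some of those neurons later.
print(estimate_birth_year(1.20))  # roughly the mid-1980s with these toy numbers
```

The key design point is simply that DNA acts as a time capsule: the radiocarbon concentration locked in at cell division pins the birth date to wherever the atmospheric curve had that value.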

This didn’t convince everyone, though, as shown earlier this year when a widely publicised paper came out in Nature. This group did not do any birthdating of cells, but instead looked for the characteristic markers of stem cells and immature neurons in brains from people of a wide range of ages. According to them, the only way to reliably detect a newly born neuron is to look for two different markers on the same cell. They could only find one of the markers in adults, so by this measure they found no new neurons after childhood.

The very next month, a very similar paper came out, using essentially identical methods, and showed the exact opposite. They did find new neurons in brains across a wide range of ages, and when counting them, found very similar rates of neurogenesis to those Frisen had found with his completely different methods.

So, who is right?

That depends on who you ask, and on the question you’re asking (isn’t science fun?). The majority of studies have shown evidence for some neurogenesis in one form or another. How convincing that evidence is comes down to seemingly petty technical arguments, and to the biases of whoever you ask. The biggest questions concern which markers to use to find the cells, as shown by the two studies mentioned above, and nobody agrees on this yet.

Barring some spectacular technical breakthrough that gives us the same sorts of tools in humans as we have in animals, this debate will undoubtedly keep going for some years yet. The bigger question, which we haven’t addressed at all yet, is whether these adult-born cells actually do anything in humans. That’s the next big debate to have…

Edited by Lauren, Chiara, and Rae