The following blog is long; it needed to be. It is an article written for “Every Child Journal”, a subscription educational publication that is part of the Imaginative Minds Group. I have attempted to explore self-harm in the context of online behaviour: what research currently tells us; current expert opinion from those who support children and young people who self-harm; and how those of us with responsibility for online safety might consider adapting current interventions to understand, ameliorate or indeed prevent digital self-harm.
Please take time to visit the links: there are some amazing professionals working out there in spaces many of us have never even considered.
On August 2nd 2013 Hannah Smith died. This fourteen year-old Leicestershire schoolgirl, described by her family as a “bubbly, happy” and “self-confident” person, took her own life amid claims that she was bullied online and the victim of “trolling”.
The heartache of losing a child in such a desperately cruel and tragic way and the impact on her family is difficult to imagine. Her father, David Smith, responded as many parents would, with calls for tighter regulation of the industry and a change in the law to protect vulnerable youngsters from online abuse. It was a cry that struck a chord with parents across the nation and one that reverberated all the way to government. On August 8th, on BBC Breakfast, Prime Minister David Cameron commented:
“The operators of the sites have got to step up to the plate and clean up their act and show some responsibility. It’s not acceptable what is allowed to happen on these sites. It’s their responsibility and those posting these hateful remarks first and foremost. If websites don’t clean up their act, and don’t sort themselves out, then we as members of the general public have got to stop using these particular sites. Boycott them.”
The resultant media frenzy following the incident saw a plethora of articles that criticised the social media site involved, Latvian-based Ask.fm, for not being proactive enough in intervening when the comments occurred; for not monitoring activity; for not providing moderation or reporting routes; and for not engaging with relevant child protection organisations.
On 6th May 2014, the findings of the Coroner’s Inquest into Hannah’s death were published, reaching a number of seminal conclusions with significant implications not only for parents but for all those involved in the welfare and protection of children and young people. The police evidence suggested that no third party was involved and that the balance of probability was that Hannah herself had posted the messages. Her death was recorded as suicide.
The verdict has been a very difficult one to come to terms with, and yet this tragic case has catalysed a realisation that online technologies are an intrinsic component of young people’s developmental environment; as impactful and as important a component as the physical world on which we focus much of our safeguarding intervention.
Whilst the criticism that Ask.fm faced in the months after Hannah’s death was not entirely justified (the site did have reporting routes and moderation, as well as actively engaging with organisations like the UK Safer Internet Centre Professionals Online Safety Helpline in rapidly removing illegal, defamatory or harmful content), the social network has since made significant improvements in promoting those reporting routes in a clearer and more accessible way. In fact, its recent acquisition by new owners Ask.com (no relation) has prompted a high profile focus on online safeguarding. Its CEO, Doug Leeds, stated in a recent Guardian report:
“…we’re not going to run a bullying site … If we can’t [fix Ask.fm] we’ll shut it down.”
The company now promises to respond to bullying allegations within 24 hours and to revamp its safety and moderation policies and procedures within six months, including “putting in place filters to catch and remove violent, illegal, threatening or harassing content,” says Leeds.
But do policy and improved systems alone provide the answer? Would they have been effective in intervening in Hannah’s untimely death? It’s more complex than that.
In Hannah’s case it is worth considering the series of indicators that, in hindsight, may have signalled her increasing vulnerabilities and susceptibility to virtual self-harm. Arguments and escalating incidents with peers; withdrawal; missing school; previous family discussions around self-harm; low self-esteem regarding body-image; continued clandestine use of social media after parental intervention to prevent it. On their own, these may be typical of any developing teenager’s rite of passage; together and combined with embedded technology use, they may present an indication of potential for self-harm.
Would we, as professionals working with children, know enough to have the prescience to join those dots up? To not only identify those gradual changes but to understand how technology provides a platform that allows vulnerable young people to propagate and support those behaviours and reaffirm their belief structures? And how would we intervene in a way that would make things better and not disenfranchise a young person to the point where our locus of influence becomes ineffective?
Understanding the breadth of self-harming behaviour
Self-harm or self-injury is often described in physical terms: a way in which a person might inflict injury on themselves through a variety of means, although many will associate it with deliberate “cutting”. The term, however, can describe a whole range of other behaviours, including alcohol or drug abuse, over-eating, starving or smoking. These may be short-term responses to an immediate crisis or may build into longer-term behaviours linked to physical or psychological issues.
UK charity www.selfharm.co.uk advises that self-harming behaviour can occur at times of anger or stress, or of social and emotional crisis; as a result of low self-esteem or even depression. It may also be a form of self-punishment in response to something the person has done, thinks they have done, has been made to do by others, or has been told they have done.
Beyond the physical lies emotional and psychological self-harm; this often necessitates the involvement of willing or unwitting third parties to exacerbate or reinforce the negative feelings a person has about themselves. In this latter context, technology and the vast social groups available through the social mechanisms it provides, have given those who self-harm unprecedented access to a broad catalogue of environments with which to interact.
In many cases, technology can provide:
- 24/7 contact and availability
- disinhibition in behaviours often limited within physical relationships (“online disinhibition”)
- relative anonymity
- an audience
- broad social access beyond the limitations of physical geography
- opportunities for covert activity that is difficult for carers to physically monitor
- mobility and ubiquity through a whole range of platforms and devices
- access to groups with extreme beliefs
- free access to inappropriate content
In a positive context, many of these features are empowering if the individual has the skill and resilience to maximise the potential online technologies can offer. It’s something we actively encourage.
For the vulnerable user, it can be a minefield of poor advice and negative reinforcement.
And there is an added complexity. Many current interventions focus on identifying acts of “trolling” and “cyberbullying”, gathering digital or printed evidence and then “hunting down the perpetrator”. There have been incidents where a whole school student population has been “hauled over the coals” in an assembly, where a carpet-bombing strategy has been employed to warn third parties that “bullying will not be tolerated” and yet … what if the perpetrator is the recipient? What if the very system meant to respond to online aggression in a school community has been “adapted” by some of its students as a tool to help them communicate how they feel? Or to determine how others view them?

How prevalent and who may be at risk?
Dr danah boyd (sic) first drew attention to “digital self-harm” in her 2010 academic blog article “Digital Self-Harm and other forms of self-harassment” where she identified that “there were teens out there who were self-harassing by “anonymously” writing mean questions to themselves and then publicly answering them”.
From her conversations with teens, in particular those that used the anonymous Q&A social media site Formspring, boyd offered a number of reasons why:
- A cry for help; validate, support and pay attention to them
- Looking cool; getting criticised and wading in to arguments means you are cool enough to be attacked
- Soliciting compliments; getting positive comments from friends in your defence of negative commentary about you, affirms that you are valued and liked.
In June 2012 the Massachusetts Aggression Reduction Center (MARC) at Bridgewater State University published its study of over 600 new college students. Headline data from the study highlighted:
Of the students asked,
- 9% said they had cyberbullied themselves
- 13% of boys had done it and 8% of girls
- Of those, 23% did it about once a month, 28% once or twice a year and 49% just once or very infrequently
- Reasons they gave for cyberbullying themselves included a “cry for help” and “so others would worry about me”
This unpicks one of the myths around self-harm, that it is a “girl thing”; the studies show that boys are also engaged in significant numbers.
What can it look like?
When factoring in technology, it’s difficult to pin down definitive behaviour and prescribed intervention. The paucity of research currently available and the difficulty in differentiating digital self-harm from actual cyberbullying add to the complexity. But it is worth considering the potential arenas technology can provide to support these behaviours, and whilst the following may be more anecdotal than full-blown case studies, they can help in guiding our focus beyond the physical when supporting young people.
Finding your tribe
Sir Ken Robinson first coined the phrase “finding your tribe” to illustrate how the internet can empower those with natural talent to find others with the same passion to enrich, support, empower, collaborate and drive an individual towards achieving their full creative and intellectual potential. In a positive context, musicians, coders, sportspeople, dancers, artists, mathematicians and writers can directly benefit from communicating and collaborating with others who share their muse. The digital equivalent of a 1960s coffee bar.
By the same token, it can also project a vulnerable young person into communities where other vulnerable (and not so vulnerable) people are. There may be positive reasons for a young person who is self-harming to need to engage with online groups. To be with people who understand. To tell their story. To ask questions. To learn how to cope. However, online groups have the potential to normalise these behaviours through common consensus.
Are these groups easy to access? A quick search for self-harm on Tumblr results in a clickable tableau of imagery and messages (both positive and negative) that acts as a springboard into a relatively unregulated community focused on self-harm. Many of these sites are well-intentioned, but there is a distinct look and feel to the design and imagery that makes them inherently “emo”, dark and “cool”. They can be used as a vehicle to catalogue feelings and, in some cases, to showcase the results of personal self-abuse: cuts and scars, presented through heavily “Instagrammed” and processed photographs. Some young people have hijacked the whole culture and promoted it as a lifestyle choice: “I must be cool because I am emotionally interesting.”
Are these sites dangerous? Whilst they are easily accessible within a few careful search clicks, like any online content it depends on the resilience of the person accessing them. If digital and information literacy skills are refined enough to challenge the veracity of the content, and they are able to rank its usefulness, then no. However, that requires an education fit for purpose: one that is willing to engage with that aspect of young people’s development; one that is sophisticated enough to understand young people’s engagement with online content and to shape it enough to build resilience and challenge stereotypes.
Some schools are already beginning this journey as a component of their wider safeguarding education; seeing beyond technology alone and moving it away from just ICT departments into other appropriate curriculum areas. Schemes like Common Sense Media and South West Grid for Learning’s “Digital Literacy and Citizenship Curriculum” have been useful in beginning to map that process for many schools struggling to provide a balanced approach.
Tools of the trade
There has always been a public dilemma with advice on how to “abuse safely”. Much in the same way drug users may be provided with clean needles, those in the throes of dealing with self harm issues need to be protected and advised.
Organisations like selfharm.co.uk provide online advice on harm minimisation: what to cut with; where to cut; what to do with serious issues like shock. This is freely searchable by someone looking for advice, but again requires the digital literacy to sift out the good advice from the rest.
One girl recounted how she had learned online how to adapt a highlighter pen by replacing the felt tip with a pencil sharpener blade to allow her to continue to cut when she was in school. On other sites you may find “Top 5 Ways to Cut Yourself”, or a photograph of a wrist with dotted lines drawn on it, one annotated “hospital”, the other “morgue”.
Clear signposting of where there is good advice is critical to ensuring the right information gets to the most vulnerable, not only for young people but for professionals too.
Trolling is a relatively recent phenomenon. The image of an evil Scandinavian monster sums up this activity for many, but it actually takes its name from a Canadian fishing term for trailing a lure behind a boat to see what bites.
“Trolls” will often target vulnerable or belief groups (eg tribute sites to accident or suicide victims or faith groups) and drop a “lure” into the community. They feed on the oxygen of response; their activity is often characterised as being “atemporal” meaning they do not respond straight away and certainly do not engage in two way responsive discussion. It’s often pointless attempting to engage in rational argument as it’s not what the activity is about; hence the advice “don’t feed the troll”.
Trolling is often associated with social broadcast sites like Twitter that are designed to be socially transparent. Whilst its attraction is its openness and reach (hence celebrity engagement), it is more difficult to manage the content that appears in your stream, and who sees your content, than with profile-based social media sites like Facebook. This makes it difficult to block or limit defamatory or aggressive content unless it is illegal or breaks Twitter’s terms and conditions of use, which makes the comments all the more visible.
What trolling does effectively achieve in most cases is a response, particularly within active social groups. If a particular individual is attacked, their immediate friends often respond with positive support and commentary.
A recent development in self-harming behaviour is auto-trolling, where a person can utilise these networks to direct emotional and psychological abuse at themselves.
Rachel Welch, Director of Selfharm.co.uk in her recent article for PremierYouthwork.com highlighted a particular case study:
“Ellie busily scrolls through Facebook after a frustrating day at school, reading multiple status updates from friends she only said goodbye to less than an hour ago. She ‘likes’ a photo of her niece recently posted by her sister-in-law, before adding ‘cuuuuuute <3’ in the comment box. After getting changed out of her uniform and fetching a drink, Ellie settles onto her bed and logs on to Ask.fm. Carefully and deliberately she posts a question: ‘What’s the best thing about me?’ before quickly signing out. Heart racing a little, Ellie logs back in, only this time she isn’t Ellie, she’s using a profile called Staceeyy. She finds her question and replies: ‘Nothing. You are nothing.’”
Welch, herself a survivor of self harm and with particular insight into those behaviours, continues:
Ellie’s experiences of posting messages to herself also led friends to worry that she was being bullied, and some would even join in the threads to try and defend her.
‘It was awful,’ she says. ‘My friends were trying to protect me and stick up for me, so to keep it up I ended up posting nasty messages to them too. It was killing me seeing them get so angry on my behalf, and it was then that I knew I had to stop. It wasn’t about hurting other people, it was about hurting myself.’
There is no doubt that the catharsis digital self-harm can provide, the release often described by those who self-harm, seems just as real and as valid as that experienced by those who cut. The “tracks”, however, are much more difficult to detect.
Some enjoy the cut and thrust of debate, however rational or irrational that might be, to establish themselves within an online community. “Flaming” has been around since the early days of the internet, on bulletin boards, listservers and Newsgroups. In contrast, many trolls are impervious to the quality of response they receive after dropping their “lure” into a social community; the response alone, along with the attention it garners, is often enough.
However, deliberately engineering a situation that generates third-party abuse directed at oneself is a complex behavioural strategy that represents a different focus. Whilst auto-trolling requires a number of self-administered online personas, self-baiting involves a third party to create the abuse.
This requires the self-baiter to become a troll… for markedly different reasons, though it is very hard to separate the two. Trolling or self-abuse?
Current legislation within the UK has seen a shift towards criminalising trolling, with April 2013 amendments to the Defamation Act, and there have been high profile cases where that has been enforced. For the self-baiter, this is a high-risk strategy that goes beyond the self-abuse itself, with the likelihood of little mitigating support and even criminalisation.
Rachel Welch in her article references another case study:
At 16, Ben found himself reading posts on Facebook and felt tempted to chip in with comments he knew would be met with disagreement. He wasn’t looking to cause trouble; what Ben wanted was to be on the receiving end of what he considered was much-deserved abuse. Ben says it started by accident: ‘I was already online and was just feeling really awful. I saw that someone had posted something that I agreed with but lots of people didn’t. I commented and people started to be horrible; everyone was saying things that I felt were true and that’s what led me to do it more often.’
‘It was a weird feeling, as I believed everything they were saying about me… It was all the feelings I had about myself being confirmed. In a way it felt good; it persuaded me I was as bad as I thought; I wasn’t imagining it.’
Whilst nobody wants to entertain a ready-made excuse for trolling, it seems clear that the underlying reasons for these behaviours are vitally important and need to form part of an effective intervention. We know that of most anti-bullying strategies, particularly those employed in schools, the most effective are those that engage all parties: victims, perpetrators and bystanders. This helps unpick the social and emotional aspects of incidents and allows the resultant intervention to be better communicated, understood and respected.
Legislation; the online technology industry; the online communities. All quite rightly should support the victim and ameliorate issues. However, care must be taken not to create more victims in that process.
As in many complex behavioural problems, there are no quick wins or interventions, particularly if technology obfuscates the clarity of intention and outcome. But one thing is clear: technology is not the culprit. For many it is an empowerment; a choice; an enabler; an entertainer; a creative sandbox. It’s another aspect of young people’s lives; to hijack Shakespeare, the internet is a “stage”
And all the men and women merely players.
They have their exits and their entrances,
And one man in his time plays many parts,
But there are some considerations:
Add sophistication to current interventions
When dealing with self-abuse, understand the role technology could be playing in the process. Generate a dialogue about a person’s engagement with the online community. “Stop using technology” is not a rational intervention. Understand that stopping that engagement is a very difficult thing to achieve for someone for whom it is an intrinsic part of their life. Accentuate the positive about the internet and encourage the development of a positive online persona.
Educate children and young people
Responsible and positive internet use seldom happens on its own; it requires an holistic approach to educating children and young people in understanding:
- How to build resilience to ensure their own safety and security
- How to manage the flow of information from and to them
- How to function positively in online connected communities
- How to shape their digital reputation in a way that is not only going to inform how others judge them but how to utilise that persona to their benefit
- How to be critical of and how to deal with online content
- How to challenge, report, remove or escalate personal issues online
Appropriate professional development for the children’s workforce
Changes will not happen if professionals are not aware of the issues and empowered to build that awareness into their wider safeguarding interventions. This is seldom a component of Self Harm CPD.
How can we keep young people safe if we don’t know what is going on? There are two ways in which you can move towards achieving this.
Active reporting. Those organisations with effective safeguarding mechanisms often have reporting routes that are varied, valued, trusted and used. Peer escalation routes are valuable, and so too are anonymous online reporting routes.
Passive reporting. There is a lot of information openly available in the social media chatter online that could be instrumental in identifying a child at risk. The problem is that the volume of material can be too vast for any manual searching. Alerting mechanisms are available that place an electronic ear to the ground.
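At their simplest, such alerting mechanisms are keyword matching over a feed of public posts. The sketch below is a loose illustration only: the watch-list phrases, the `flag_posts` function and the sample feed are all invented for this example, and any real safeguarding tool would need far more nuance (context, escalation to a trained adult, data-protection safeguards) than bare string matching.

```python
# Illustrative sketch of a passive "electronic ear": scan a feed of
# plain-text posts for phrases on a watch-list and surface matches
# for a human safeguarding lead to review. Placeholder terms only.

WATCH_TERMS = {"hate myself", "want to disappear", "nobody would care"}

def flag_posts(posts):
    """Return the subset of posts containing any watched phrase."""
    flagged = []
    for post in posts:
        text = post.lower()  # case-insensitive match
        if any(term in text for term in WATCH_TERMS):
            flagged.append(post)
    return flagged

sample_feed = [
    "great match today!",
    "sometimes I just want to disappear",
]
print(flag_posts(sample_feed))  # → ['sometimes I just want to disappear']
```

The point of the sketch is only that volume is the problem: a filter like this narrows thousands of posts to the handful a person can actually read and act on.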
Safeguarding by Design
Industry too has a role to play in ensuring the technical interventions and systems it employs are fit for purpose. Monitoring and moderation are important, but with the global nature of social engagement it’s difficult to achieve the right balance between freedom of speech and punitive measures. These are businesses and their financial success depends on the number of users they have; draconian and outdated platforms are abandoned in droves if they are not what their users want.
We do know, however, that clear and effective reporting routes work, and many sites are now becoming much more proactive in signposting these clearly on front pages and where conversation happens. These routes should be easy, obvious, effective and reliable.
The internet is an amazing environment, and most people’s encounters as they surf through that landscape are positive. As online technology becomes ever more embedded in all of our lives, we have to look to this horizon as another of our remits; those who safeguard children have a moral obligation to do so in any environment in which a child may find themselves.
There are no more excuses not to engage.
Dr Elizabeth Englander who conducted the MARC study at Bridgewater State University effectively concludes:
“When a student claims to be a victim of cyberbullying, they need our support and attention. That need should be front and center, regardless of whether the cyberbullying is real or manufactured. In fact, students who self-cyberbully may be among those who need our attention most of all”
Further help, support and advice
MindEd. www.minded.org.uk aims to provide simple, clear guidance on children and young people’s mental health, wellbeing and development to any adult working with children, young people and families, to help them support the development of young healthy minds.
UK Safer Internet Centre Professionals Online Safety Helpline. http://www.saferinternet.org.uk/about/helpline The Safer Internet Centre has been co-funded by the European Commission to provide a Helpline for professionals working with children and young people in the UK with any online safety issues they may face themselves or with children in their care. They provide support with all aspects of digital and online issues such as those which occur on social networking sites, cyber-bullying, sexting, online gaming and child protection online. The Helpline aims to resolve issues professionals face about themselves, such as protecting professional identity and reputation, as well as young people in relation to online safety.
Papyrus. http://www.papyrus-uk.org/ Founded in 1997 by Jean Kerr, a mother from Lancashire. She and a small group of parents who had each lost a child to suicide were convinced that many young suicides are preventable. They aim to:
- Reduce Stigma associated with suicide
- Increase Awareness of young suicide and how to help prevent it; this involves speaking in schools, colleges and community organisations
- Provide services (e.g. HOPELineUK; SMS and email advice; Training such as ASIST; online information; professional advice)
- Campaign as a UK charity to prevent young suicide
- Listen and Learn – supporting/disseminating research/knowledge
- Contribute to local, regional and national suicide prevention strategic action
Selfharm. Selfharm.co.uk runs online support groups that you can join at www.selfharm.co.uk
National Self Harm Network. The number for the National Self Harm Network helpline is 0800 622 6000
Childline. ChildLine can be contacted on 0800 1111, you can chat online with a counsellor and use the charity’s message boards