
  • Open access
  • Published: 29 April 2024

Problematic social media use mediates the effect of cyberbullying victimisation on psychosomatic complaints in adolescents

  • Prince Peprah 1 , 2 ,
  • Michael Safo Oduro 3 ,
  • Godfred Atta-Osei 4 ,
  • Isaac Yeboah Addo 5 , 6 ,
  • Anthony Kwame Morgan 7 &
  • Razak M. Gyasi 8 , 9  

Scientific Reports volume 14, Article number: 9773 (2024)


  • Public health
  • Risk factors

Adolescent psychosomatic complaints remain a public health issue globally. Studies suggest that cyberbullying victimisation, particularly on social media, could heighten the risk of psychosomatic complaints. However, the mechanisms underlying the associations between cyberbullying victimisation and psychosomatic complaints remain unclear. This cross-cultural study examines the mediating effect of problematic social media use (PSMU) on the association between cyberbullying victimisation and psychosomatic complaints among adolescents in high-income countries. We analysed data on adolescents aged 11–16.5 years (weighted N = 142,298) in 35 countries participating in the 2018 Health Behaviour in School-aged Children (HBSC) study. Path analysis using a bootstrapping technique tested the hypothesised mediating role of PSMU. Results from the sequential binary mixed-effects logit models showed that adolescents who were victims of cyberbullying were 2.39 times more likely to report psychosomatic complaints than those who never experienced cyberbullying (AOR = 2.39; 95%CI = 2.29, 2.49). PSMU partially mediated the association between cyberbullying victimisation and psychosomatic complaints, accounting for 12% (\(\beta\) = 0.01162, 95%CI = 0.0110, 0.0120) of the total effect. Additional analysis revealed a moderation effect of PSMU on the association between cyberbullying victimisation and psychosomatic complaints. Our findings suggest that while cyberbullying victimisation substantially influences psychosomatic complaints, the association is partially explained by PSMU. Policy and public health interventions for cyberbullying-related psychosomatic complaints in adolescents should target safe social media use.


Introduction

Adolescence is a critical developmental stage marked by many problems, including loneliness 1, poor friendships, an adverse class climate, school pressure 2, suicidal ideation and attempts, and psychosomatic complaints 3. Psychosomatic complaints are physical ailments (e.g., headaches, stomach aches, fatigue, and muscle pain) caused or exacerbated by psychological factors such as stress, irritability, anxiety, or emotional distress 4, 5. Psychosomatic complaints are common among adolescents, and recent estimates indicate that their global prevalence ranges between 10 and 50% 6. Moreover, an increase in self-reported psychosomatic complaints and related mental health complaints has been reported in adolescents from high-income countries 7, 8. The high prevalence of psychosomatic complaints is of concern because such complaints have severe implications for multiple detrimental health outcomes, healthcare expenditure, and the quality of life of young people 9. Thus, it is of utmost importance to identify the proximate risk factors for psychosomatic complaints among young people to aid in developing targeted interventions to reduce their incidence, mainly in high-income countries.

While extant research has identified risk factors for psychosomatic complaints, including malnutrition, low physical activity, and poor parental guidance 10, 11, 12, one understudied but potentially important risk factor is cyberbullying victimisation. Cyberbullying victimisation is an internet-based, aggressive, and intentional act of continually threatening, harassing, or embarrassing individuals who cannot defend themselves, carried out through electronic means such as emails, text messages, images, and videos 13, 14. Now a common feature of adolescents' interpersonal interactions, cyberbullying victimisation has shown a rising trend, particularly during adolescence 15. International literature places the prevalence of cyberbullying victimisation between 12 and 72% among young people 14, 16. It may be hypothesised that cyberbullying victimisation increases the risk of psychosomatic complaints through factors such as problematic social media use (PSMU) 17, 18. However, studies are needed to identify whether, and to what extent, such factors mediate the potential association between cyberbullying victimisation and psychosomatic complaints among young people.

Given this background, the present study aimed to investigate the association between cyberbullying victimisation and psychosomatic complaints in 142,298 young people aged 11–16.5 years from 35 high-income countries. A further aim was to quantify the extent to which PSMU mediates the association between cyberbullying victimisation and psychosomatic complaints.

Cyberbullying victimisation and adolescents’ psychosomatic complaints

Research has consistently shown that cyberbullying victimisation significantly impacts adolescents’ mental health 19. For example, Kowalski and Limber 20 found that cyberbullying victimisation is associated with increased levels of depression, anxiety, and social anxiety, as well as psychosomatic complaints such as fatigue and muscle tension. Further, studies have shown that cyberbullying victimisation and perpetration can lead to a variety of physical, social, and mental health issues, including substance abuse and suicidal thoughts and attempts 21, 22, 23, 24. Cyberbullying victimisation is also strongly associated with suicidal thoughts and attempts, regardless of demographic factors such as gender or age 21, 25. These findings underscore the urgent need for interventions that address the mental health consequences of cyberbullying, particularly for adolescents, who are most vulnerable to its harmful effects. They also suggest that cyberbullying may be an underlying predictor of higher levels of psychosomatic complaints among adolescents. The present study therefore hypothesises: H1: there is a statistically significant association between cyberbullying victimisation (X) and psychosomatic complaints (Y) (total effect).

The role of adolescents’ PSMU

Problematic social media use (PSMU), a subtype of problematic internet use, refers to uncontrolled, compulsive, or excessive engagement with social media platforms such as Facebook and Twitter, characterised by addiction-like features such as mood alteration, withdrawal symptoms, and interpersonal conflicts. This pattern of social media use can result in functional impairments and adverse outcomes 26. Scholars and professionals have expressed great concern about the length of time adolescents spend on social media. Studies have observed that (early) adolescence could be a crucial and sensitive developmental stage in which adolescent users might be unable to avoid the harmful impacts of social media use 27. According to current research, PSMU may increase adolescents’ exposure to cyberbullying victimisation, which can have severe consequences for their mental health 28, 29, 30. Similarly, an association between PSMU and physical/somatic problems, as well as somatic disorders, has been established in many studies 31, 32. Hanprathet et al. 33 demonstrated the negative impact of problematic Facebook use on general health, including somatic symptoms, anxiety, insomnia, depression, and social dysfunction. According to Cerutti et al. 34, adolescents with problematic social media use have more somatic symptoms, such as stomach pain, headaches, sore muscles, and poor energy, than their counterparts. In addition, inadequate sleep may be associated with PSMU, harming both perceived physical and mental health 35, 36. Supporting the above evidence, the relationship between PSMU, well-being, and psychological issues has been highlighted in meta-analytic research and systematic reviews 27, 31, 37, 38. Thus, this study proposes the following hypothesis: H2: there is a specific indirect effect of cyberbullying victimisation (X) on psychosomatic complaints (Y) through PSMU (M1) (indirect effect \(a_{1}b_{1}\)).

Study, sample, and procedures

This study used data from the 2018 Health Behaviour in School-aged Children (HBSC) survey, conducted in 35 countries and regions across Europe and in Canada during the 2017–2018 academic year 39. The HBSC research network is an international alliance of researchers collaborating on a cross-national survey of school students. The HBSC collects data every four years on the health, well-being, social environments, and health behaviours of 11-, 13-, and 15-year-old boys and girls. The sampling procedure for the 2018 survey followed international guidelines 40, 41. A systematic sampling method was used to identify schools in each region from the complete list of public and private schools. Participants were recruited through a cluster sampling approach, using the school class as the primary sampling unit 42. Some countries oversampled subpopulations (e.g., by geography and ethnicity), and standardised weights were created to ensure representativeness of the population of 11-, 13-, and 15-year-olds 43. Questionnaires were translated following a standard procedure to allow comparability between the participating countries. Our analysis used data from the 35 countries and regions with complete data on cyberbullying victimisation, PSMU, and psychosomatic complaints. The study complies with ethical standards in each country and follows ethical guidelines for research and data protection from the World Health Organisation and the Organisation for Economic Co-operation and Development. Depending on the country, active or passive consent to participate was sought from parents or legal guardians and from students, and was checked by teachers. The survey was conducted anonymously, and participation was voluntary for schools and students. Schools, children, and adolescents could refuse to participate or withdraw their consent up to the day of the survey. Moreover, all participating students were free to stop filling out the questionnaire at any moment or to answer only selected questions. More detailed information on the methodology of the HBSC study, including ethics and data protection, can be found elsewhere 44, 45.

Outcome variable: psychosomatic complaints

Psychosomatic complaints were assessed with a symptom checklist asking students how often they had experienced the following complaints over the past six months: headache, stomach ache, feeling low, irritability or bad mood, feeling nervous, dizziness, abdominal pain, sleep difficulty, and backache. Response options were: about every day, more than once a week, about every week, about every month, and rarely or never. This scale has sufficient test–retest reliability and validity 46, good internal consistency (Cronbach’s α = 0.82) 47, and has been applied in several multi-country analyses 48, 49. The scale is predictive of emotional problems and suicidal ideation in adolescents 50, 51. For our analysis, the scale was dichotomised, with two or more complaints occurring several times a week or daily coded as having psychosomatic complaints 47, 49.

Exposure variable: cyberbullying victimisation

Cyberbullying victimisation is the exposure variable in this study; it pertains only to being a victim of cyberbullying and does not include perpetration. Students were first asked to read a short definition of cyberbullying. They were then asked how often they had been cyberbullied over the past two months (e.g., someone sending mean instant messages, emails, or text messages about you; wall postings; creating a website making fun of you; posting unflattering or inappropriate pictures of you online without your permission or sharing them with others). Responses included: “I have not been cyberbullied”, “once or twice”, “two or three times a month”, “about once a week”, and “several times a week”. These were dichotomised into “never” or “once or more”. This measure of cyberbullying victimisation has been validated across multiple cultural settings 43, 52, 53, 54.

Mediating variable

Problematic social media use (PSMU) was assessed with the Social Media Disorder Scale (Cronbach’s α = 0.89) 55. The scale contains nine dichotomous (yes/no) items describing addiction-like symptoms, including preoccupation with social media, dissatisfaction about lack of time for social media, feeling bad when not using social media, trying but failing to spend less time on social media, neglecting other duties to use social media, frequent arguments over social media, lying to parents or friends about social media use, using social media to escape from negative feelings, and having a severe conflict with family over social media use. In this study, endorsement of six or more items indicated PSMU, as evidence suggests that a threshold of six or more items is indicative of PSMU 54, 56. The scale has been used across cultural contexts 43, 52, 54.

Informed by previous studies 43, 54, 57, the analysis controlled for theoretically relevant confounders, including sex (male/female) and age. Family affluence/socio-economic status was assessed using the Relative Family Affluence Scale, a validated six-item measure of material assets in the home (e.g., number of vehicles, bedroom sharing, computer ownership, bathrooms at home, dishwashers at home, and family vacations) 56, 58. Finally, parental and peer support were measured using an eight-item measure 59. Responses were recorded on a 7-point Likert scale (ranging from 0 = very strongly disagree to 6 = very strongly agree).
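To make the coding of the three core binary variables concrete, the snippet below is a minimal R sketch of how they could be derived from the raw items; the data frame and item names (hbsc, psysom_items, smd_items, cyber_victim) are hypothetical placeholders rather than actual HBSC codebook names.

```r
# Hypothetical sketch of deriving the binary analysis variables (illustrative names only)
library(dplyr)

psysom_items <- paste0("symptom_", 1:9)  # headache, stomach ache, feeling low, ...
smd_items    <- paste0("smd_", 1:9)      # nine yes/no Social Media Disorder Scale items

hbsc <- hbsc %>%
  mutate(
    # Outcome: two or more complaints occurring several times a week or daily
    n_frequent = rowSums(across(all_of(psysom_items),
                                ~ .x %in% c("about every day", "more than once a week"))),
    psysom_bin = as.integer(n_frequent >= 2),

    # Exposure: cyberbullied once or more vs. never over the past two months
    cybervict_bin = as.integer(cyber_victim != "I have not been cyberbullied"),

    # Mediator: six or more of the nine SMD items endorsed indicates PSMU
    psmu_bin = as.integer(rowSums(across(all_of(smd_items), ~ .x == "yes")) >= 6)
  )
```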

Statistical analysis

Region-specific descriptive statistics were calculated to describe the sample. Next, Pearson’s Chi-squared test of association with Yates’ continuity correction was performed to examine plausible associations between psychosomatic complaints and the other categorical study variables. To account for regional clustering and unobserved heterogeneity in the analytic sample, sequential mixed-effects binary logit models with a random intercept were fitted to further examine the associations of psychosomatic complaints with cyberbullying victimisation and the other considered covariates. Furthermore, a parallel mediation model was fitted to evaluate the specified hypotheses and understand the potential mechanism linking cyberbullying victimisation and psychosomatic complaints. More specifically, cyberbullying victimisation (X) was modelled to influence psychosomatic complaints (Y) directly and indirectly via PSMU (M). Since the core variables were binary, the paths could be estimated with a sequence of three logit equations 60, 61:

$$\mathrm{logit}\{P(Y=1)\} = i_{1} + cX \quad (1)$$

$$\mathrm{logit}\{P(M=1)\} = i_{2} + aX \quad (2)$$

$$\mathrm{logit}\{P(Y=1)\} = i_{3} + c^{\prime}X + bM \quad (3)$$

where \({i}_{1}\), \({i}_{2}\), and \({i}_{3}\) represent the intercepts in the respective equations. The path coefficient c in Eq. (1) represents the total effect of the predictor X on the outcome Y. In Eq. (2), the path coefficient a denotes the effect of the predictor X on the mediator M. The c′ parameter in Eq. (3) represents the direct effect of the predictor X on the outcome Y, adjusting for the mediator M. Lastly, the path coefficient b in Eq. (3) represents the effect of the mediator M on the outcome Y (the mediator path of the indirect effect ab), adjusting for the predictor X. These logit models provide effect estimates on the log-odds scale and can thus be transformed into odds ratios. Each model was additionally adjusted for the potential confounding variables.
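As an illustration (not the authors’ code), the three path models could be fitted in R roughly as follows; the variable names (psysom_bin, cybervict_bin, psmu_bin, sex, age, family_affluence, support, country) are assumptions carried over from the sketch above, and the outcome model adds a random intercept for country in line with the mixed-effects specification described earlier.

```r
# Illustrative path regressions for Eqs. (1)-(3); variable names are assumed
library(lme4)

# Eq. (1): total effect of cyberbullying victimisation (X) on psychosomatic complaints (Y),
# with a random intercept for country/region to absorb regional clustering
m_total <- glmer(psysom_bin ~ cybervict_bin + sex + age + family_affluence + support +
                   (1 | country),
                 family = binomial("logit"), data = hbsc)

# Eq. (2): path a -- effect of X on the mediator PSMU (M)
m_mediator <- glm(psmu_bin ~ cybervict_bin + sex + age + family_affluence + support,
                  family = binomial("logit"), data = hbsc)

# Eq. (3): paths b and c' -- effects of M and X on Y, each adjusted for the other
m_outcome <- glm(psysom_bin ~ cybervict_bin + psmu_bin + sex + age + family_affluence + support,
                 family = binomial("logit"), data = hbsc)

# Additional analysis: moderation, i.e. an interaction between X and M
m_interaction <- update(m_total, . ~ . + psmu_bin + cybervict_bin:psmu_bin)

# Coefficients are on the log-odds scale; exponentiate to obtain (adjusted) odds ratios
exp(fixef(m_total)["cybervict_bin"])
exp(coef(m_outcome)[c("cybervict_bin", "psmu_bin")])
```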

All statistical analyses were performed using R software (v4.1.2; R Core Team 2021), with \(\alpha\) = 0.05 as the significance level. More specifically, the R package “mediation” 62 was used for the mediation analysis to estimate direct, indirect, and total effects. Inference was based on non-parametric, 95% bias-corrected and accelerated (BCa) bootstrapped confidence intervals 63, 64. Bootstrapping for indirect effects used 1000 resamples, and a mediation effect was deemed statistically significant when its 95% bootstrapped CI did not include zero. We also conducted a further analysis that included an interaction between cyberbullying victimisation and PSMU to obtain insights complementary to the mediation model.
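A hedged sketch of the bootstrapped mediation step with the “mediation” package is shown below; it reuses the assumed variable names from the sketches above and fits single-level logit models for the mediator and outcome, since the package’s bootstrap option is designed for single-level parametric fits such as glm.

```r
# Illustrative bootstrapped mediation analysis with the "mediation" package
library(mediation)

model_m <- glm(psmu_bin ~ cybervict_bin + sex + age + family_affluence + support,
               family = binomial("logit"), data = hbsc)
model_y <- glm(psysom_bin ~ cybervict_bin + psmu_bin + sex + age + family_affluence + support,
               family = binomial("logit"), data = hbsc)

set.seed(123)                              # reproducible resampling
med_out <- mediate(model_m, model_y,
                   treat        = "cybervict_bin",
                   mediator     = "psmu_bin",
                   boot         = TRUE,    # non-parametric bootstrap
                   sims         = 1000,    # 1000 resamples, as described in the text
                   boot.ci.type = "bca")   # bias-corrected and accelerated CIs

summary(med_out)  # reports ACME (indirect effect), ADE (direct effect),
                  # total effect, and proportion mediated, each with 95% CIs
```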

Ethics approval and consent to participate

The research was exclusively based on secondary data from the HBSC survey, which adheres to rigorous ethical standards in its data collection processes; therefore, no separate ethical approval was sought or deemed necessary. Necessary permissions and survey data were obtained from the HBSC network. The HBSC data collection process upheld ethical standards and relevant guidelines, including informed consent from all subjects and/or their legal guardian(s).

Preliminary analyses

The final analytic sample comprised complete information on 142,298 adolescents from 35 high-income countries (Table 1). The median age of the sample was 13.6 years. The largest shares of participants resided in Wales (6.26%) and the Czech Republic (6.16%). The prevalence of cyberbullying victimisation was 26.2%, and the majority of the sample (53%) were female. As shown in Table 2, 84.6% of the participants self-reported high levels of psychosomatic complaints. Furthermore, among the participants who reported PSMU, about 81.16% reported high levels of psychosomatic complaints. About 84.47% of the participants indicated receiving parental and peer support (see Table 2).

Main analyses

Results from the sequential binary mixed-effects logit models are shown in Table 3. In the first step, we included only cyberbullying victimisation in the model. We found that cyberbullying victims were 2.430 times more likely to report psychosomatic complaints than those who were not cyberbullied (OR = 2.430; 95%CI = 2.330, 2.530). The second step added sex, PSMU, parental and peer support, and family affluence as covariates. We found that cyberbullying victims remained 2.390 times more likely to report psychosomatic complaints than those who never experienced cyberbullying (AOR = 2.390; 95%CI = 2.29, 2.49). The third model, an additional analysis, included an interaction between cyberbullying victimisation and PSMU. The results showed that PSMU moderates the association between cyberbullying victimisation and psychosomatic complaints: adolescents who were cyberbullied but did not report PSMU had reduced odds of psychosomatic complaints compared to those with PSMU (AOR = 1.220; 95%CI = 1.110–1.350). Furthermore, a caterpillar plot of the empirical Bayes residuals for the random intercept (region/country) is shown in Fig. 1. It represents the individual effects for each country and offers additional insight into the extent of heterogeneity in psychosomatic complaints across countries. The plot visually demonstrates that regional variation in psychosomatic complaints does exist.
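As a brief illustration of how such a caterpillar plot can be produced from a fitted lme4 model (using the assumed model name m_total from the sketches above), one option is:

```r
# Caterpillar plot of the country-level random intercepts (illustrative)
library(lme4)
library(lattice)

re <- ranef(m_total, condVar = TRUE)  # empirical Bayes (conditional) modes with variances
dotplot(re)$country                   # one caterpillar panel per grouping factor
```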

Figure 1

A caterpillar plot of empirical Bayes residuals for the random intercept (region/country). This represents individual effects for each region/country. Region or country abbreviations in the figure are as follows: [AL] Albania, [AZ] Azerbaijan, [AT] Austria, [BE-VLG] Vlaamse Gewest (Belgium), [BE-WAL] Région Wallonne (Belgium), [CA] Canada, [CZ] Czech Republic, [DE] Germany, [EE] Estonia, [ES] Spain, [FR] France, [GB-ENG] England, [GB-SCT] Scotland, [GB-WLS] Wales, [GE] Georgia, [GR] Greece, [HR] Croatia, [HU] Hungary, [IE] Ireland, [IL] Israel, [IS] Iceland, [IT] Italy, [KZ] Kazakhstan, [LT] Lithuania, [LU] Luxembourg, [MD] Moldova, [MT] Malta, [NL] Netherlands, [PT] Portugal, [RO] Romania, [RS] Serbia, [RU] Russia, [SE] Sweden, [SI] Slovenia, [TR] Turkey, and [UA] Ukraine.

Figure 2 shows the adjusted parallel mediation results. The effect of cyberbullying victimisation on psychosomatic complaints was significantly mediated by PSMU. The paths from cyberbullying victimisation to PSMU (a: \(\beta\) = 0.648, p < 0.001), from PSMU to psychosomatic complaints (b: \(\beta\) = 0.889, p < 0.001), and from cyberbullying victimisation to psychosomatic complaints (c′: \(\beta\) = 0.051, p < 0.001) were all statistically significant.

Figure 2

A parallel mediation model of the influence of PSMU on the association between Cyberbullying Victimisation and Psychosomatic Complaints. a = path coefficient of the effect of exposure on the mediator. b = path coefficient of the effect of the mediator on the outcome. c’ = path coefficient of the direct effect of the exposure on outcome. CV, cyberbullying victimisation. PC, psychosomatic complaints.

Bootstrapping test of mediating effects

The total, direct, and indirect effects of the mediation model, based on the non-parametric bootstrap, are presented in Table 4. The estimated CIs did not include zero for any of the effects. This indicates a statistically significant indirect effect of cyberbullying victimisation on psychosomatic complaints via PSMU (\(\beta\) = 0.01162, 95%CI = 0.0110, 0.0120), accounting for 12% of the total effect.
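For clarity, the 12% figure is the proportion-mediated statistic conventionally reported alongside bootstrapped mediation estimates, computed on the same scale as the effects in Table 4:

$$\text{Proportion mediated} \;=\; \frac{\text{indirect effect (ACME)}}{\text{total effect}} \;\approx\; 12\%.$$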

Key findings

This cross-cultural study examined the direct and indirect associations of cyberbullying victimisation with psychosomatic complaints via PSMU among adolescents. The results showed that cyberbullying victimisation independently influenced the experience of psychosomatic complaints. Specifically, adolescents who were victims of cyberbullying were more than twice as likely to report psychosomatic complaints. Crucially, our mediation analyses indicated that PSMU explained approximately 12% of the association between cyberbullying victimisation and psychosomatic complaints. In a further analysis, PSMU moderated the association between cyberbullying victimisation and psychosomatic complaints. This study is the first to examine the direct and indirect associations between cyberbullying victimisation and psychosomatic complaints through PSMU in adolescents across multiple high-income countries.

Interpretation of the findings

Our results confirmed the first hypothesis that there is a statistically significant direct association between cyberbullying victimisation and psychosomatic complaints. Thus, we found that cyberbullying victimisation independently and directly affected the adolescents' experience of psychosomatic complaints. Previous studies have mainly focused on the direct effect of traditional face-to-face bullying on psychosomatic complaints 20, 65 or compared the impact of traditional face-to-face bullying with that of cyberbullying on mental health 19, 66, 67, 68, 69. A systematic review of traditional bullying and cyberbullying victimisation offers a comprehensive synthesis of the consequences of cyberbullying for adolescent health 19. Another review suggested that cyberbullying threatens adolescents’ well-being and highlighted many studies demonstrating relationships between adolescents’ involvement in cyberbullying and adverse health outcomes 70. Other population-based cross-sectional studies have similarly shown that victims of cyberbullying experience significant psychological distress and feelings of isolation, which can further exacerbate their physical and mental health challenges 22, 71, 72. The present study builds on the previously published literature by highlighting the effect of cyberbullying victimisation on adolescent psychosomatic complaints and the extent to which the association is mediated by PSMU.

Consistent with the second hypothesis, we found that PSMU mediated about 12% of the association between cyberbullying victimisation and psychosomatic complaints in this sample. While studies on the mediating role of PSMU in the relationship between cyberbullying victimisation and psychosomatic complaints are limited, evidence shows a significant interplay among PSMU, cyberbullying victimisation, and psychosomatic complaints. For example, a study of over 58,000 young people in Italy found that PSMU was associated with increased levels of multiple somatic and psychological symptoms, such as anxiety and depression 73. Another study of 1707 adolescents in Sweden found that cyberbullying victimisation was associated with increased depressive symptoms and the lowest levels of subjective well-being 74.

Other possible mediators of the cyberbullying victimisation-psychosomatic complaints association may include low self-esteem, negative body image, emotion regulation difficulties, social support, and personality traits such as neuroticism and impulsivity 20 , 67 , 72 , 75 , 76 . For example, Schneider et al. 75 have shown that emotional distress could increase psychosomatic symptoms such as headaches, stomach aches, and muscle tension. In addition, social isolation can lead to social withdrawal and a decreased sense of belonging 78 , 79 . Therefore, it is essential to explore these variables further and develop effective interventions and prevention strategies to address these interrelated factors and reduce their negative impact on adolescent health and well-being.

In a further analysis, the results show that PSMU not only mediates but also moderates the association between cyberbullying victimisation and psychosomatic complaints among adolescents. Specifically, cyberbullied adolescents with no report of PSMU had a reduced likelihood of experiencing psychosomatic complaints compared to those with PSMU. This result is interesting and could be due to several factors. First, individuals with PSMU may already be experiencing heightened levels of psychological distress due to their excessive social media use, making them more vulnerable to the negative effects of cyberbullying 80, 81, 82. For instance, excessive time spent on social media, particularly in activities such as comparing oneself to others or seeking validation through likes and comments, has been linked to increased psychological distress 83, 84. Conversely, the finding that cyberbullied adolescents without PSMU had a reduced likelihood of experiencing psychosomatic complaints compared to those with PSMU suggests a protective effect of lower social media use. Adolescents who are not excessively engaged with social media may have fewer opportunities for exposure to cyberbullying and may also have healthier coping strategies in place to deal with any instances of online victimisation 43, 85, 86.

The results suggest that professionals in the fields of education, counselling, and healthcare should prioritise addressing the issue of cyberbullying victimisation when assessing the physical and psychological health of adolescents. Evidently, adolescents who experience cyberbullying require support. Thus, proactive measures are essential, and support could be provided by multiple professional communities that serve adolescents and young people in society, such as educational, behavioural health, and medical professionals. Sensitive inquiry regarding cyberbullying experiences is necessary when addressing adolescent health issues such as depression, substance use, suicidal ideation, and somatic concerns 19 . Our findings underscore the need for comprehensive, school-based programs focused on cyberbullying victimisation prevention and intervention.

Strengths and limitations

The study's main strength lies in the use of a large sample representing multiple high-income countries, which improved the representativeness and reliability of our findings. The comprehensive analytic approach helps advance our understanding of the interrelationships between cyberbullying victimisation, PSMU, and psychosomatic complaints among adolescents. However, the study has limitations. First, the cross-sectional design does not allow inferences about directionality or causality. Second, retrospective self-reporting of the key study variables could introduce recall and social desirability biases. Third, residual and unobserved confounding, despite adjustment for several covariates, is a further limitation. Further research is needed to confirm these findings and better understand how PSMU mediates the relationship between cyberbullying victimisation and psychosomatic complaints.

Conclusions

This study has provided essential insights into the interrelationships between cyberbullying victimisation, PSMU, and psychosomatic complaints among adolescents in high-income countries. The findings suggest that cyberbullying victimisation is directly associated with psychosomatic complaints and that PSMU significantly, though partially, mediates this association. This study also highlights the importance of addressing cyberbullying victimisation and its negative impact on adolescent health and emphasises the need to address PSMU. Overall, the study underscores the importance of promoting healthy online behaviour and providing appropriate support for adolescents who experience cyberbullying victimisation. Further studies would benefit from longitudinal data to confirm our findings.

Data availability

The data that support the findings of this study are available from the HBSC Data Management Centre, but restrictions apply to the availability of these data, which were used under licence for the current study and so are not publicly available. Data are, however, available from the corresponding author ([email protected]) upon reasonable request and with the permission of the HBSC network.

Lyyra, N., Välimaa, R. & Tynjälä, J. Loneliness and subjective health complaints among school-aged children. Scand. J. Public Health 46 (20), 87–93. https://doi.org/10.1177/1403494817743901 (2018).

Ottova, V. et al. The role of individual- and macro-level social determinants on young adolescents’ psychosomatic complaints. J. Early Adolesc. 32 (1), 126–158. https://doi.org/10.1177/0272431611419510 (2012).

Heinz, A., Catunda, C., van Duin, C. & Willems, H. Suicide prevention: Using the number of health complaints as an indirect alternative for screening suicidal adolescents. J. Affect. Disord. 260 , 61–66. https://doi.org/10.1016/j.jad.2019.08.025 (2020).

Högberg, B., Strandh, M. & Hagquist, C. Gender and secular trends in adolescent mental health over 24 years—the role of school-related stress. Soc. Sci. Med. 250, 112890. https://doi.org/10.1016/j.socscimed.2020.112890 (2020).

Hagquist, C., Due, P., Torsheim, T. & Välimaa, R. Cross-country comparisons of trends in adolescent psychosomatic symptoms—a Rasch analysis of HBSC data from four Nordic countries. Health Qual. Life Outcomes 17 (1), 1–13. https://doi.org/10.1186/s12955-019-1097-x (2019).

Shorey, S., Ng, E. D. & Wong, C. H. Global prevalence of depression and elevated depressive symptoms among adolescents: A systematic review and meta-analysis. Br. J. Clin. Psychol. 61 (2), 287–305. https://doi.org/10.1111/bjc.12333 (2022).

Potrebny, T. et al. Health complaints among adolescents in Norway: A twenty-year perspective on trends. PloS one 14 (1), e0210509. https://doi.org/10.1371/journal.pone.0210509 (2019).

van Geelen, S. M. & Hagquist, C. Are the time trends in adolescent psychosomatic problems related to functional impairment in daily life? A 23-year study among 20,000 15–16 year olds in Sweden. J. Psychosom. Res. 87, 50–56. https://doi.org/10.1016/j.jpsychores.2016.06.003 (2016).

Swedish Association of Local Authorities and Regions and Ministry of Health and Social Affairs. Insatser inom området psykisk hälsa och suicidprevention. Överenskommelse mellan staten och Sveriges Kommuner och Regioner (SKR). Swedish Association of Local Authorities and Regions and Ministry of Health and Social Affairs. Stokholm, Sweden: 2021–2022. https://skr.se/skr/halsasjukvard/utvecklingavverksamhet/psykiskhalsa/overenskommelsepsykiskhalsa.234.html (2022).

Brooks, S. J., Feldman, I., Schiöth, H. B. & Titova, O. E. Important gender differences in psychosomatic and school-related complaints in relation to adolescent weight status. Sci. Rep. 11 (1), 14147. https://doi.org/10.1038/s41598-021-93761-0 (2021).

Whitehead, R. et al. Trends in adolescent overweight perception and its association with psychosomatic health 2002–2014: Evidence from 33 countries. J. Adol. Health 60 (2), 204–211. https://doi.org/10.1016/j.jadohealth.2016.09.029 (2017).

Nilsen, W., Karevold, E., Røysamb, E., Gustavson, K. & Mathiesen, K. S. Social skills and depressive symptoms across adolescence: Social support as a mediator in girls versus boys. J. Adol. 36 (1), 11–20. https://doi.org/10.1016/j.adolescence.2012.08.005 (2013).

Englander, E., Donnerstein, E., Kowalski, R., Lin, C. A. & Parti, K. Defining cyberbullying. Pediatrics 140 (S2), 148–151. https://doi.org/10.1542/peds.2016-1758U (2017).

Chan, H. C. O. & Wong, D. S. Traditional school bullying and cyberbullying in Chinese societies: Prevalence and a review of the whole-school intervention approach. Aggress. Viol. Behav. 23 , 98–108. https://doi.org/10.1016/j.avb.2015.05.010 (2015).

Griffiths, M. D., Kuss, D. J. & Demetrovics, Z. Social networking addiction: An overview of preliminary findings. Behav Addict. 2014 , 119–141. https://doi.org/10.1016/B978-0-12-407724-9.00006-9 (2014).

Athanasiou, K. et al. Cross-national aspects of cyberbullying victimization among 14–17-year-old adolescents across seven European countries. BMC Public Health 18 , 1–15. https://doi.org/10.1186/s12889-018-5682-4 (2018).

Nagata, J. M. et al. Cyberbullying and Sleep Disturbance among Early Adolescents in the US. Acad. Pediatr. 23 (6), 1220–1225. https://doi.org/10.1016/j.acap.2022.12.007 (2022).

Fahy, A. E. et al. Longitudinal associations between cyberbullying involvement and adolescent mental health. J. Adol. Health 59 (5), 502–509. https://doi.org/10.1016/j.jadohealth.2016.06.006 (2016).

Zych, I., Ortega-Ruiz, R. & Del Rey, R. Systematic review of theoretical studies on bullying and cyberbullying: Facts, knowledge, prevention, and intervention. Aggress. Viol. Behav. 23 , 1–21. https://doi.org/10.1016/j.avb.2015.10.001 (2015).

Kowalski, R. M. & Limber, S. P. Psychological, physical, and academic correlates of cyberbullying and traditional bullying. J. Adol. Health 53 (1), S13–S20. https://doi.org/10.1016/j.jadohealth.2012.09.018 (2013).

Van Geel, M., Vedder, P. & Tanilon, J. Relationship between peer victimization, cyberbullying, and suicide in children and adolescents: A meta-analysis. JAMA Pediatr. 168 (5), 435–442. https://doi.org/10.1001/jamapediatrics.2013.4143 (2014).

Albdour, M., Hong, J. S., Lewin, L. & Yarandi, H. The impact of cyberbullying on physical and psychological health of Arab American adolescents. J. Immig. Minor. Health 21 , 706–715. https://doi.org/10.1007/s10903-018-00850-w (2019).

Yoon, Y. et al. Association of cyberbullying involvement with subsequent substance use among adolescents. J. Adol. Health 65 (5), 613–620. https://doi.org/10.1016/j.jadohealth.2019.05.006 (2019).

Yuchang, J., Junyi, L., Junxiu, A., Jing, W. & Mingcheng, H. The differential victimization associated with depression and anxiety in cross-cultural perspective: A meta-analysis. Trauma Viol. Abuse 20 (4), 560–573. https://doi.org/10.1177/1524838017726426 (2019).

Gini, G. & Espelage, D. L. Peer victimization, cyberbullying, and suicide risk in children and adolescents. Jama 312 (5), 545–546. https://doi.org/10.1001/jama.2014.3212 (2014).

Tullett-Prado, D., Doley, J. R., Zarate, D., Gomez, R. & Stavropoulos, V. Conceptualising social media addiction: A longitudinal network analysis of social media addiction symptoms and their relationships with psychological distress in a community sample of adults. BMC Psychiatry 23 (1), 1–27. https://doi.org/10.1186/s12888-023-04985-5 (2023).

Keles, B., McCrae, N. & Grealish, A. A systematic review: The influence of social media on depression, anxiety and psychological distress in adolescents. Int. J. Adol Youth 25 (1), 79–93. https://doi.org/10.1080/02673843.2019.1590851 (2020).

O’reilly, M. et al. Is social media bad for mental health and wellbeing? Exploring the perspectives of adolescents. Clin. Child Psychol. Psych. 23 (4), 601–613. https://doi.org/10.1177/1359104518775154 (2018).

Marino, C., Gini, G., Angelini, F., Vieno, A. & Spada, M. M. Social norms and e-motions in problematic social media use among adolescents. Addict. Behav. Rep. 11 , 100250. https://doi.org/10.1016/j.abrep.2020.100250 (2020).

Sedgwick, R., Epstein, S., Dutta, R. & Ougrin, D. Social media, internet use and suicide attempts in adolescents. Curr. Opin. Psychol. 32 (6), 534. https://doi.org/10.1097/YCO.0000000000000547 (2019).

Marino, C., Hirst, C. M., Murray, C., Vieno, A. & Spada, M. M. Positive mental health as a predictor of problematic internet and Facebook use in adolescents and young adults. J. Happ. Stud. 19 , 2009–2022. https://doi.org/10.1007/s10902-017-9908-4 (2018).

Sarmiento, I. G. et al. How does social media use relate to adolescents’ internalizing symptoms? Conclusions from a systematic narrative review. Adol. Res. Rev. 5 , 381–404. https://doi.org/10.1007/s40894-018-0095-2 (2020).

Hanprathet, N., Manwong, M., Khumsri, J. M. S., Yingyeun, R. & Phanasathit, M. Facebook addiction and its relationship with mental health among Thai high school students. J. Med. Assoc. Thailand 98 , 81–90 (2015).

Cerutti, R. et al. Sleep disturbances partially mediate the association between problematic internet use and somatic symptomatology in adolescence. Curr. Psychol. 40 , 4581–4589. https://doi.org/10.1007/s12144-019-00414-7 (2021).

Van Den Eijnden, R., Koning, I., Doornwaard, S., Van Gurp, F. & Ter Bogt, T. The impact of heavy and disordered use of games and social media on adolescents’ psychological, social, and school functioning. J. Behav. Addict. 7 (3), 697–706. https://doi.org/10.1556/2006.7.2018.65 (2018).

Andreassen, C. S. & Pallesen, S. Social network site addiction—an overview. Curr. Pharm. Des. 20 (25), 4053–4061. https://doi.org/10.2174/13816128113199990616 (2014).

Andreassen, C. S. Online social network site addiction: A comprehensive review. Curr. Addict. Rep. 2 (2), 175–184. https://doi.org/10.1007/s40429-015-0056-9 (2015).

Best, P., Manktelow, R. & Taylor, B. Online communication, social media and adolescent wellbeing: A systematic narrative review. Child. Youth Serv. Rev. 41 , 27–36. https://doi.org/10.1016/j.childyouth.2014.03.001 (2014).

Boer, M. et al. Adolescents’ intense and problematic social media use and their well-being in 29 countries. J. Adol. Health 66 (6), S89–S99. https://doi.org/10.1016/j.jadohealth.2020.02.014 (2020).

Inchley, J. et al . Adolescent alcohol-related behaviours: Trends and inequalities in the WHO European Region, 2002–2014: Observations from the Health Behaviour in School-aged Children (HBSC) WHO collaborative cross-national study. World Health Organization. Regional Office for Europe (2018). https://apps.who.int/iris/handle/10665/342239 .

Moor, I. et al. The 2017/18 Health Behaviour in School-aged Children (HBSC) study–methodology of the World Health Organization’s child and adolescent health study. J. Health Monitor. 5 (3), 88. https://doi.org/10.25646/6904 (2020).

Nardone, P. et al. Dietary habits among Italian adolescents and their relation to socio-demographic characteristics. Ann. Istit. Super. Sanita 56 (4), 504–513. https://doi.org/10.4415/ANN_20_04_15 (2020).

Craig, W. et al. Social media use and cyber-bullying: A cross-national analysis of young people in 42 countries. J. Adol. Health 66 (6), S100–S108. https://doi.org/10.1016/j.jadohealth.2020.03.006 (2020).

Moor, I. et al. The 2017/18 Health Behaviour in School-aged Children (HBSC) study–methodology of the World Health Organization’s child and adolescent health study. J. Health Monitor. 5 (3), 88 (2020).

Inchley, J., Currie, D., Cosma, A. & Samdal, O. Health Behaviour in School-Aged Children (HBSC) Study Protocol: Background, Methodology and Mandatory Items for the 2017/18 Survey ; CAHRU: St Andrews, UK (2018).

Haugland, S. & Wold, B. Subjective health complaints in adolescence—reliability and validity of survey methods. J. Adol. 24 (5), 611–624. https://doi.org/10.1006/jado.2000.0393 (2001).

Khan, A., Khan, S. R. & Lee, E. Y. Association between lifestyle behaviours and mental health of adolescents: Evidence from the Canadian HBSC Surveys, 2002–2014. Int. J. Environ. Res. Public Health 19 (11), 6899. https://doi.org/10.3390/ijerph19116899 (2022).

Högberg, B., Strandh, M., Johansson, K. & Petersen, S. Trends in adolescent psychosomatic complaints: A quantile regression analysis of Swedish HBSC data 1985–2017. Scand. J. Public Health 2022 , 21094497. https://doi.org/10.1177/14034948221094497 (2022).

Bjereld, Y., Augustine, L., Turner, R., Löfstedt, P. & Ng, K. The association between self-reported psychosomatic complaints and bullying victimisation and disability among adolescents in Finland and Sweden. Scand. J. Public Health 2022 , 1089769. https://doi.org/10.1177/14034948221089769 (2022).

Heinz, A., van Duin, C., Kern, M. R., Catunda, C. & Willems, H. Trends from 2006–2018 in Health, Health Behaviour, Health Outcomes and Social Context of Adolescents in Luxembourg. University of Luxembourg (2020). http://hdl.handle.net/10993/42571.

Gariepy, G., McKinnon, B., Sentenac, M. & Elgar, F. J. Validity and reliability of a brief symptom checklist to measure psychological health in school-aged children. Child Indic. Res. 9 , 471–484. https://doi.org/10.1007/s12187-015-9326-2 (2016).

Biswas, T. et al. Variation in the prevalence of different forms of bullying victimisation among adolescents and their associations with family, peer and school connectedness: A population-based study in 40 lower and middle income to high-income countries (LMIC-HICs). J. Child. Adol. Trauma 2022 , 1–11. https://doi.org/10.1007/s40653-022-00451-8 (2022).

Sasson, H., Tur-Sinai, A., Dvir, K. & Harel-Fisch, Y. The role of parents and peers in cyberbullying perpetration: Comparison among Arab and Jewish and youth in Israel. Child Indic. Res. 2022 , 1–21. https://doi.org/10.1007/s12187-022-09986-6 (2022).

Marengo, N. et al. Cyberbullying and problematic social media use: An insight into the positive role of social support in adolescents—data from the Health Behaviour in School-aged Children study in Italy. Public Health 199 , 46–50. https://doi.org/10.1016/j.puhe.2021.08.010 (2021).

Van den Eijnden, R. J. J. M., Lemmens, J. & Valkenburg, P. The social media disorder scale: Validity and psychometric properties. Comp. Hum. Behav. 61, 478–487. https://doi.org/10.1016/j.chb.2016.03.038 (2016).

Borraccino, A. et al. Problematic social media use and cyber aggression in Italian adolescents: The remarkable role of social support. Int. J. Environ. Res. Public Health 19 (15), 9763. https://doi.org/10.3390/ijerph19159763 (2022).

Hamre, R., Smith, O. R. F., Samdal, O. & Haug, E. Gaming behaviors and the association with sleep duration, social jetlag, and difficulties falling asleep among Norwegian adolescents. Int. J. Environ. Res. Public Health 19 (3), 1765. https://doi.org/10.3390/ijerph19031765 (2022).

Currie, C. et al. Researching health inequalities in adolescents: The development of the Health Behaviour in School-Aged Children (HBSC) family affluence scale. Soc. Sci Med. 66 (6), 1429–1436. https://doi.org/10.1016/j.socscimed.2007.11.024 (2008).

Zimet, G. D., Powell, S. S., Farley, G. K., Werkman, S. & Berkoff, K. A. Psychometric characteristics of the multidimensional scale of perceived social support. J. Person. Assess. 55 (3–4), 610–617. https://doi.org/10.1080/00223891.1990.9674095 (1990).

MacKinnon, D. P., Lockwood, C. M., Brown, C. H., Wang, W. & Hoffman, J. M. The intermediate endpoint effect in logistic and probit regression. Clin. Trial 4 (5), 499–513. https://doi.org/10.1177/1740774507083434 (2007).

Rijnhart, J. J., Valente, M. J., Smyth, H. L. & MacKinnon, D. P. Statistical mediation analysis for models with a binary mediator and a binary outcome: The differences between causal and traditional mediation analysis. Prevent. Sci. 2021 , 1–11. https://doi.org/10.1007/s11121-021-01308-6 (2021).

Tingley, D., Yamamoto, T., Hirose, K., Keele, L. & Imai, K. Package ‘mediation’. Computer software manual, 175–184 (2019).

DiCiccio, T. J. & Efron, B. Bootstrap confidence intervals. Stat. Sci. 11 (3), 189–228. https://doi.org/10.1214/ss/1032280214 (1996).

Preacher, K. J. & Hayes, A. F. Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behav. Res. Method 40 (3), 879–891. https://doi.org/10.3758/BRM.40.3.879 (2008).

Tomşa, R., Jenaro, C., Campbell, M. & Neacşu, D. Student’s experiences with traditional bullying and cyberbullying: Findings from a Romanian sample. Procedia-Soc. Behav. Sci. 78 , 586–590. https://doi.org/10.1016/j.sbspro.2013.04.356 (2013).

Baier, D., Hong, J. S., Kliem, S. & Bergmann, M. C. Consequences of bullying on adolescents’ mental health in Germany: Comparing face-to-face bullying and cyberbullying. J. Child Fam. Stud. 28 , 2347–2357. https://doi.org/10.1007/s10826-018-1181-6 (2019).

Beckman, L., Hagquist, C. & Hellström, L. Does the association with psychosomatic health problems differ between cyberbullying and traditional bullying?. Emot. Behav. Differ. 17 (3–4), 421–434. https://doi.org/10.1080/13632752.2012.704228 (2012).

Lazuras, L., Barkoukis, V. & Tsorbatzoudis, H. Face-to-face bullying and cyberbullying in adolescents: Trans-contextual effects and role overlap. Tech. Soc. 48 , 97–101. https://doi.org/10.1016/j.techsoc.2016.12.001 (2017).

Li, J., Sidibe, A. M., Shen, X. & Hesketh, T. Incidence, risk factors and psychosomatic symptoms for traditional bullying and cyberbullying in Chinese adolescents. Child. Youth Serv. Rev. 107 , 104511. https://doi.org/10.1016/j.childyouth.2019.104511 (2019).

Nixon, C. L. Current perspectives: The impact of cyberbullying on adolescent health. Adol. Health Med. Therapy 2014 , 143–158. https://doi.org/10.2147/AHMT.S36456 (2014).

Olenik-Shemesh, D., Heiman, T. & Eden, S. Cyberbullying victimisation in adolescence: Relationships with loneliness and depressive mood. Emot. Behav. Differ. 17 (3–4), 361–374. https://doi.org/10.1080/13632752.2012.704227 (2012).

Sourander, A. et al. Psychosocial risk factors associated with cyberbullying among adolescents: A population-based study. Arch. Gener. Psychiatry 67 (7), 720–728. https://doi.org/10.1001/archgenpsychiatry.2010.79 (2010).

Claudia, M. et al. Problematic social media use: Associations with health complaints among adolescents. Ann. Istit. Super. Sanità 56 (4), 514–521. https://doi.org/10.4415/ANN_20_04_16 (2020).

Hellfeldt, K., López-Romero, L. & Andershed, H. Cyberbullying and psychological well-being in young adolescence: The potential protective mediation effects of social support from family, friends, and teachers. Int. J. Environ. Res. Public Health 17 (1), 45. https://doi.org/10.3390/ijerph17010045 (2020).

Gini, G. & Pozzoli, T. Bullied children and psychosomatic problems: A meta-analysis. Pediatrics 132 (4), 720–729. https://doi.org/10.1542/peds.2013-0614 (2013).

Landstedt, E. & Persson, S. Bullying, cyberbullying, and mental health in young people. Scand. J. Public Health 42 (4), 393–399. https://doi.org/10.1177/1403494814525 (2014).

Schneider, S. K., Odonnell, L., Stueve, A. & Coulter, R. W. Cyberbullying, school bullying, and psychological distress: A regional census of high school students. Am. J. Public Health 102 (1), 171–177. https://doi.org/10.2105/AJPH.2011.300308 (2012).

Brighi, A., Guarini, A., Melotti, G., Galli, S. & Genta, M. L. Predictors of victimisation across direct bullying, indirect bullying and cyberbullying. Emot. Behav. Differ. 17 (3–4), 375–388. https://doi.org/10.1080/13632752.2012.704684 (2012).

Cowie, H. Cyberbullying and its impact on young people’s emotional health and well-being. The Psychia 37 (5), 167–170. https://doi.org/10.1192/pb.bp.112.040840 (2013).

Berryman, C., Ferguson, C. J. & Negy, C. Social media use and mental health among young adults. Psych. Q. 89 , 307–314. https://doi.org/10.1007/s11126-017-9535-6 (2018).

Verduyn, P., Ybarra, O., Résibois, M., Jonides, J. & Kross, E. Do social network sites enhance or undermine subjective well-being? A critical review. Soc. Issue Policy Rev. 11 (1), 274–302. https://doi.org/10.1542/peds.2007-0693 (2017).

Vogel, E. A., Rose, J. P., Okdie, B. M., Eckles, K. & Franz, B. Who compares and despairs? The effect of social comparison orientation on social media use and its outcomes. Person. Individ. Differ. 86 , 249–256. https://doi.org/10.1016/j.paid.2015.06.026 (2015).

Keles, B., McCrae, N. & Grealish, A. A systematic review: The influence of social media on depression, anxiety and psychological distress in adolescents. Int. J. Adol. Youth 25 (1), 79–93. https://doi.org/10.1080/02673843.2019.1590851 (2020).

Boer, M. et al. Adolescents’ intense and problematic social media use and their well-being in 29 countries. J. Adol. Health 66 (6), 89–99. https://doi.org/10.1016/j.jadohealth.2020.02.014 (2020).

McHugh, B. C., Wisniewski, P., Rosson, M. B. & Carroll, J. M. When social media traumatizes teens: The roles of online risk exposure, coping, and post-traumatic stress. Int. Res. 28 (5), 1169–1188. https://doi.org/10.1108/IntR-02-2017-0077 (2018).

Trnka, R., Martínková, Z. & Tavel, P. An integrative review of coping related to problematic computer use in adolescence. Int. J. Public Health 61 , 317–327. https://doi.org/10.1007/s00038-015-0693-8 (2016).

Chen, L., Ho, S. S. & Lwin, M. O. A meta-analysis of factors predicting cyberbullying perpetration and victimization: From the social cognitive and media effects approach. New Media Soc. 19 (8), 1194–1213. https://doi.org/10.1177/1461444816634037 (2017).


Acknowledgements

We thank the 2017/2018 HBSC survey team/network, the coordinator and the Data Bank Manager for granting us access to the datasets. We duly acknowledge all school children who participated in the surveys.

Author information

Authors and affiliations

Social Policy Research Centre, University of New South Wales, Sydney, Australia

Prince Peprah

Centre for Primary Health Care and Equity, University of New South Wales, Sydney, Australia

Pfizer Research and Development, PSSM Data Sciences, Pfizer, Inc., Connecticut, USA

Michael Safo Oduro

Centre for Disability and Rehabilitation Studies, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana

Godfred Atta-Osei

Centre for Social Research in Health, University of New South Wales, Sydney, Australia

Isaac Yeboah Addo

Concord Clinical School, University of Sydney, Sydney, Australia

Department of Planning, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana

Anthony Kwame Morgan

African Population and Health Research Center, Nairobi, Kenya

Razak M. Gyasi

National Centre for Naturopathic Medicine, Faculty of Health, Southern Cross University, Lismore, NSW, Australia


Contributions

PP: conceptualization, methodology and writing; MOS: conceptualization, statistical methodology and analysis, and writing; GA-O: conceptualization and writing; IA: methodology and writing; RMG: conceptualization and writing; AKM: writing. All the authors reviewed and agreed to the publication of this paper.

Corresponding author

Correspondence to Anthony Kwame Morgan .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Peprah, P., Oduro, M.S., Atta-Osei, G. et al. Problematic social media use mediates the effect of cyberbullying victimisation on psychosomatic complaints in adolescents. Sci Rep 14, 9773 (2024). https://doi.org/10.1038/s41598-024-59509-2

Received: 27 November 2023

Accepted: 11 April 2024

Published: 29 April 2024

DOI: https://doi.org/10.1038/s41598-024-59509-2


  • Adolescent well-being
  • Psychosomatic complaints
  • Cyberbullying victimisation
  • Sleep quality
  • Social media use


Cyberbullying detection and machine learning: a systematic literature review

  • Published: 24 July 2023
  • Volume 56, pages 1375–1416 (2023)


  • Vimala Balakrisnan 1 &
  • Mohammed Kaity 1  


The rise in research focusing on the detection of cyberbullying incidents on social media platforms reflects how dire the consequences of cyberbullying are, regardless of age, gender or location. This paper examines scholarly publications (2011–2022) on cyberbullying detection using machine learning through a systematic literature review approach. Specifically, articles were sought from six academic databases (Web of Science, ScienceDirect, IEEE Xplore, Association for Computing Machinery, Scopus, and Google Scholar), resulting in the identification of 4126 articles. A redundancy check followed by eligibility screening and quality assessment resulted in 68 articles being included in this review. The review focused on three key aspects, namely the machine learning algorithms used to detect cyberbullying, the features used, and the performance measures, further supported by classification roles, language of study, data source and type of media. The findings are discussed, and research challenges and future directions are provided for researchers to explore.



Author information

Authors and affiliations.

Faculty of Computer Science and Information Systems, Universiti Malaya, Kuala Lumpur, 50603, Malaysia

Vimala Balakrisnan & Mohammed Kaity


Contributions

VB wrote the original draft, performed the analysis, and revised the article; MK performed data collection, performed the analysis, and revised the article. All authors reviewed the manuscript.

Corresponding author

Correspondence to Vimala Balakrisnan .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Balakrisnan, V., Kaity, M. Cyberbullying detection and machine learning: a systematic literature review. Artif Intell Rev 56 (Suppl 1), 1375–1416 (2023). https://doi.org/10.1007/s10462-023-10553-w


Accepted : 08 July 2023

Published : 24 July 2023

Issue Date : October 2023

DOI : https://doi.org/10.1007/s10462-023-10553-w


Keywords

  • Cyberbullying
  • Machine learning
  • Systematic literature review


Open Access

Peer-reviewed

Research Article

Automatic detection of cyberbullying in social media text

Contributed equally to this work with: Cynthia Van Hee, Gilles Jacobs

* E-mail: [email protected]

Affiliation Department of Translation, Interpreting and Communication - Faculty of Arts and Philosophy, Ghent University, Ghent, Belgium

Affiliation Department of Linguistics - Faculty of Arts, University of Antwerp, Antwerp, Belgium

  • Cynthia Van Hee, 
  • Gilles Jacobs, 
  • Chris Emmery, 
  • Bart Desmet, 
  • Els Lefever, 
  • Ben Verhoeven, 
  • Guy De Pauw, 
  • Walter Daelemans, 
  • Véronique Hoste


  • Published: October 8, 2018
  • https://doi.org/10.1371/journal.pone.0203794


While social media offer great communication opportunities, they also increase the vulnerability of young people to threatening situations online. Recent studies report that cyberbullying constitutes a growing problem among youngsters. Successful prevention depends on the adequate detection of potentially harmful messages, and the information overload on the Web requires intelligent systems to identify potential risks automatically. The focus of this paper is on automatic cyberbullying detection in social media text by modelling posts written by bullies, victims, and bystanders of online bullying. We describe the collection and fine-grained annotation of a cyberbullying corpus for English and Dutch and perform a series of binary classification experiments to determine the feasibility of automatic cyberbullying detection. We make use of linear support vector machines exploiting a rich feature set and investigate which information sources contribute the most for the task. Experiments on a hold-out test set reveal promising results for the detection of cyberbullying-related posts. After optimisation of the hyperparameters, the classifier yields an F1 score of 64% and 61% for English and Dutch respectively, and considerably outperforms baseline systems.

Citation: Van Hee C, Jacobs G, Emmery C, Desmet B, Lefever E, Verhoeven B, et al. (2018) Automatic detection of cyberbullying in social media text. PLoS ONE 13(10): e0203794. https://doi.org/10.1371/journal.pone.0203794

Editor: Hussein Suleman, University of Cape Town, SOUTH AFRICA

Received: February 6, 2017; Accepted: August 28, 2018; Published: October 8, 2018

Copyright: © 2018 Van Hee et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Because the actual posts in our corpus could contain names or other identifying information, we cannot share them publicly in a repository. They can, however be obtained upon request, for academic purposes solely and via [email protected] or [email protected] . The replication data are available through the Open Science Framework repository https://osf.io/rgqw8/ with DOI 10.17605/OSF.IO/RGQW8 . This replication dataset allows interested researchers to download 1) the feature vectors of the corpus underlying the experiments described in this paper, 2) the indices corresponding to instances that were kept separately to test the experimental design (referred to as the "hold-out test set" in the paper), 3) a feature mapping dictionary that allows to trace all indices in the feature vector files back to the corresponding feature types (e.g. the feature indices 0 to 14,230 represent word 3-gram features). We also share the seed terms that were used to construct the corpora for our topic model features. Lastly, we provide an Excel spreadsheet presenting a results overview of all the tested systems. All of this information is made available for both the Dutch and English experiments.

Funding: The work presented in this paper was carried out in the framework of the AMiCA IWT SBO-project 120007 project to WD and VH, funded by the government Flanders Innovation & Entrepreneurship (VLAIO) agency; http://www.vlaio.be . The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Web 2.0 has had a substantial impact on communication and relationships in today’s society. Children and teenagers go online more frequently, at younger ages, and in more diverse ways (e.g. smartphones, laptops and tablets). Although most of teenagers’ Internet use is harmless and the benefits of digital communication are evident, the freedom and anonymity experienced online make young people vulnerable, with cyberbullying being one of the major threats [ 1 , 2 ].

Bullying is not a new phenomenon, and cyberbullying manifested itself as soon as digital technologies became primary communication tools. On the positive side, social media like blogs, social networking sites (e.g. Facebook), and instant messaging platforms (e.g. WhatsApp) make it possible to communicate with anyone and at any time. Moreover, they are a place where people engage in social interaction, offering the possibility to establish new relationships and maintain existing friendships [ 3 , 4 ]. On the negative side, however, social media increase the risk of children being confronted with threatening situations including grooming or sexually transgressive behaviour, signals of depression and suicidal thoughts, and cyberbullying. Users are reachable 24/7 and are often able to remain anonymous if desired: this makes social media a convenient way for bullies to target their victims outside the school yard.

With regard to cyberbullying, a number of national and international initiatives have been launched over the past few years to increase children’s online safety. Examples include KiVa ( http://www.kivaprogram.net/ ), a Finnish cyberbullying prevention programme, the ‘ Non au harcèlement ’ campaign in France, Belgian governmental initiatives and helplines (e.g. clicksafe.be, veiligonline.be, mediawijs.be ) that provide information about online safety, and so on.

In spite of these efforts, a lot of undesirable and hurtful content remains online. [ 2 ] analysed a body of quantitative research on cyberbullying and observed cybervictimisation rates among teenagers between 20% and 40%. [ 5 ] focused on 12 to 17 year olds living in the United States and found that no less than 72% of them had encountered cyberbullying at least once within the year preceding the questionnaire. [ 6 ] surveyed 9 to 26 year olds in the United States, Canada, the United Kingdom and Australia, and found that 29% of the respondents had ever been victimised online. A study among 2,000 Flemish secondary school students (age 12 to 18) revealed that 11% of them had been bullied online at least once in the six months preceding the survey [ 7 ]. Finally, the 2014 large-scale EU Kids Online Report [ 8 ] reported that 20% of 11 to 16 year olds had been exposed to hate messages online. In addition, youngsters were 12% more likely to be exposed to cyberbullying as compared to 2010, which clearly demonstrates that cyberbullying is a growing problem.

The prevalence of cybervictimisation depends on the conceptualisation used in describing cyberbullying, but also on research variables such as location and the number and age span of the participants. Nevertheless, the above studies demonstrate that online platforms are increasingly used for bullying, which is a cause for concern given its impact. As shown by [ 9 – 11 ], cyberbullying may negatively impact the victim’s self-esteem, academic achievement and emotional well-being. [ 12 ] found that self-reported effects of cyberbullying include negative effects on school grades and feelings of sadness, anger, fear, and depression. In extreme cases, cyberbullying could even lead to self-harm and suicidal thoughts.

These findings demonstrate that cyberbullying is a serious problem the consequences of which can be dramatic. Early detection of cyberbullying attempts is therefore of key importance to youngsters’ mental well-being. Successful detection depends on effective monitoring of online content, but the amount of information on the Web makes it practically unfeasible for moderators to monitor all user-generated content manually. To tackle this problem, intelligent systems are required that process this information in a fast way and automatically signal potential threats. This way, moderators can respond quickly and prevent threatening situations from escalating. According to recent research, teenagers are generally in favour of such automatic monitoring, provided that effective follow-up strategies are formulated, and that privacy and autonomy are guaranteed [ 13 ].

Parental control tools (e.g. NetNanny , https://www.netnanny.com/ ) already block unsuitable or undesirable content, and some social networks make use of keyword-based moderation tools (i.e. using lists of profane and insulting words to flag harmful content). However, such approaches typically fail to detect implicit and subtle forms of cyberbullying in which no explicit vocabulary is used. This creates the need for intelligent and self-learning systems that go beyond keyword spotting and hence improve the recall of cyberbullying detection.
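
To make this limitation concrete, the sketch below shows a keyword-spotting baseline of the kind described above; the lexicon, function name and example posts are invented for illustration and do not come from any real moderation tool.

```python
# Minimal keyword-spotting baseline (illustrative only; the lexicon is a placeholder).
PROFANITY_LEXICON = {"idiot", "loser", "ugly", "stupid"}

def keyword_flag(post: str) -> bool:
    """Flag a post if it contains any lexicon term."""
    tokens = {token.strip(".,!?").lower() for token in post.split()}
    return bool(tokens & PROFANITY_LEXICON)

print(keyword_flag("you are such a loser"))          # True: explicit insult is caught
print(keyword_flag("nobody would miss you anyway"))  # False: implicit bullying slips through
```

The second example is exactly the kind of implicit post that motivates the learning-based approach explored in this paper.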

The ultimate goal of this type of research is to develop models that could improve manual monitoring for cyberbullying on social networks. We explore the automatic detection of textual signals of cyberbullying, in which cyberbullying is approached as a complex phenomenon that can be realised in various ways (see the Annotation guidelines section for a detailed overview). While the vast majority of the related research focuses on detecting cyberbullying ‘attacks’ (i.e. verbal aggression), the present study takes different types of cyberbullying into account, including more implicit posts from the bully, but also posts written by victims and bystanders. This is a more inclusive conceptualisation for the task of cyberbullying detection and should aid in moderation and prevention efforts by capturing different and more implicit signals of bullying.

To tackle this problem, we propose a machine learning method based on a linear SVM classifier [ 14 , 15 ] exploiting a rich feature set. The contribution we make is twofold: first, we develop a complex classifier to detect signals of cyberbullying, which allows us to detect different types of cyberbullying that are related to different social roles involved in a cyberbullying event. Second, we demonstrate that the methodology is easily portable to other languages, provided there is annotated data available, by performing experiments on an English and Dutch dataset.
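
As a minimal sketch of such a set-up, and assuming the scikit-learn library, the pipeline below combines tf-idf features with a linear SVM; the toy posts and labels are invented, and the feature set is far simpler than the rich one used in this paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy posts with binary labels (1 = contains a cyberbullying signal, 0 = harmless).
# Under the inclusive conceptualisation described above, a defender's reaction
# also counts as a signal of a bullying event.
posts = [
    "nobody likes you, just leave this site",    # bully post
    "leave her alone, she did nothing wrong",    # bystander-defender post
    "see you at football practice tonight",      # harmless post
    "why do you even bother, you are pathetic",  # bully post
]
labels = [1, 1, 0, 1]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(posts, labels)
print(classifier.predict(["everyone at school hates you"]))
```

Because the pipeline relies only on annotated text, porting it to another language amounts to swapping in training data for that language, which mirrors the second contribution stated above.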

The remainder of this paper is structured as follows: the next section presents a definition of cyberbullying and its participant roles and provides an overview of the state of the art in cyberbullying detection. The Data collection and annotation section describes the corpus construction and annotation. Next, we present the experimental setup and discuss our experimental results for English and Dutch. Finally, the Conclusion and future research section concludes this paper and provides some perspectives for further research.

Related research

Both offline and online bullying are widely covered in the realm of social sciences and psychology, and the increasing number of cyberbullying cases in recent years [ 16 ] has stimulated research efforts to detect cyberbullying automatically. In the following section, we present a definition of cyberbullying and identify its participant roles and we provide a brief overview of automatic approaches to cyberbullying detection.

Cyberbullying definition and participant roles

A common starting point for conceptualising cyberbullying is the set of definitions of traditional (i.e. offline) bullying, one of the most influential being formulated by [ 17 ]. The researcher described bullying based on three main criteria, including i) intention (i.e. a bully intends to inflict harm on the victim), ii) repetition (i.e. bullying acts take place repeatedly over time) and iii) a power imbalance between the bully and the victim (i.e. a more powerful bully attacks a less powerful victim). With respect to cyberbullying, a number of definitions are based on the above criteria. A popular definition is that of [ 18 , p. 376], which describes cyberbullying as “an aggressive, intentional act carried out by a group or individual, using electronic forms of contact, repeatedly and over time, against a victim who cannot easily defend him or herself”. However, opinion on the applicability of the above characteristics to cyberbullying is very much divided [ 19 ], and besides theoretical objections, a number of practical limitations have been observed. Firstly, while [ 17 ] claims intention to be inherent to traditional bullying, this is much harder to ascertain in an online environment. Online conversations lack the signals of a face-to-face interaction like intonation, facial expressions and gestures, which makes them more ambiguous than real-life conversations. The receiver may therefore get the wrong impression that they are being offended or ridiculed [ 20 ]. Another criterion for bullying that might not hold in online situations is the power imbalance between the bully and the victim. This can be evident in real life (e.g. the bully is taller, stronger or older than the victim), but it is hard to conceptualise or measure online, where power may be related to technological skills, anonymity or the inability of the victim to escape from the bullying [ 19 , 21 ]. Also empowering for the bully are inherent characteristics of the Web: once defamatory or confidential information is made public through the Internet, it is hard to remove.

Finally, while arguing that repetition distinguishes bullying from single acts of aggression, [ 17 ] himself states that such a single aggressive action can be considered bullying under certain circumstances. Accordingly, [ 21 ] claim that repetition in cyberbullying is problematic to operationalise, as it is unclear what the consequences are of a single derogatory message on a public page. A single act of aggression or humiliation may cause continued distress and humiliation for the victim if it is shared or liked by a large audience [ 21 ]. [ 22 , p. 26] compare this with the “snowball effect”: one post may be repeated or distributed by other people so that it passes beyond the control of the initial bully and has larger effects than originally intended.

Given these arguments, a number of less ‘strict’ definitions of cyberbullying have been proposed by, among others, [ 2 , 5 , 6 ], in which a power imbalance and repetition are not deemed necessary conditions for cyberbullying.

The above paragraphs demonstrate that defining cyberbullying is far from trivial, and varying prevalence rates (see the Introduction section) confirm that a univocal definition of the phenomenon is still lacking in the literature [ 2 ]. Based on existing conceptualisations, we define cyberbullying as content that is published online by an individual and that is aggressive or hurtful against a victim . Based on this definition, an annotation scheme was developed [ 23 ] to signal textual characteristics of cyberbullying, including posts from bullies, as well as reactions from victims and bystanders.

Cyberbullying research also involves the identification of its participant roles. [ 24 ] were among the first to define the roles in a bullying situation. Based on surveys among teenagers involved in real-life bullying situations, they defined six participant roles: victims (i.e. who are the target of repeated harassment), bullies (i.e. who are the initiative-taking perpetrators), assistants of the bully (i.e. who join in and help the bully), reinforcers of the bully (i.e. who encourage the bully, for instance by laughing or providing an audience), defenders (i.e. who comfort the victim, take their side or try to stop the bullying) and outsiders (i.e. who ignore or distance themselves from the situation). In sum, in addition to the bully and victim, the researchers distinguish four bystanders (i.e. assistants, reinforcers, defenders and outsiders). [ 25 ], however, do not distinguish between reinforcers and assistants of the bully. Their typology includes victims, bullies and three types of bystanders: i) bystanders who participate in the bullying, ii) bystanders who help or support the victim and iii) bystanders who ignore the bullying. The cyberbullying roles that are identified in our annotation scheme are based on existing bullying role typologies, given that traditional bullying roles are applicable to cyberbullying as well [ 26 , 27 ]. More details about the different roles that we take into account are provided in the Data collection and annotation section.

Bystanders and, to a lesser extent, victims are often overlooked in the related research. As a result, these studies can be better characterised as verbal aggression detection concerned with retrieving bully attacks. By taking bystanders into account, we capture different and more subtle signals of a bullying episode. Note that while in this work we did not include classification of the participant roles as such, they are essential to the conceptualisation of the current detection task.

Detecting and preventing cyberbullying

As mentioned earlier, although research on cyberbullying detection is more limited than social studies on the phenomenon, some important advances have been made in recent years. In what follows, we present a brief overview of the most important natural language processing approaches to cyberbullying detection, but we refer to the survey paper by [ 28 ] for a more detailed overview.

Although some studies have investigated the effectiveness of rule-based modelling [ 29 ], the dominant approach to cyberbullying detection involves machine learning. Most machine learning approaches are based on supervised [ 30 , 30 – 32 ] or semi-supervised learning [ 33 ]. The former involves the construction of a classifier based on labelled training data, whereas semi-supervised approaches rely on classifiers that are built from a training corpus containing a small set of labelled and a large set of unlabelled instances. Semi-supervised methods are often used to handle data sparsity, a typical issue in cyberbullying research. As cyberbullying detection essentially involves the distinction between bullying and non-bullying posts, the problem is generally approached as a binary classification task where the positive class is represented by instances containing (textual) cyberbullying, while the negative class is devoid of bullying signals.

A key challenge in cyberbullying research is the availability of suitable data, which is necessary to develop models that characterise cyberbullying. In recent years, only a few datasets have become publicly available for this particular task, such as the training sets provided in the context of the CAW 2.0 workshop ( http://caw2.barcelonamedia.org ), a MySpace ( https://myspace.com ) [ 34 ] and Formspring ( http://www.formspring.me ) cyberbullying corpus annotated with the help of Mechanical Turk [ 29 ], and more recently, the Twitter Bullying Traces dataset [ 35 ]. Many studies have therefore constructed their own corpus from social media websites that are prone to bullying content, such as YouTube [ 30 , 32 ], Twitter [ 36 , 37 ], Instagram [ 38 ], MySpace [ 31 , 34 ], FormSpring [ 29 , 39 ], Kaggle [ 40 ] and ASKfm [ 41 ]. Despite the bottleneck of data availability, cyberbullying detection approaches have been successfully implemented over the past years and the relevance of automatic text analysis techniques to ensure child safety online has been recognised [ 42 ].

Among the first studies on cyberbullying detection are [ 29 – 31 ], who explored the predictive power of n-grams (with and without tf-idf weighting), part-of-speech information (e.g. first and second person pronouns), and sentiment information based on (polarity and profanity) lexicons for this task. Similar features were not only exploited for coarse-grained cyberbullying detection, but also for the detection of more fine-grained cyberbullying categories [ 41 ]. Despite their apparent simplicity, content-based features (i.e. lexical, syntactic and sentiment information) are very often exploited in recent approaches to cyberbullying detection [ 33 , 43 ]. In fact, as observed by [ 28 ], more than 41 papers have approached cyberbullying detection using content-based features, which confirms that this type of information is crucial for the task.
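
As a rough illustration of how such content-based features can be combined, the sketch below joins tf-idf-weighted word n-grams, character n-grams and a profanity-lexicon count into a single feature matrix; it assumes scikit-learn and NumPy, and the lexicon and class names are placeholders rather than the resources used in the cited studies.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.pipeline import FeatureUnion

PROFANITY = {"idiot", "loser", "stupid"}  # placeholder lexicon

class LexiconCounter(BaseEstimator, TransformerMixin):
    """One numeric feature per post: the number of lexicon terms it contains."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([[sum(tok.strip(".,!?").lower() in PROFANITY for tok in post.split())]
                         for post in X])

features = FeatureUnion([
    ("word_ngrams", TfidfVectorizer(ngram_range=(1, 3))),                      # tf-idf weighted word 1-3 grams
    ("char_ngrams", CountVectorizer(analyzer="char_wb", ngram_range=(3, 4))),  # sub-word information
    ("lexicon", LexiconCounter()),                                             # profanity/sentiment signal
])

X = features.fit_transform(["you are an idiot", "good luck with your exam tomorrow"])
print(X.shape)  # (2, number of n-gram features + 1)
```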

More and more, however, content-based features are combined with semantic features derived from topic model information [ 44 ], word embeddings and representation learning [ 43 , 45 ]. More recent studies have also demonstrated the added value of user-based information for the task, more specifically by including users’ activities (i.e. the number of posts) on a social network, their age, gender, location, number of friends and followers, and so on [ 32 , 33 , 46 , 47 ]. A final feature type that gains increasing popularity in cyberbullying detection are network-based features, whose application is motivated by the frequent use of social media data for the task. By using network information, researchers aim to capture social relations between participants in a conversation (e.g. bully versus victim), and other relevant information such as the popularity of a person (i.e. which can indicate the power of a potential bully) on a social network, the number of (historical) interactions between two people, and so on. [ 48 ] for instance used network-based features to take the behavioural history of a potential bully into account. [ 49 ] detected cyberbullying in tweets and included network features inspired by Olweus’ [ 17 ] bullying conditions (see supra). More specifically, they measured the power imbalance between a bully and victim, as well as the bully’s popularity based on interaction graphs and the bully’s position in the network.
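
A hedged sketch of what such network-based features might look like is given below, using the networkx package on a toy interaction log; the usernames and the concrete features are invented for illustration and are not those used in the cited works.

```python
import networkx as nx

# Toy interaction log as (sender, receiver) pairs; names are invented.
interactions = [("anna", "bea"), ("anna", "bea"), ("carl", "bea"), ("bea", "anna")]

G = nx.MultiDiGraph()
G.add_edges_from(interactions)

def network_features(sender: str, receiver: str) -> dict:
    """Graph-derived features of the kind discussed above."""
    return {
        "sender_out_degree": G.out_degree(sender),                   # how active the potential bully is
        "receiver_in_degree": G.in_degree(receiver),                 # how often the target is addressed
        "interaction_history": G.number_of_edges(sender, receiver),  # prior messages from sender to receiver
        "degree_gap": G.degree(sender) - G.degree(receiver),         # crude proxy for a power imbalance
    }

print(network_features("anna", "bea"))
```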

As mentioned earlier, social media are a commonly used genre for this type of tasks. More recently, researchers have investigated cyberbullying detection in multi-modal data offered by specific platforms. For instance [ 38 ] explored cyberbullying detection using multi-modal data extracted from the social network Instagram. More precisely, they combined textual features derived from the posts themselves with user metadata and image features and showed that integrating the latter enhanced the classification performance. [ 37 ] also detected cyberbullying in different data genres, including ASKfm, Twitter, and Instagram. They took role information into account by integrating bully and victim scores as features, based on the occurrence of bully-related keywords in their sent or received posts.

With respect to the datasets used in cyberbullying research, it can be observed that corpora are often composed by keyword search (e.g. [ 43 , 44 ]), which produces a biased dataset of positive (i.e. bullying) instances. To balance these corpora, negative data are often added from a background corpus or data resampling [ 50 ] techniques are adopted [ 33 , 47 ]. For this research, data were randomly crawled across ASKfm and no keyword search was used to collect bullying data. Instead, all instances were manually annotated for the presence of bullying. As a result, our corpus contains a realistic distribution of bullying instances.

When looking at the performance of automatic cyberbullying detection, we see that scores vary greatly and do not only depend on the implemented algorithm and parameter settings, but also on a number of other variables. These include the metrics that are used to evaluate the system (i.e. micro- or macro-averaged F1, precision, recall, AUC, etc.), the corpus genre (i.e. Facebook, Twitter, ASKfm, Instagram) and class distribution (i.e. balanced or unbalanced), the annotation method (i.e. automatic annotations or manual annotations using crowdsourcing or by experts) and, perhaps the most important distinguishing factor, the conceptualisation of cyberbullying that is used. More concretely, while some approaches identify sensitive topics [ 30 ] or insulting language [ 29 ], others propose a more comprehensive approach by capturing different types of cyberbullying [ 41 ] or by modelling the bully-victim communications involved in a cyberbullying incident [ 37 ].
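
To illustrate how strongly the choice of metric shapes the reported scores on unbalanced data, the short sketch below compares positive-class, macro- and micro-averaged F1; it assumes scikit-learn, and the gold labels and predictions are invented.

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Invented gold labels and predictions for an unbalanced corpus (1 = bullying, 0 = not).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

print("positive-class F1:", f1_score(y_true, y_pred))                   # 0.50
print("macro-averaged F1:", f1_score(y_true, y_pred, average="macro"))  # ~0.69
print("micro-averaged F1:", f1_score(y_true, y_pred, average="micro"))  # 0.80, inflated by the majority class
print("precision, recall:", precision_score(y_true, y_pred), recall_score(y_true, y_pred))
```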

The studies discussed in this section demonstrated the variety of approaches that have been used to tackle cyberbullying detection. However, most of them focused on cyberbullying ‘attacks’, or posts written by a bully. Moreover, it is not entirely clear if different forms of cyberbullying were taken into account (e.g. sexual intimidation or harassment, or psychological threats), in addition to derogatory language or insults. In the present study, cyberbullying is considered a complex phenomenon comprising different forms of harmful online behaviour, which are described in more detail in our annotation scheme [ 23 ]. Aiming to facilitate manual monitoring efforts on social networks, we developed a system that automatically detects signals of cyberbullying, including attacks from bullies, as well as victim and bystander reactions, the latter of which are generally overlooked in related research.

Most similar to this research is the work by [ 44 ], [ 43 , 45 ], who investigated bullying traces posted by different author roles (e.g. bully, victim, bystander, assistant, defender, reporter, accuser, reinforcer). However, they collected tweets using the keywords bully, bullied and bullying . As a result, their corpus contained many reports or testimonials of cyberbullying (example 1), instead of actual cyberbullying. Moreover, their method implies that cyberbullying signals that are devoid of such keywords are not included in the training corpus.

  • 1. “Some tweens got violent on the n train, the one boy got off after blows 2 the chest… Saw him cryin as he walkd away: (bullying not cool” [ 44 , p. 658 ]

What clearly distinguishes these works from the present is that their conceptualisation of cyberbullying is not explained. It is, in other words, not clear which type of posts are considered bullying and which are not. In the present research, we identify different types of bullying and all are included in the positive class of our experimental corpus.

For this research, English and Dutch social media data were annotated for fine-grained forms of cyberbullying, based on the actors involved in a cyberbullying incident. After preliminary experiments for Dutch [ 41 , 51 ], we currently present an optimised cyberbullying detection method for English and Dutch and hereby show that the proposed methodology can easily be applied to different languages, provided that annotated data are available.

Data collection and annotation

To be able to build representative models for cyberbullying, a suitable dataset is required. This section describes the construction of two corpora, English and Dutch, containing social media posts that are manually annotated for cyberbullying according to our fine-grained annotation scheme. This allows us to cover different forms and participants (or roles ) involved in a cyberbullying event.

Data collection

Two corpora were constructed by collecting data from the social networking site ASKfm, where users can create profiles and ask or answer questions, with the option of doing so anonymously. ASKfm data typically consists of question-answer pairs published on a user’s profile. The data were retrieved by crawling a number of seed profiles using the GNU Wget software ( http://www.gnu.org/software/wget/ ) in April and October, 2013. After language filtering (i.e. non-English or non-Dutch content was removed), the experimental corpora comprised 113,698 and 78,387 posts for English and Dutch, respectively.
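
The paper does not state which tool performed the language filtering, so the snippet below is only a sketch of that step, assuming the langdetect package; the example posts are invented.

```python
from langdetect import detect  # language-identification package assumed for this sketch

def keep_post(text: str, allowed=("en", "nl")) -> bool:
    """Keep only posts detected as English or Dutch."""
    try:
        return detect(text) in allowed
    except Exception:  # langdetect raises on empty or undetectable input
        return False

posts = ["how are you today?", "hoe gaat het met je?", "¿cómo estás?", ""]
print([p for p in posts if keep_post(p)])  # the Spanish and empty posts are dropped
```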

Data annotation

Cyberbullying has been a widely covered research topic recently and studies have shed light on direct and indirect types of cyberbullying, implicit and explicit forms, verbal and non-verbal cyberbullying, and so on. This is important from a sociolinguistic point of view, but knowing what cyberbullying involves is also crucial to build models for automatic cyberbullying detection. In the following paragraphs, we present our data annotation guidelines [ 23 ] and focus on different types and roles related to the phenomenon.

Types of cyberbullying

Cyberbullying research is mainly centered around the conceptualisation, occurrence and prevention of the phenomenon [ 1 , 52 , 53 ]. Sociolinguistic studies have identified different types of cyberbullying [ 12 , 54 , 55 ] and compared these types with forms of traditional or offline bullying [ 20 ]. Like traditional bullying, direct and indirect forms of cyberbullying have been identified. Direct cyberbullying refers to actions in which the victim is directly involved (e.g. sending a virus-infected file, excluding someone from an online group, insulting and threatening), whereas indirect cyberbullying can take place without awareness of the victim (e.g. outing or publishing confidential information, spreading gossip, creating a hate page on social networking sites) [ 20 ].

The present annotation scheme describes some specific textual categories related to cyberbullying, including threats, insults, defensive statements from a victim, encouragements to the harasser, etc. (see the Data collection and annotation section for a complete overview). All of these forms were inspired by social studies on cyberbullying [ 7 , 20 ] and manual inspection of cyberbullying examples.

Roles in cyberbullying

Similarly to traditional bullying, cyberbullying involves a number of participants that adopt well-defined roles. Researchers have identified several roles in (cyber)bullying interactions. Although traditional studies on bullying have mainly concentrated on bullies and victims [ 24 ], the importance of bystanders in a bullying episode has been acknowledged [ 56 , 57 ]. Bystanders can support the victim and mitigate the negative effects caused by the bullying [ 57 ], especially on social networking sites, where they hold higher intentions to help the victim than in real life conversations [ 58 ]. [ 25 ] distinguish three main types of bystanders: i) bystanders who participate in the bullying, ii) who help or support the victim and iii) those who ignore the bullying. Given that passive bystanders are hard to recognise in online text, only the former two are included in our annotation scheme.

Annotation guidelines

To operationalise the task of automatic cyberbullying detection, we elaborated a detailed annotation scheme for cyberbullying that is strongly embedded in the literature and applied it to our corpora. The applicability of the scheme was iteratively tested. Our final guidelines for the fine-grained annotation of cyberbullying are described in a technical report [ 23 ]. The objective of the scheme was to indicate several types of textual cyberbullying and verbal aggression, their severity, and the author participant roles. The scheme is formulated to be generic and is not limited to a specific social media platform. All messages were annotated in context (i.e. presented within their original content or conversation event) when available.

Essentially, the annotation scheme describes two levels of annotation. Firstly, the annotators were asked to indicate, at the message or post level, whether the text under investigation was related to cyberbullying. If the message was considered harmful and thus contained indications of cyberbullying, annotators identified the author’s participant role. Based on the literature on role-allocation in cyberbullying episodes [ 25 , 59 ], four roles are distinguished in the annotation scheme, including victim, bully, and two types of bystanders.

  • Harasser or bully: person who initiates the bullying.
  • Victim: person who is harassed.
  • Bystander-defender: person who helps the victim and discourages the harasser from continuing his actions.
  • Bystander-assistant: person who does not initiate, but helps or encourages the harasser.

Secondly, at the sub-sentence level, the annotators were tasked with the identification of fine-grained text categories related to cyberbullying. In the literature, different forms of cyberbullying are identified [ 12 , 54 , 55 ] and compared with traditional bullying [ 20 ]. Based on these forms, the annotation scheme describes a number of textual categories that are often inherent to a cyberbullying event, such as threats, insults, defensive statements from a victim, encouragements to the harasser, etc. Most of the categories are related to direct forms of cyberbullying (as defined by [ 25 ]), while one is related to outing [ 25 ], an indirect form of cyberbullying, namely defamation . Additionally, a number of subcategories were defined to make the annotation scheme as concrete and distinctive as possible (e.g., discrimination as a subcategory of insult ). All cyberbullying-related categories in the scheme are listed below, and an example post for each category is presented in Table 1 .

  • Threat/blackmail: expressions containing physical or psychological threats or indications of blackmail.
  • General insult: general expressions containing abusive, degrading or offensive language that are meant to insult the addressee.
  • Attacking relatives: insulting expressions towards relatives or friends of the victim.
  • Discrimination: expressions of unjust or prejudicial treatment of the victim. Two types of discrimination are distinguished (i.e. sexism and racism). Other forms of discrimination should be categorised as general insults.
  • Curse/exclusion: expressions of a wish that some form of adversity or misfortune will befall the victim and expressions that exclude the victim from a conversation or a social group.
  • Defamation: expressions that reveal confidential or defamatory information about the victim to a large public.
  • Sexual Talk: expressions with a sexual meaning or connotation. A distinction is made between innocent sexual talk and sexual harassment.
  • Bystander defense: expressions by which a bystander shows support for the victim or discourages the harasser from continuing his actions.
  • Victim defense: assertive or powerless reactions from the victim.
  • Encouragement to the harasser: expressions in support of the harasser.
  • Other: expressions that contain any other form of cyberbullying-related behaviour than the ones described here.


It is important to note that the categories were always indicated in the text, even if the post in which they occurred was not considered harmful. For instance, in the post “hi bitches, in for a movie?”, “bitches” was annotated as an insult while the post itself was not considered cyberbullying.
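To make the two-level scheme concrete, the sketch below encodes a post-level label, an optional author role, and a list of sub-sentence category spans as plain Python data structures. The class and field names are illustrative only and are not part of the original guidelines or the annotation tool's native format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Role(Enum):
    HARASSER = "harasser"
    VICTIM = "victim"
    BYSTANDER_DEFENDER = "bystander_defender"
    BYSTANDER_ASSISTANT = "bystander_assistant"


@dataclass
class SpanAnnotation:
    """Sub-sentence annotation: a fine-grained category anchored to a text span."""
    start: int                         # character offset where the span starts
    end: int                           # character offset where the span ends (exclusive)
    category: str                      # e.g. "threat_blackmail", "general_insult", "defamation"
    subcategory: Optional[str] = None  # e.g. "racism" under "discrimination"


@dataclass
class PostAnnotation:
    """Post-level annotation: harmfulness decision plus the author's role."""
    post_id: str
    is_cyberbullying: bool
    author_role: Optional[Role] = None              # only set when the post is harmful
    spans: List[SpanAnnotation] = field(default_factory=list)


# Example: the post "hi bitches, in for a movie?" gets an insult span,
# but is not labelled as cyberbullying at the post level.
example = PostAnnotation(
    post_id="en_000123",
    is_cyberbullying=False,
    spans=[SpanAnnotation(start=3, end=10, category="general_insult")],
)
```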

To provide the annotators with some context, all posts were presented within their original conversation when possible. All annotations were done using the brat rapid annotation tool [ 60 ], some examples of which are presented in Table 1 .

As can be deduced from the examples in the table, there were no restrictions as to what form the annotations could take: they could be adjectives, noun phrases, verb phrases, and so on. The only condition was that an annotation could span no more than one sentence and no less than one word. Posts that were (primarily) written in a language other than the corpus language (i.e. Dutch or English) were marked as such and required no further annotation.

We examined the validity of our guidelines and the annotations with an inter-annotator agreement experiment that is described in the following section.

Annotation statistics

The English and Dutch corpora were independently annotated for cyberbullying by trained linguists, all of whom were native speakers of Dutch and second-language speakers of English. To demonstrate the validity of our guidelines, inter-annotator agreement scores were calculated using Kappa on a subset of each corpus. Inter-rater agreement for Dutch (2 raters) was calculated using Cohen’s Kappa [ 61 ]; Fleiss’ Kappa [ 62 ] was used for the English corpus (> 2 raters). Kappa scores for the identification of cyberbullying are κ = 0.69 (Dutch) and κ = 0.59 (English).
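As an illustration of how such agreement scores can be computed, the sketch below uses scikit-learn for Cohen's Kappa (two raters) and statsmodels for Fleiss' Kappa (more than two raters); the rating vectors are toy data, not the actual annotations.

```python
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Two raters (the Dutch setting): one binary cyberbullying label per post and rater.
rater_a = [1, 0, 0, 1, 1, 0, 0, 1]
rater_b = [1, 0, 1, 1, 0, 0, 0, 1]
print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))

# More than two raters (the English setting): rows are posts, columns are raters.
ratings = [
    [1, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
    [1, 0, 1],
]
table, _ = aggregate_raters(ratings)   # per-post counts for each label category
print("Fleiss' kappa:", fleiss_kappa(table))
```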

As shown in Table 2 , inter-annotator agreement for the identification of the more fine-grained categories for English varies from fair to substantial [ 63 ], except for defamation, which appears to be more difficult to recognise. No encouragements to the harasser were present in this subset of the corpus. For Dutch, the inter-annotator agreement is fair to substantial, except for curse and defamation. Analysis revealed that one of the two annotators often annotated the latter as an insult, and in some cases did not even consider it cyberbullying-related.


In short, the inter-rater reliability study shows that the annotation of cyberbullying is not trivial and that more fine-grained categories like defamation, curse and encouragements are sometimes hard to recognise. It appears that defamations were sometimes hard to distinguish from insults, whereas curses and exclusions were sometimes considered insults or threats. The analysis further reveals that encouragements to the harasser are subject to interpretation. Some are straightforward (e.g. “I agree we should send her hate”), whereas others depend on the annotator’s judgment and interpretation (e.g. “hahaha”, “LOL”).

Experimental setup

In this paper, we explore the feasibility of automatically recognising signals of cyberbullying. A crucial difference with related research is that we do not only model bully ‘attacks’, but also more implicit forms of cyberbullying and reactions from victims and bystanders (i.e. all under one binary label ‘signals of cyberbullying’), since these could likewise indicate that cyberbullying is going on. The experiments described in this paper focus on the automatic detection of such cyberbullying signals that need to be further investigated by human moderators when applied in a real-life moderation loop.

The English and Dutch corpora contain 113,698 and 78,387 posts, respectively. As shown in Table 3 , the experimental corpus features a heavily imbalanced class distribution, with the large majority of posts not being part of cyberbullying. In classification, this class imbalance can lead to decreased performance. To counter this, we include a cost-sensitive SVM as one of the settings explored during hyperparameter optimisation. The cost-sensitive SVM reweighs the penalty parameter C of the error term by the inverse class ratio, which means that misclassifications of the minority positive class are penalised more heavily than classification errors on the majority negative class. Other pre-processing methods for handling data imbalance in classification include feature filtering metrics and data resampling [ 64 ]. These methods were omitted as they were found to be too computationally expensive given our high-dimensional dataset.
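In scikit-learn, this reweighting corresponds to the class_weight option of the linear SVM. The sketch below shows both the automatic setting and an explicit weighting for an illustrative 5%/95% class split; the actual ratios differ per corpus (see Table 3).

```python
from sklearn.svm import LinearSVC

# class_weight="balanced" rescales the error penalty C per class by the inverse
# class frequency, so errors on the rare positive (cyberbullying) class are
# penalised more heavily than errors on the majority negative class.
clf = LinearSVC(C=1.0, class_weight="balanced")

# Equivalent explicit weighting for an illustrative 5%/95% split: mistakes on the
# positive class cost 19 times more than mistakes on the negative class.
clf_explicit = LinearSVC(C=1.0, class_weight={0: 1.0, 1: 0.95 / 0.05})
```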


For the automatic detection of cyberbullying, we performed binary classification experiments using a linear-kernel support vector machine (SVM) implemented in LIBLINEAR [ 65 ], accessed through Scikit-learn [ 66 ], a machine learning library for Python. The motivation behind this is twofold: i) support vector machines have proven to work well for tasks similar to the one under investigation [ 67 ], and ii) LIBLINEAR allows fast training on large-scale data that admit a linear mapping (which was confirmed after a series of preliminary experiments using LIBSVM with linear, RBF and polynomial kernels).

The classifier was optimised for feature type (see the Pre-processing and feature engineering section) and hyperparameter combinations (see Table 4 ). Model selection was done using 10-fold cross-validation in a grid search over all possible feature types (i.e. groups of similar features, such as different orders of n-gram bag-of-words features) and hyperparameter configurations. The best-performing hyperparameters are selected by F1 score on the positive class. The winning model is then retrained on all held-in data and subsequently tested on a hold-out test set to assess whether the classifier is over- or under-fitting. The hold-out set represents a random sample (10%) of all data. The folds were randomly stratified splits over the held-in class distribution. Testing all feature type combinations is a rudimentary form of feature selection and provides insight into which types of features work best for this particular task.
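The following sketch illustrates this model selection procedure with scikit-learn: a stratified 10% hold-out split, a 10-fold grid search optimised for F1 on the positive class, and a final evaluation of the refit winner on the hold-out set. The data are synthetic placeholders, and the grid is far smaller than the 31 feature type combinations and 28 hyperparameter sets used in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Synthetic placeholder data standing in for the annotated ASKfm corpus:
# roughly 10% of the posts carry the positive "signal of cyberbullying" label.
posts = [f"example post number {i}" for i in range(200)]
labels = [1 if i % 10 == 0 else 0 for i in range(200)]

# 10% random stratified hold-out; the remaining 90% is the held-in data.
X_dev, X_test, y_dev, y_test = train_test_split(
    posts, labels, test_size=0.10, stratify=labels, random_state=42
)

pipeline = Pipeline([
    ("bow", CountVectorizer(binary=True)),   # binary bag-of-words features
    ("svm", LinearSVC()),
])

# A deliberately small grid: word unigrams vs. uni+bigrams, a few C values,
# and the cost-sensitive class weighting discussed above.
param_grid = {
    "bow__ngram_range": [(1, 1), (1, 2)],
    "svm__C": [0.01, 0.1, 1.0],
    "svm__class_weight": [None, "balanced"],
}

search = GridSearchCV(
    pipeline,
    param_grid,
    scoring="f1",   # F1 on the positive class decides the winning configuration
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=42),
    n_jobs=-1,
)
search.fit(X_dev, y_dev)   # the winner is automatically refit on all held-in data

print("best parameters:", search.best_params_)
print("hold-out F1:", search.score(X_test, y_test))
```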


Feature selection over all individual features was not performed because of the large feature space (NL: 795,072 and EN: 871,296 individual features). The author of [ 68 ], among other researchers, demonstrated the importance of joint optimisation, where feature selection and hyperparameter optimisation are performed simultaneously, since the techniques mutually influence each other.

The optimised models are evaluated against two baseline systems: i) an unoptimised linear-kernel SVM (configured with default parameter settings) based on word n-grams only, and ii) a keyword-based system that marks posts as positive for cyberbullying if they contain a word from existing vocabulary lists of aggressive language and profanity terms.
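A minimal sketch of such a keyword baseline is given below; the two-term lexicon is a stand-in for the profanity lists referenced later in this section.

```python
def load_profanity_terms(path):
    """Read one profanity or aggressive-language term per line from a lexicon file."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}


def keyword_baseline(post, profanity_terms):
    """Mark a post as cyberbullying (1) if any token matches the lexicon, else 0."""
    tokens = (token.strip(".,!?") for token in post.lower().split())
    return int(any(token in profanity_terms for token in tokens))


# Toy two-term lexicon; the real profanity lists are referenced in the paper.
terms = {"idiot", "loser"}
print(keyword_baseline("You are such a loser!", terms))           # -> 1
print(keyword_baseline("See you at practice tonight :)", terms))  # -> 0
```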

Pre-processing and feature engineering

As pre-processing, we applied tokenisation, PoS-tagging and lemmatisation to the data using the LeTs Preprocess Toolkit [ 69 ]. In supervised learning, a machine learning algorithm takes a set of training instances (of which the label is known) and seeks to build a model that generates a desired prediction for an unseen instance. To enable the model construction, all instances are represented as a vector of features (i.e. inherent characteristics of the data) that contain information that is potentially useful to distinguish cyberbullying from non-cyberbullying content.

We experimentally tested whether cyberbullying events can be recognised automatically from lexical markers in a post. To this end, all posts were represented by a number of information sources (or features ), including lexical features such as bags-of-words, sentiment lexicon features and topic model features, which are described in more detail below. Prior to feature extraction, some data cleaning steps were executed, such as the replacement of hyperlinks and @-replies, the removal of superfluous white space, and the replacement of abbreviations by their full form (based on an existing mapping dictionary: http://www.chatslang.com/terms/abbreviations/ ). Additionally, tokenisation was applied before n-gram extraction and sentiment lexicon matching, and stemming was applied prior to extracting topic model features.
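A simplified version of these cleaning steps might look as follows; the _URL_ and _USER_ placeholder tokens and the three-entry abbreviation dictionary are illustrative choices, not the exact normalisation used in the paper.

```python
import re

# Toy abbreviation mapping; the paper relies on an existing online dictionary.
ABBREVIATIONS = {"u": "you", "r": "are", "gr8": "great"}


def clean_post(text):
    """Normalise a social media post before feature extraction."""
    text = re.sub(r"https?://\S+", " _URL_ ", text)   # replace hyperlinks
    text = re.sub(r"@\w+", " _USER_ ", text)          # replace @-replies
    text = re.sub(r"\s+", " ", text).strip()          # remove superfluous whitespace
    tokens = [ABBREVIATIONS.get(tok.lower(), tok) for tok in text.split()]
    return " ".join(tokens)


print(clean_post("@john  u r gr8   http://example.com"))
# -> "_USER_ you are great _URL_"
```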

After pre-processing of the corpus, the following feature types were extracted:

  • Word n-gram bag-of-words: binary features indicating the presence of word unigrams, bigrams and trigrams.
  • Character n-gram bag-of-words: binary features indicating the presence of character bigrams, trigrams and fourgrams (without crossing word boundaries). Character n-grams provide some abstraction from the word level and robustness to the spelling variation that characterises social media data.
  • Proper names: a gazetteer of named entities collected from several resources (e.g. Wikipedia).
  • ‘Allness’ indicators (e.g. “always”, “everybody”): forms that indicate rhetorical superlativity [ 70 ], which can be helpful in identifying the often hyperbolic bullying language.
  • Diminishers (e.g. “slightly”, “relatively”): diminishers, intensifiers and negation words were all obtained from an English grammar describing these lexical classes and from existing sentiment lexicons (see below).
  • Intensifiers (e.g. “absolutely”, “amazingly”)
  • Negation words
  • Aggressive language and profanity words: for English, we used the Google Profanity list ( https://code.google.com/archive/p/badwordslist/downloads ). For Dutch, a public profanity lexicon was consulted ( http://scheldwoorden.goedbegin.nl ).
  • Subjectivity lexicon features: positive and negative opinion word ratios, as well as the overall post polarity were calculated using existing sentiment lexicons. For Dutch, we made use of the Duoman [ 71 ] and Pattern [ 72 ] lexicons. For English, we included the Liu and Hu opinion lexicon [ 73 ], the MPQA lexicon [ 74 ], the General Inquirer Sentiment Lexicon [ 75 ], AFINN [ 76 ], and MSOL [ 77 ]. For both languages, we included the relative frequency of all 68 psychometric categories in the Linguistic Inquiry and Word Count (LIWC) dictionary for English [ 78 ] and Dutch [ 79 ].
  • Topic model features: by making use of the Gensim topic modelling library [ 80 ], several LDA [ 81 ] and LSI [ 82 ] topic models with varying granularity ( k = 20, 50, 100 and 200) were trained on data corresponding to each fine-grained category of a cyberbullying event (e.g. threats, defamations, insults, defenses). The topic models were based on a background corpus (EN: ± 1,200,000 tokens, NL: ± 1,400,000 tokens) scraped with the BootCaT [ 83 ] web-corpus toolkit. BootCaT collected ASKfm user profiles using lists of manually determined seed words that are characteristic of the cyberbullying categories.

When applied to the training data, this resulted in 871,296 and 795,072 features for English and Dutch, respectively.
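As a sketch of how the two bag-of-words feature types can be combined, the snippet below joins binary word uni-/bi-/trigram and character bi-/tri-/fourgram vectorizers; the lexicon and topic model features would be concatenated to the same matrix in a comparable way. The example sentences are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import FeatureUnion

# Binary word uni-/bi-/trigrams plus character bi-/tri-/fourgrams; the "char_wb"
# analyzer keeps character n-grams from crossing word boundaries.
features = FeatureUnion([
    ("word_ngrams", CountVectorizer(analyzer="word", ngram_range=(1, 3), binary=True)),
    ("char_ngrams", CountVectorizer(analyzer="char_wb", ngram_range=(2, 4), binary=True)),
])

X = features.fit_transform([
    "you are pathetic",
    "don't listen to them, you are fine",
])
print(X.shape)   # (2, total number of extracted word and character n-gram features)
```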

Results and discussion

In this section, we present the results of our experiments to automatically detect cyberbullying signals in an English and Dutch corpus of ASKfm posts. Ten-fold cross-validation was performed in an exhaustive grid search over different feature type and hyperparameter combinations (see the Experimental setup section). The unoptimised word n-gram-based classifier and the keyword-matching system serve as baselines for comparison. Precision, recall and F1 performance metrics were calculated on the positive class. We also report Area Under the Receiver Operating Characteristic curve (AUROC) scores, a performance metric that is more robust to data imbalance than precision, recall and F-score [ 84 ].
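For reference, the sketch below shows how these metrics can be computed with scikit-learn on toy predictions; AUROC is computed from continuous decision scores rather than hard labels.

```python
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score

# Toy gold labels, hard predictions, and continuous decision scores.
y_true = [1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
scores = [1.2, -0.8, 0.3, 0.9, -1.5, -0.2, -0.1, -2.0]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary", pos_label=1
)
auroc = roc_auc_score(y_true, scores)   # uses the ranking induced by the scores
print(f"P={precision:.2%}  R={recall:.2%}  F1={f1:.2%}  AUROC={auroc:.2f}")
```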

Table 5 gives an indication of which feature type combinations score best and hence contribute most to this task. It presents the cross-validation and hold-out scores of a set of feature combinations, which are explained in the feature groups legend ( Table 6 ). A total of 31 feature type combinations, each with 28 different hyperparameter sets, were tested. Table 5 shows the results for the three best-scoring systems by included feature types with optimised hyperparameters. The maximum F1 score obtained in cross-validation is 64.26% for English and 61.20% for Dutch, which shows that the classifier benefits from a variety of feature types. The results on the hold-out test set show that the trained systems generalise well to unseen data, indicating little under- or overfitting. The simple keyword-matching baseline has the lowest performance for both languages even though it obtains high recall, especially for English (80.14%), suggesting that profane language characterises many cyberbullying-related posts. Feature group and hyperparameter optimisation provides a considerable performance increase over the unoptimised word n-gram baseline. The top-scoring systems for each language do not differ much in performance, except the best system for Dutch, which trades recall for precision compared to the runners-up.


Table 7 presents the scores of the (hyperparameter-optimised) single feature type systems, to give insight into the performance of these feature types when used individually. Analysis of the combined and single feature type sets reveals that word n-grams, character n-grams, and subjectivity lexicons are strong features for this task. Indeed, adding character n-grams always improved classification performance for both languages; they likely provide robustness to the lexical variation in social media text compared to word n-grams. While subjectivity lexicons appear to be discriminative features, term lists perform badly on their own as well as in combination for both languages. This shows once again (see the profanity baseline) that cyberbullying detection requires more sophisticated information sources than profanity lists. Topic models perform poorly for both languages on their own, but in combination with other features they improve Dutch performance consistently. A possible explanation for their varying performance across the two languages is that the topic models trained on the Dutch background corpus are of better quality than the English ones. Indeed, a random selection of background corpus texts reveals that the English scrape contains more noisy data (i.e. low word-count posts and non-English posts) than the Dutch scraped corpus.


A shallow qualitative analysis of the classification output provided insight into some of the classification mistakes.

Table 8 gives an overview of the error rates per cyberbullying category for the best-performing and baseline systems. This gives an indication of which types of bullying are hard to detect for the current classifier. All categories are always considered positive for cyberbullying (i.e. the error rate equals the false negative rate), except for Sexual and Insult, which can also be negative (in the case of harmless sexual talk or ‘socially acceptable’ insulting language like “hi bitches, in for a movie?”, the corresponding category was indicated, but the post itself was not annotated as cyberbullying), and Not cyberbullying, which is always negative. The error rates often being lowest for the profanity baseline confirms that it performs particularly well in terms of recall (at the expense of precision, see Table 5 ). When looking at the best system for both languages, we see that Defense is the hardest category to classify. This should not come as a surprise, as the category comprises defensive posts from bystanders and victims, which contain less aggressive language than cyberbullying attacks and are often shorter than the latter. Assertive defensive posts (i.e. a subcategory of Defense ) that attack the bully are, however, more often correctly classified. There are not sufficient instances of the Encouragement class in the hold-out set for either language to be representative. In both languages, threats, curses and instances of sexual harassment are most easily recognisable, showing (far) lower error rates than the categories Defamation , Defense , Encouragements to the harasser , and Insult .


A qualitative error analysis of the English and Dutch predictions reveals that false positives often contain aggressive language directed at a second person, often denoting personal flaws or containing sexual and profane words. Misclassifications are often short posts containing just a few words, and false negatives often lack explicit verbal signs of cyberbullying (e.g. insulting or profane words) or are ironic (examples 2 and 3). Additionally, cyberbullying posts containing misspellings, grammatical errors or incomplete words are also hard to recognise as such (examples 4 and 5). The Dutch and English corpora are overall similar with respect to the qualitative properties of classification errors.

  • 2. You might want to do some sports ahah x
  • 3. Look who is there… my thousandth anonymous hater , congratulations !
  • 4. ivegot 1 word foryou… yknow whatit is ? → slut
  • 5. One word for you : G—A—…

In short, the experiments show that our classifier clearly outperforms both the keyword-based and the word n-gram baseline. However, analysis of the classifier output reveals that false negatives often lack explicit clues that cyberbullying is going on, indicating that our system might benefit from irony recognition and the integration of world knowledge to capture such implicit realisations of cyberbullying.

Our annotation scheme allowed us to indicate different author roles, which provides better insight into how cyberbullying is realised. Table 9 presents the error rates of our classifier for the different author roles: harasser, victim, and two types of bystanders. We observe that the error rates are high for bystander assistant and victim, but there are not sufficient instances of the former role in the hold-out set for either language to be representative. Error rates for the victim class of 50.39% and 54% in English and Dutch, respectively, indicate that the role is hard for the classifier to recognise. A possible explanation is that victim posts in our corpus either expressed powerlessness in the face of the bully (example 6) or contained explicit aggressive language themselves (example 7).

  • 6. Your the one going round saying im a cunt and a twat and im ugly. tbh all im doing is sticking up for myself .
  • 7. You’re fucked up saying I smell from sweat , because unlike some other people I shower every day BITCH


According to the figures, the roles that are most straightforward to detect are bystander defender and harasser.

Compared with state-of-the-art approaches to cyberbullying detection, we obtain competitive results with regard to [ 30 – 32 , 41 ]. However, fundamental differences with respect to data collection, sources, and conceptualisations of bullying hardly allow for direct comparison. Table 10 presents the experimental results obtained by [ 43 – 45 ], who, like the current study, approach the task as detecting posts from bullies as well as from victims and bystanders. Given their experimental setup (i.e. task description, data genre and classifier), their work can be considered most similar to ours, so their results might function as benchmarks. Here, too, a number of crucial differences with the current approach can be observed. First, their corpora were collected using the keywords “bully”, “bullying” and “bullied”, which may bias the dataset towards the positive class and ensures that many explicit lexicalisations are present in the positive class. Second, it is not clear which types of cyberbullying (i.e. explicit and implicit bullying, threats, insults, sexual harassment) are included in the positive class. Furthermore, as can be deduced from Table 10 , the datasets are considerably smaller than ours and show a more balanced class distribution (39% cyberbullying posts in [ 43 ] and [ 44 ], and 29%/26% in [ 45 ]) than the ratio of bullying posts in our corpus (see Table 3 : 5% for English, 7% for Dutch). Hence, any comparison should be made with caution due to these differences.


These studies obtain higher scores on a similar task but on vastly different datasets. Notably, [ 45 ] shows a large improvement in classification performance using deep representational learning with a semantic-enhanced marginalized denoising auto-encoder over traditional n-gram and topic modelling features.

Conclusions and future research

The goal of the current research was to investigate the automatic detection of cyberbullying-related posts on social media. Given the information overload on the web, manual monitoring for cyberbullying has become unfeasible. Automatic detection of signals of cyberbullying would enhance moderation and allow moderators to respond quickly when necessary.

Cyberbullying research has often focused on detecting cyberbullying ‘attacks’ and has hence overlooked other, more implicit forms of cyberbullying and posts written by victims and bystanders. However, these posts could just as well indicate that cyberbullying is going on. The main contribution of this paper is a system that automatically detects signals of cyberbullying on social media, covering different types of cyberbullying and posts from bullies, victims and bystanders. We evaluated our system on a manually annotated cyberbullying corpus for English and Dutch and thereby demonstrated that our approach can easily be applied to different languages, provided that annotated data for these languages are available.

A set of binary classification experiments was conducted to explore the feasibility of automatic cyberbullying detection on social media. In addition, we sought to determine which information sources contribute most to the task. Two classifiers were trained on an English and a Dutch ASKfm corpus and evaluated on a hold-out test set of the same genre. Our experiments reveal that the current approach is a promising strategy for detecting signals of cyberbullying on social media automatically. After feature and hyperparameter optimisation of our models, a maximum F1 score of 64.32% and 58.72% was obtained for English and Dutch, respectively. The classifiers thereby significantly outperformed a keyword baseline and an (unoptimised) n-gram baseline. A qualitative analysis of the results revealed that false negatives often involve implicit cyberbullying or offenses expressed through irony, the detection of which will constitute an important area for future work. Error rates on the different author roles in our corpus revealed that victims in particular are hard to recognise, as they react differently in our corpus, showing either powerlessness in the face of the bully or reacting in an assertive and sometimes even aggressive way.

As shown in [ 45 ], deep representation learning is a promising avenue for this task. We therefore intend to apply deep learning techniques to improve classifier performance.

Another interesting direction for future work is the detection of fine-grained cyberbullying categories such as threats, curses and expressions of racism and hate. When applied in a cascaded model, the system could find severe cases of cyberbullying with high precision, which would be particularly interesting for monitoring purposes. Additionally, our dataset allows for the detection of the participant roles typically involved in cyberbullying. When applied as moderation support on online platforms, such a system would enable feedback tailored to the recipient (i.e. a bully, victim, or bystander).

  • 1. Livingstone S, Haddon L, Görzig A, Ólafsson K. Risks and safety on the internet: The perspective of European children. Initial Findings. London: EU Kids Online; 2010.
  • 7. Van Cleemput K, Bastiaensens S, Vandebosch H, Poels K, Deboutte G, DeSmet A, et al. Zes jaar onderzoek naar cyberpesten in Vlaanderen, België en daarbuiten: een overzicht van de bevindingen. (Six years of research on cyberbullying in Flanders, Belgium and beyond: an overview of the findings.) (White Paper). University of Antwerp & Ghent University; 2013.
  • 8. Livingstone S, Haddon L, Vincent J, Giovanna M, Ólafsson K. Net Children Go Mobile: The Uk report; 2014. Available from: http://netchildrengomobile.eu/reports . [Accessed 30th March 2018].
  • 16. Dalla Pozza V, Di Pietro A, Morel S, Emma P. Cyberbullying among Young People. European Parliament; 2016.
  • 17. Olweus D. Bullying at School: What We Know and What We Can Do. 2nd ed. Wiley; 1993.
  • 23. Van Hee C, Verhoeven B, Lefever E, De Pauw G, Daelemans W, Hoste V. Guidelines for the Fine-Grained Analysis of Cyberbullying, version 1.0. LT3, Language and Translation Technology Team–Ghent University; 2015. LT3 15-01.
  • 25. Vandebosch H, Van Cleemput K, Mortelmans D, Walrave M. Cyberpesten bij jongeren in Vlaanderen: Een studie in opdracht van het viWTA (Cyberbullying among youngsters in Flanders: a study commissoned by the viWTA). Brussels: viWTA; 2006. Available from: https://wise.vub.ac.be/fattac/mios/Eindrapport%20cyberpesten%20viwta%202006.pdf . [Accessed 30th March 2018].
  • 27. Salmivalli C, Pöyhönen V. In: Cyberbullying in Finland. 2nd ed. Wiley-Blackwell; 2012. p. 57–72.
  • 28. Salawu S, He Y, Lumsden J. Approaches to Automated Detection of Cyberbullying: A Survey. IEEE Transactions on Affective Computing; forthcoming. p. 1.
  • 29. Reynolds K, Kontostathis A, Edwards L. Using Machine Learning to Detect Cyberbullying. In: Proceedings of the 2011 10th International Conference on Machine Learning and Applications and Workshops. ICMLA’11. Washington, DC, USA: IEEE Computer Society; 2011. p. 241–244.
  • 30. Dinakar K, Reichart R, Lieberman H. Modeling the Detection of Textual Cyberbullying. In: The Social Mobile Web. vol. WS-11-02 of AAAI Workshops. AAAI; 2011. p. 11–17.
  • 31. Yin D, Davison BD, Xue Z, Hong L, Kontostathis A, Edwards L. Detection of Harassment on Web 2.0. In: Proceedings of the Content Analysis in the Web 2.0 (CAW2.0). Madrid, Spain; 2009.
  • 32. Dadvar M. Experts and machines united against cyberbullying [PhD thesis]. University of Twente; 2014.
  • 33. Nahar V, Al-Maskari S, Li X, Pang C. Semi-supervised Learning for Cyberbullying Detection in Social Networks. In: ADC.Databases Theory and Applications. Springer International Publishing; 2014. p. 160–171.
  • 34. Bayzick J, Kontostathis A, Edwards L. Detecting the Presence of Cyberbullying Using Computer Software. In: Proceedings of the 3rd International Web Science Conference. WebSci11; 2011.
  • 35. Sui J. Understanding and Fighting Bullying with Machine Learning [PhD thesis]. Department of Computer Sciences, University of Wisconsin-Madison; 2015.
  • 37. Raisi E, Huang B. Cyberbullying Detection with Weakly Supervised Machine Learning. In: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017. ASONAM’17. New York, NY, USA: ACM; 2017. p. 409–416.
  • 38. Hosseinmardi H. Survey of Computational Methods in Cyberbullying Research. In: Proceedings of the First International Workshop on Computational Methods for CyberSafety. CyberSafety’16. New York, NY, USA: ACM; 2016. p. 4–4.
  • 40. Chavan VS, S SS. Machine learning approach for detection of cyber-aggressive comments by peers on social media network. In: 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI); 2015. p. 2354–2358.
  • 41. Van Hee C, Lefever E, Verhoeven B, Mennes J, Desmet B, De Pauw G, et al. Detection and fine-grained classification of cyberbullying events. In: Angelova G, Bontcheva K, Mitkov R, editors. Proceedings of Recent Advances in Natural Language Processing, Proceedings; 2015. p. 672–680.
  • 43. Zhao R, Zhou A, Mao K. Automatic Detection of Cyberbullying on Social Networks Based on Bullying Features. In: Proceedings of the 17th International Conference on Distributed Computing and Networking. No. 43 in ICDCN’16. New York, NY, USA: ACM; 2016. p. 43:1–43:6.
  • 44. Xu JM, Jun KS, Zhu X, Bellmore A. Learning from Bullying Traces in Social Media. In: Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. NAACL HLT’12. Stroudsburg, PA, USA: Association for Computational Linguistics; 2012. p. 656–666.
  • 46. Huang Q, Singh VK, Atrey PK. Cyber Bullying Detection Using Social and Textual Analysis. In: Proceedings of the 3rd International Workshop on Socially-Aware Multimedia. SAM’14. New York, NY, USA: ACM; 2014. p. 3–6.
  • 48. Squicciarini A, Rajtmajer S, Liu Yh, Griffin C. Identification and Characterization of Cyberbullying Dynamics in an Online Social Network. In: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015. ASONAM’15. New York, NY, USA: ACM; 2015. p. 280–285.
  • 49. Chatzakou D, Kourtellis N, Blackburn J, De Cristofaro E, Stringhini G, Vakali A. Mean Birds: Detecting Aggression and Bullying on Twitter. In: Proceedings of the 2017 ACM on Web Science Conference. WebSci’17. New York, NY, USA: ACM; 2017. p. 13–22.
  • 51. Van Hee C, Lefever E, Verhoeven B, Mennes J, Desmet B, De Pauw G, et al. Automatic detection and prevention of cyberbullying. In: Lorenz P, Bourret C, editors. International Conference on Human and Social Analytics, Proceedings. IARIA; 2015. p. 13–18.
  • 55. Willard NE. Cyberbullying and Cyberthreats: Responding to the Challenge of Online Social Aggression, Threats, and Distress. 2nd ed. Research Publishers LLC; 2007.
  • 60. Stenetorp P, Pyysalo S, Topić G, Ohta T, Ananiadou S, Tsujii J. brat: a Web-based Tool for NLP-Assisted Text Annotation. In: Proceedings of the Demonstrations Session at EACL 2012. Avignon, France; 2012. p. 102–107.
  • 67. Desmet B. Finding the online cry for help: automatic text classification for suicide prevention [PhD thesis]. Ghent University; 2014.
  • 68. Hoste V. Optimization Issues in Machine Learning of Coreference Resolution [PhD thesis]. Antwerp University; 2005.
  • 71. Jijkoun V, Hofmann K. Generating a Non-English Subjectivity Lexicon: Relations That Matter. In: Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg, PA, USA; 2009. p. 398–405.
  • 72. De Smedt T, Daelemans W. “Vreselijk mooi!” (“Terribly Beautiful!”): A Subjectivity Lexicon for Dutch Adjectives. In: Proceedings of the Eight International Conference on Language Resources and Evaluation. LREC’12. Istanbul, Turkey; 2012. p. 3568–3572.
  • 73. Hu M, Liu B. Mining and summarizing customer reviews. In: Proceedings of the 10th ACM SIGKDD international conference on Knowledge discovery and data mining. KDD04. ACM; 2004. p. 168–177.
  • 74. Wilson T, Wiebe J, Hoffmann P. Recognizing Contextual Polarity in Phrase-level Sentiment Analysis. In: Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing. HLT’05. Association for Computational Linguistics; 2005. p. 347–354.
  • 75. Stone PJ, Dunphy DCD, Smith MS, Ogilvie DM. The General Inquirer: A Computer Approach to Content Analysis. The MIT Press; 1966.
  • 76. Nielsen FÅ. A New ANEW: Evaluation of a Word List for Sentiment Analysis in Microblogs. In: Rowe M, Stankovic M, Dadzie AS, Hardey M, editors. Proceedings of the ESWC2011 Workshop on ‘Making Sense of Microposts’: Big things come in small packages. vol. 718 of CEUR Workshop Proceedings. CEUR-WS.org; 2011. p. 93–98.
  • 77. Mohammad S, Dunne C, Dorr B. Generating High-coverage Semantic Orientation Lexicons from Overtly Marked Words and a Thesaurus. In: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2. EMNLP’09. Stroudsburg, PA, USA: Association for Computational Linguistics; 2009. p. 599–608.
  • 78. Pennebaker JW, Francis ME, Booth RJ. Linguistic Inquiry and Word Count: LIWC 2001. Mahwah, NJ: Lawrence Erlbaum Associates; 2001.
  • 80. Rehurek R, Sojka P. Software framework for topic modelling with large corpora. In: The LREC 2010 Workshop on new Challenges for NLP Frameworks. University of Malta; 2010. p. 45–50.
  • 83. Baroni M, Bernardini S. BootCaT: Bootstrapping Corpora and Terms from the Web. In: Proceedings of the Fourth International Conference on Language Resources and Evaluation. LREC’04; 2004. p. 1313–1316.

I researched the dark side of social media − and heard the same themes in ‘The Tortured Poets Department’


Angeline Close Scheinbaum, Clemson University

As an expert in consumer behavior, I recently edited a book about how social media affects mental health.

I’m also a big fan of Taylor Swift .

So when I listened to Swift’s latest album, “ The Tortured Poets Department ,” I couldn’t help but notice parallels to the research that I’ve been studying for the past decade.

It might seem like an outlandish comparison. What can the bestselling album of 2024 have to do with research into the dark side of social media?

But bear with me: Taylor Swift lives in the same social media-saturated universe as the rest of us. That may be why the melancholic themes of her album resonate with so many people.

With young people out of school for the summer and spending free time on social media, now is a time to put on some tunes and think about mental health and what is called “consumer well-being” in the transformative consumer research area of scholarship.

Here are three Taylor-made takeaways that shed light on some of the themes in my latest edited book, “ The Darker Side of Social Media: Consumer Psychology and Mental Health .”

Lesson 1: Modern life through the social media lens can get you down

If you’ve been feeling out of sorts lately, you’re hardly alone: Anxiety and depression can be exacerbated by overuse of social media, research summarized in Chapter 1 shows. And social media use is on the rise.

The average American teenager spends nearly five hours every day scrolling TikTok, Instagram and the like, polling shows, while adults clock more than two hours a day on social media. Such heavy use can amount to compulsive social media use and overall overuse.

Digital life can resemble addiction and sometimes manifests as a distinct form of anxiety called “disconnection anxiety,” researchers Line Lervik-Olsen, Bob Fennis and Tor Wallin Andreassen note in their book chapter on compulsive social media use. This can breed feelings of depression – a mood that recurs throughout “The Tortured Poets Department.”

Oftentimes, depression goes hand in hand with feelings of loneliness. Social media has, in some ways, made people feel even lonelier – nearly 4 in 5 Americans say that social media has made social divisions worse, according to Pew Research. In our book chapter, my graduate student Betül Dayan and I consider the prevalence of loneliness in the digital world.

The pandemic showed the world that social media relationships can’t replace physical company. Even celebrities with hundreds of millions of followers simply want someone to be with. In the song “The Prophecy,” Swift sings of loneliness and wanting someone who simply enjoys her presence:

Don’t want money/ Just someone who wants my company ( “The Prophecy” )

Lesson 2: Comparisons will make you miserable

Social media is a breeding ground for comparisons. And since people tend to portray idealized versions of themselves on social media – rather than their authentic selves – these comparisons are often false or skewed. Research has shown that people on social media tend to make “upward comparisons,” judging themselves relative to people they find inspiring – even though what those people project may not be authentic.

This can lead to what researchers call a “negative self-discrepancy” – a sense of disappointment with one’s failure to meet a personal ideal. As researchers Ashesh Mukherjee and Arani Roy note in their book chapter , social media makes people more dissatisfied with their own sense of control, intelligence and power. This, in turn, can worsen stress and anxiety.

The theme of comparisons comes through loud and clear in the song “The Tortured Poets Department,” in which Swift castigates a partner with literary pretensions – and herself for dating him. Swift may be the richest, most famous and most successful pop star on the planet, but comparing yourself with even more heroic figures is sure to make anyone feel worse:

You’re not Dylan Thomas, I’m not Patti Smith. This ain’t the Chelsea Hotel, we’re modern idiots. ( “The Tortured Poets Department” )

Lesson 3: Bullying isn’t a minor problem

In today’s social media-focused world, bullying has transitioned to online platforms. And arguably, platforms breed bullying: People are more likely to engage in cruel behavior online than they would face to face.

Policymakers increasingly recognize bullying as an important political concern. In their book chapter, researchers Madison Brown, Kate Pounders and Gary Wilcox have examined laws intended to fight bullying.

One such effort, the Kids Online Safety Act, which among other things would require online platforms to take steps to address cyberbullying, recently passed the U.S. Senate.

Lawmakers aren’t the only ones taking bullying seriously. In her latest album, Swift refers to bullies in her own life as vipers who “disgrace her good name” and whose insults stick with her for a long time. Themes of reputation and bullying have run throughout Swift’s entire body of work – hardly surprising for someone who has lived such a public life, both online and off.

I’ll tell you something ’bout my good name. It’s mine alone to disgrace. I don’t cater to all these vipers dressed in empath’s clothing. ( “But Daddy I Love Him” )

It is not known whether overall social media use or overuse alone causes some of these outcomes, but our research does demonstrate that in many ways there’s a darker side to social media when it comes to consumer well-being – even for celebrities. So if you’re going to see the Eras Tour in Europe this summer , you might want to leave your phone back at the hotel.

Angeline Close Scheinbaum, Associate Professor of Marketing, Clemson University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Cyberbullying and Adolescents

Vidhya Lakshmi Kumar

MassGeneral Hospital for Children, Department of Pediatrics, Harvard Medical School

Mark A. Goldstein

Division of Adolescent and Young Adult Medicine, MassGeneral Hospital for Children, Department of Pediatrics, Harvard Medical School, 175 Cambridge Street, Room 508, Boston, MA 02114

Purpose of Review

Cyberbullying is an aggressive behavior, carried out through electronic communication with the intent to harm a victim, that can have profound effects on adolescents. This review examines its epidemiology, the issues arising from cyberbullying, the presentation of its victims to care, and proposed interventions for this behavior.

Recent Findings

There are a variety of physical and psychological effects on victims of cyberbullying that can include recurrent abdominal pain, headaches and difficulty with sleep. In addition, victims have higher rates of anxiety, depression, suicidal ideation and a lower level of well-being. Unfortunately, victims may remain silent, so screening for cyberbullying is encouraged in a variety of settings. Interventions can be designed at the level of the victim (and perpetrator), family, school and other support networks. Prevention of cyberbullying can be a focus for providers of healthcare.

Summary

Cyberbullying can have profound biopsychosocial effects on its victims. There are strategies currently in use and under development to identify and intervene on behalf of those affected by these behaviors.

Introduction

Michelle Carter, age 20, was convicted of involuntary manslaughter and sentenced in 2017 to prison for her role in the 2014 suicide of her then 18-year-old boyfriend, Conrad Roy Jr. The case against Carter, according to prosecutors, rested on text messages she sent to Roy encouraging him to end his life, which he did by carbon monoxide poisoning. Phoebe Prince, a 15-year-old immigrant from Ireland, committed suicide in 2010 by hanging after being bullied online and in school by her peers.

Bullying has been a well-documented phenomenon across the United States and internationally as well. Within Massachusetts, the stories of Michelle Carter, Conrad Roy Jr and Phoebe Prince serve as powerful reminders of the impact of cyberbullying, verbal bullying and intimidation.

Though there is not one standard definition, in the state of Massachusetts, bullying is defined by the Department of Education as “the severe or repeated use by one or more students of a written, verbal, or electronic expression, or a physical act or gesture, or any combination thereof, directed at another student that has the effect of: (i) causing physical or emotional harm to the other student or damage to the other student’s property; (ii) placing the other student in reasonable fear of harm to himself or of damage to his property; (iii) creating a hostile environment at school for the other student; (iv) infringing on the rights of the other student at school; or (v) materially and substantially disrupting the education process or the orderly operation of a school” ( 1 ). It is this electronic expression, in particular, that has surged in recent years with advances in technology, the ease of communication via social media, and the dissemination of and access to technology among grade school children and beyond.

Definition of Cyberbullying

Cyberbullying has evolved in many forms, which has created difficulty in establishing a unified definition that is widely accepted by clinicians. The definition of bullying itself does not easily translate to the cyber arena, but at its core, cyberbullying primarily refers to “an intentional act of aggression, carried out to harm another individual using electronic forms of contacts or devices” ( 2 ). Though initially limited to electronic mail, cyberbullying has come to incorporate a wider array of forms of electronic communication, ranging from personal blogs and text messaging to video content posted to streaming websites, such as YouTube, and, more recently, social media formats including Instagram, Snapchat and TikTok.

Further exacerbating the potential for a severe impact of cyberbullying are access to smartphone technology, the audience involved in cyberbullying efforts, the opportunity for “anonymity by perpetrators,” the “permanency of bullying displays on the internet,” and the ability of bullying to occur regardless of distance from the victim and with “minimal constraints on time” ( 3 ). Cyberbullying can take on the following forms: flaming (online fights using electronic messages with angry and vulgar language), harassment, cyber stalking, denigration, impersonation, outing, trickery and exclusion ( 4 ). In the case of Michelle Carter, she used text messages to Conrad Roy to encourage him to end his life.

Epidemiology

Given the lack of consensus on a definition for cyberbullying, it has been difficult to quantify its true prevalence in the United States and globally. In a small sample of global studies, the prevalence of middle and high school cyberbullying ranged from 1–30% for suspected perpetrators and from 3–72% for suspected victims ( 3 ). The prevalence is thought to vary due to a multitude of factors, including varying definitions of what constitutes an act of cyberbullying, cross-cultural differences in victim reporting, and access to technology, which could limit the ability to participate in cyberbullying. Studies available across the U.S. and internationally identify vulnerable populations of adolescents who warrant special attention, including females, LGBTQ youth, younger adolescents and youth with disabilities ( 5 , 6 ).

Studies have also demonstrated gender differences in the prevalence of cyberbullying victimization, with female adolescents reporting a higher prevalence of victimization (9.4% for a single encounter, 13.3% for two or more encounters) than their male counterparts (8.3% for a single encounter, 7.8% for two or more encounters) ( 7 ). Being bullied is further associated with increased suicidal ideation, delinquency and global psychological distress among both male and female adolescents, though these associations are more marked in females and more pronounced with repeated cyberbullying encounters or incidents ( 7 ).

Surveys of cyberbullying victim populations further identify a large proportion of youth who identify as part of the LGBTQ community, as well as youth with disabilities. A Taiwanese study of 500 homosexual or bisexual men between the ages of 20 and 25 reported significant associations of low family support, early coming out and traditional bullying victimization with cyberbullying ( 8 ).

In addition, adolescents and young adults with mental health needs or disabilities have often been targets of cyberbullying efforts. A Chinese study examining associations between cyberbullying and social impairment, attention-deficit-hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD) in adolescents with high functioning autism spectrum disorder demonstrated that older adolescents and those with more severe ODD symptoms were more likely to be victims of cyberbullying. The victims of cyberbullying in this population were more likely to report symptoms associated with depression, anxiety and suicidality ( 9 ).

Issues from Cyberbullying

Cyberbullying has been associated with a variety of psychological and physical effects on its victims ( Table 1 ) ( 10 – 12 ). Victims of cyberbullying have higher rates of depression when compared to victims of traditional forms of bullying. In addition, victims may have more anxiety and suicidal ideation compared to peers who do not face victimization ( 3 , 8 ). A varying percentage of cyberbullying victims attempt suicide. Some studies suggest that children and adolescents who are both victims and perpetrators of cyberbullying constitute a distinct group at the highest risk for psychosocial problems, such as depressive and anxiety symptoms, as well as for lower levels of well-being in general. Victims of cyberbullying have also shown impacts on their family dynamics and relationships with friends, with many demonstrating increasing isolation and loneliness as well as decreased trust in their support groups ( 13 ). Some studies have indicated that reactions to cyberbullying may depend on the form of media (video vs. text conversation vs. phone calls), with some suggestion that pictures and video were the most negatively impactful on adolescents ( 14 ).

Signs and Symptoms of Cyberbullying ( 10 – 12 )

• Decreased self-esteem or feelings of helplessness
• Increased depression and/or anxiety
• Sudden loss of friends, isolation from peers or withdrawal at home
• Reported health problems (e.g., stomach aches, headaches) for which adolescent wants to stay at home or fake illnesses
• Increased truancy or school absences
• Decline in academic performance or loss of interest in school work
• Changes in eating habits or appetite
• Difficulty sleeping or frequent nightmares
• Sudden anger, rage or other emotional swings
• Self-harm behaviors, such as cutting or suicidal ideation

There have been relatively few studies examining the effect of cyberbullying on adolescents’ physical health. Grade school adolescent cyberbullying victims are often more likely to report somatic symptoms, including difficulty sleeping, recurrent non-specific abdominal pain and frequent headaches ( 3 ). However, certain studies indicate that cyberbullies might be better off than victims, with some finding no relation between the perpetrator role and depressive symptoms ( 2 ). Other studies have focused on health impact rather than specific health problems by examining self-reported health-related quality of life (HRQOL). Survey data collected from college students have demonstrated long-term impacts on physical health from pre-college bullying experiences, with lower HRQOL likely mediated through depression ( 15 ). Furthermore, the study proposed that pre-college exposure to cyberbullying might have latent effects that could be triggered by future bullying-related traumatization, including reduced confidence in social situations as well as isolation ( 15 ).

In addition, there have been links between cyberbullying and increased risky behaviors, including substance abuse across a variety of substances. In a study of a population of Greek national undergraduates, both male and female late adolescents who were victims of bullying during middle and high school were less likely to use condoms during their college years when compared to non-victimized students ( 16 ). Furthermore, men who were bullies or victims of bullying were twice as likely to experience excessive drunkenness and three times as likely to pay for sex. In addition, for males, cyberbullies and cybervictims were more likely to report smoking ( 16 ). Compared with traditional bullying, cyberbullying may have a stronger link to substance abuse, with one longitudinal study demonstrating that cyberbullying victimization predicted depression and substance abuse six months later ( 17 ). In addition, both victims and perpetrators of cyberbullying have been linked with increased use of marijuana, with an implication that this may be indicative of a larger substance abuse problem among this population ( 18 ). This highlights the emergence of gender-specific risks and behaviors associated with cyberbullying that require further evaluation.

The relationship between cyberbullying and an adolescent’s use of the internet has also been explored. A study of 845 adolescents with a median age of 15 years demonstrated that cyberbullying victims were at increased risk for problematic internet use (PIU), which includes a preoccupation with the internet, an inability to control one’s internet use, and continued use despite negative consequences ( 19 ). However, it remains unclear whether the increased time spent on the internet is deleterious or protective, as victims may be using the internet as an escape mechanism to mitigate anxiety and reduce negative feelings of isolation. Nevertheless, increased time on the internet does place cyberbullying victims at risk for harassment, invasion of privacy and exploitation ( 19 ).

Presentation to Care

Unfortunately, despite the deleterious effects of cyberbullying on a victim’s mental and physical health, many victims remain silent and hesitate to reach out for help. The onus therefore falls on others – educators, providers, family members and social supports – to recognize common signs and symptoms of cyberbullying. Most often, individuals will notice that such victims begin to avoid school, a primary setting in which they face the effects of cyberbullying. In addition, a large majority of perpetrators may be members of the victim’s school community.

Accordingly, the victim may have increased school absenteeism due to somatic symptoms (frequent stomachaches, headaches, sleep disruption or nightmares) or academic difficulties due to lack of school attendance or problems with concentration. Victims may demonstrate lower self-esteem, increased depressive symptoms and anxiety, with detachment from friends or sudden withdrawal at home or school. Conversely, these affected youth may show sudden bursts of anger or demonstrate increased self-destructive behaviors, such as cutting, or acts of truancy ( 10 – 12 ). Ultimately, since a victim may not come forward to seek help, it is important that support groups bring the individual to care.

The ability to prevent or intervene in cyberbullying most effectively hinges upon screening to detect and identify victims, as well as perpetrators. There is difficulty in determining the best method to screen for bullying in the medical setting, whether this is in the emergency department or at a primary care visit. Though direct questioning may be effective, studies have posited that it may be more effective to use a questionnaire to elicit accurate responses from patients. The “Guidelines for Adolescent Preventive Services” form includes screening across a variety of health behaviors and experiences, including bullying ( 20 ). Couching inquiries about bullying in the setting of assessing adolescent behavior may serve to normalize questioning about bullying and in turn allow adolescents to open up to providers about their experiences. These screens can focus on questions such as ( 21 ):

- How often do you get bullied or bully others?

- How long have you been bullied or bullied others?

- Where are you bullied or bully others?

- How are you bullied or how do you bully others?

Screening for cyberbullying should be an important element of adolescent care. Furthermore, screening should not be limited to non-urgent scenarios. Studies have shown that adolescents report exposure to cyberbullying and violence in a variety of urgent medical situations as well, including emergency rooms, inpatient hospital stays and school-based clinics. This underscores the importance of screening for cyberbullying during any patient interaction.

Though victims may present to their pediatrician’s office for assistance, often these youth present to the emergency department. These encounters may be due to mental health needs, in the setting of suicidal ideation or attempts at self-harm, previously identified as significant symptomatology in cyberbullying victims. Studies demonstrate that over three quarters of cyberbullying victims will present to the emergency department with a mental health need as their chief complaint and that more than three quarters of adolescents presenting with suicidal ideation as their chief complaint have endorsed previous incidents of cyberbullying ( 22 ). Cyberbullying was also found to be the strongest predictor of suicidal ideation while controlling for other important factors, such as age, gender and psychiatric diagnosis ( 22 ). Therefore, it remains important that providers caring for adolescents and young adults presenting with suicidal ideation pointedly ask about bullying and cyberbullying in the patient’s life. In a Canadian population of adolescents, cyberbullying victims were more likely to attempt or complete suicide compared to those who had not been bullied ( 18 ). It is further postulated that cyberbullying victims may seek help less frequently or underreport incidents compared to those who have been traditionally bullied, which increases their risk of suicidal ideation ( 22 ).

Types of Interventions

Interventions designed to target and mitigate cyberbullying remain as important as efforts to support victims after the fact. These efforts should not focus solely on victims; they should also work with perpetrators. Programs need to reinforce positive values in school-age children to reduce the number of cyberbullying perpetrators.

Though these interventions may occur in a multitude of settings, many studies have primarily focused on school-based interventions. This seems appropriate given that a large proportion of cyberbullying incidents take place amongst school classmates. Social support has been shown to be an important buffer when adolescents experience cyberbullying ( 23 ). As previously suggested by the efficacy of school-based interventions, perceived social support from family and teachers has been shown to potentially ameliorate the association between cyberbullying and several outcomes at the psychosocial level. A study of 131 pupils with developmental disorders who had received social support from parents and teachers demonstrated reduced depressive symptoms one year after a cyberbullying experience ( 24 ).

A viable intervention program and cyberbullying prevention mechanism may rely on specific strategies such as improved access to resources, as well as efforts to strengthen the potential protective effects of social support figures in an adolescent’s life, including family members, friends and teachers ( 2 ). This study in particular suggested that male and female victims may differ as to which form of social support is most efficacious, implying that girls may benefit more from social support than their male counterparts ( 2 ). However, the efficacy of social support in preventing cyberbullying or supporting its victims is often contingent upon adolescents seeking help or divulging their victim status.

Some studies suggest that effective interventions focus on enhancing an adolescent’s empathy, promoting positive social relationships with family and decreasing screen time ( 13 ). In particular, given the lack of nonverbal cues inherent in cyberbullying, it is postulated that adolescents who perpetrate cyberbullying may demonstrate little empathy for their cyber victims. Furthermore, given that poor self-esteem has been shown to be a significant factor among victims and perpetrators alike, both educators and health care providers should focus on an adolescent’s emotional status, particularly in those who demonstrate not only a decline in self-esteem but also more troublesome behaviors such as truancy and substance use ( 18 ).

Another potential focus of intervention may hinge on coping strategies for adolescents ( 25 ). Coping strategies are divided into two types: emotion-focused and problem-focused. Two emotion-focused strategies that victims of cyberbullying can utilize are self-control and escape-avoidance. The self-control strategy involves inhibiting emotional expression and spontaneous behavior ( 26 ). The desire to regulate emotions brought on by a stressful situation usually arises when there is a belief that nothing can be done to change the unfavorable conditions ( 27 ). This may lead to greater avoidance and depression-based coping in a cyberbullying victim’s day-to-day activities, with increased depressive symptoms and health complaints.

Problem-focused strategies may be particularly helpful to cyberbullying victims, as they often cannot face (or identify) their aggressor or stand up to the bully ( 28 ). As a result, coping strategies that attempt to manage or solve the problem may be more beneficial to victims of cyberbullying, motivating them to implement changes both internally and environmentally. Although there is no one right way to cope, adolescents employing “more approach and problem solving” rather than avoidance strategies, and appraising a stressor as a challenge, were shown to have more adaptive outcomes ( 29 ). Such strategies teach the importance of standing up for oneself as well as methods not only to deal with cyberbullying but also to manage daily stress ( 30 ).

The Utrecht Coping List for Adolescents is a validated, long-standing tool used to help adolescents work through their current emotion-based coping mechanisms and transition to thinking in a more proactive, problem-based fashion. This underscores the importance of both social skills and assertiveness training, which encourage victims to adopt more active problem-based strategies, such as telling someone about their bullying or making new friends ( 31 ). These coping strategies, in conjunction with school, peer group and teacher-based efforts to prevent bullying, may bolster the prevention and resiliency efforts currently underway.

Prevention of cyberbullying should be a focus for healthcare providers. Anticipatory guidance remains a cornerstone of the well child and well adolescent visit and should include strategies, conveyed to both patients and their parents, on how to identify signs of cyberbullying. In addition, discussion of stigma and myths about cyberbullying should occur. This could include discussions about the use of technology in the home, as well as the best and safest social media practices for the adolescent. Furthermore, taking a history about the signs and symptoms of cyberbullying from caregivers independently of the adolescent may be helpful in determining the patient’s source of distress and in planning interventions appropriately.

A variety of screening tools have been developed ( Table 2 ) that offer the potential to identify victimization as well as an opportunity to respond and intervene ( 32 ). However, these tools address the larger umbrella phenomenon of bullying and are not specific to cyberbullying. Therefore, instruments that can adequately identify victims and aggressors of cyberbullying remain a large area of need (a minimal sketch of how such a tool inventory might be tracked follows Table 2).

Table 2. Current Bullying Assessment Tools ( 32 )

- The Bully Survey
- Gatehouse Bullying Scale
- Olweus Bullying Questionnaire
- The Peer Relations Assessment Questionnaires
- Peer Relationship Survey
- “My Life in School” Checklist
- The Personal Experiences Checklist
- California Bullying Victimization Scale
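
As a purely illustrative complement to Table 2, the sketch below (Python; the class and field names are hypothetical assumptions) shows how a clinic or research team might keep an inventory of these instruments together with a flag for whether each specifically targets cyberbullying, making the gap noted above explicit.

```python
# Hypothetical sketch of a simple instrument inventory based on Table 2.
# Class and field names are illustrative assumptions; consult each instrument's
# documentation before any clinical or research use.

from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class AssessmentTool:
    name: str
    cyberbullying_specific: bool  # True only if the tool targets cyberbullying itself


# All tools listed in Table 2 address the broader phenomenon of bullying.
TABLE_2_TOOLS: List[AssessmentTool] = [
    AssessmentTool("The Bully Survey", False),
    AssessmentTool("Gatehouse Bullying Scale", False),
    AssessmentTool("Olweus Bullying Questionnaire", False),
    AssessmentTool("The Peer Relations Assessment Questionnaires", False),
    AssessmentTool("Peer Relationship Survey", False),
    AssessmentTool('"My Life in School" Checklist', False),
    AssessmentTool("The Personal Experiences Checklist", False),
    AssessmentTool("California Bullying Victimization Scale", False),
]


def cyberbullying_specific_tools(tools: List[AssessmentTool]) -> List[str]:
    """Return the names of tools that specifically assess cyberbullying."""
    return [tool.name for tool in tools if tool.cyberbullying_specific]


if __name__ == "__main__":
    specific = cyberbullying_specific_tools(TABLE_2_TOOLS)
    print("Cyberbullying-specific tools:", specific if specific else "none identified")
```

Running the sketch reports that no cyberbullying-specific instrument is present in the inventory, mirroring the area of need described above.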

Many states have responded to the surge of cyberbullying with legislation focusing on prevention, intervention and consequences. In Massachusetts, as a response to the deaths of Phoebe Prince and others, legislation was enacted so that all school staff (including educators, nurses, custodians, athletic coaches, advisors to extracurricular activities, administrators, cafeteria workers, bus drivers, and paraprofessionals) must report bullying to the school administration ( 1 ). These individuals are also required to receive training on bullying prevention and intervention ( 1 ). That stated, effective interventions to prevent cyberbullying-related suicide or suicidal ideation have not yet been identified or vetted through research.

Currently, there are a variety of school-based interventions focused on adolescent suicide awareness, typically presented between the ages of 12 and 18. Preventative interventions focus on suicide awareness campaigns or screening as primary preventative measures, or on secondary approaches to provide support to those affected by suspected suicides. Some schools have implemented psychologic interventions for those who have already demonstrated attempts at self-harm, including cognitive behavioral therapy (CBT), dialectic behavioral therapy (DBT) and home-based family interventions ( 33 ). However, these services are not routinely available in school systems, and their efficacy in identifying cyberbullying victims and proactively preventing suicide attempts is not well understood. Ultimately, though there are school-based interventions in place for suicide awareness, only a few are evidence-based, and there is little to demonstrate the true efficacy of these interventions for preventing suicide and suicide attempts in the adolescent population. Therefore, the adolescent population serves as an untapped area of research into evidence-based interventions and policies, potentially extrapolated from other high-risk populations and proven efficacious efforts.

Much of the current literature focuses on an older adolescent population (i.e., high school and undergraduate). It may, therefore, behoove the community to understand the effects of cyberbullying in younger adolescents (less than 12 years of age) and how this may inform prevention efforts. This is a particularly important focus given the ubiquity of technology and internet access in a young child’s life, as the large majority of children regularly use the internet ( 17 ). Some studies have demonstrated similarly negative effects on the psychological well-being of younger adolescents secondary to cyberbullying victimization, poor self-esteem and decreased peer socialization ( 34 ). Identifying these negative effects at a younger age may allow more effective programs and coping strategies to be built earlier, ultimately fostering a population of adolescents with increased resiliency and the skills to face the stressors of life.

Ultimately, the prevention of cyberbullying rests not only on the shoulders of victims and their families, but also on educators, providers and researchers. More focused studies and evaluations of interventions may not only reduce the prevalence of cyberbullying but also lessen the mental health sequelae seen in the short and long term. The serious consequences of cyberbullying, particularly surrounding mental health issues and suicidal ideation, underscore the importance of effective, evidence-based bullying prevention programs and support groups in school-based settings. In addition, the multitude of factors associated with victimization in sexuality-related cyberbullying should also be factored into the development of prevention and intervention strategies.

Acknowledgments

Funding information : This paper was funded in part by NIH grant 5 R01 MH103402.

The authors wish to thank Dr. Karen Sadler for reviewing their manuscript.

Compliance with Ethics Guidelines

Conflict of Interest

The authors declare no conflict of interest.

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Publisher's Disclaimer: This Author Accepted Manuscript is a PDF file of an unedited peer-reviewed manuscript that has been accepted for publication but has not been copyedited or corrected. The official version of record that is published in the journal is kept up to date and may therefore differ from this version.

Contributor Information

Vidhya Lakshmi Kumar, MassGeneral Hospital for Children, Department of Pediatrics, Harvard Medical School.

Mark A. Goldstein, Division of Adolescent and Young Adult Medicine, MassGeneral Hospital for Children, Department of Pediatrics, Harvard Medical School, 175 Cambridge Street, Room 508, Boston, MA 02114.
