Electronic Thesis and Dissertation Repository




Doctor of Philosophy


Library & Information Science


Rubin, Victoria L.


The term “internet trolling” has come to encompass a wide range of disparate behaviours, ranging from abusive speech and computer hacking to sarcastic humour and friendly teasing. While some of these behaviours are clearly antisocial and, in extreme cases, criminal, others are harmless and can even be prosocial. Previous studies have shown that self-identified internet trollers tend to attribute internet trolling’s poor reputation to misunderstanding and overreaction from people unfamiliar with internet culture and humour, whereas critics of trolling have argued that the term has been used to downplay and gloss over problematic transgressive behaviour. As the internet has come to dominate much of our everyday lives as a place of work, play, learning, and connection with other people, it is imperative that harmful trolling behaviours be identified and managed in nuanced ways that do not unnecessarily suppress harmless activities.

This thesis disambiguates some of the competing and contrary ideas about internet trolling by comparing perceptions of trolling drawn from two sources in two studies. Study 1 was a content analysis of 240 articles sampled from 11 years of English-language news coverage mentioning internet trolling, conducted to establish a “mainstream” perspective. Study 2 was a series of in-depth, semi-structured interviews with 20 participants who self-identified as avid internet users familiar with internet trolling as part of their everyday internet use. Study 1 found that 97% of the news articles portrayed internet trolling in a negative light, with harassment and online hostility the most commonly reported topics. By contrast, Study 2 found that 30% of the 20 participants held mostly positive views of trolling, 25% held mostly negative views, and 45% were ambivalent.

Analysis of these two studies reveals four characteristics of internet trolling interactions that can serve as a framework for evaluating the potential risk of harm: 1) targetedness, 2) embodiedness, 3) ability to disengage, and 4) troller intent. This thesis argues that debate over the definition of “trolling” is not useful for the purposes of addressing online harm. Instead, the proposed framework can be used to identify harmful online behaviours, regardless of what they are called.

Summary for Lay Audience

“Internet trolling” is a term that has been used to describe a wide variety of online activities, ranging from abusive speech and computer hacking to sarcastic jokes and friendly teasing. Some of these activities are simply amusing and harmless, while others are clearly harmful or even criminal. Internet users and online communities should be protected from the harmful consequences of trolling, but ambiguity over the definition of “trolling” makes effective regulation difficult. This thesis argues that debates over what does or does not count as trolling are not useful for the purposes of addressing online harm. Instead, efforts should focus on ways to distinguish harmful online interactions from harmless ones under the trolling umbrella.

This thesis looks at the ways in which internet trolling was described in mainstream news reporting from 2004 to 2014 in comparison with the ways in which trolling is understood by people who are familiar with trolling as part of their everyday internet use. These data were used to determine the different types of online interactions that have been called “trolling,” the different situations and people involved in trolling, and whether or not trolling was considered to be a problem. Distinct differences of opinion were found between the “outsider” perspectives from the news and the “insider” perspectives of the internet users, but most importantly, common elements of problematic trolling behaviours could be identified in both perspectives.

Through an analysis of these data, a framework for evaluating the potential risk of harm in online interactions was proposed based on four characteristics: 1) whether the interaction is targeted, 2) whether there is a tangible or physical component, 3) whether a potential victim can easily disengage from the interaction, and 4) whether the interaction was intended to be harmful. Using this framework, policy makers and regulators of internet spaces may be able to more accurately target problematic online behaviours while avoiding over-policing of innocuous ones.

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 License.