Not your garden-variety troll: a trolling guide


Dealing with trolls, flames, and common bad behaviour within your community is generally pretty cut-and-dried.

If someone starts howling, you're apt to whip out the proverbial muzzle and lay down the law as needed. A fake or sock puppet account, like a spammer or a drive-by troll, is typically easily bannable and simple enough to deal with. They register, start causing havoc to some degree, they're reported/flagged by your community members or by your moderation staff, and you get rid of them. Quick and painless... most of the time.

The more insidious cases are when your users (or their content) are not being trolled at random by anonymous users just out for a quick laugh. These individuals come in a few different varieties and tend to be somewhat more of a pain in the arse to contend with.

It's particularly unpleasant on smaller social sites (i.e. when you don't have millions of users), where they can potentially gain more traction and visibility, and harass your members in a much shorter timespan. If you happen to look up trolling on Wikipedia, you'll see a reference to a definition that breaks trolls down into four distinct categories. These categorizations appear a bit dated, but it's safe to say that the behaviour itself hasn't changed all that much online in the last fifteen years or so.

That said, let's take a brief look at this extended encyclopaedia of trolls.

Not your garden-variety trolls...

The cast of characters

We covered several techniques for weeding out spammers in a previous post, so this time we'll take a look at coping with the other side of the spectrum.

Huge, large-scale social networks like Facebook, Google+ or LinkedIn are much less prone to troll-like behaviour for two reasons: they generally require actively connecting with other users first via a friend request of some sort, and they often insist on associating your real-life identity with your account(s).

On the flip side, Twitter's tolerance of near-anonymity (they require pseudonyms, but nothing else) makes it a much larger target for trolls and spammers. Other sites like 4chan and Slashdot (and to a lesser extent, Craigslist) are well known for allowing fully anonymous interactions. In turn, their reporting systems, moderation tools, and automatic spam management appear to be particularly refined, most likely out of sheer necessity.

All three positions on the identity dial work well (and are successful) in their own ways, so I'm not going to argue which is the better approach (suffice it to say that I lean towards the middle ground of pseudonymity.) However, in my experience, you'll see three broad strokes of bad behaviour on social networks and community sites, outside of the easy targets we've already covered. How much of each you encounter depends entirely on how you handle identity on your site, and on your site's intended purpose.

1. The persistent troll who just won't quit.

This is probably the most common of the three variants you'll see. Often (although not always), these are tech-savvy users who know how to anonymize their accounts and bypass your basic security restrictions (i.e. IP/email address bans, content filtering). Most often you'll have to be just as persistent as they are in order to handle them, and be diligent about monitoring your site — which means banning quickly and quietly.

These can often be the noisiest types of trolls. If they're motivated, they may create a new account every few hours, post vulgar or offensive comments or content, and disappear, only to re-surface elsewhere under a different handle, from a different IP, a few hours later. Eventually they'll get bored and leave, or become frustrated at your bans, but it's still not a lot of fun to deal with.

If you want to be a little less conspicuous, a commonly used moderation pattern in this situation is a special "troll flag" that limits these accounts without explicitly banning them. Depending on the software you use to operate your community, this may or may not be simple to implement. The limitation you choose is entirely arbitrary; in our case, we use two tiers of account-limiting: the first prevents interacting with other users directly, and the second prevents any kind of interaction at all other than with the user's own content.
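
For illustration, here's a minimal sketch of what a two-tier limiting flag along those lines might look like; the tier names and the `action_allowed` check are hypothetical, not how any particular forum package implements it.

```python
from enum import Enum


class TrollTier(Enum):
    """Hypothetical account-limiting tiers, loosely following the two levels above."""
    NONE = 0       # unrestricted account
    NO_DIRECT = 1  # tier 1: no direct interaction with other users
    SELF_ONLY = 2  # tier 2: may only act on their own content


def action_allowed(tier, actor_id, target_owner_id, direct):
    """Return True if the action (comment, rating, message, etc.) should go through.

    `direct` marks user-to-user actions such as private messages or profile
    comments, as opposed to actions on site content in general.
    """
    if tier is TrollTier.NONE:
        return True
    if tier is TrollTier.SELF_ONLY:
        return actor_id == target_owner_id
    # NO_DIRECT: only block direct user-to-user actions aimed at someone else.
    return not (direct and actor_id != target_owner_id)


# Example: a tier-1 user can still post to their own content,
# but cannot send a private message to another user.
assert action_allowed(TrollTier.NO_DIRECT, actor_id=1, target_owner_id=2, direct=True) is False
```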

The flip side to this technique is to make your flag hide the troll's content from other users, but not from their own account. This gives them the appearance of causing trouble, when in effect they're only talking to themselves. It's a little tougher to accomplish, but often the most effective route. However, your troll may get wise if they log out and notice that their content is hidden (assuming it's normally visible to unauthenticated visitors.) If that happens, you may need to alter your "troll flag" to hide the troll's posts from registered users only. That's somewhat annoying from an SEO and external-visibility standpoint, but you're protecting the users who actually participate in your community. Either way, a troll with multiple fake accounts will figure out the trick fairly quickly once they notice their content isn't visible from another account, so this approach is really a stop-gap best combined with other techniques.
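
Here's a rough sketch of the visibility check behind such a flag, assuming a `viewer_id` of None for logged-out visitors; the function and parameter names are invented for the example, including the switch for the registered-users-only variant.

```python
def post_visible_to(author_id, author_shadow_flagged, viewer_id,
                    hide_only_from_registered=False):
    """Decide whether a shadow-flagged author's post is shown to a given viewer.

    `viewer_id` is None for unauthenticated visitors. `hide_only_from_registered`
    models the second variant described above: content stays visible to
    logged-out visitors (and to search engines) but is hidden from other
    registered users.
    """
    if not author_shadow_flagged:
        return True
    if viewer_id == author_id:
        # The flagged user always sees their own content, so nothing looks amiss to them.
        return True
    if viewer_id is None:
        # Logged-out visitor: hidden under the default variant, visible under the second.
        return hide_only_from_registered
    # Any other registered user never sees the flagged content.
    return False
```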

Finally, you can just let them be. Like a puppy whining incessantly when left alone for the first time, they may eventually quiet down on their own. It's really up to you and your moderators to gauge which type of reaction makes more sense. Some will say the "laissez-faire" approach shows that your site's moderation is weak and vulnerable to being overpowered by trolls, which in turn creates a vicious cycle. It really depends. I've seen it handled in a variety of ways depending on the intensity of the trolling, but often just letting them flame out while the rest of your community shouts them down can be the most sensible tactic.

Each approach has its merits and its drawbacks, but particularly in custom-built applications, implementing one (or several) of them is definitely something you should consider. While this should be handled on a case-by-case basis, limiting the offending user's account gradually, so that they have less of a chance of noticing the changes, works for me the majority of the time.

2. The ballot-stuffing sock puppeteer.

A sock puppeteer creates a multitude of accounts strictly to game your community's systems anonymously, using them to interact positively with their primary account's content (rating, recommending, complimenting, commenting, up-voting, back-patting, and so on.) The same applies when it's done on behalf of another user. Basically, they're anonymous shills. More recently, this type of attack on social networks and reputation systems has become known as the "Sybil attack."

Also known as the ballot stuffer, this is the converse of the more traditional sock puppet, which is created primarily to cast the content of others in a negative light (down-voting, reporting/flagging, posting negative comments, giving negative feedback, etc.) The ballot stuffer can be difficult to track down without tools to give you an edge in detecting them. A malicious example would be a user automatically creating dozens or hundreds (or more) of fake accounts in order to skew the results of a contest or competition. The last several years of NHL All-Star Game voting are a perfect example of how this can be achieved and how complex prevention can be (Rory Fitzpatrick, anyone?).

A less malicious example would be a younger user on a community site creating a handful of fake accounts (not knowing any better) in order to prop up their own work, improve their reputation, and gain more credibility. This is quite common in communities that rank overall popularity or influence in one form or another. It's also somewhat harder to detect, particularly when dealing with only one or two "mule" accounts. On my own sites, this is by far the more prominent variety.

Avoiding it is as simple as not using systems on your site that blatantly encourage linear, cumulative behaviour (see the former incarnation of Digg.) Try not to have "Top X" listings; instead, work out an algorithm or ranking structure based on good or aggregate behaviour, encouraging these users to participate in your community the way you want them to by rewarding those who do it right.
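
As a loose illustration of ranking on aggregate behaviour rather than a raw cumulative tally, here's a toy scoring function that counts each distinct voter once, decays votes over time, and applies diminishing returns; the half-life and the log scaling are arbitrary choices for the example, not a recommendation for any specific community.

```python
import math
from datetime import datetime, timezone


def ranking_score(votes, now=None, half_life_days=7.0):
    """Toy ranking score: recent votes count for more (time decay) and extra
    votes show diminishing returns (log scaling), so the ranking is not a
    simple cumulative "Top X" tally.

    `votes` is an iterable of (voter_id, timestamp) pairs with timezone-aware
    timestamps; only the most recent vote per distinct voter is counted, which
    also blunts repeated votes from the same handful of mule accounts.
    """
    now = now or datetime.now(timezone.utc)
    latest = {}  # one vote per distinct voter
    for voter_id, ts in votes:
        if voter_id not in latest or ts > latest[voter_id]:
            latest[voter_id] = ts

    decayed = sum(
        0.5 ** ((now - ts).total_seconds() / 86400.0 / half_life_days)
        for ts in latest.values()
    )
    # log1p gives diminishing returns as the (decayed) vote count grows.
    return math.log1p(decayed)
```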

With the rise in popularity of recommendation engines and more elaborate content rating systems in social networks, this type of "attack" (and sometimes it is not nearly as malicious as it sounds) has become increasingly common. When your site houses interactions ranging from straightforward commenting to more atomic actions like user recommendations, favourites, ratings, or compliments, the simplest way to handle these situations is to monitor activity between accounts over time. On one of my sites, based mainly on trends within the site's community, we've devised a number of statistical breakdowns that build daily and hourly listings of bad behaviour. A few simple queries of your data are enough to detect overly negative or positive trends in rating and recommendation behaviour. Filtering these outliers often gives us a head start (if not an obvious target) in tracking down those who are attempting to game the system. They can then be dealt with as needed.
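
The queries themselves will depend on your schema, but a sketch of the general idea, flagging (rater, recipient) pairs whose recent activity is suspiciously one-sided, might look something like this; the thresholds and the data shape are assumptions for the example.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone


def suspicious_pairs(ratings, window_days=1, min_events=20, min_skew=0.9):
    """Flag (rater, recipient) pairs whose recent rating activity is unusually
    one-sided, as a starting point for manual review.

    `ratings` is an iterable of (rater_id, recipient_id, is_positive, timestamp)
    tuples; the thresholds are arbitrary and should be tuned to your community.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    totals, positives = Counter(), Counter()
    for rater, recipient, is_positive, ts in ratings:
        if ts < cutoff or rater == recipient:
            continue
        pair = (rater, recipient)
        totals[pair] += 1
        positives[pair] += 1 if is_positive else 0

    flagged = []
    for pair, count in totals.items():
        if count < min_events:
            continue
        skew = positives[pair] / count
        # Overly positive (back-patting) or overly negative (pile-ons) both stand out.
        if skew >= min_skew or skew <= 1 - min_skew:
            flagged.append((pair, count, skew))
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```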

We'll elaborate more on the tools we use in a future case study.

3. The long-time user who "goes rogue."

This is the most egregious of our three archetypes, but also the rarest. In the nearly four years we've run our largest community site, I can count on two hands the number of times we've had to deal with this situation. In these cases, the culprit is most often a user with a solid track record in your community; often, they've developed a reputation and an identity amongst your users (be it good or not.)

It's never fun to handle, and as a community manager or moderator it can be a major headache. And it happens more often than you might think.

A working example would be a user who feels they've been slighted in some way by (a) another user in your community, or (b) you or one of your moderation staff. Depending on their mindset, "going rogue" may involve a number of things. It could range from continually harassing one or more users, to something harder to detect, such as abusing your Terms of Use or badmouthing your moderators when they're not around. In more extreme cases, particularly when dealing with more tech-savvy users, it could escalate to something as nasty as a DoS or XSS attack. At that point things are getting pretty hairy, and it can be a sign that you or your team have mismanaged the situation.

These users can be the most time-consuming to deal with and sometimes devolve into a persistent troll (see #1.) Thankfully, they can also be the easiest to sort out.

My first approach in these cases is to engage in a dialogue with the user in question, outside of the site (or in a private area), usually via email. I ask them to air their grievances and let me know what they're looking for in terms of a resolution. More often than not, things can be resolved through discussion; the conflict is usually rooted in a misunderstanding, an unreasonable reaction, or another user's bad behaviour. Many aggravated users appreciate having a community manager contact them personally to resolve their issues directly. Just remember to keep an open mind and to treat every complaint fairly.

Use your noggin

In my experience, the above situations rarely get met with an insta-ban. They usually require a little more thought, potentially some internal discussion amongst your moderation team, and most often a different type of outcome. Unsurprisingly, as the host, manager, or developer of your community, there's a good chance you're going to take much of the above behaviour personally the first few times it happens. Hell, we still get peeved at some of it, even after running the same sites for years.

Sometimes, it's as simple as a brief talking-to, or a "time-out" of some sort. In our communities, we use one- or two-week bans (or temporary account deactivations) as a deterrent. This is most applicable when dealing with users who have established identities and an emotional investment in your community. In most cases, the user will repent and cut back on the abusive behaviour. It's really up to your moderation team (and the offence) to decide on the severity and type of punishment, if merited.
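
Mechanically, a time-out can be as simple as a suspension timestamp checked at login. Here's a minimal sketch, assuming a dict-like user record and leaving persistence to whatever storage your community software uses; the field names are invented for the example.

```python
from datetime import datetime, timedelta, timezone


def issue_timeout(user, weeks=1, reason=""):
    """Attach a temporary suspension (a 'time-out') to a user record."""
    user["suspended_until"] = datetime.now(timezone.utc) + timedelta(weeks=weeks)
    user["suspension_reason"] = reason
    return user


def is_suspended(user):
    """Check at login (or per request) whether the time-out is still in effect."""
    until = user.get("suspended_until")
    return until is not None and datetime.now(timezone.utc) < until
```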

Other times, it's easier to let your community handle the moderation themselves via peer moderation, and not get involved yourself. Many more elaborate commenting and discussion systems allow vote-down or "bury" mechanisms, as well as configurable viewing thresholds. These will allow your community to do much of the work before you have to get involved. Reddit, Metafilter, and Slashdot are canonical examples of this type of functionality online, yet even they are met with problems (granted, much has to do with volume and scale.) In terms of blog plugins, Disqus does a good job of handling comment moderation as well, as does Automattic's IntenseDebate plugin for WordPress. Modeling your community moderation around these types of systems is a good start. Afterwards, tailor the functionality to your community's needs as required.
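
If you're rolling your own, the core of a bury mechanism is just a score plus a per-viewer threshold, loosely in the spirit of those systems; a minimal sketch, with the field names and default threshold invented for the example.

```python
def comment_score(upvotes, downvotes):
    """Net score; real systems often weight votes by voter reputation, omitted here."""
    return upvotes - downvotes


def visible_comments(comments, viewer_threshold=-4):
    """Return only the comments at or above a viewer's chosen threshold.

    `comments` is an iterable of dicts with 'upvotes' and 'downvotes' keys.
    Letting each user pick their own `viewer_threshold` is what makes this peer
    moderation rather than deletion: buried content is still there for anyone
    who wants to dig it up.
    """
    return [c for c in comments
            if comment_score(c["upvotes"], c["downvotes"]) >= viewer_threshold]
```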

As mentioned earlier, it's also possible to build tools that automate detection of some of this behaviour, but much of it will come down to keeping your eyes open and listening to your users... although having a solid content-reporting system helps.

Do you have any alternate situations to add, or other moderation war stories to share from your community? We'd love to hear them.

Garden gnomes image via M C Morgan, sock puppet image via Guerilla Futures & rogue image via Frank Kovalchek on Flickr.
