
CIOs: Techniques for Handling Social Media Negatives

Part 6 of our series, What CIOs Need to Know About Social Media.

A post on Mashable from a year and a half ago is still relevant to enterprise CIOs grappling with the impact of social media on the enterprise. In the post, Lon S. Cohen lists seven things CIOs should be considering. We’re taking a closer look at each of the items in Cohen’s framework. In this post, we continue our look at Cohen’s third item.

  • Web 2.0 Content and Presentation Standards
  • Review and Approval Processes
  • Managing Corporate Reputation
  • Versions and Update Controls
  • Impact On Operating Environment
  • Establishing Project Priority
  • Compliance

Dealing with Trolls

Trolls can wreck your community. And pretty much every community eventually has its trolls. Trolls exhibit negative, hostile, antisocial, and deliberately provocative behavior. They may have an axe to grind, or they may just be people who thrive on discord, on getting a rise out of people, and who may not really value the community. We say may not because there are some trolls who just can’t help themselves. They may actually be the most committed members of your community. They just have the type of personality that produces antisocial behavior.

Offline, the troll might be the person in your book club who never shuts up. Or the busybody who, while often productive, needs to poke her nose into everything. Or the guy who always offers off-the-wall solutions during meetings and insists on bringing them up repeatedly, long after the decision has been made.

Online, trolls are empowered. If there are no policies and procedures in place to check them, they can dominate every conversation and sidetrack every productive dialog.

Types of Trolls

The Communities Online site[1] categorizes trolls into four types, which we adapt below, adding our own fifth category:

  • Mischievous
    Mischievous trolls have a humorous intent. Often, they might be a regular community member playing a good-natured prank. They are not abusive and rarely create trouble. Generally there is no harm in responding to them. Some members may find mischievous trolls annoying, particularly if their presence leads to lengthy threads that distract the community from its true intent. Other members find that the troll’s humor and light-hearted antics provide the community with an opportunity to laugh together.
  • Mindless/Attention Seeking
    Mindless trolls tend to post lengthy stories of questionable veracity, or to comment on every post with off-topic or provocative statements. Mindless trolls are generally harmless, although their activities can rise to the level of extreme annoyance. On rare occasions, the fictitious posts of a mindless troll may lead to insightful debate and discussion. There is generally no harm in responding, but it is often best simply to ignore them; if a response is necessary, let the community provide it.
  • Malicious
    A malicious troll is blatantly abusive to the group and/or specific individuals within it. One telltale characteristic: within a very short time of gaining access, they begin targeting and harassing members. In some cases, the troll has a prior history with the group or someone within it. In others, the troll is simply looking for fresh meat. As a community manager, respond to such trolls carefully. Generally, community members will step up and enforce community norms themselves.
  • Destructive
    Around 1999, destructive trolls began to appear in mailing lists and online communities. The primary purpose of this type of troll is to completely destroy the group it has infiltrated. Destructive trolls may work alone, or in teams or gangs. As a community manager, you may need to confront this type of troll directly, and eventually may need to ban them. Be sure to enlist the support of the community before taking any enforcement action. If the troll does actual damage to the community’s forums or software, ban them immediately, provided your published community policies support doing so.
  • Trollbots
    Sometimes a troll is not a person at all, but an automated program called a trollbot. Generally these bots are not interactive; they just post canned text as comments on other posts. A recent example was the Ron Paul trollbot from the 2008 US presidential campaign. Such bots are an annoyance, but if you run an open community — one that doesn’t require registration and approval — you will be visited by trollbots. Enlist the community in identifying their posts, and feel free to delete them.
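
Because trollbots mostly repost the same canned text, even a simple duplicate-detection pass over recent comments can surface them for review. The sketch below is a minimal illustration, not production moderation tooling; the comment tuple structure, function name, and threshold are our own assumptions.

```python
from collections import defaultdict

def flag_canned_comments(comments, threshold=3):
    """Flag comment texts that appear nearly verbatim across several posts,
    a telltale sign of a trollbot. `comments` is a list of
    (author, post_id, text) tuples -- a structure assumed for this sketch."""
    seen = defaultdict(set)  # normalized text -> set of post_ids it appears on
    for author, post_id, text in comments:
        normalized = " ".join(text.lower().split())  # collapse case and whitespace
        seen[normalized].add(post_id)
    # Any canned text pasted onto `threshold` or more posts gets flagged
    return [text for text, posts in seen.items() if len(posts) >= threshold]

comments = [
    ("bot1", 1, "Vote for our candidate!"),
    ("bot1", 2, "Vote for  our candidate!"),
    ("bot2", 3, "vote for our candidate!"),
    ("alice", 4, "Great post, thanks."),
]
print(flag_canned_comments(comments))  # -> ['vote for our candidate!']
```

A human moderator still makes the final call; the script only narrows the haystack.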

Our next post will go into more depth about General Approaches to Trolls.

For soup-to-nuts, strategy-to-execution processes, procedures, and how-to advice, see our book, Be a Person: the Social Media Operating Manual for Enterprises. The book (itself part of a series for different audiences) is available in paper form at http://bit.ly/OrderBeAPerson; save $5 using coupon code 62YTRFCV.


[1] Communities Online: Trolling and Harassment: bit.ly/cuCoEG

CIOs: Dealing with Negatives on Social Media

Part 5 of our series, What CIOs Need to Know About Social Media.

A post on Mashable from a year and a half ago is still relevant to enterprise CIOs grappling with the impact of social media on the enterprise. In the post, Lon S. Cohen lists seven things CIOs should be considering. We’re taking a closer look at each of the items in Cohen’s framework. In this post, we continue our look at Cohen’s third item.

  • Web 2.0 Content and Presentation Standards
  • Review and Approval Processes
  • Managing Corporate Reputation
  • Versions and Update Controls
  • Impact On Operating Environment
  • Establishing Project Priority
  • Compliance

Dealing with Social Media Negatives

One big question that comes up almost immediately when enterprises start to use social computing is: What do you do about negative comments?

As we said in the previous post, when dealing with this question it’s helpful to recognize that if you act in the world, you probably have detractors. The great thing about social media is that, for the first time, you can find and address negativity in real time.

The old techniques of responding – libel laws or lawsuits, pressuring media outlets, and using traditional media to confront and refute naysayers – not only don’t work online, but can generate even more negativity.

A recent example of the traditional approach, and one that has made it into our Social Media Hall of Shame, involved international food giant Nestlé. Like a lot of large food companies, Nestlé is a target of various groups who disagree with its business and agricultural methods. Some of these groups had taken to posting defaced versions of the Nestlé logo on the company’s Facebook fan page as a critique and protest of its policies.

In a series of posts widely seen as an attempt to silence or intimidate these critics, Nestlé posted, “We welcome your comments, but please don’t post using an altered version of any of our logos as your profile pic — they will be deleted.”[1]

This post breaks a cardinal rule about running online communities that we discuss in the Community section of our book, Be a Person: the Social Operating Manual for Enterprises (being slowly syndicated via this blog): Govern your community with a light hand. Your community members expect to be involved in major community decisions, and they certainly do not expect to be arbitrarily censored.

The company also threatened action for trademark infringement if critics didn’t comply. Incredibly, Nestlé also posted sarcastic replies to negative posts.

Nestlé’s old-media attempt to stem negativity was, unfortunately, all too predictable, as was the result. Rather than doing anything to respond to, placate, dissuade, or even just acknowledge the dissenters, Nestlé whipped up a storm of protest that eventually made the mainstream media news — blowing up a relatively unpublicized group of protesters into media darlings.

Here’s a typical post from a Facebook follower after Nestlé’s blunder:

[W]ould like to personally thank Nestlé for providing a place for all the people who see their unethical, disgusting and lethal practices for what they are to share their opinions. Finally we have a way to share how much we hate their practices. If you don’t boycott Nestlé already, start now, please.

One poster stated she’s not a fan and wanted to have a “Register My Disgust” button on the Facebook fan page. Another was a bit more reasonable:

I like some Nestle products so I qualify as a ‘fan.’ I would like Nestle to make them even better by removing palm oil. I would like to enjoy my Kit-Kats without feeling responsible for rainforest destruction and orangutan deaths.

And this wasn’t Nestlé’s only social media blunder. When Greenpeace posted a critical video on YouTube, the company lobbied to have it removed based on use of its logo, generating lots of free publicity for Greenpeace.

The poor besieged person in charge of the Nestlé Facebook page did try to do some damage control, posting:

This [deleting logos] was one in a series of mistakes for which I would like to apologize. And for being rude. We’ve stopped deleting posts, and I have stopped being rude.

This was a good move. It does three things: It acknowledges the mistakes; it pledges to stop deleting the logos; and it humanizes the company by taking personal responsibility for the action. Our first rule for using social media is to Be a Person, not an organization.

So what went wrong here? Well, obviously, Nestlé has the right to protect its logos and trademarks. But was it really the best approach to sarcastically criticize and threaten the dissenters? What the company failed to realize is that social computing gives the same power to individuals as it gives to big enterprises. You need to keep that in mind whenever you make a decision to deal with negativity about your business.

By the way, you may be interested in the end of the story. After a two-month campaign led by Greenpeace against Nestlé for its use of palm oil, the company gave in and announced in May 2010 that it would rid its supply chain of any sources involved in the destruction of rainforests.[2] There’s no telling what role the bungled responses on YouTube and Facebook played in this resolution, but they sure didn’t help.

Our next post will go into more depth about Techniques for Handling Negatives.



[1] Bnet: Nestle’s Facebook Page: How a Company Can Really Screw Up Social Media: bit.ly/asqGGB

[2] Mongabay: bit.ly/aZLjio

CIOs: Social Media Review and Approval Processes

A post on Mashable from a year and a half ago is still relevant to enterprise CIOs grappling with the impact of social media on the enterprise. In the post, Lon S. Cohen lists seven things CIOs should be considering. We’re taking a closer look at each of the items in Cohen’s framework. In this post, we look at Cohen’s second item.

  • Web 2.0 Content and Presentation Standards
  • Review and Approval Processes
  • Managing Corporate Reputation
  • Versions and Update Controls
  • Impact On Operating Environment
  • Establishing Project Priority
  • Compliance

Social Media Processes

As part of the social media strategy, CIOs should consider what policies should govern the enterprise’s social computing use. The first policies that might occur to you are those that control who speaks and what they say.

Yes, social media usage policies that control the who and the what are important. But policies, practices, and procedures laying out how to speak may be even more important. Don’t assume that because your employees are social-media-savvy, they know best how to be evangelists for your enterprise. The following, excerpted from the Community Building Checklist chapter of our book, Be a Person: the Social Operating Manual for Enterprises (being slowly syndicated via this blog), can help you think through your social media processes.

  • Establish, in writing, best practices and procedures
  • Ensure staff is on message
  • Empower staff to be proactive and participative
  • Position community as means to engage, not a distraction
  • Create Rules of Engagement
    • What to do with negative content
    • What to do with negative members (more later)
    • What to do with staff who blab
    • Study how the US Air Force deals with various types of community members, in the next figure

      Figure 85 — Air Force Web Posting Assessment Flowchart[1]

  • Decide whether to hold employees and other community members personally responsible for content they publish
  • Decide how staff should identify themselves in posts
  • Decide if staff members who post elsewhere should add a disclaimer to their posts: “The postings on this site are my own and don’t necessarily represent [Organization’s] positions, strategies or opinions.”
  • Encourage all members to respect copyright, fair use and financial disclosure laws and set penalties for non-compliance
  • Confidentiality: Decide whether to prohibit citing or referencing clients, partners or suppliers without their approval
  • Create a linkback policy for material reposted from other sources
  • Create a prohibited language policy restricting hate speech, ethnic slurs, personal insults, and obscenity
  • If you are regulated, ensure all employees understand what can and cannot be said online
    • Understand the legal ramifications of creating a public record or a public meeting by discussing topics online
    • User-Generated Content (UGC) may need to comply with policy, copyright, trademark
    • May need to treat information as part of records subject to retention policies
  • Be careful out there: Some laws may restrict your ability to censor employees online:
    • Political Opinions
      • Many states (such as California) prohibit employers from regulating their employees’ political activities
    • Unionizing
      • In many states, talking or writing about unionizing is strongly protected; union contracts may permit blogging; states may protect “concerted” speech — protecting two or more people who discuss workplace conditions
    • Whistleblowing
      • Many may believe reporting regulatory violations or illegal activities online is protected, but whistleblowers must report problems to the appropriate regulatory or law enforcement bodies first
    • Reporting on Your Work for the Government
      • Government workers writing online about their work is protected speech under the First Amendment except for classified or confidential information
    • Legal Off-Duty Activities
      • Some states may protect an employee’s legal off-duty blogging, especially if the employer has no policy or an unreasonably restrictive policy with regard to off-duty speech activities
    • Reporting Outside Social Media Site Memberships
      • Some organizations require employees to report other places where they contribute online
  • Set Guidelines for At-Work Social Media Use
    • Most enterprises believe at-work use of social media saps productivity, but some studies find just the opposite.
  • Review other organizations’ published social media policies for ideas for your own[2]
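
Several of the checklist items above — prohibited language in particular — can be backed by a first-pass automated screen that routes suspect posts to a human moderator. The sketch below is a minimal, illustrative wordlist filter under our own assumptions: the term list and function name are placeholders, and real moderation needs far more nuance than whole-word matching.

```python
import re

# Placeholder term list -- a real policy would maintain this list centrally
PROHIBITED_TERMS = ["slurword", "insultword"]

def screen_post(text):
    """Return the policy terms found in `text`, for human review.
    Whole-word, case-insensitive matching only; this is a triage aid,
    not an enforcement mechanism."""
    hits = []
    for term in PROHIBITED_TERMS:
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            hits.append(term)
    return hits

print(screen_post("That is an Insultword, frankly."))  # -> ['insultword']
print(screen_post("A perfectly civil comment."))       # -> []
```

Anything flagged should go into the review-and-approval queue rather than being deleted automatically — consistent with governing the community with a light hand.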

Our next post will take a look at Use Social Media to Manage Corporate Reputation.



[1] Air Force Web Posting Assessment Flowchart v.2 (PDF): bit.ly/dvdtGS

[2] SocialMedia.biz’s list of social media usage policies: bit.ly/cyou3a