Social news sites are becoming hotbeds of shilling. Shilling is the act of pretending to like something in order to convince someone else to take an action. (There's a fun discussion on the talk page of the Wikipedia article named "Shill" about exactly what activity should be classed as shilling, and what activity is just lying. I tend to take an inclusive view on the subject: if you're lying to try to make someone think something is better, you're shilling.)
Niall Kennedy has a lovely blog entry about how he was able to track down the back-story on a couple of shilling attacks on the news aggregation site digg. His article contributes two particularly fun parts of the story:
1) He carefully explains the financial motivations behind shilling this sort of health-related content. The price per click is surprisingly high.
2) He observes that after an initial burst of shilling activity, most of the rest of the ratings may well have been "real". That is, the shills may have worked hard to get their entry on the digg front page, after which it stuck because of genuine interest from the community. This is something Tony Lam and I speculated about in his WWW Shilling paper, because of a result that Dan Cosley and crew had demonstrated earlier: if you tell someone they will like a recommended item, they will tend to score that item higher than they otherwise would have. (Cosley used an elegant technique: he asked people to rerate items they had previously rated, while sneakily showing them a system "prediction" on the side. When he manipulated the prediction up (or down), many users systematically rerated higher (or lower) than their original ratings.)
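Cosley's anchoring effect can be sketched as a toy simulation. Everything here is an illustrative assumption, not the paper's actual data or model: each user rerates by pulling their original rating part of the way toward the "prediction" they were shown.

```python
import random

random.seed(0)

def rerate(original, shown_prediction, anchor_weight=0.3):
    """Toy model: the new rating drifts toward the displayed prediction."""
    new = original + anchor_weight * (shown_prediction - original)
    new += random.gauss(0, 0.2)       # per-user noise
    return min(5.0, max(1.0, new))    # clamp to a 1-5 star scale

originals = [random.uniform(2.0, 4.5) for _ in range(1000)]

# Show each user a "prediction" manipulated one star up (or down).
up = [rerate(r, r + 1.0) for r in originals]
down = [rerate(r, r - 1.0) for r in originals]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean shift when prediction is +1 star: {mean(up) - mean(originals):+.2f}")
print(f"mean shift when prediction is -1 star: {mean(down) - mean(originals):+.2f}")
```

With these made-up parameters the rerated means move up when the displayed prediction is inflated and down when it is deflated, which is the qualitative pattern Cosley observed.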
There are a bunch of fun issues with this last point:
1) Is it really shilling if it just helps people find something they really like? How should we classify activities that boost an item above a visibility threshold, after which it takes off organically? My tendency is to dislike the activity if it involves lying, even if only initially. But is this consistent with a social value function that most wants to find people things they will like?
2) There is a need for a more serious look at how to deal with new items in social recommender environments. There's a great paper by Avery, Resnick, and Zeckhauser that looks at the problem of building economically consistent systems that encourage users of a recommender system to spend time with new items. (I'm talking about the 1999 American Economic Review paper; while you're there, check out the recent papers on eBay: Paul and his colleagues have been making the case that lying of various types is an important component of the eBay ecosystem. Even under my broad definition many of these behaviors do not qualify as shilling, but they're still interesting examples of the Dark Side of social network behavior.) The core problem is that new items are riskier than old items, so it's better for each individual member of the community to wait for someone else to sample the new items first.
3) Lots of other ideas; what do you think?
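The free-rider problem in point 2 above can be put in a back-of-the-envelope expected-value model. All the numbers below are illustrative assumptions, not anything from the Avery, Resnick, and Zeckhauser paper: sampling a new item yourself costs time whether or not it's good, while waiting lets you read only the items someone else has already vouched for.

```python
# Toy model of "who samples the new item first?" in a recommender community.
p_good = 0.2    # chance a brand-new item is actually worth reading (assumed)
benefit = 1.0   # value of reading a good item (assumed)
cost = 0.5      # time cost of evaluating any item yourself (assumed)
n_users = 100   # community size (assumed)

# Sampling it yourself: you pay the cost whether or not it turns out good.
ev_sample_yourself = p_good * benefit - cost

# Waiting: someone else's rating tells you whether it's good, so you only
# spend time on items already known to be good.
ev_wait = p_good * (benefit - cost)

# The community's view: one person pays to sample, everyone else benefits.
ev_community = ev_sample_yourself + (n_users - 1) * ev_wait

print(f"sample it yourself:                    {ev_sample_yourself:+.2f}")
print(f"wait for someone else:                 {ev_wait:+.2f}")
print(f"community value of one early sampler:  {ev_community:+.2f}")
```

Under these assumptions every individual prefers to wait (waiting beats sampling), yet the community as a whole gains if somebody samples early, which is exactly the tension that economically consistent incentive schemes try to resolve.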