Lex Neva’s Thoughts: the blog of Lex Neva in Second Life

December 27, 2006

Reputation in Second Life

Filed under: Reputation — lex @ 10:31 pm

In this article, I’m going to hash out my idea for a Reputation system in Second Life. This, I believe, is a feasible, workable solution to the griefer problem currently plaguing Second Life and many other online gathering places. This article’s going to be long, so bear with me. My underlying idea is fairly simple, but it’s going to take a fair amount of text to really get it across and nip potential detracting arguments in the bud.

It’s worth reading, though. If I’m right, you can use this article as a blueprint to design and build a Reputation system that could make SL a much better place to live. It might even make you rich. All I want to do is get this idea out of my head and into yours, so that I can have the benefit of its ultimate implementation (or find out why it’s doomed to failure). So sit back, grab a beverage, and dig in.

The Problem

Virtual worlds, especially Second Life, have a fundamental problem: people can be jerks. People can do a surprising number of things to cause problems, and while technical solutions exist (such as banning), there’s usually a fairly trivial way around them for anyone sufficiently motivated. The real problem is anonymity: the Internet lends you a fair amount of it, so you have less stake in your identity, which means there are far fewer reasons to be a good person.

There are plenty of technical solutions to the problem of jerks (aka griefers) in SL. Some of them are fairly effective against certain kinds of grief, and some are spectacularly ineffective and just make your tormentors laugh. Some nasty things people can do have no technical solution, such as using CopyBot-like tools to circumvent your IP rights. Some people are angry enough to continue to find ways around the technical solutions in order to do things like bomb the grid with grey goo.

Other, more subtle problems arise when we start talking about the Big Thing techies are touting in SL: Open Sourcing the platform. There are several meanings for “Open Source SL”, but the one I’m most interested in right now is the feature lots of people are clamoring for: user-run Simulators. In short, people are getting fed up with LL’s servers, and they want to run their simulator on their own computer hardware so that they’re in charge.

The problem with this is less about technical feasibility and more about what to do with the intellectual property that makes the SL world what it is. Simply put, if you bring your content to a simulator run on my server, I can copy anything you’re wearing and anything you’re rezzing, scripts included. This is much more severe than the kind of copying that is currently possible with CopyBot. As long as all servers remain in LL’s control, they’re the ones that control the data flow, and therefore they can enforce some degree of IP rights in the form of the permissions system. Allowing me to run my own server is tantamount to tossing the permissions system out the window, and, in theory, no amount of encryption can remedy this.

Ultimately, the problem is about consequences. The Internet allows us an unprecedented amount of freedom to do whatever we want, and the aforementioned anonymity makes it very difficult for others to enforce any kind of rules. In Second Life, we have the Abuse Report function, which everyone knows is currently almost completely non-operational (including Linden Lab themselves). We can ban people, but, with free accounts provided to anyone with no real-world information required, there’s nothing to stop a jerk from grabbing another account and hopping around your ban within 5 minutes. They can also, with a little work, get around being completely banished from the game by LL… and LL is hard pressed to fight this.

Many argue that the solution is simply to reinstate the credit card requirement, but remember: the advent of free registration has opened the doors of SL to many awesome people in countries outside of the US that simply couldn’t get in before. These Residents have already made huge contributions to the overall good of the SL community, but they do so quietly, while the tiny percentage of jerks cause their problems loudly. Close the doors, and you throw out the baby with the bathwater; worse yet, I would argue that you won’t actually make any effective progress in solving griefing at all.

In the wake of the CopyBot miniscandal (which was what got me thinking about this), people started to realize that there really wasn’t much they could do to enforce even real-world laws in Second Life. Abuse reporting is horrendously slow. You can file a DMCA request, but you have to prove ownership of your intellectual property, and you have to provide your real-world identity. The problem is that laws governing cybercrimes are largely untested, and the Internet moves incredibly quickly. Before you can say “preliminary injunction”, your prize product is in the hands of every dork on the grid at a rate of a $1L apiece. Suing takes a lot of time and resources, and there’s very little guarantee that you’ll actually get any benefit from it, assuming you can even find the real name of the person who’s violating your IP rights. In most cases, there’s not a damn thing you can do, and therefore there are no real consequences for bad behavior.

What I propose is a system that provides consequences. We need a system that can react swiftly and sufficiently to injustice, because the existing options available to the griefed are woefully inadequate in both areas. The system must be accurate, flexible, and widely used. In my opinion, it should be resident-run: LL is too busy trying to keep their grid running to deal with all of the jerks running around in it. Plus, there might even be a possibility for profit.
So, without further introduction, here it is: my proposal for a resident-run Reputation system to help combat grief in SL. I’ll start off describing it in the easiest way possible, with a series of examples of interaction with the system.

Case Study 1: Let Me In Your Club

I want to go to a Prestigious Club, but when I get to the door, I find out that they’re using Reputation Service™. They won’t let me in since I’m not registered, and therefore I have no reputation. I sign up, and I discover that half of my friends are already in the Reputation system, so I ask them to give me some positive votes, upping my trust score. It’s a little boost, but it’s not enough to get into the Club yet.

So I go about my business, being a fine, upstanding member of the community. I open source a useful script tool, or release a freebie, and my users provide positive feedback. I behave well on people’s land. My popularity grows. Soon, I can get into the Prestigious Club. And you know I probably won’t act like a twit while I’m there; I’ve spent a while earning this trust.

Case Study 2: Bad Neighbor

I go back home and find that someone moved into the plot next door. They saw my nice, home-made textures on my buildings, and they helped themselves with GLIntercept.

I’m angry, so I go and post a negative review on their Reputation page. I’m fairly trusted at this point, so my opinion is taken seriously, but how do we know that I’m not slandering someone? Well, the feedback goes on their profile for all to see, with my name attached to my comment. By itself, my one comment won’t make a big difference in my neighbor’s Reputation Score, but others can vote on what I’ve said. Some voters will have high trust values, and some low, and their opinions will be factored accordingly. The more my peers agree with me, the more we’ll impact my neighbor’s Reputation score: safety in numbers.
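As a rough sketch of how this weighting might work, here's a toy calculation in Python. The formula, function names, and numbers are entirely my own invention, not part of any real Reputation Service; the only thing taken from the description above is that votes are weighted by the voter's own trust, and agreement in numbers amplifies the feedback.

```python
# Hypothetical sketch: trust-weighted votes on a piece of feedback
# combine into an impact on the target's Reputation Score.
def feedback_impact(author_trust, votes):
    """votes: list of (agree: bool, voter_trust: float) pairs.

    The author's own trust seeds the impact; each voter adds or
    subtracts weight in proportion to their own trust score.
    """
    weight = author_trust
    for agree, voter_trust in votes:
        weight += voter_trust if agree else -voter_trust
    # Enough dissent can cancel the feedback's effect entirely.
    return max(weight, 0.0)

# One trusted comment alone moves the score only a little...
solo = feedback_impact(5.0, [])
# ...but peers agreeing multiply its effect: safety in numbers.
backed = feedback_impact(5.0, [(True, 3.0), (True, 8.0), (True, 1.5)])
assert backed > solo
```

A real service would surely use something less linear, but the shape is the point: my comment alone carries my trust, and every agreeing peer stacks theirs on top.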

Case Study 3: Slander

Here’s where it gets interesting. Say, for the sake of argument, that my neighbor is falsely accused. Seeing my negative feedback, she talks to her friends, posts on her blog, and explains what happened to convince people of her side of the story. They come vote on my feedback, indicating a lack of confidence in what I’ve said about my neighbor. Soon, the dissenting votes outweigh the assents, possibly to a huge degree if this becomes a big issue in the community. My neighbor’s trust is no longer affected by my feedback. Not only that, but now I find myself on the wrong side of a big crowd of people. My reputation score falls.

Conversely, if they did agree with me, my trust would increase. I held an opinion that many others agreed with: my neighbor is scum. My neighbor’s trust decreases, and mine increases, because I brought the issue to others’ attention, and a large group of people agreed with me. I get a boost corresponding to the trust levels of the people who agreed with me.

Case Study 4: Safety in Numbers

Following this out, each of the “yes” voters gets a boost from being in a group that shares their opinion. If you vote to agree with me that my neighbor is scum, and 100 people also vote to agree with me, then clearly you and they share something in common. You share an opinion, and therefore you trust each other in some way. There is an incentive to come weigh in on an issue and there may be those who are willing to stake their immense reputations on the issues most important to the community. A trust market forms.

On the other hand, if you vote “no”, and there are a large number of people who voted “yes” and only a small group that voted “no” with you, your Reputation decreases. The community as a whole doesn’t share your opinion, and so, on this issue, they trust you less. This means that there are both positive and negative potential consequences to your votes. When you weigh in on an issue, you’re staking your reputation on what you say.

People can always post an explanation of any vote they make. Maybe I disagree with the reasons you voted a particular way on an issue. Say you see a landslide decision on a popular issue that’s topping the Reputation Charts, and so you vote with the crowd just to sponge off their trust and ride the wave. Bandwagon votes like this should be worth less, and especially so if you don’t bother to provide an explanation for your vote. If you can’t make it clear that you’re not just there to sponge off other people’s trust, I can make a negative vote about your vote. Suddenly, your vote and my criticism of it hit your Reputation Profile, the whole process starts anew, and now you’re the one facing the court of public opinion.

What your reputation does

So how do we use this? For a fee, I can subscribe to the Reputation Service. I get a scripted device that manages my land ban list for me. When new people arrive, it checks their Reputation Score against a limit I’ve set. If a guest doesn’t meet my minimum cutoff, they are gently shown the door and told to come back when their Reputation Score increases.
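Here's a rough sketch of the policy such a device might implement. In-world this would be an LSL script calling the Reputation Service over HTTP; I've written it in Python with the score lookup, eject, and notify operations passed in as callbacks, all of which are stand-in assumptions.

```python
# Illustrative "reputation bouncer" for a parcel of land.
MIN_SCORE = 50  # landowner-configured cutoff (arbitrary example value)

def on_visitor_arrival(avatar_key, lookup_score, eject, notify):
    """Decide whether a newly arrived avatar may stay.

    lookup_score, eject, and notify are injected callables so the
    policy stays testable outside the grid. Each call to lookup_score
    would be one billable query to the Reputation Service.
    """
    score = lookup_score(avatar_key)
    if score < MIN_SCORE:
        notify(avatar_key, "Sorry! Come back when your Reputation "
                           "Score reaches %d." % MIN_SCORE)
        eject(avatar_key)
        return False
    return True
```

The interesting design point is that the landowner never maintains a ban list by hand; the list is implied by the cutoff, and it updates itself as reputations rise and fall.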

In short, I’m paying for the service of finding out who’s good and who’s bad. The Reputation Service’s own reputation depends on the accuracy of their judgements and the quality of advice they give.

Say the Service lets a griefer in. I’m angry, so I tell it that it let a griefer in and I’m not happy. Maybe it raises my minimum Reputation Score threshold on my land. I also lodge a negative vote on the griefer’s reputation, or vote on one if someone else has beaten me to it. Additionally, the reputation of the Service as a whole can go down… especially if I go tell Reputation Service B that Reputation Service A keeps letting a ton of griefers into my land. Competition forms.

Case Study 5: Privately Run Simulators

Reputation screening goes the other way, too. Let’s go back to the case of opening the Second Life Grid to allow any person to run a sim on their own server. If Fred’s running his own Simulator, he needs to uphold his reputation, or no one will visit. If he’s evil, he may get a chance to steal 3 people’s things, but very soon, the community will catch on, rumors will spread, and in the blink of an eye, Fred’s trust level will plummet. Suddenly, no one will visit his Simulator. Worse yet, he’ll find that there’s nothing he can really do with the small amount of content he’s managed to steal. People will think twice before buying from someone that nobody trusts.

I value my avatar, so I’ll install the Reputation plugin in my SL client (here’s where we might need LL’s help). Before I connect to a Simulator run by a third party, my client will stop and check the Reputation Service to find out the score of the person running the Simulator. If the owner’s Reputation Score is too low, I’ll strip down and wear a default “Ruth” avatar before entering… if I enter at all. I’m going to get rid of all my nifty content, and only bring in content that I don’t care about, because, for all I know, anything I bring in might be stolen. The same goes for Clubs and other areas of the Second Life Grid: I’m not going somewhere if the owner has a poor reputation due to failing to keep out griefers.
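The client-side decision might reduce to a small tiered policy like the one below. The thresholds and tier names are invented purely for illustration; nothing like this exists in the real SL client.

```python
# Hypothetical pre-connect check before teleporting to a
# third-party Simulator: how much content do I risk bringing?
def pre_connect_policy(owner_score, enter_threshold=30, trust_threshold=70):
    if owner_score < enter_threshold:
        return "refuse"   # don't connect at all
    if owner_score < trust_threshold:
        return "ruth"     # wear a default avatar, bring nothing valuable
    return "full"         # wear your usual content
```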

Pretty soon, the Reputation “Market” becomes important. Everyone watches Reputation Scores, and if you don’t, your reputation may suffer, and you’ll find you can’t really do anything in SL.

What About Newbies?

Then again, what about new people? We want people to be able to join the community and participate, but we also need to deal with the problem of Unverified Alt Accounts, that is, throw-away accounts created for the purpose of acquiring a “clean slate”. New people should be given a relatively low Reputation Score, although perhaps not rock-bottom. We can give them a little bit of rope here… after all, if they’re bad, their reputation score will very quickly plummet to reflect that.

It should be possible to jump-start your reputation, so that the barriers to entering the community are not too arduous. Currently in Second Life, one rudimentary way of increasing your reputation as a new player is to provide LL with some real-world billing information. You could, instead, provide some information to the Reputation Service. You can trust them with your info — after all, their entire business is trust. If confidence in the Reputation Service falters, then everyone will jump ship, and their business will fail.

Providing real-world information gives a baseline assurance that you won’t do anything grossly illegal. If you do, your information can be subpoenaed from the Reputation Service by a real-world court. So, you convince the Reputation Service that your information is legit, through whatever means they deem necessary, and they agree to say you did so, giving your reputation score a boost. The Reputation Service is making a statement about you, and so is staking its reputation on you, and therefore you’re borrowing from the Service’s reputation. It wouldn’t be unreasonable for the Service to charge a small fee for this service.

You can also ask people for sponsorship. If your friend is already in the system, they can give you a positive review to get you started, and gush about what a great friend you are, and how they’ve known you in RL since childhood. If you go bad, however, they will find themselves in the unfortunate position of having endorsed a jerk, which negatively affects their Reputation — so they will have incentive to think twice before endorsing you. On the other hand, if you become the next Torley, a widely-lauded cultural icon, then your friend’s Reputation goes up, because they said you were cool before your Reputation Score had increased so much. You get dividends on your investment.

Maybe you don’t have friends. That’s okay, there will always be places you can go to get started. You can build your reputation organically, through good behavior, or you can seek a few endorsements from a Professional.

A Professional Endorser speculates in reputations. For a commission, they will stake their carefully-earned reputation on judging your character. You pay them, and they spend time with you and get to know you. If they like you, they leave positive feedback. If they don’t, they leave negative feedback. The more you pay them, the stronger an opinion, positive or negative, they will be willing to stand by, which means more work for them in reaching that opinion. Remember, they’re putting their reputation on the line too. If a Professional endorses the next grid-bomber alt, people will see this and their Reputation may take a nosedive.

Case Study 6: Saying You’re Sorry

What if you make mistakes, but you want to do better? Maybe you can salvage your reputation by balancing out your negative feedback with positive feedback earned with good behavior, but maybe your reputation has tanked too low to save. This is reminiscent of a case I deal with a lot as an admin of a private island, in which someone feels that my decision to ban them wasn’t fair.

The solution is simple and fair: get an alt. Behave yourself. If you do, no one has to know that you were the last grid-bomber. So long as you behave yourself, everyone wins! You get to go where you want to and visit all the hot Prestigious Clubs, and the community gets the benefit of your good behavior. And if you don’t behave yourself in your new account? Well, pretty soon your shiny new clean slate will be marred by negative feedback, and your Reputation Score will drop accordingly.

Case Study 7: Age Verification

This system is all about making statements that are backed by your Reputation. Aside from your behavior, one thing people seem to really want to know about you in Second Life is your age. On the one hand, I personally believe that there’s less of a direct correspondence between age and behavior than most people do. There may be some amount of correlation, but correlation does not always imply causation. If your reputation’s high, and everyone loves you, who cares if you’re 12? This Reputation System makes it easy for age to matter a whole lot less, since it’s no longer necessary to rely on it as a coarse-grain filter for behavior.

On the other hand, it’s still often very important to know someone’s age, or more specifically, whether they’re over the Age of Majority. The law makes it necessary to ensure that a sexual partner is above the Age of Majority before engaging in certain acts with them. For companies in the Adult Industry, it’s critically important to know whether someone is 18 or over. While LL’s official policy creates a separate grid for teenagers and bans them from the Main Grid, their policy and ToS states directly that they cannot provide 100% assurance of their members’ ages. Even requiring a credit card is not enough; I had a MasterCard number attached to my checking account when I was 16.

The Reputation Service allows for several different ways that Age Verification could be implemented. The first would be through an Age Verification Service. These services already exist, and their purpose is two-fold: first, they allow Clients to register and provide certain age-identifying personal information to them, and second, they use this information to make assurances to their Customers that a Client, interested in doing business with the Customer, is over a certain age. One or both parties may be charged.

In the Reputation System, the Age Verification Service is staking their Reputation on the statements they make. Their Reputation Score directly affects whether or not Customers and Clients will use their service, so they have an incentive to provide correct information. They make whatever requirements they feel are necessary of a Client to convince themselves of the Client’s age, and they may provide different avenues of meeting their requirements. Perhaps the most trusted Age Verification Service would only accept Clients that were willing to make an in-person appearance or subject themselves to an extensive background check.

Another possibility doesn’t involve a separate service. A person would simply make a statement of their age in a public place, perhaps even on their Profile in the Reputation Service. Anyone can say they’re any age at all, of course, but remember, everything you say in the Reputation Service is backed by your own reputation. Perhaps you could give a trusted few friends enough personal information that they feel they can back your statement of age with their own reputations. If someone discovers you’re lying, you’ll find that your reputation tanks, and no one believes that you’re actually 25.

Potential Problems

So why will this work? Why don’t I think it will fail? Why not some other solution? Most griefing solutions I’ve seen proposed, especially in reaction to the CopyBot issue, are technical solutions. The trouble is that griefing is a human problem, and people are very creative when it comes to finding ways around computer-based rules. It’s very difficult to construct a technical solution that is simultaneously specific enough to deal with the problem and not easy for a jerk to circumvent.

The Reputation System, on the other hand, is a people-based solution. The reason it can be so flexible and resistant to being circumvented or gamed is that the underlying activity that makes it all work comes from humans. In a technical solution, someone can come up with a creative way of using the system that wasn’t considered by its designers, and thereby subvert the system to their own ends. In a people-based system, so long as the system itself is simple enough, people can catch on to what you’re doing and call you out on it.

Do you remember the old ratings system in Second Life? I started in the Fall of 2004, and the old ratings system was still in existence. You could rate anyone positively or negatively in up to three categories: Appearance, Behavior, or Building. You could choose to attach a note to your rating if you wanted, but only the recipient would see it. Each rating cost $1L. Every week, the people who had collected the most ratings got a big share of a “Stipend Bonus” pool added onto their Stipend. This could be in the range of several hundred Lindens, and often even dwarfed a person’s stipend itself.

Spending about five minutes in-world as a newbie made it clear how this system was gamed. Everywhere you went, people would pass ratings out like candy. “Good job, way to stand there and say nothing, here’s a positive rating,” they seemed to say. Many profiles would have tidbits like “will rateback!!!1”, and their total received ratings were in the thousands. It was a win-win game: rate someone, they rate you back, you both get a higher stipend. Ratings had absolutely nothing to do with whether or not you liked a person’s behavior. Worse yet, negative ratings were largely washed out in the flood of positive ratings you’d get just for walking down the street, and if you felt the need to negatively rate someone for actual bad behavior, they’d negatively rate you back in retaliation. Your profile would reflect the negative rating forever.

Eventually, LL raised the rating price to $25L per rating. Shortly after the price increase, they also abolished the ratings-based Stipend Bonus and removed the ability to rate negatively. Now ratings are given much more rarely, if at all, and they don’t mean much due to the lack of negative ratings. At the time, LL made some brief mention of a “more robust feedback system”. This article is an example of one.

In the Reputation System, there will definitely be incentive to constantly pat each other on the back. But remember, everything you do in the system is a statement that you’re backing with your own reputation. If you run around giving people ratings that are obviously frivolous, someone like me might see fit to point out that you’re not using the system as intended, in the form of negative feedback on your Reputation Profile. If I get enough people to agree with me, my reputation increases. The system is self-policing.

What about other ways of gaming the system? There are rumors around that there are certain blocs of griefers who band together in order to act like jerks with impunity. If you cross one of them, they might make a concerted effort to trash your Reputation Score. Remember, one feedback acting alone won’t have much of an impact on your score, but opinions gathered in force have a much greater effect, especially if those opinions are held by people with good Reputations.

So theoretically, if a griefer bloc wants to trash someone’s reputation, their first step will be to gather a fairly large group of people who have high Reputations. They’ll have to earn those Reputations… possibly by doing the technical equivalent of patting themselves on the back by giving each other constant good feedback, but there’s at least some room for presumption that they’re going to have to be on their best behavior for a while. This is not insignificant, because it means the community gets the benefit of their good behavior right off the bat.

Then, at some signal, they would launch a feedback campaign against their target. One might lodge negative feedback, and the rest would vote on it. Several feedbacks later, the target’s reputation is trashed. This sucks.

Fortunately, people can catch on to this kind of thing. Obviously, the target’s going to wake up the next day and realize that their Reputation is tanking, possibly the next time they try to go somewhere in SL, or, if they’re obsessive like me, when they grab their morning tea and hop over to their Reputation Profile right after checking their email. Either way, they’ll see what’s happened, and pretty soon they’ll be able to figure out that they’ve been Reputation-Griefed.

Their reaction is to tell all of their friends, post on their blog, and make a big loud stink so the community knows what’s going on. The community frowns on gaming the system to falsely ruin a reputation, and they’ll vote a lack of confidence on all of the victim’s negative feedback, quickly rescuing their reputation and diminishing the reputation of the griefer bloc in the process. They’ll also lodge negative feedback on the bloc, and since the public as a whole is a lot bigger than the bloc, there’s not much the bloc can do to save their reputations.

In short, this starts to sound a lot more like a “roll with the punches” defense than a “put up a really thick, rigid wall” defense. My feeling is that the former is the only kind of system that can work in this case. You can make a wall as thick as you want, but sooner or later someone’s going to find a way to crack it, and then it’s useless. A flexible wall, on the other hand, rolls with the attack, and then rebounds against the attacker with more force than they applied. You might get away with poor behavior for a little while, but pretty soon, you’ll find it blowing up in your face.

Even still, I know there would be complaints of the form, “___ said something really nasty about my mother on my Profile, and I demand that you take it down!” The answer to this is simple: nothing will ever be taken down from the reputation system. It’s necessary for there to be no intervention of this sort in order to maintain trust in the system. Your one and only recompense will be to lodge a complaint against the person in question, and attempt to sway the community to your side. For that and other reasons, I think that providing feedback about a person should be completely free of charge… but I’ll get into that in a minute.

One more thing: adoption. That’s the Achilles’ Heel of this system. It’s not going to work until it hits a certain critical mass of participation, and of course it’s not going to work if no one picks this up and implements it. As to the former, my hope is that, as the system is adopted, pretty soon you’ll find that you can’t help but be part of the system. For one thing, people can lodge feedback on you all they want, whether or not you have actually “Registered”. If people begin to slander you, or you feel you’re getting treated unfairly, your sole recourse will be to register (for free!) and start to counter the negative feedback. Furthermore, my hope is that people will begin to find that they can’t go anywhere or do anything without someone checking their Reputation Score. Fail to pay attention and participate in the system, and you’ll find you’re able to do less and less in Second Life.

Along with the question of adoption is the problem of complexity. If the system is too complex for SL Residents to understand, then they’re not going to use it. Plus, their first interaction with the system might be when they discover that someone’s said something bad about them, so they’ll already be in an anxious mindset. Simplicity is key. It’s critically important that someone can understand the basics of how they interact with the system within a very short amount of time. Fortunately, I think this can be achieved.

The system seems like it might be complex, and from a mathematical, implementation viewpoint (which I’ll discuss later), it’s non-trivial. However, the basic concepts that the user must understand are relatively few and simple:

  • you can leave feedback about anyone, affecting their reputation
  • if people vote to agree with your feedback
    • it counts for a whole lot more, and
    • your reputation increases
  • if people vote to disagree with your feedback,
    • it counts for a whole lot less, and
    • your reputation suffers
  • every feedback and vote you make is a statement backed by your own Reputation

It’s a little tricky, but the basic rules are simple enough, and the last can probably be removed. More importantly, it’s easy to explain these rules in several different ways. One way might be by example: illustrate a scenario involving feedback and positive and negative votes, and how they affect the reputation of the parties involved. Tips can also be provided as you use the system, such as a tidbit reminding you that the feedback you’re leaving will be a whole lot more effective if you get your friends to come vote on it for you. The underlying key is simplicity, because if it’s not simple, people won’t understand it and won’t use it, and without people using it, the Reputation System crumbles.
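As a toy model, the rules above might look something like this in code. The magnitudes, field names, and the 0.1 scaling factor are all assumptions; only the directions of the effects (agreement amplifies feedback and rewards the author, dissent neutralizes it and costs the author) come from the rules themselves.

```python
# Minimal toy model of the feedback/vote rules listed above.
class Person:
    def __init__(self, name, reputation=10.0):
        self.name = name
        self.reputation = reputation

def leave_feedback(author, target, negative, votes):
    """votes: list of (voter: Person, agrees: bool) pairs."""
    agree_w = sum(v.reputation for v, a in votes if a)
    dissent_w = sum(v.reputation for v, a in votes if not a)
    # Every feedback is a statement backed by the author's Reputation.
    weight = author.reputation + agree_w - dissent_w
    if weight > 0:
        # The feedback stands: it moves the target's score...
        target.reputation += -weight if negative else weight
        # ...and the author gains for voicing a shared opinion.
        author.reputation += agree_w * 0.1
    else:
        # The community rejected it: the author's own score suffers.
        author.reputation -= dissent_w * 0.1
```

Even a model this crude exhibits the key property: a lone accusation does little, a well-supported one does a lot, and a rejected one rebounds on its author.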


Who should create this system? Why? What’s in it for them? My proposal sounds like a good idea (I hope), and I’m pretty sure that most people would participate in it if it existed. People love to be able to bitch about someone who offends them in a public forum. We love the idea of real consequences for someone who’s maligned us.

Unfortunately, just because this is a good idea doesn’t mean it’ll get adopted or implemented. For things to get done in this world, sometimes it takes an incentive other than moral goodness. Fortunately, economics comes to the rescue: I think that someone with a good business sense could easily make this system turn a profit. In fact, I think there’s enough of a market that two or three or maybe more such systems could compete with each other.

My proposal for a business model (note, I don’t have much business sense) is simple: accept feedback for free, but charge to view a Reputation Score, and possibly to view existing issues on someone’s Reputation Profile. Simply charge me every time I want to view someone’s Reputation Profile on the website. Charge me once each time my land-based reputation manager looks up someone’s Reputation Score in order to decide whether they get into my land. Make it relatively cheap, and I promise that I’ll be constantly checking the profiles of those around me. In fact, give me a HUD, and I’ll click a button every time I meet someone to check up on their Reputation Score, which will help me decide whether I want to bother to continue to interact with them. If we can convince LL, give me a hook in the Buy and Pay windows that lets me check a vendor’s reputation before I do business with them. Heck, let me put a script in my vendors that decides whether or not to sell a product to someone based on their reputation. I don’t want to do business with a jerk, because they might try to steal my product, and they might harass me constantly for technical support when they should just read the manual.

The System should accept feedback for free, though. This is important for several reasons. First, people are providing the system with information; the Service’s real value lies in collecting that information into a huge database, mining it, and computing Trust and Reputation from it. In fact, it might even make sound business sense to PAY for feedback, but that’s probably unnecessary, because people love to gossip about other people. Another reason that keeping feedback free is important is that people need to be able to counteract false feedback. I should be able to be completely broke and still have my voice heard when I’m falsely accused.

The real key to the system is the algorithms used to turn that gigantic stream of feedback into Reputation Scores that provide useful information. I can take a few guesses at what form these calculations might take, but they need to be considered very carefully in order to produce an effective system. In fact, this is where companies can innovate and competitors can gain an advantage over one another. It’s important to make clear what kinds of computations are made, so that people know how to use the system. For example, it needs to be clear to me that if I vote “no” on something the entire community has said “yes” to, my Reputation Score will decrease. But how much will it decrease, and how will the decrease be computed? That can stay behind the scenes.
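As one guess at what those behind-the-scenes computations might look like, here’s a toy Python sketch in which agreeing with the eventual consensus earns a small bump and disagreeing costs more. The constants and the simple-majority rule are pure invention:

```python
# Illustrative sketch (all constants invented): after voting closes on a
# piece of feedback, each voter's Reputation Score moves toward or away
# from the consensus, with the exact amounts kept behind the scenes.

AGREE_BONUS = 0.02
DISAGREE_PENALTY = 0.05   # bucking the consensus costs more than agreeing earns

def settle_votes(scores, votes):
    """votes: avatar -> True ('yes') or False ('no') on one issue."""
    yes = sum(1 for v in votes.values() if v)
    consensus = yes > len(votes) / 2          # simple majority wins
    for avatar, vote in votes.items():
        delta = AGREE_BONUS if vote == consensus else -DISAGREE_PENALTY
        scores[avatar] = min(1.0, max(0.0, scores[avatar] + delta))
    return scores

scores = {"A": 0.5, "B": 0.5, "C": 0.5}
settle_votes(scores, {"A": True, "B": True, "C": False})
print(scores)  # A and B tick up to 0.52; C drops to 0.45
```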

One potential solution would be to actually produce a Reputation Economy. This has been proposed in several books, such as Charles Stross’ Accelerando (available free online). When you “stake your reputation” on a certain statement that you make about someone, make that an actual Investment of some portion of your Reputation Currency. If it turns out that the community agrees with you, you gain Dividends on your investment and your score goes up. Those who disagree find their investments losing value. Potentially, this could result in a fixed circulation of Reputation Currency in the system. That solves one hidden problem in what I’ve described above: inflation. If everyone behaves well and reputation scores continue to go up, pretty soon everyone will have a high score, and the system’s signal could become diluted. An Economy-based model could counteract that. However, it’s complicated, and we’d need a full-fledged professional economist to design the system. (Hint, hint.)
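Here’s a toy model of that fixed-circulation idea, with all numbers invented: losing stakes are redistributed to the winning side in proportion to each winner’s stake, so the total amount of Reputation Currency never changes:

```python
# Toy zero-sum "Reputation Economy" (numbers invented): stakes on the
# losing side of an issue become dividends for the winning side, paid out
# in proportion to each winner's stake. Total circulation is conserved.

def settle_stakes(balances, stakes):
    """stakes: avatar -> (position: bool, amount of Reputation Currency)."""
    yes_pool = sum(amt for pos, amt in stakes.values() if pos)
    no_pool = sum(amt for pos, amt in stakes.values() if not pos)
    winners_said_yes = yes_pool > no_pool       # side with more at stake wins
    win_pool = yes_pool if winners_said_yes else no_pool
    lose_pool = no_pool if winners_said_yes else yes_pool
    for avatar, (pos, amt) in stakes.items():
        if pos == winners_said_yes:
            balances[avatar] += lose_pool * (amt / win_pool)  # dividend
        else:
            balances[avatar] -= amt                           # stake lost
    return balances

balances = {"A": 100.0, "B": 100.0, "C": 100.0}
settle_stakes(balances, {"A": (True, 10), "B": (True, 30), "C": (False, 20)})
print(balances)                 # {'A': 105.0, 'B': 115.0, 'C': 80.0}
print(sum(balances.values()))   # 300.0 -- circulation unchanged
```

Whether “more currency staked” is the right definition of winning is exactly the kind of question that would need a real economist.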

If you like what you’ve seen so far and have a mind for computer algorithms, I’d love to hear what you think about how we could make this system work. As I said, I’ve got a few good ideas, and I might post more on this later, depending on the reaction I get.

Take this idea

So here it is. I hope that I’ve provided a mature, solid idea that can be implemented to produce a system that could make a serious step toward combating the griefing problem. I’d love to hear your feedback, so comment here and trackback from your own blog. If this won’t work, let me know why. Maybe we can come up with a solution together. If it will work, tell all of your friends and spread the word.

Better yet, take this idea and implement it. I would like nothing better than to see a system like this in action, and to have the benefit of its judgements on my own land so that I can stop having to put up with griefers. I would love to implement this myself, but I’m only one person, and I’ve got some pretty serious RL-based barriers to taking on a project as big as this. Plus, I’ve got no business sense, and I really don’t want to run a business. I just want to tinker, design algorithms, come up with creative solutions to problems, and, if you’d have me, maybe even help you write some of the code. I hate web programming, though.

So, I present this idea to the Community. You can take it, follow what I’ve said here, and quite possibly turn it into a moneymaker and become everyone’s hero. I gain in that I get to use such a system, and I’m pretty sure that I’d get a big boost in my reputation for being the mother of such a vital tool, assuming it works ;) I’m unable to make this happen on my own right now, and I feel that the need for a system like this is so great that the community just can’t wait until I’m able to implement it. Instead, I present it to the world, wide-open, so that the basic idea cannot be patented (just the specifics of your calculation algorithms, if you see fit). If you make this work and get rich, it might be nice if you made a donation to the Let Lex Eat fund, and maybe mentioned me in the credits… but let’s just see how this goes, shall we?

Lex Neva has been a resident of Second Life for over two years. She is a hacker, a tinkerer, and a brainstormer, and she loves nothing more than to spend hours trying to make LSL do things no one knew it could do before. Her accomplishments include the scripting in The Settlers of Second Life (a system so complex that it made Philip Linden say, “I think she might be insane…”), PipeMaker, Rez-Faux, and tons of other freebies, scripting techniques, and ideas. She lives in Suffugium, a cyberpunk sim created and owned by her and the rest of the Squidsoft Collective. Drop by, and you’ll see her brain-children flitting about all around you.


  1. One concern with the griefer bloc would be that getting a large chunk of the ‘cooperators’ to come to your aid may be difficult. Griefers are likely to be more motivated than the general community.

    We see a similar thing happen in RL elections – people who really really care about an issue will get the vote out, whereas the majority who don’t really like the extreme view but don’t care *that* much can take a long time to get around to realising what’s going on.

    I think there’s also a danger of people getting bad reviews for bad reasons, and not enough people with enough clout going around sorting out injustices. The voters really do need to be well informed about it, and engaged. The wikipedia system comes to mind.

    There are some touchy areas: we presumably wouldn’t want people getting neg rated based on religious beliefs, free speech (?) etc. However, if SL is going to be as widespread as we think it will be, then all the same fights over obscenity etc etc will happen here as well.

    It strikes me that multiple reputation systems may not actually be competitive. It would make some sense to have separate PG and adult reputation systems (and ditto for many other subsets of society).

    In terms of the theoretical basis behind the implementation, this reminds me of the way grading systems work for online board games. Might be worth looking into http://en.wikipedia.org/wiki/ELO_rating_system

    Comment by Seifert Surface — January 9, 2007 @ 1:26 am

  2. These are good points you bring up, and you may have highlighted a key weakness in my proposal.

    However, there’s one key thing that separates the Reputation System from an election as far as getting people to come and vote. In the real-world election scenario, there’s a relatively small (as you imply) incentive to come and vote against an issue that a vocal minority is pushing: the reward of preventing that issue from passing. In my system, not only do you get the reward of bringing consequences to someone who you think is being a jerk, but you ALSO get a dividend to your reputation for participating, assuming others agree with you.

    This also applies to the “bad reviews for bad reasons” issue. If you believe that someone gave a negative (or positive) review for a bad reason, you have two incentives for stating your opinion on the review: first, you diminish the effectiveness of the review, and second, you could quite possibly see your own reputation go up as a result. As people become more familiar with the system and realize how important it is to have a high reputation score, my hope is that a high emphasis will be placed on wanting to raise your own reputation score, and that will make you weigh in on more issues, which will keep the system going.

    Comment by lex — January 9, 2007 @ 2:27 pm

  3. This is a great idea: make people accountable and they might behave!

    I like the basic approach of having a social solution to a social problem, and actually I think most of the adoption and utilization of a reputation system would work.

    My only worry is similar to Seifert’s. The system *can* be gamed, just like an election, because there is literally a game system in place. It’s the game of who has the most friends who agree with him, also known as Politics or, my favorite middle school variant, Cliques vs. Losers. Also, unless I misunderstood the proposal, it’s a self-reinforcing feedback loop – so it’s easy for the winners to become very strong and the losers to get ground down. The more unpopular I am, the less my opinion counts.

    This could be even worse than the passionate people having the upper hand, as you have in a get-out-the-vote scenario. Because the reputation system is based on popularity *and* alignment of opinion, someone’s reputation could get trashed simply because they have unpopular opinions on their profile, or because they engage in behavior that is legit and protected by the terms of service but considered undesirable by some group. As an extreme example, think of furries. Most people don’t think about them, some folks hate them, some of them hate non-furries. What if folks (furry and not) started rating people based on their amount of fur? Someone who spends part time as a furry might be “outed” in their reputation profile. Ultimately the majority would win, and furry folk would not be welcome in SL because their other activities were limited by the communal trashing of their reputation. By the system they would be treated like a griefer bloc on a large scale.

    Heck, any subject two folks could disagree on could eventually be subject to the global popularity test – ultimately resulting in a population that thinks, acts and looks exactly the same about everything!


    Well, why not? It’s actually very similar to the historical reasons minority groups have been persecuted. If you wear funny clothes I can tell you don’t belong in my night club, shouldn’t shop at my grocery store, etc. Only now we’re not just judging on outward signs of conformity and acceptance – every aspect of a person’s behavior and thoughts is publicly available for scrutiny and subject to a popularity test which might be grounds for segregation.

    Actually, thinking about it, a more likely outcome to the furry scenario is different cliques adopting different reputation systems – less dystopian than a grid full of clones but just as sucky. The diversity in the grid only hangs out with their clones because those are the ones using the reputation system where their clique wins.

    Simply put, my concerns are the way that reputations build on reputations (popularity contest), and the way that opinions or any attribute can become filters for reputations (segregation).

    But since I don’t like to poke holes without being constructive, here’s a few random ideas which might help. ^_^

    I wonder if there is not a more ‘natural’ way of getting an aggregate reputation on somebody than voting and networks of endorsement? I know these are very popular techniques these days and are very useful for problems like rating website value, product quality or even product review trustworthiness. But maybe reputation requires something different. Maybe it should be based on how much time I spend interacting with someone. Time I spend with you works as an implicit increase in your reputation. Maybe there are other actions that I can take, or, more importantly, that groups of people can take together, that act as endorsements. There is probably a lot to explore in this direction, but I’m not sure it could be done in such a way that a malicious person couldn’t run their own clients and servers to spoof good activity…
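    One possible reading of the time-spent idea, sketched in Python with invented numbers: co-presence minutes accrue as implicit endorsements, capped per pair so a parked group of alts can’t farm reputation forever:

```python
# Sketch (all constants invented): time two avatars spend near each other
# accrues as implicit endorsement, capped per pair to blunt the obvious
# "park a group of alts together" exploit mentioned above.

PAIR_CAP_MIN = 60          # at most one hour of credit per pair, ever
CREDIT_PER_MIN = 0.001     # reputation gained per shared minute

def record_copresence(pair_minutes, reputation, a, b, minutes):
    """Log `minutes` of a and b being near each other; return minutes credited."""
    pair = tuple(sorted((a, b)))
    already = pair_minutes.get(pair, 0)
    credited = min(minutes, PAIR_CAP_MIN - already)  # respect the pair cap
    if credited > 0:
        pair_minutes[pair] = already + credited
        for person in pair:
            reputation[person] = reputation.get(person, 0.0) + credited * CREDIT_PER_MIN
    return credited

pair_minutes, reputation = {}, {}
record_copresence(pair_minutes, reputation, "Ana", "Bo", 45)   # credits 45 min
record_copresence(pair_minutes, reputation, "Ana", "Bo", 45)   # only 15 more
print(reputation["Ana"])  # about 0.06 -- capped at 60 shared minutes
```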

    As another alternative entirely, you could take the scheme as proposed and simplify it further, with the goals of eliminating the popularity and segregation effects. For example, what happens if everybody rates for whatever reason, and we completely drop the accusation/justification side of things? You can give one rating to any given individual for whatever reason you want. You can change it whenever you want. Everything is completely anonymous. This more accurately reflects how my opinion of you works anyhow – my opinion is valid and reflective of something no matter how unpopular I am. This makes the ratings less useful for reducing griefing, though, and I imagine we start re-treading ground of other reputation systems. How to create an economy of reputation? Well, maybe you can’t; maybe reputation doesn’t follow economic laws because it’s not a finite resource…

    My third idea is prompted by the failings of the second idea. Namely, absolute reputation is essentially a public absolute referendum on any individual. That sucks! Sure, it stops griefers, but it also stops any kind of “deviant” behavior. So let’s take the system and, instead of making it absolute, let’s make it relative and make the effects work only along existing relationships. In other words, let’s get out of the reputation game and instead get into the trust game. Say I have an infinite amount of trust I’m willing to share, but I don’t share the same amount with everybody. If I trust Lex completely and I trust Torley somewhat, and Lex trusts me somewhat and Torley trusts me very little, then it could make sense that Lex should trust Torley very little and Torley should trust Lex not at all. Lex’s trust of Torley in this case is based on the relationship which passes trust from one person to another – naturally decaying with distance and modified by the level of trust relating each person in the chain. Note that Lex’s automatic trust of Torley is very little because she only has one relationship/endorsement through me, and that this would be true even if Torley was insanely popular and highly trusted by every person who knows Torley. Now, I’m not sure that this will also yield a useful way to stop griefers, but I think it points towards a system that is not popularity-based but based on a network of relationships. In the furry example I started with, such a system *could*, *maybe*, lead to cliques where only furries can shop in certain districts and only non-furries in others, but I think because it lacks a feedback loop it’s much less likely to escalate that way.
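    This chain-based decay can be sketched in a few lines of Python. The names and weights below just restate the example above, and the “best product over any chain” rule is only one possible way to combine trust along a path:

```python
# Rough sketch of chain-based trust (edge weights invented): my trust in a
# stranger is the best product of direct-trust weights along any chain of
# acquaintances, so it decays naturally with social distance.

def chain_trust(trust, source, target, seen=None):
    """trust: direct trust edges, person -> {person: weight in [0, 1]}."""
    if seen is None:
        seen = {source}
    best = trust.get(source, {}).get(target, 0.0)   # direct trust, if any
    for friend, w in trust.get(source, {}).items():
        if friend not in seen:                       # avoid cycles
            best = max(best, w * chain_trust(trust, friend, target, seen | {friend}))
    return best

# Lex trusts me completely (1.0); I trust Torley somewhat (0.5):
trust = {"Lex": {"aestival": 1.0}, "aestival": {"Torley": 0.5}}
print(chain_trust(trust, "Lex", "Torley"))  # 0.5 -- decays with distance
print(chain_trust(trust, "Torley", "Lex"))  # 0.0 -- trust is not symmetric
```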

    I dunno – what do you think?

    Comment by aestival — February 28, 2007 @ 7:35 pm

  4. There are a few technical issues with the literal suggestion. For example, storage requirements to keep track of “invested” opinions will be O(n^2) at the least, and even LL doesn’t keep O(n^2) data around except for friends lists and the “My Notes” section of profiles. I have the feeling that both are very sparsely populated, but this reputation system would not be. Millions of residents make that a bit of a problem.

    Comment by Yoyo Bean — June 19, 2007 @ 4:22 am

  5. The biggest problem I see with open sourcing will be the ease of stealing content. Griefers are tolerable.

    Unfortunately, this system isn’t really going to help with that. You may choose to keep someone out of a non-public sim, but it just takes someone else to buy a copy of your product and take it there themselves. Unless you plan to restrict sales to only the trusted elite with high reps (which would destroy any business), there’s nothing that can be done.

    I don’t see how any system, technical or social, can prevent that.

    Comment by WarKirby Magojiro — August 30, 2007 @ 7:10 pm

  6. Aestival — Sorry it’s taken me so long to reply here.

    First of all, you raise a good point about the problem of creating a nation of conformists. The last thing I want is to subtly push everyone to act like everyone else. In some way, I guess, that IS what I want; I want everyone to act NICELY. But you’re right… when reputation can be about ANYTHING, like I’ve described, then that does have the danger of leading to conformism.

    On to your suggestions.

    1. I like this idea. It’s definitely true that you’re likely to spend more time in the presence of people you like than people you don’t. Even if you don’t know everyone who’s around you, if someone’s acting out, it’s likely that they’ll drive a percentage of people in their vicinity to leave. That’s the kind of emergent statistic that will show up when a lot of data is aggregated.

    On the downside: this specific metric would be easy to game, by having a group of people make their avatars all sit around in a group, harvesting reputation. It also could suck for people like me… I tend to hang around in Suffugium a lot, often in my little hidden home, and yet people still like me.

    2. If you go back to just one vote per person, then you might well end up with the system we had before, where people just voted each other up sight unseen at parties.

    3. I definitely like this idea of making reputation relative. When I read what you said, I envisioned some kind of “6 degrees of Kevin Bacon” type system, where it tries to find the shortest chain of people to get from you to the person in question. Then it could tell you whether it thinks you’d like this person, based on how the people you like feel about the people that like them. It’s possible that this could be fairly hairy to implement, computation-wise… but then again, Orkut, the social networking service, seemed to be able to tell you a chain of friendships that would lead to any person you cared to find on the system.
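    That shortest-chain lookup is essentially a breadth-first search over the friendship graph. A small Python sketch with made-up names (the real computational pain would be the scale of the data, not the algorithm):

```python
from collections import deque

# "Six degrees of Kevin Bacon" over a friendship graph: breadth-first
# search finds the shortest chain of acquaintances between two people.

def shortest_chain(friends, start, goal):
    """friends: person -> set of friends. Returns the chain, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for f in friends.get(path[-1], set()) - visited:
            visited.add(f)
            queue.append(path + [f])
    return None   # no chain connects them

friends = {"Lex": {"Ana"}, "Ana": {"Lex", "Bo"}, "Bo": {"Ana", "Cyn"}}
print(shortest_chain(friends, "Lex", "Cyn"))  # ['Lex', 'Ana', 'Bo', 'Cyn']
```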

    Comment by lex — August 31, 2007 @ 11:19 am

  7. Yoyo, you make a good point, and I did realize that as I was writing this. But keep in mind, the data would not NECESSARILY be O(N^2). You have to take human behavior into account. Every person is not going to meet every single other person, and they’re definitely not going to vote on them! Lots of social networking websites have data that would appear to be O(N^2), and they don’t run out of space. Maybe this would be the same.

    Comment by lex — August 31, 2007 @ 11:20 am

  8. That’s a good point, Warkirby. What I described in the essay works fine for my PERSONAL scripts and objects, but it breaks down immediately when I want to sell stuff. Since I make a fair amount of money in-world every month from sales, I’m sensitive to what you say, and I realize that we’re pretty much stuck.

    The only solution to this that I can see would be for LL to vet people running their own sims before allowing them to connect to the main grid… and that strikes me as kinda nasty. The only reason I can sell things in SL is because LL has a complete and artificial enforcement of IP rights.

    If we ever go to the kind of grid that will resemble the WWW, i.e., a truly heterogeneous grid run by disparate service providers, IP rights will probably go out the window, and every in-world asset will effectively become open source.


    Comment by lex — August 31, 2007 @ 11:25 am

  9. […] I have no way of knowing if I can trust the person releasing it. If we had a system like the Reputation System I proposed previously, then it would be much easier for me to make a quick and yet confident […]

    Pingback by Lex Neva’s thoughts » Linden Lab Open Sources SL Client — May 10, 2008 @ 9:36 pm

  10. Recent cases have emerged of a “Sim Mafia” in The Sims Online virtual world that accepts payment of in-world money to gang up on a Sim by bombarding it with negative ratings. Basically it is collusion to bad-mouth someone, and I think your model cannot handle this situation, which could be a fairly easy attack to mount.

    Comment by Carl — January 15, 2009 @ 5:20 am

  11. Your system basically assumes people are obsessed with checking reputations and want an orderly world. You just don’t get it that what really draws the vast majority of people to virtual worlds is their relative anarchy. People want to do here what they can’t get away with in the real world, because we are all animals inside and always subconsciously want to eat up the smaller guys. The day SL is really stupid enough to implement any reputation system to govern the virtual world like “Brave New World” is the day it will lose its 15 million registered users and fall into obscurity.

    Comment by Tim — January 15, 2009 @ 5:49 am
