I will try to organize my beliefs about playtesting in chronological order over a series of posts. That should make it more useful as a reference when you run your own playtests. If you do use it, just remember that you are taking advice from someone who has never been published. Kids, don’t try this at home.
I’m not saying that your testers will always be right. Remember that their evaluations will always be from their own limited perspective. You would be surprised at how often I can predict someone’s response based on whether they won or lost and how much they lost by. Those evaluations should be part of a formula for assessing the problem.
Design flaws have two elements: magnitude and probability of occurrence. The greater a problem’s magnitude, the less probability is required before action becomes imperative. If a flaw would completely and utterly break a game, it has to be dealt with even if it happens only one time in a thousand. On the other hand, if a problem is going to show up at least once a game, it doesn’t need a large impact to deserve attention.
Use individual testers to measure the magnitude. Use your own observations over the course of many tests to determine the probability. There are also times when a complaint is not really a complaint. The question is this: if the complained-about event had been the deciding factor in the game, would the beneficiary feel as if he or she had gained a hollow victory, or would the victim of the flaw feel cheated? Those are the red flags to look for in complaints: hollow victory and being cheated. If those words ever come up, use a highlighter on your notes and then tattoo a reminder on your forearm.
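The magnitude-versus-probability tradeoff above can be sketched as a simple triage rule. Everything here (the 0–10 magnitude scale, the threshold value, the function name) is an illustrative assumption, not anything prescribed by this post:

```python
def needs_fix(magnitude, probability, threshold=0.5):
    """Decide whether a design flaw demands action.

    magnitude: tester-rated severity on an assumed 0-10 scale
    probability: observed frequency per game, from many playtests
    threshold: arbitrary expected-impact cutoff (an assumption)
    """
    # A flaw that utterly breaks the game gets fixed no matter how
    # rarely it occurs -- even one time in a thousand.
    if magnitude >= 10:
        return True
    # Otherwise, act when expected impact per game crosses the cutoff,
    # so frequent problems need far less magnitude to qualify.
    return magnitude * probability >= threshold

print(needs_fix(10, 0.001))  # game-breaking but rare -> True
print(needs_fix(2, 0.9))     # mild but near-certain every game -> True
print(needs_fix(2, 0.05))    # mild and rare -> False
```

Note the special case at the top: a plain expected-value product would let a catastrophic one-in-a-thousand flaw slip under the threshold, which is exactly what the post warns against.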
You may be tempted to think that problems will occur so infrequently that the magnitude is unimportant. But think of it this way: if you were playing a 45-minute to hour-long demo of a game at a convention and that problem occurred, wasting the time you invested in playing that game, would you still buy it just because the guy at the booth said, “I swear, that almost never happens”? There are far too many games waiting a few feet away in the next booth, games that people have heard of on Board Game Geek or from their friends and have been told are good. Why should they take a chance on a new one from a completely unknown designer when the one time they tried it, the game fell apart?
This is a tough thing to be sure about. I believe you should err on the side of caution. If the game feels too bland or fiddly later, you can always change it back.
Something else to keep in mind during playtesting: know when to shut the hell up. As I’m sure many people will tell you, I have a fondness for . . . debate. Though I try my best not to, I sometimes become so sure that my design is correct that during feedback sessions I substitute explaining why I’m right for listening to learn whether I am wrong. I have heard countless stories from friends who have playtested various CCGs and board games about their exasperation with trying to get a designer to admit that there is a problem requiring a fix. This is especially true of the biggest problems, the ones in dire need of action.

Part 1 - Create a Playtest Team
Part 2 - Organize a Playtest Session
Part 3 - Introduce the Game
Part 4 - Play the Game
Part 5 - Get Feedback
Part 6 - Make Changes