Review Blitz 2020: Post-Event Feedback


golden scars
Hey all! Hopefully you've taken a breather from the downright insane number of reviews completed over the past four weeks. If you've got a moment, I'd love to get some feedback on how things went in hindsight -- this was my first time running an event like this, and I'm sure there are a lot of changes that could make things go more smoothly next time! Please feel free to be as candid as you want; if you'd rather do things anonymously or in private, my DMs are also open!

In general, feel free to respond however you see fit; any feedback will be really helpful. I've also listed a few questions as starting points for discussion, but don't feel obligated to stick to just those areas! Answer as many or as few as you like.
  • Why did you participate/not participate in this event? What could have encouraged you to participate more?
  • How was the timing of this event? This includes topics such as the overall length of the event, the length/timing of the themes, and the time of year that this took place. Are there changes to the timing that would've made things easier for you?
  • How was the points distribution? This includes topics such as what the weekly themes were and the actual types/quantity of points breakdown (3 base / 4 theme / scaling wordcount). Do you feel like the points you received adequately reflected the effort you put in during the event? Would you change them to make them more representative? (Note: I will be doing a more complete points breakdown based on the event stats in the following post)
  • How was the event run? This includes topics such as how you logged reviews, how the leaderboard was updated/published, and how updates/new information was processed. Are there changes that would've made things easier for you?
  • other??


The points system for this event was a large topic of discussion before the event, so I do think it's valuable for us to revisit it once more here! If you'd like to run your own stats on this year's sample, it's available here, or I can make you an editable copy.

As a refresher, this year's point system was as follows:
  • Base Review (any review posted during this time): 3 points
  • Weekly Theme Adherence: +4 points
  • Review Size:
    • 1-100 words: +1 point
    • 101 - 500 words: + 2 points
    • 501 - 1500 words: + 3 points
    • 1501 or more words: + 4 points
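For concreteness, here's a rough sketch of how a single review scored under that system (the function name and example calls are just illustrative, not anything from the event spreadsheet):

```python
def blitz_points(word_count, theme_adherent):
    """Score one review under the 2020 system: 3 base points,
    +4 for weekly theme adherence, plus a size bonus by word count."""
    points = 3                       # base review
    if theme_adherent:
        points += 4                  # weekly theme bonus
    # review size bonus
    if word_count <= 100:
        points += 1
    elif word_count <= 500:
        points += 2
    elif word_count <= 1500:
        points += 3
    else:                            # 1501+ words
        points += 4
    return points

print(blitz_points(300, True))    # 3 + 4 + 2 = 9
print(blitz_points(1600, False))  # 3 + 0 + 4 = 7
```

Note how the size bonus can only swing a review by 3 points at most, while theme adherence alone is worth 4.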
And here are some rushed statistics:

The current review size points bonus ends up having almost no effect on the leaderboard. This is best quantified in two ways:
  • If the word count bonus were removed entirely from the system, the order of the leaderboard would remain completely unchanged, except for one user.
  • Of all the points earned during the event, 71% came from base points and weekly theme bonuses, while the remaining 29% came from review size.
Reviews fell into the following length categories:
  • 1-100 words: 0
  • 101 - 500 words: 71
  • 501 - 1500 words: 112
  • 1501+ words: 8
There were significantly more entries in the 501 - 600 word range (38 reviews) than in any other span of equivalent length, and significantly fewer in the 401 - 500 word range (6 reviews) than in nearby spans. This suggests that people were incentivized to pad out reviews that might otherwise have fallen below 500 words in order to get the additional point. However, that behavior had no impact on the leaderboard under the current system (in other words -- no group of users padded in the same way often enough to change the ranking order).

The 501-1500 word count range is largely over-stratified for what it represents -- there are a lot more reviews in the lower half of the range (501 - 1000; 94 reviews) than in the upper half of the range (1001 - 1500; 18 reviews). A more ideal distribution would have roughly equal numbers of reviews across each span.

Based on the data gathered in this event, the points cutoffs were not reflective of how review lengths were actually distributed. A better distribution would be closer to:
  • 1-350 words
  • 351 - 750 words
  • 751 - 1000 words
  • 1001 - 1250 words
  • 1251+ words
Words per Point
The total number of words written per point earned varies significantly across users. The user with the lowest words-per-point figure (the most efficient conversion from reviews to points) sits at 19.7 words/point, while the user with the highest (the least efficient) sits at 129.6 words/point. Individual values varied widely in both directions, with an average of 74.9 words/point across all users.

Additionally: while the distribution of words/point is close to the average among the top 10 scoring users (74.85 w/p), the average among the top 4 scoring users is significantly lower (54.7 w/p). This suggests that users with a lower word/point score are much more effective at maintaining higher positions on the leaderboard than users with a higher word/point score.
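For anyone who wants to recompute these, the per-user figure is just total words written divided by total points earned. A quick sketch (the usernames and totals below are placeholders, not the real event data):

```python
# Hypothetical per-user totals: (total words written, total points earned).
users = {
    "user_a": (5000, 60),
    "user_b": (12000, 110),
    "user_c": (2500, 95),
}

# words/point per user: lower = more efficient conversion of words to points
words_per_point = {name: words / points
                   for name, (words, points) in users.items()}
average = sum(words_per_point.values()) / len(words_per_point)

for name, wpp in sorted(words_per_point.items(), key=lambda kv: kv[1]):
    print(f"{name}: {wpp:.1f} words/point")
print(f"average: {average:.1f} words/point")
```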

As a lover of numbers, the points spread is the most interesting for me -- we had multiple precursor discussions about if word count tracks effort/quality, if higher word count is even a desirable or rewardable behavior, how much word count should be rewarded/incentivized, and so forth, but it was also fun to see how things played out in the flesh.

The data in the Relevance and Words per Point section suggest that the current system is influenced almost exclusively by the number of reviews written and the adherence to the weekly theme, with word count points having almost no impact on points whatsoever. Is that a good thing? Is that still something we want? What even is quality and how do we quantify it easily?

Personally, in hindsight I would rather not have a word count point system than have the one outlined above -- it's a lot of extra work for me to count up and there was pretty much no point (pun intended) to having it under the current score distribution.

What I would prefer, though, is to tweak the points values to make word count at least slightly more relevant, as below:
  • Base review / 1-350 words: 2 points
    • 351 - 750 words: 3 points
    • 751 - 1000 words: 4 points
    • 1001 - 1250 words: 5 points
    • 1251+ words: 6 points
  • Weekly theme: 4 points
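If it helps make the comparison concrete, here's a rough sketch of that proposal as a scoring function (the name is just illustrative):

```python
def proposed_points(word_count, theme_adherent):
    """Score one review under the proposed system: the base value
    scales with word count, and theme adherence adds a flat +4."""
    if word_count <= 350:
        base = 2
    elif word_count <= 750:
        base = 3
    elif word_count <= 1000:
        base = 4
    elif word_count <= 1250:
        base = 5
    else:                            # 1251+ words
        base = 6
    return base + (4 if theme_adherent else 0)

print(proposed_points(300, True))   # 2 + 4 = 6
print(proposed_points(1300, True))  # 6 + 4 = 10
```

Under this version the longest reviews earn triple the base of the shortest, instead of word count being a small bonus tacked onto a flat base.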
t h o u g h t s ?


Bidoof Fan
Alright, time to work out my thoughts.

First, I think I should briefly share my experience with the event before talking about the logistics in general.

About halfway through the second week, I believe, I switched from reviewing 3 chapters per review to one, because I didn't have the time or energy to keep reading and reviewing at that rate. At that point, most of my reviews dropped from about 500-600 words to 300-400. Despite that, I don't really think my score changed much. In fact, since I was able to spread out and do more reviews, I think my score was higher than it would have been if I had done 2-4 longer reviews per week.

As a result, I definitely think the bonus needs some tweaking. For what it's worth, I do think a wordcount bonus is a good idea, but it needs to be more balanced. One really long review should probably score just as well as two or three shorter ones. That said, let's look at the system you've suggested right now, if I understand it correctly, with a couple examples. As a note, I'm assuming in these examples that the weekly theme is adhered to.

Person A writes 1 review with >1250 words and scores 10 points.
Person B writes two 300-word reviews, scoring 6 points each for a total of 12 points.

Person A writes 2 reviews in the 1001-1250 range, scoring a total of 18 points
Person B writes 3 reviews in the 1-350 range, scoring 18 points.

Person A writes 3 reviews in the 751-1000 range, scoring 24 points
Person B writes 4 reviews in the 1-350 range, scoring 24 points

On the other hand, looking at example 1 again, but with theme adherence in mind...
Person A writes 1 theme-adherent review of >1250 words and scores 10 points.
Person B writes three 300-word reviews, with only one being theme-adherent, and scores 6, 2, and 2 points, totaling 10.
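For anyone who wants to double-check my arithmetic, here's a quick sketch of the proposed scoring as I understand it (the tier cutoffs and function name are just my reading of the post above):

```python
def score(word_count, theme_adherent):
    """Proposed system as I read it: word-count base + flat theme bonus."""
    tiers = [(350, 2), (750, 3), (1000, 4), (1250, 5)]
    base = next((pts for cutoff, pts in tiers if word_count <= cutoff), 6)
    return base + (4 if theme_adherent else 0)

# Example 1: one >1250-word review vs. two 300-word reviews (all on theme)
assert score(1300, True) == 10
assert 2 * score(300, True) == 12

# Last example: with only one of the three short reviews on theme,
# both people come out to 10 points
assert score(1300, True) == score(300, True) + 2 * score(300, False)
```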

I think the point I'm trying to get at is that theme adherence and number of reviews play just as much of a role in the total points, if not more. Which, to an extent, is probably how it should be. Looking at the final scores from this run, for the most part the people with the most reviews scored higher, and as is, it looks to me like that would carry over. If that's what we want, then I think this new proposal is fine. But if not, it'll probably need some more tweaking.

Anyways, I think that's all I have to say right now? If I think of anything else though, I'll bring it up.


Event Horizon
I thought this was an awesome event all around, and it ran really smoothly! Thanks a ton for organizing this and then going above and beyond with stuff like this big stats breakdown. I love the stats, give me them ALL.

To answer your questions:

- I participated in this event because I thought it looked like it would be a lot of fun, and it helped me do something I wanted to do anyway, which is write a lot of reviews! tbh I think I pretty much maxxed my participation in this event, there really isn't much you could have done to encourage me to review more. Though if you're offering, you could pay my salary for a week so I could take off work and have allll the time to write reviews. ;)

- Overall I think the timing was pretty spot-on. Having it over the holidays meant that some people had a break/time off work and therefore an easier time participating, although on the flip side some people had family stuff that may have made it harder than usual. But I think there does tend to be a bump in online time around the new year, so on balance I think it's a good time for this kind of event. I also thought that a week worked well for each theme, and a month was a solid amount of time overall... If anything you could probably cut it down to three weeks, I think people were flagging a little by the end. But not too badly; running it at a month again would be fine, too.

- Points... I definitely noticed that I was getting pretty much the same "length points" for all my reviews, and that most others were, too. To me this suggests that there isn't much of a point in having a length bonus, if it doesn't do much to discriminate between reviews. Obviously you can re-tier things so that length points do matter more, but I didn't feel like this event was missing anything for them not mattering much.

- Thought the whole event ran quite smoothly! No problems with the logging or anything, you were really the one having to deal with all of that, and I thought you handled it well.

- Otherwise, a couple thoughts on prizes... I like having some small thing you can get for writing even, say, 1-3 reviews, just so that people who come to the event late or fall behind don't feel discouraged, like there's no point in joining in or trying to catch up with the more prolific reviewers. Even if you can only do a little bit, you still get something tangible for your participation, and then if you make it into the top X you get whatever additional stuff. This is tough if you don't know your prize list going in, though, unless you're willing to offer stuff yourself that you know you can fulfill.

The other thing I might suggest is doing prize distribution privately, i.e. you PM/DM the next person in line with their options and they send back their choice, and then at the end let the prize providers know who claimed from them. Just removes the awkwardness/feel-bad of some people having their prize picked much later than others and feeling like people don't like their stuff as much or w/e. Someone determined could still figure out more or less where they probably ended up in the pick order if they felt like it, but it would be much less obvious.