Review Blitz 2021: Post-Event Feedback


Rescue Team Member
Pokemon Paradise
  1. chikorita-saltriv
  2. bench-gen
  3. charmander
  4. snivy
  5. treecko
  6. tropius
  7. arctozolt
  8. wartortle
  9. zorua
I meant moreso the 300 points per chapter. If you could get 300 points on a single chapter, something is horribly wrong.

Chibi Pika

Stay positive
somewhere in spacetime
  1. pikachu-chibi
  2. lugia
  3. palkia
  4. lucario-shiny
  5. incineroar-starr
The advantages of that are that it's incredibly easy for Kint, doable with only the stats she already logs, and it makes each word count. A 400-word review of one chapter doesn't 'waste' 200 words, so there's no incentive to pad out an extra 100 words just to hit 2 points, since those words still count towards total words written. It's also easy to log: we already report our chapters and total words, which is all that would be needed.
Oh, I like this a lot. I know I already gave the post a quag, but here is another quag: :quag:


Flygon connoisseur
  1. flygon
  2. swampert
  3. ho-oh
  4. crobat
  5. orbeetle
  6. joltik
  7. salandit
  8. tyrantrum
  9. porygon
So it's pretty hard for me to wrap my mind around everything right now, but I just want to say, chibi made a really good point.

Again, the exact numbers don't matter. It's more about the philosophy.

This might be unpopular to say, but I really want to incentivize people to read more content, even without necessarily writing longer reviews. As a writer, if I had to choose between a very long review on my first chapter, versus more people reading more chapters and getting further into the story, I would take the latter. Every single time. And tbh, as a reader I'd like to be incentivised to read further as well. Just getting myself to read a fic these days can be a struggle, let alone write a review.

We do need to ask what we hope to achieve with review blitz. I personally agree with this philosophy. I am a bit tired of the 1000+ reviews on chapter 1. Which still happened to me, sort of. I know there was a theme thing that kind of addressed this, but I think it would be great to find ways to incentivize increased participation. And most importantly, find something that works well for Kint.

Anyways, it still boils down to trying to perhaps decide on a philosophy and working towards that. Is it about prizes, points, or encouraging reading and reviewing?

Perhaps a zero or 100 word count thing, but you just earn bonuses for things? Maybe bonus for finishing/catching up on a fic? Idk, I think the current system concept isn't bad, maybe just needs soft tweaking? Perhaps stuff like bonuses at the end of the blitz for stuff like reviewing the widest *spread* of authors? Big wordcounts? Most number? That way everyone can strive for a goal that's achievable...

Huh. Maybe that's something?
Like, keep it similar to the current system, but just tweak the end. Basically, you still get points for reviews and chapters, weekly bonuses, etc. But when Kint does her usual roundup, prizes or points get added for hitting bonus criteria? For example, in this year's blitz:

>Pen would win for highest wordcount
>Luke could win most overall amount of chapters reviewed
> @IFBench would win for most unique authors reviewed
>Someone else could win for most amount of unique fics
> most memes made ;)

That way different niches are addressed! I noticed a lot of people have different reviewing skills and styles they seem to be good at. I think maybe this would also help incentivize people to keep reviewing their own way? Like @Navarchu having fun memeing! idk, it's something to think about maybe?

Either way, at the end of the day, I think I agree with the points Chibi made

>Low barrier


Memento mori
  1. leafeon
Much thanks to all of the prize volunteers and to Kintsugi. This was a fun community event. I liked checking out all the graphs and stats and lovely artwork.

I agree with Chibi Pika in that I would prefer to receive a short review that covers my whole story rather than a long review of chapter 1. I also like Shiny Phantump's suggestions for the scoring system. Seems you could work out a reasonable compromise between rewarding breadth and depth with that system. I also like her suggestion for revamping the weekly themes. With that said, I personally would probably just continue to review the way I normally would regardless of what point or prize system is implemented.

I wonder if it would be worth pricing different prizes differently, but I am not sure. I'm not sure I even know what all the prizes were for this event. But if nothing else I think it would make sense to allow a reroll for someone who gets an art prize but would prefer something else (working under the assumption that art prizes take more effort than other prizes and can be fulfilled by fewer volunteers—this could be wrong).


you gotta feel your lines
  1. farfetchd-galar
  2. gfetchd-kyeugh
  3. onion-san
  4. farfetchd
  5. farfetchd
i'm not really a big fan of the idea of awarding extra points for longer reviews. others have said really good stuff about this already, but in particular i really like chibi's remark about the criteria we should apply to our rules:
- Simple, easy-to-understand scoring (will reduce workload on the organizer as well)
- Low barrier to entry
- Accessibility
i think the 250 word minimum is pretty decent, and i like that there's no reward for longer reviews, because it means that if you write a long review anyway, it's because you wanted to, not because you were incentivized to do so. i kind of resent the idea that words are "wasted" if they're not rewarded with points. and i think this blitz showed us that the current system works pretty well as-is—i don't think we really had a problem with people exploiting it by spamming a million reviews at exactly 250 words to rise in the ranks, and in fact most of the people with higher scores wrote lots of long reviews.

i guess, in short, it seems like the output of the current system was already pretty ideal. the barrier for entry remained low and people wrote long reviews even though they weren't rewarded anyway, and if you feel like doing so is a waste, you can simply not do it imo.

that said, i am still very amenable to the idea of adjusting the minimum wordcount, dynamically in particular--like i and others said before, while 250 is a very reasonable baseline, it stops being so reasonable once you're reviewing something that's only like 500 words long. maybe some kind of simple stepwise function there: minimum is 150 for fics < 750 words, 250 for chapters >= 750 words, or something like that.
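to make the stepwise idea concrete, here's a rough sketch in Python (the 150/250/750 numbers are just the ones floated above, not a settled rule):

```python
def min_review_words(chapter_words: int) -> int:
    """Hypothetical stepwise minimum: shorter chapters get a lower floor."""
    if chapter_words < 750:
        return 150  # short fics/chapters under 750 words
    return 250      # the current baseline for everything else
```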

Oh, just thought of one thing that was affecting me this Blitz--

The full extra bonus point for theme per review meant that it was always significantly better to review one chapter of a fic at a time - that meant the theme bonus effectively doubled the number of chapters I read, while doing extra chapters in one review meant losing out on one full point per extra chapter, and that definitely shifted my behaviour here a bit. Just reducing the relative weight of the theme bonuses would help, but I would love it if next time, one way or another, the incentive structure made it at least as good to read and review multiple chapters at once (assuming you were to write substantially about each chapter either way) - I think that'd help combat that 1000-reviews-on-chapter-one syndrome.
i also think this is a really good point. i definitely felt this during the blitz too, as most (all?) of the themes were once per fic and i ended up reviewing the same couple fics a lot. i guess you could make the argument that blitz should encourage reviewing wide rather than tall, but meh, i feel like getting to the later chapters of a fic is something we should encourage, too. maybe instead of having a theme for repeat reviews, there could be small continuous bonuses instead, potentially stacking? for every two reviews you leave on the same fic you add an extra +.25, capping out at +1 or something? idk, seems hard to balance so that people don't receive absurd numbers of points for blasting through very long fics, but i feel like there's some way to make it workable.
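for what it's worth, the stacking bonus floated above (+.25 per two repeat reviews, capped at +1) is simple enough to pin down as a sketch--all numbers hypothetical:

```python
def repeat_review_bonus(reviews_on_same_fic: int) -> float:
    # Hypothetical: +0.25 for every two reviews left on the same fic,
    # capping out at +1 so blasting through a longfic can't print points.
    return min((reviews_on_same_fic // 2) * 0.25, 1.0)
```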


A cat that writes stories.
  1. purrloin-salem
  2. sneasel-dusk
  3. luz-companion
  4. brisa-companion
  5. meowth-laura
  6. delphox-jesse
  7. mewtwo
  8. zeraora
Hello, everyone!

First off, thanks to Kint for running the event. You did a great job, and are evidently an absolute machine and a credit to the community. Now, I have some words to share in the postmortem efforts. Originally I wanted to respond to everything in that way I have of taking everything as seriously as possible, but it's been a while and I've constantly felt run ragged, so here's the bite-sized version...

As far as my own experience is concerned, I had trouble with spinning way too many plates at once. I was going really hard in Blacklight, Mafia, and other creative outlets all at once. That has nothing to do with event management or rules and everything to do with my tendency to spread myself thin, but as a quality of life change I would advise anyone with a competitive streak to clear their schedule, and would-be organisers of other events to consider not overlapping.

Regarding deadlines, I understand the rationale behind hard cutoffs, and it may well be that in practice the hard cutoffs are better than soft ones, but I think a grace period of a single day for logging (not review posting) would go a long way to easing things for some folks. I think reviews counting for whatever week they're logged for seems alright to me, also.

Now, I said the following on Discord after the previous blitz ended. (I did not participate in that one.)

i think the system we have where reviews ought to meet a reasonable floor but also there's no incentive to textwall is effective for incentivising multi-chapter reviews
i actually benefit hugely from this bc ~400w is my sweet spot i hit constantly
i just think the textwall titans deserve compensation!

I agree with those pointing out that the most aggressive reviewers wrote textwalls anyway, and I agree that the purpose of the blitz ought to be to get people reading more, rather than writing textwalls, but I still would like it if folks who naturally spend a long time crafting lengthy reviews felt acknowledged to some extent. However, while I don't want long reviews to be a waste of effort, I also don't want them to be an efficient way to beat out mass-reviewing. In short, I want to reward long reviews, but I do not want to incentivise a meta where long reviews are optimal. One way to do this would be to award points for review length at a less efficient rate than for chapter count. Another would be to award points only for chapter count, but acknowledge high wordcounts separately.

Regarding weekly themes, I feel that the fact that they scale indefinitely made my efficiency-obsessed brain want only to review if it met the theme, and the additional burden to my choice paralysis interacted poorly with my ADHD in a way which made it very hard for me to actually pick fics to review, especially when combined with cutoffs that sometimes incentivised me to leave things til later. I would love if theme/challenge points were capped, if they consisted of several discrete challenges available throughout the month instead of being weekly themes, and if it was as straightforward as possible to identify fics which were eligible. I had particular trouble with week three.

Now for some individual responses to particular points by commenters in this thread:

a much lower wordcount limit. I did not have 250 words of things I wanted to say about a chapter, and people were clearly fishing for things to say
I always thought 250 was a bit steep. I think reviews under 50 words getting as many points as reviews over 500 is taking the piss a bit, but I would certainly say the threshold could stand to be reduced at least to 200w, maybe even as far as 100w, especially if the ethos is to maximise review count over all other metrics.

points were dependent on number of chapters you reviewed. I think this is a bad idea, as some people post long chapters, and some people post shorter [...] I might have 2000 words to say of a fic, but if it only has 2 chapters? Why bother? 1500 of those words don't count.
I actually don't think the effort and time involved in reading two different chapters has as much to do with their wordcount as it does to do with their prose style and narrative content. I've read 4000w pieces that were harder to get through and comment on than 8000w pieces. I also feel like if someone only wants to say 1500 of their potential words if they're rewarded, and they're not, so they don't, that's on them and perfectly fine by me. I review outside of events because I have things I want to say to the author. And, again, the biggest reviews this blitz were by the most prolific reviewers anyway.

It might make sense to at least scale the minimum eligible word count with the length of the fic/chapter to some extent - it does seem a bit ridiculous to expect someone reviewing a 100-word drabble to write a review 2.5x the length of the actual content being reviewed.
A simple way of resolving this might be that we make fics under a certain length exempt from the threshold entirely. That's probably handleable, but it remains to be seen if kint would agree.

any sort of scaling on chapter word count becomes a logistical hydra
As much as I'd love to proportionately reward reviewing longer chapters, verification is obviously a nightmare, and the blitz is managed in far too painstaking and technical a fashion to rely on honour system reporting. Best compromise I can think of involves rewarding reviewing later chapters, and that leads to spreading reviews less widely between longfics, I think.

So, keeping in mind the idea of not overburdening volunteers, I thought maybe it would make logical sense that the prize tiers could also follow a curve instead of a line?
I would agree with this in principle, though I can't say whether the proposed grading is too steep or shallow, not being much for mathing it out myself.

Example using the current 250 wordcount limit: a 2k review for one chapter being worth 9 points (8 for wordcount, 1 for chapters), but 2k review for eight chapters being worth 16 points (8 for wordcount, 8 for chapters.) But a 2k review for 80 chapters not being 88 points. Would it also be 16? Would it be 24? (Thus meaning that chapters:wordcount can't exceed 2:1)

Again, the exact numbers don't matter. It's more about the philosophy.

This might be unpopular to say, but I really want to incentivize people to read more content, even without necessarily writing longer reviews.
I like this suggestion and stated attitude very much. One of the best individual suggestions we've had, in my opinion. The concept of points for chapters supplemented by points for wordcount, with wordcount not being eligible as a substitute for chapters but only as an addition, is a great approach!
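One way to read the quoted 2:1 cap--purely a sketch of the worked example above, not a settled formula--is that chapter points max out at twice the wordcount points:

```python
def capped_points(words: int, chapters: int, words_per_point: int = 250) -> int:
    # Sketch of the quoted proposal: wordcount points as usual,
    # chapter points capped at twice the wordcount points (2:1 ratio).
    word_pts = words // words_per_point
    chapter_pts = min(chapters, 2 * word_pts)
    return word_pts + chapter_pts
```

This reproduces the quoted numbers: a 2k review of one chapter is 9 points, of eight chapters 16, and of eighty chapters it caps at 24.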

each review of a chapter (to a minimum of 200 words each) or each 300 words, whichever is greater.

I don't know if that's phrased clearly, but what I'm getting at is this: if you review multiple chapters, your words convert to points more efficiently than if you write one long review, but the 251st word and beyond isn't completely meaningless.
My gut says to reward excess words even less efficiently than that but this is definitely the direction my gut wants to go. I particularly like that it would let me further deactivate the part of my brain that cares about efficiency and consistency.
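As a sketch of the quoted "whichever is greater" rule (assuming every chapter reviewed cleared its 200-word floor; the numbers are just the quoted ones, nothing official):

```python
def whichever_is_greater(chapters_reviewed: int, total_words: int) -> int:
    # One point per chapter reviewed (each assumed to meet the
    # 200-word minimum), or one per full 300 words, whichever is more.
    return max(chapters_reviewed, total_words // 300)
```

So a single 900-word review of one chapter is worth 3 points, while the same 900 words spread across four chapters is worth 4--multi-chapter reviews convert more efficiently, but surplus words still count for something.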

A list of theme challenges, each of which give +1 point the first time you meet the criteria, and only the first time, regardless of when that is. This does two things: It removes the incentive to not review things until they qualify, and it also limits the effect the challenges have on point totals.
This addresses the point I made earlier about challenges compelling myself and others to review based on efficiency. I feel like making it more of an achievement hunt with a finite payout would solve a number of issues simultaneously!

a largely-automated searchable database of fics on the forums.
This would be astonishingly useful, but I'm hesitant to speculate on changes to the blitz for next year as concerns this wonderful idea until such a thing actually exists. Good luck!

It'd worsen the problem of people flocking to the same few fics for a theme. At least with the weekly themes, there's some variety in what fics people flock to. If people could only choose one fic for that, though, it's entirely possible and likely for almost everyone to flock to the same exact fic for a challenge and leave every other fic in the dust.

Plus, if there's not enough challenges, they'd be practically worthless. If there were only four challenges like this blitz had, that's only four points, which could easily be replaced by a single review. Even if there were something like 10 challenges, that's still not very much.
My first thought when I saw the suggestion to have discrete challenges rather than a weekly theme was that there'd be about twenty of them, that they'd be worth a couple points each, and that they would be designed to spread reviews across more fics. I encourage everyone participating in this kind of brainstorming not to assume that the worst possible interpretation of a suggestion would go ahead without tinkering, and further to imagine the best possible interpretations!

stuff like bonuses at the end of the blitz for stuff like reviewing the widest *spread* of authors?
I love the idea of awarding bonus points for stuff like this! I mean, if we did reward longer reviews, could we not also reward a greater number of authors and fics reviewed by way of a separate mechanic? Plus it's a great way to celebrate different kinds of individual achievement.

That was quite a lot, despite being less than I wanted! Let's have a tl;dr~
  • Warn folks about trying to blitz at the same time as writing fic and participating in multiple RPs
  • Hard cutoff for review publication but 24hrs grace period for logging (fics counting for week logged is fine)
  • Reward wordcount but less efficiently than rewarding chapter count, for which there exist several proposals
  • Finite challenges rather than scaling weekly themes
  • Lower the wordcount threshold, exempt short oneshots from it entirely
  • I don't care about incentivising textwalls or rewarding reading longer chapters, just more reviews
  • Curved grading for prize payouts rather than linear
  • Fic database... good
  • Bonus points or prizes for individual achievements such as 'most unique fics/authors reviewed'
  • Let's keep a positive attitude about this, everyone wants the best community event possible~
  • Thanks based kintsugi!
Summary of proposed alternatives to the current point payout structure:

Chibi's Proposal: Pay out points for both chapters and wordcount, but points for wordcount are capped at a reasonable ratio to points-per-chapter.
(Phantump) Lyn's Proposal: Pay out points for a lower threshold of words per chapter, then less efficiently for surplus words after that point.
(uA) Jackie's Proposal: Pay out points for chapters reviewed and wordcount separately, weighted to encourage more chapters over more words.


golden scars | pfp by sun
the warmth of summer in the songs you write
  1. silvally-grass
  2. lapras
  3. golurk
  4. booper-kintsugi
  5. meloetta-kint-muse
  6. meloetta-kint-dancer
  7. murkrow
  8. yveltal
  9. celebi
hi, hello, chiming in here with math, stats, and a general wrap-up to the post mortem here. I appreciate all the feedback and did try to keep it all in mind going forward for this year; there's naturally a lot of info to parse and a tricky line of finding what works for everyone. There was also a ton more number crunching behind the scenes, so I figured I'd pull back the curtain a little so that people can at least understand our reasoning for some of the changes we've made.

(apologies on the delay for this, btw--was planning on launching this and debuting new rules a little before Blitz, but this month has been more than a little hectic for me irl)

secondary note from the future--this mostly uses general "we" here and does contain some ideas from the logistics team, but it hasn't been proofread by them/is mostly coming from me scribbling stuff down between holiday duties, in the sense that if there's errors that's on me and if there's cool ideas that's probably on them.

One of the really good questions asked here was "what is Blitz for?" and the ultimate answer is that there ... isn't really an answer. As an organizer I can pick a few reasons, but at the end of the day the goal is to get people to review in the way that's most comfortable for them. That inherently changes between reviewers (and even within reviewers--what you like one year might not be your thing next year), so a one-size-fits-all system is going to, like all one-size-fits-all things, be a little loose for some people in some areas and a little tight for some people in others.

That being said, yes, we ran lots of math over lots of time. I would hope that by now people wouldn't expect anything less.

From a logistics perspective, our main areas of focus are:
  1. A system that has some benefits for the entire spectrum of reviewers that we see--more on this point specifically in the next section
  2. A system that has some benefits for the entire spectrum of stories we see--balancing the appeal of reviewing longfics, oneshots, old fics, new fics, etc
  3. A system that is feasible for volunteers to run as far as time, sanity, things that can be automated, etc
  4. A system that will not overburden the volunteer prize team
In general I would say we tried to balance these, but realistically (3) and (4) do have higher priority--a theoretical system that works great but isn't within our resources to provide is effectively less valuable than no system at all.

Reviewer Variability
Modeling around the blitz data set is difficult each year because it's highly variable. Eight of the ten top scorers in last year's Blitz weren't present in the previous year's Blitz. Activity in a growing community is famously difficult to predict, and we definitely undershot as far as mapping out workloads for everyone (this will unfortunately be a recurring trend, albeit a known one, in all aspects of the B2 post-mortem).

That being said, with a large enough sample size and some hearty gap-filling, Blitz participation roughly follows an exponential curve--there are a fair amount of people with lower participation/points, and a few people with very high participation/points.

(points, grouped by user)

And it's true that points are not necessarily the most comprehensive metric to quantify reviewing activity, which is fair, since there's so much contention over the point system each year anyhow. That being said, we can normalize across two factors that people generally agree are good metrics of activity: total reviews written and total words written (and then also total points scored to see if points are a good predictor of those two factors).

(points/reviews/words, normalized to maximum scorer for each category and grouped by user)
Note that these are presented in chunks by user--so each red/blue/yellow bar represents one user's contributions here.

And in general these map pretty well (albeit now it's more clear that the trendline is logarithmic rather than exponential, but for the high-level overview here that means next to nothing). The trend curve fitting here is pretty decent, and this data is sorted in order of user points--so what this means is we can look at participants who are very much above or below a particular trend fitting curve, as this would mean that the point system fails to reward them for that category.

And we did do that! More on that later. The more important takeaway for this section is that the dataset is highly asymmetric. The top 2 scorers earned more points + wrote more words + wrote more reviews than the bottom 20 scorers combined. The resulting data suggests that there's a very broad spectrum of Blitz participants--there's a large group of individuals who each individually have lower participation but have high participation as a conglomerate; and there's a small group of individuals who each individually have higher participation.

This is great and I will be the last person to discourage people from playing this game however they want. However, this results in two major categories as far as logistics and modelling:
  1. A fair system will need to accommodate both crowds in some respects (although perhaps not all)
  2. Mapping prizes and activity is very difficult--we might expect to have X% more users, and we can lump them into a given category, but in general user participation is highly variable because it's highly prone to one user tipping the balance (*cue lugia movie one person can make all the difference, but like, unironically*)
tl;dr: the ideal point system is fair for everyone, but "everyone" is a broad term.


with general overview out of the way/on the table to be referenced later, here's some category-based responses on feedback.


A lot of feedback was around the difficulty of reviewing drabbles, authors of drabbles feeling put-out because the reviews they received were longer than the fic they'd written and it felt like a lot of padding was purely for the word count restriction, etc.

This was addressed in two ways:
  1. Authors can opt their fics out of Blitz. Honestly this should've always been a thing, so I'm glad that it came up in this context.
  2. Drabble collections of any size are now treated as a oneshot for blitz purposes--this should hopefully allow reviewers to read across more material and feel less pressured to speak about fics at a 2.5:1 wordcount ratio.
In general these were both relatively simple changes that got overlooked simply because we hadn't considered them the first time. Easy to accommodate; added some more definitions and clarifications in this year's masterpost to help define broader categories of fic.

This is the closest middle ground we can also get to the concern that it's "too easy" to get points by spamming shorter stories--since it's also unfair to those authors to exclude them from Blitz entirely, unless they want to self-exclude by choice.

The switch to hard deadlines (i.e. to qualify for weekly theme you must log the review for a given time) was pretty much my only hard request walking into this from last year. I'm sorry if that's unpopular; I refrained from outlining the full reasoning behind it b/c it felt kind of overkill for a tame request at the time.

The full reasoning is in this spoiler because it requires a very extensive overview of how the logging process works and isn't particularly riveting.
Points are logged with this information. (Reviewer, Title, Chapters, Word Count, and Theme are manually entered)

Points are then auto-calculated:

This is a basic stack of if loops that does the choose lesser/point system from last year, as well as the weekly theme bonus.

These are straightforward and not time locked.
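The exact formula isn't spelled out here, but a minimal sketch of a "choose lesser" scorer plus theme bonus--my guess at the shape, not the real sheet logic--might look like:

```python
def logged_points(chapters: int, words: int, met_theme: bool,
                  words_per_point: int = 250) -> int:
    # 'Choose lesser': score the smaller of chapters reviewed and
    # full 250-word blocks written, then add the weekly theme bonus.
    base = min(chapters, words // words_per_point)
    return base + (1 if met_theme else 0)
```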

There are also two helper columns that are on the side that count what the sheet thinks are the number of unique fics and unique authors and compares them to what I think are the number of unique fics and unique authors. This is silly but necessary to weed out differences such as [Pokemon]/[Pokémon], [of]/[Of], [PMD: Title]/[Pokemon Mystery Dungeon: Title]/[Title], and my personal favorite, [Dragon's]/[Dragon’s]. An ideal world doesn't deal with this shit but it's me logging 1500x6 discrete pieces of information by hand across different devices and browsers, and eventually something's bound to go sideways.

(Notably, this year's sheet has built-in protections that strip most of the types of formatting that I hate, which will reduce the risk of making these errors, but does not remove the need for the helper columns).

To figure out how many fics I think there should be: I generate a list of fics and authors by taking the plaintext list of fics and the plaintext list of authors, assigning everyone a number, alphabetizing them, removing obvious and non-obvious duplicates, and then re-sorting the list by number to revert everyone to their original number. This is again a process that I've stripped out as much error as I can from (macro'ing out trailing and inter-string spaces, converting accent-e to e, removing all non-alphanumerics), but especially now that the logistics team grows and we aren't 1000000% in sync on some of the more esoteric conventions, I don't see a way around this.
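The normalization described above (macro out stray spaces, fold accent-e to e, drop non-alphanumerics) is the same trick in any language; a Python equivalent, just to illustrate the idea:

```python
import unicodedata

def normalize_title(raw: str) -> str:
    # Fold accents (e-acute -> e), lowercase, and drop anything
    # non-alphanumeric (spaces, curly vs. straight apostrophes, etc.)
    # so near-duplicate titles compare equal.
    decomposed = unicodedata.normalize("NFKD", raw)
    ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
    return "".join(ch for ch in ascii_only.lower() if ch.isalnum())
```

With this, [Pokemon]/[Pokémon] and [Dragon's]/[Dragon’s] collapse to the same key--though, as noted, PMD vs. Pokemon Mystery Dungeon still needs a human.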

(Kintsugi, couldn't you script this in the API? In theory yes. In five years of being the spreadsheet bitch for pokemon forums no one has ever paid me enough to engage with google spreadsheet API, and I doubt anyone ever will. I would rather pull out my toenails, or sort through a plaintext list of authors for non-obvious duplicates, than try to teach an api the subtle nuance between PMD and Pokemon Mystery Dungeon).

To figure out how many fics the spreadsheet thinks there should be:

robots continue to be better than people tbh

but anyway this actually causes a whole host of problems in its own right because unique() creates an order sensitive list without actually taking up a space in the array--unique() effectively just reserves the entire column and says "this is mine now", and fills it with what it thinks should be there. Entries are strings and can be bullied around the spreadsheet/used as needed, but unique() basically subsumes its column here and will auto-update whenever it sees fit. This is important knowledge that we will need to understand later, that my-past-self did not understand this time last year.

Unrelatedly, there is an entirely different sheet that's cross-referencing how many times a user has reviewed for a given variable (either for a fic or for a user, depending). In the first use case (tracking reviewer vs user), this was relevant for double-checking Week 4 (friendly fire/review other participants), since most participants weren't doing that themselves and we didn't tell them that they had to. This array was created by pulling the list of users from the unique() column into a new column in a spreadsheet, creating a top row function that pulls the transpose of the unique() row, and then checking across that matrix to see if User (Column) pops up for a given Reviewer (Row). A similar matrix exists for the transposed case (reviewer vs fic), which is relevant for Weeks 1 and 3, and also for the entirety of Blitz III, now that we've implemented a stable fix for it and are able to reward repeat reviews on stories.
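The cross-reference matrix itself is conceptually simple, whatever the sheet gymnastics required--in Python terms it's just a nested tally (a sketch, not the actual sheet):

```python
from collections import defaultdict

def review_matrix(log):
    """log is a list of (reviewer, author) pairs, one per logged review.

    Returns counts[reviewer][author], i.e. the reviewer-row by
    author-column grid the sheet builds via unique()/transpose.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for reviewer, author in log:
        counts[reviewer][author] += 1
    return counts
```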

(Relatedly, I have no pictures of this project because shortly after I did this I broke the spreadsheet and sent it into an error state that was so crippling that I managed to crash my not-brick computer using only Chrome. This is impressive for many reasons but the gist is that I hate myself.)

There is an extremely unlucky failstate here, introduced when adding a user retroactively to the spreadsheet: either the user is new or the story they're reviewing is entirely new, and the correct location to add them happens to insert blank space into both the unique() and the plaintext column. If I (admittedly in a bad case of data entry stacking with some shitty excel) simply insert a new row into the correct chronological section, then the unique() column, which auto-updates, and the plaintext column, which doesn't, no longer line up--which in short terms means I'm telling the spreadsheet to look for fewer users/stories than exist (when the sheet is only ever expecting to have more, due to the aforementioned duplicating thing). This causes the spreadsheet great emotional pain, which it compensates for by being confused and trying to think about how best to resolve it. That would probably not be a big deal if the spreadsheet weren't simultaneously trying to repeat the failed operation across 1500 entries. Which, again, bad coding practice, and I get that, but there's not really a sane way to introduce breakpoints in excel--and at that point it's far too late, and the damage is propagating across the entire ladder in a way that's subtle and difficult to debug/detect, even more so because in this state the entire sheet has basically a 1/5 chance of crashing every minute if you try to move it around.

tl;dr: the author who relishes writing achron stories made a spreadsheet that is irreversibly tied to having things entered in chronological order. dramatic irony at its finest. god is dead.

Q: Why don't you just log them in the order they come in?
A: Weekly theme statistics are gathered by lump (i.e. all rows above this are week 1, all rows above this are week 2, etc). Mixing these makes future analysis for weekly theme inaccurate and borderline impossible.

Q: Is this fixed now?
A: Technically yes but I do not relish finding the next edge case that reveals that it isn't. It's fixed in the sense that I loosely know how to not repeat it, not that it can't be repeated--and again with multiple people signing on to log this year, I'm not terribly excited to take that risk.

Q: Could you [add a separate column for dates that lets you do the sorting achronologically later]/[identify separate weekly themes by week]/etc.?
A: Theoretically, yes. In practice, I performed 1500 data entry operations last year, and anything that adds time to each entry--typing more, deciding what to type, verifying what was typed--balloons to monstrous proportions. Signups for the logistics team are always open.

Q: Why don't you switch off of Google Sheets and into something that can actually handle data without shitting the bed?
  1. I don't want to deal with hosting/creating a separate codebase for Blitz. It's true that this isn't optimized for Sheets and a database could handle it better. It's also true that I do not have time to make an entirely separate, hostable, web-secure way to run Blitz.
  2. Implementing the above would be cool, but the amount of effort to make it user-friendly for people who aren't me is even less trivial--this would effectively lock anyone else out of collaborating on Blitz logistics unless I make them git/repo familiar, make the UI friendly enough for them to run/edit database requests, etc. Again, cool, but a huge barrier to participation that I don't want to put on the already overburdened volunteer crew.
  3. Sheets has the easiest multi-person revision control. This is basically an offshoot of (2) but is important enough to get its own bullet.
  4. Look every year I ask myself this as well. I get it, I really do. But at some point I have a fixed amount of time to be spreadsheet bitch and I mostly need to keep the wheels on the bus instead of reupholstering the seats. If you'd like to set up a codebase for me I'm happy to send you what I want out of it and we can talk shop.
Slightly more mundane reasons for this include, but are dramatically overshadowed by, the excel-induced conniption above:
  • Enforcing a "double deadline" (i.e. reviews must be written before time X but logged before time Y) system is more often than not going to be confusing for people.
  • Enforcing a double deadline requires logistics team members to have twice the amount of eyes on chat--to remind people of the first deadline and the second deadline each week. I'm already not present in the Discord, so scheduling multiple time-sensitive events/reminders ends up taking more non-me volunteer time that I'm not terribly willing to spend.
  • The Week 4 deadline will have to be rigid anyway for results to be released on time.
  • The length of a double deadline ends up feeling arbitrary and doesn't fix anything in the spoiler above. Additionally, if a week closes on a Sunday evening and the second deadline pushes it to a Monday evening, this historically conflicts with classes/work/start of the week activities that still cause people to be late in other events (I haven't gathered Catnip data to support this but I can).

I say all this not to invalidate your request here but to emphasize that like, I really, really tried, but it is simply not something I can build the infrastructure to support.

Weekly Themes - Nerfs vs Caps
So we're 2,000 words in, which I think means I can start having dramatic flashbacks.
From a logistics perspective, our main areas of focus are:
  1. A system that has some benefits for the entire spectrum of reviewers that we see--more on this point specifically in the next section
  2. A system that has some benefits for the entire spectrum of stories we see--balancing the appeal of reviewing longfics, oneshots, old fics, new fics, etc
  3. A system that is feasible for volunteers to run as far as time, sanity, things that can be automated, etc
  4. A system that will not overburden the volunteer prize team

Weekly Themes is great because it intersects with all of these points. Digging into the dataset from last year:

reviews claimed vs weekly themes claimed, grouped by user

General consensus was that weekly themes over-reward (recalling that in this dataset, it was 1 bonus point for a weekly theme and on average 1 point per chapter reviewed)--basically half the points scored came from theme bonuses. It's difficult to prove that the change in behavior is specifically due to the weekly theme, and that people review to the theme because it's simply so lucrative (the counter-argument being that themes are so broad and overlapping that people might have reviewed these fics regardless). But I'd say the case is pretty compelling, since several themes directly contradict one another (such as "review a story that you haven't reviewed before" and "review a story you've reviewed before"), and "natural reviewing habits" wouldn't explain someone's reviews being 90% one theme one week and 90% the other the next.

Loosely summarizing the suggestions for weekly theme revision in this thread:
  1. Keep themes as is, but:
    1. Cap the number of times they can be earned per week, leaving the value the same
    2. Lower their value, allowing them to be earned an infinite number of times
  2. Completely overhaul themes, including but not limited to:
    1. Wide variety of bonuses that can be earned once ("review a fic that's PMD"/"review a fic that's got a trainer"/"review a fic that's got a ground-type"/etc x 30)
    2. Leave themes across the month with no time limit or cap
Let's take these in turn, starting with the overhaul ideas (2.1 and 2.2), then returning to 1.1 and 1.2.

2.1 is highly tempting; we considered it at length and thought it'd be sick. Ultimately it's just not feasible from a logistics perspective, because it introduces a problem that scales with group size: if there are 20 possible bonus points a user can earn (an arbitrary number, but iirc one that was floated), and there are 40 users (rounded from 39 last year), that creates 800 data points that need to be verified, on top of each review logged by the user (500x6, not accounting for forum growth). Furthermore, each user who joins introduces 20 more data points to verify, regardless of whether that user does even one theme. With the size and time commitment of our logistics team, on top of everything else, this didn't seem possible. Again, the recurring theme of keeping the wheels on the bus instead of reupholstering the seats won out over how cool this idea was, unfortunately.
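The back-of-envelope math for that verification load, using the numbers from the thread:

```python
# Verification load for one-time bonuses, per the figures floated in the thread.
themes, users = 20, 40               # 20 possible bonuses, ~40 users (rounded from 39)

# Every (user, bonus) pair has to be verified whether or not it was claimed.
one_time_bonus_checks = themes * users   # 800 extra data points

# And each new signup adds a full column of checks, reviews or not.
per_new_user = themes                    # 20 more points to verify per joiner
```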

(Compare to weekly themes, which are binary Y/N and are tied solely to reviews logged--less fun for players but sane for us to track).

There's also the issue of what themes would qualify--"review a fic that's got a ground-type" is cool in theory but requires volunteers to know what fics have ground-types, and then also define what constitutes a permissible number of appearances by said ground-type (and then why are we even doing ground-types anyway?). This is an extreme example of a specific theme, but as themes get more broad they also overlap, so it becomes less feasible to create a net of unique themes.

Is overlapping bad? Unsure. I did try to trial a similar method via Review Bingo, which may have been poorly timed but hasn't seen much response.

There's a possible way to circumvent this by having users track their own bonus points, but 1) that'll still need to be verified at some point since there's still a mildly competitive aspect to Blitz, and 2) multiple users have already expressed disinterest/active dislike at the idea of self-tracking themes (such as week 3).

2.2 is interesting but ultimately I think it directly ends up conflicting with our "a system that has some benefits for the entire spectrum of stories we see"--the idea of rotating themes is that we get to spotlight different types of stories. We obviously can't use the current themes simultaneously because they immediately cover every possible story:
Week 1: Review a fic you haven't reviewed before.
Week 2: Review a story you've reviewed before.

But the question of what themes we could use for the entire month instead is difficult to answer without inherently excluding stories. Rotating through themes gives us a way to hedge our bets and make sure that different stories are incentivized throughout Blitz.

The distinction between 1.1 and 1.2 lives mostly in math/stats again.

1.1 - we leave the value the same and cap at a given number of themes earned per week.

This is best viewed through this chart:

weekly theme by week; average & standard deviation of scores
This feeds back into the "Reviewer Variability" section above--but in general it does track that people who review more are more likely to earn more theme points, which is exactly the behavior a cap would target.

In general the average number of theme points scored per week are:
Week 1: 3.33
Week 2: 2.33
Week 3: 1.67
Week 4: 3.13

This is particularly interesting because the standard deviation (in short: a number describing how spread out the data is; a higher standard deviation means lots of data points far from the average) is higher than the average--which suggests our data is highly asymmetric: even though on average people score 2-3 weekly theme points per week, some users score way more.

And this tracks with our understanding of asymmetric data in "Reviewer Variability"--to score 2-3 theme points per week, you need to review 2-3 stories per week. Users who review more frequently are inherently going to be able to score more theme points than users who don't.
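To make the "standard deviation higher than the mean" signal concrete, here's a quick sketch with a made-up distribution shaped like the one described (most people at 0-2 theme points, a couple of heavy reviewers far above):

```python
import statistics

# Hypothetical weekly theme points for ten participants; NOT the real dataset,
# just the long-right-tail shape described above.
theme_points = [0, 0, 1, 1, 1, 2, 2, 3, 8, 12]

mean = statistics.mean(theme_points)     # 3.0
stdev = statistics.stdev(theme_points)   # ~3.9

# A long right tail drags the standard deviation past the mean, which is
# the sign that the per-week averages are hiding a few big outliers.
skew_signal = stdev > mean
```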

But knowing that, a cap becomes mathematically meaningless--if we cap at 3, which is already pretty high (since it's the average), pretty much none of the participants are affected anyway:

weekly theme sorted by week, grouped by user
And capping at 2 or 1 seems to choke off people's access to points unnecessarily--unfun in general, with little benefit to show for it. Introducing caps would mean more rules for people to read when, in most cases, they wouldn't even be affected.
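The cap arithmetic, sketched with hypothetical per-user counts shaped like the chart above:

```python
# Hypothetical theme counts for one week: most users claim 0-3, two claim many more.
theme_counts = [0, 0, 1, 1, 2, 2, 3, 3, 7, 10]

def capped_total(counts, cap):
    # Total theme points awarded if each user can claim at most `cap` per week.
    return sum(min(c, cap) for c in counts)

uncapped = sum(theme_counts)                         # 29 points, no cap
capped_at_3 = capped_total(theme_counts, 3)          # only the outliers lose points
affected_at_3 = sum(1 for c in theme_counts if c > 3)  # just 2 of 10 users touched
```

A cap of 3 only touches the two outliers; a cap of 1 or 2 starts cutting into everyone, which is the "choking off access" problem.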

Which brings us to:

1.2 - nerf: lower weekly theme value, allowing them to be earned an infinite number of times

Which in general seemed to be the best way to juggle the conflicting needs of 1) wanting to get people to spread around to different types of stories while 2) still rewarding participants for their effort in a straightforward way but 3) not being hell on the logistics team.

Weekly Theme - Types
The types of themes rewarded go hand-in-hand with the degree to which a theme is rewarded; this is mostly just a section break for personal sanity.

Theme types used last year were:
Week 1: Review a fic you haven't reviewed before. This can be claimed once per fic.
Week 2: Review a story you've reviewed but aren't caught up on. This can be claimed once per fic.
Week 3: Review a chapter (or oneshot) that had 0 or 1 reviews on TR prior to the start of this week.
Week 4: Review a fellow player in the blitz! This can be claimed once per author.

From a logistics perspective, our main areas of focus are:
  1. A system that has some benefits for the entire spectrum of reviewers that we see--more on this point specifically in the next section
  2. A system that has some benefits for the entire spectrum of stories we see--balancing the appeal of reviewing longfics, oneshots, old fics, new fics, etc
  3. A system that is feasible for volunteers to run as far as time, sanity, things that can be automated, etc
  4. A system that will not overburden the volunteer prize team
Going back to this flashback, the main purpose of weekly themes is to account for point 2--we want to make sure that stories on TR have more or less a fair shot at being reviewed in Blitz, and that at the very least we aren't completely shafting any one type of story. This is the main thought process behind theme balance moving forward.

General feedback was that people would prefer to get more rewards for reviewing a fic multiple times--this was a really cool idea and we were happy to implement it; everyone understands the struggle of "I've got 99 reviews on the prologue and nothing else". Authors also indicated that they'd much rather receive bulk reviews on their stories than individual reviews on fewer chapters. As such, we implemented a multiplier that grants a bonus based on the number of times you've reviewed a given fic during Blitz, at a scale of 1 point per 3 chapters reviewed. The very short version of how we landed there: 1 point per 2 chapters seemed a little too frequent, and based on previous data it ran the risk of recreating the weekly theme issue, where it was easy to rack points up across a wide spread of fics without actually returning to any of them; meanwhile 2 points per 3 chapters seemed overpowered, and also unfair to fics whose chapter count isn't divisible by 3.
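The 1-point-per-3-chapters multiplier is just integer division; a minimal sketch:

```python
def repeat_bonus(chapters_reviewed: int) -> int:
    # 1 bonus point for every 3 chapters of the same fic reviewed during Blitz.
    return chapters_reviewed // 3
```

So two chapters of a fic earn no bonus yet, three chapters earn 1, ten chapters earn 3, and so on.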

This seemed reasonable enough, to the point that we'd be able to implement this without making it dramatically underpowered compared to weekly themes (and while still keeping us roughly on a 1/0.5 system instead of a 1/0.5/0.33/0.X system; keeping the 1-point base system was desirable as far as modelling other aspects such as prize wheels and keeping some semblance of scalability for participants across years).


weekly theme sorted by week, grouped by user

And it's true: if you want to play completely optimally, you review only fics that qualify for the weekly theme. This has minimal effect at lower review counts (if you're writing 10 reviews and only doing the weekly theme instead of repeat chapters, you'll score a total of 2 extra points). At higher review counts the difference becomes pronounced (if you write 40 reviews, you'll score 7 more points reviewing only weekly themes instead of repeat chapters), but this runs into the soft upper "cap" on weekly themes--the number of participants who managed to find 10 weekly-theme-qualifying fics in each of the 4 weeks was 0.
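The quoted gaps fall out of a quick comparison of the two "pure" strategies, assuming the 1-point base per review, the 0.5-point theme bonus from the 1/0.5 system mentioned above, and the 1-per-3 repeat bonus:

```python
def theme_only_score(reviews: int) -> float:
    # Every review hits the weekly theme: 1 base point + 0.5 theme bonus each.
    return reviews * 1.5

def repeat_only_score(reviews: int) -> int:
    # Every review is a repeat chapter: 1 base point each,
    # plus 1 bonus point per 3 chapters of the same fic.
    return reviews + reviews // 3

gap_at_10 = theme_only_score(10) - repeat_only_score(10)  # 15 vs 13 -> 2 points
gap_at_40 = theme_only_score(40) - repeat_only_score(40)  # 60 vs 53 -> 7 points
```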

Ultimately we decided that it isn't perfectly balanced, but there's a lot of soft variables in play--if you really like a fic it's easier to read multiple chapters of it; if you're hunting around for a ton of fics to meet each weekly theme you're likely to spend a lot of time just planning out your Blitz approach instead of just purely reviewing. This seemed like a close enough spread that it wasn't worth tweaking any further.

Notably, though, adding the recurring chapters bonus changes the balance of weekly themes a bit--there's now effectively a month-long bonus in play for works with multiple chapters. We would expect the multiplier to benefit primarily 1) popular fics that already have a large readership and 2) chapterfics (self-explanatory, but worth noting that this bonus specifically disincentivizes oneshots).

And this is easily shown by running Blitz II's dataset with a recurring 1-point-per-3-chapters bonus built in, then sorting by the number of bonus points each fic receives:

Blitz II fics sorted by number of bonus points received if bonus for reviewing multiple chapters is in effect (1 bonus point every 3 chapters)
(where in general we see that, yes, popular fics are more likely to be reviewed multiple times and as such are more likely to qualify more frequently for the bonus of "be reviewed multiple times")
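The rerun itself is straightforward to sketch: tally chapters per (reviewer, fic) pair, apply the floor-division bonus, then see which fics the bonus points land on. Reviewer and fic names here are hypothetical, not from the real dataset.

```python
from collections import Counter

# Hypothetical review log: one (reviewer, fic) entry per chapter reviewed.
log = [
    ("alice", "Popular Fic"), ("alice", "Popular Fic"), ("alice", "Popular Fic"),
    ("bob", "Popular Fic"), ("bob", "Popular Fic"), ("bob", "Popular Fic"),
    ("carol", "Mid Fic"), ("carol", "Mid Fic"), ("carol", "Mid Fic"),
    ("dana", "Oneshot"),
]

# 1 bonus point per 3 chapters a reviewer reads of the *same* fic...
per_pair = Counter(log)

# ...then tally which fics those bonus points landed on.
bonus_by_fic = Counter()
for (reviewer, fic), chapters in per_pair.items():
    bonus_by_fic[fic] += chapters // 3
```

Even in this tiny example the fic with the most repeat readers soaks up the bonuses while the oneshot gets nothing, which is the pattern in the chart.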

And that's also great. Again, the goal isn't to discourage people from visiting fics they love; in fact, that is probably a goal of Blitz. But the purpose of weekly themes then remains to make sure that we're still encouraging people to visit other fics, so themes were revised to:
Week 1: Review a story you haven’t reviewed before.
Week 2: Review a oneshot.
Week 3: Review a chapter with less than four reviews. (Any chapter with less than four reviews at the start of week three qualifies.)
Week 4: Review a story by another participant in Blitz. (Anyone with points logged in this year’s Blitz at the time of your review counts as a participant.)

Weeks 1 and 4 more or less stay the same--encouraging people to try new things (week 1) and rewarding people for participating in Blitz (week 4) is in general a good thing for us.

Week 2 is specifically tweaked to affect oneshots only, because oneshots are dramatically underrepresented in a new system that rewards for reviewing a fic multiple times.
Week 3 remains as-is in order to reward for fics that have fewer reviews--it's not much but in general the idea is that this balances the "review fics multiple times" bonus and still incentivizes reviewers to look at fics that aren't getting as many reviews.

Week 3 is historically low turnout, partially because it requires users to find a chapter that has fewer than X reviews--but in general it's something we want to keep so that there's still incentive to review less "popular" fics or chapters. Spread the love around a bit.

Word Count
This is a fun one. A lot of the suggestions in the thread more or less track with the point system used during Blitz I--which historically was not well-received either, since the consensus was that it disproportionately discouraged people who wrote shorter, snappier reviews.

Ultimately, for point system revisions (this one and all the ones you see above), I run a metric I like to call "but does it matter", which is just [old point system score - new point system score]/[old point system score]*100--the percent by which each participant's score would change under the new system.
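As a one-liner, the "but does it matter" metric is:

```python
def does_it_matter(old_score: float, new_score: float) -> float:
    # Percent by which a participant's score shifts under the proposed system.
    # Positive means the new system scores them lower; negative, higher.
    return (old_score - new_score) / old_score * 100
```

For example, a participant dropping from 50 to 45 points comes out to a 10% shift.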

For word count and the ideas proposed, it made the most sense to start at the most extreme scales for "does it matter"--the old system (where word count provided no bonus for scoring), and a new system where scores are based entirely on word count.

% by which a participant's score is affected under an altered, word-count based reviewing system

The short answer to "does it matter" is "basically no"--on average, scoring purely on word count pushes total points earned down, but it pushes almost everyone down to a roughly equal degree, with a few outliers. The longer answer is "mostly no; see the above sentence". There are some edge cases, but in general we can throw back to this chart from three thousand words ago:

(points/reviews/words, normalized to maximum scorer for each category and grouped by user)

It's easiest to think of the red, yellow, and blue lines as "what would a normalized points curve look like if I normalized for [word count, number of reviews written, and points scored] respectively". In other words, if a lot of users end up with word counts (red bars) above the red line, that would suggest that word count is not being rewarded fairly in the points system.
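The normalization behind that chart is just "divide everyone by the category's top scorer"; here's a sketch with hypothetical numbers, not the real dataset:

```python
def normalize_to_max(values):
    # Scale each participant's total so the category's top scorer sits at 1.0.
    top = max(values)
    return [v / top for v in values]

# Three hypothetical participants' totals in two categories.
words = [4000, 2000, 1000]
points = [20, 12, 4]

norm_words = normalize_to_max(words)    # word-count bars (red, in the chart)
norm_points = normalize_to_max(points)  # points bars
```

If a user's normalized word count sits well above their normalized points, that's the "word count under-rewarded" signal described above.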

(I can walk through the least squares regression and shit that led me to determine that this mattered basically zero, but this post is already reaching a higher wordcount than most chapters and I think we're reaching a cutoff point soon)

The gist here is relatively understandable from the data--if you write more reviews, you will score more points. If you write more reviews, you tend to write more words. Number of reviews written is the dominant factor by far in any points system.

In general, running the old ladder with a variety of suggested metrics produced "does it matter" results that were less than the extreme one posted above (which only rewarded on word count)--so they were even less impactful.

So at that point it mostly became: we could implement this new points system and require people to understand even more rules when logging points, for relatively little impact on actual point outputs. It didn't seem worth it, and it also seemed unfairly punitive, conflicting with the author POV of "but I'd like any reviews on my story, regardless of length".

In general, the main public-facing concern was that people weren't getting the prizes they wanted. This year's prize system allows more flexibility in selecting prizes. Tbh, that is an entire post in itself.

Prize wheel is significantly underscoped so that we don't overburden volunteers this year. This is the driving factor for prizes; if you'd like to see more types and variation in prizes and prize distribution, I encourage you to sign up to be a prize volunteer! Otherwise, there exists a hard limit for how much free labor I can ask of myself and my friends.


And that's mostly where it stands. I get that from the outside a lot of the decisions made appear pretty opaque--please understand that this is a very very condensed summary of roughly six months of discussion, and it's still over 5k words and i cheated by using pictures. The ultimate tldr is just--TR has grown to the point that there's a fairly wide distribution of users, which affects us in terms of having a diverse set of reviewers/participants, as well as a diverse set of stories that we'd like to encourage people to check out. Balancing both of these aspects means that we're not going to 100% please everyone 100% of the time, but that ultimately we can strive for a middle ground where we can create a points system that awards people for playing the way they want while still encouraging them to try some new things.