Reviewing papers for a conference is a slow, time-consuming process. Suppose you had 20 reviews due and about 4 weeks to do them. What's your approach?
I take the tortoise approach. I first try to do a quick pass over all the papers to get at least some idea of the topics and themes I'll be dealing with. This lets me find papers that, at least on a fast first reading, seem unusually good or unusually bad, and lets me see whether any papers are sufficiently related that they should be compared to each other, implicitly or explicitly, when I do my reviews. Then I try to set aside time to do one review a day, more or less. I'll enter the review, press the button, and put it up for others to see. I won't go back and revise things until the first round is over, unless another paper I'm reading or another review I see makes me rethink substantially. At the end, I'll go back and check that my scores seem consistent now that I've seen my full set of papers. Slow forward progress, with an eventual finish line.
Doing a paper a day does mean I limit the time I put into each review. While there's some variance, I almost never let myself go down a rabbit hole with a paper. That's not always a good thing; sometimes finding a bug in a proof, or a similarly serious flaw in a paper, takes several hours of careful thought, and unless I pick up that there's a problem right away, I often miss it in moving on to the next review. (This is just one good reason why we have multiple reviewers!)
Perhaps another reason this is not always a good strategy: I'm told people notice that my reviews are actually done on time, and apparently this leads to people asking me to be on PCs.
Thursday, October 20, 2011
2 comments:
Doesn't ring true. On this blog you have publicly requested to be put on PCs. This is not entirely seemly.
How much time do you spend on the typical conference review for a theory paper? How about for a systems paper?