Some current goings-on have reminded me of "unusual" reviews, and in particular of one of my worst reviewing experiences ever. I'm sure most everyone has stories of some really painful, inexplicable reviews -- they're like our version of "bad beat" stories in poker -- so here's one of mine.
I had been part of a project looking at human-guided search techniques; specifically, we had done a lot of work on 2-D strip packing, a common problem in the OR literature and, occasionally, the CS literature. Basically, our paper introduced, for this problem, the approach we would later generalize into Bubblesearch, and demonstrated how user interaction could lead to even better performance.
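To give a rough sense of the flavor of the approach, here is a minimal sketch of my own in Python. It assumes a simple shelf-based placement rule rather than the bottom-left-style rule used in the actual paper, and all the names, parameters, and defaults are illustrative, not the paper's:

    import random

    def shelf_pack(strip_width, rects):
        """Greedy shelf packing: place rectangles left to right on the
        current shelf, opening a new shelf above it when the next one
        doesn't fit. rects is a list of (width, height) pairs; returns
        the total strip height used."""
        shelf_x = 0.0   # next free x-position on the current shelf
        shelf_y = 0.0   # y-coordinate of the current shelf's bottom
        shelf_h = 0.0   # height of the tallest rectangle on this shelf
        for w, h in rects:
            if shelf_x + w > strip_width and shelf_x > 0:
                shelf_y += shelf_h            # open a new shelf
                shelf_x, shelf_h = 0.0, 0.0
            shelf_x += w
            shelf_h = max(shelf_h, h)
        return shelf_y + shelf_h

    def bubble_order(order, p, rng):
        """Sample an ordering 'near' the given one: repeatedly take the
        first remaining item, but with probability p skip past it and
        consider the next one instead."""
        remaining = list(order)
        result = []
        while remaining:
            i = 0
            while i < len(remaining) - 1 and rng.random() < p:
                i += 1
            result.append(remaining.pop(i))
        return result

    def bubblesearch(strip_width, rects, trials=1000, p=0.3, seed=0):
        """Pack with the greedy (decreasing-height) order, then with many
        random near-greedy orders, keeping the best packing found."""
        rng = random.Random(seed)
        greedy = sorted(rects, key=lambda r: r[1], reverse=True)
        best = shelf_pack(strip_width, greedy)
        for _ in range(trials):
            best = min(best, shelf_pack(strip_width, bubble_order(greedy, p, rng)))
        return best

    # Example: best height found for four rectangles in a width-10 strip.
    print(bubblesearch(10.0, [(4, 3), (3, 5), (6, 2), (5, 4)]))

The key idea is only the last function: instead of trusting the single ordering the greedy rule suggests, you sample many orderings close to it and keep the best result. The human-guided part of the work sits on top of a loop like this, with the user steering which orderings get explored.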
We submitted it to a journal-that-will-remain-nameless that claimed to sit at the intersection of OR and CS. This seemed like a fit to me. Strip packing is a standard OR problem; heuristic approaches for it have certainly appeared regularly in other OR journals. We had a very CS-oriented approach, using greedy-based heuristics and fairly nascent techniques from the intersection of AI and user interfaces. We wanted it in front of the OR audience, where human-guided optimization systems would have been a fairly novel and potentially useful idea.
The reviewers didn't go for it (even after we revised it to answer their complaints). Clearly the human-interaction stuff was a bit beyond what they were able to cope with; if that had really been the main stated objection -- "this is really too AI/UI-ish for us to cope with" -- then I could have been disappointed by their lack of desire to expand their worldview and moved on. But one reviewer clearly seemed to think we didn't properly cite and compare results with what I imagine was his own work (which included a paper that was at best tangentially related, and a paper that was apparently under review at another journal and was not publicly available in any format when we wrote ours). Another reviewer simply said that the readers of the journal wouldn't be interested. Here is his summary of what we did:
"You look at a simple and natural modification of pretty much the first packing that comes to mind, an idea that could be described over the phone in two minutes, assuming no previous knowledge. Beyond that, you run a bunch of experiments and find out that you get improvements over some metaheuristic." [My note: that "some metaheurisitc" was the one giving the best published results for the problem at the time.]
Yes, that's right, all we did was introduce a painfully simple heuristic -- one that hadn't appeared in the literature before, anywhere -- for a well-known, well-studied standard problem, and run experiments showing it beat the best known results on a variety of benchmark instances. I could see why that wouldn't be considered interesting to readers at the OR/CS intersection. Sigh. It's one thing when a reviewer doesn't get your work. It's another when a reviewer gets your work, seems to admit that it's quite good -- at least I view simplicity combined with performance that beats several papers' worth of more complicated previous work as a plus -- and just says, "But that's not interesting." How do you argue with that?**
I've shied away from the OR community since then. Being fed up at that point, we sent the paper to the Journal of Experimental Algorithmics, where I felt we'd have fewer problems, even if it wasn't exactly the target audience. If you want to read the paper and decide for yourself whether the original reviews were warranted, you can find a copy here (New Heuristic and Interactive Approaches to 2D Rectangular Strip Packing).
**I admit, I have had to give reviews like that for conference papers -- where the review ends up being, "Sure, I think this is good, you should publish it somewhere, but I'm afraid it's just not in the top x% of the papers I'm seeing and we're space-limited for the conference." I hate writing such reviews, but at least I don't make up reasons why the paper isn't good...
Tuesday, July 22, 2008
4 comments:
It is indeed frustrating when your paper is rejected by a referee because it doesn't fit into their rigid worldview of good vs. bad research.
I have a master's in OR, and I generally stay away from the OR literature. I find EE and CS [and, uhh, economics, I guess; in which discipline does the game theory corpus belong?] thinking to be a little more inspiring.
Oh yeah? I was on the button at the final table of the WSOP and I shoved all-in with AhAs. The big blind, who was the chip leader, called with 2c2d. The flop came Ac 2s 4c. The turn was the 5c. And the river was the 8c, giving the chip leader a flush over my set of aces.
How do you like them apples?
Anonymous #3 -- man, that is a good story. I'm not sure I'd have pushed all-in, but you didn't give the chip counts. That is a bad beat.