Still, the last reviews for a grant we didn't get contained just the stupidest comment; I really have to share it, because it frightens me. I'm used to reviews I don't agree with -- the typical excuse not to fund a theory grant being, "The proposal doesn't explain how it will solve the proposed problems," when, if I already knew how to solve the proposed problems, I'd have written the paper already -- but this goes beyond that. If this were just an excuse not to fund the proposal -- because the NSF for some reason never says "We only have money for the top 10% this year, and I'm afraid there are some better proposals," but instead has to give a reason -- that's fine, but I hope it wasn't the real reason.
This was a CDI proposal (so apologies to my co-authors, who do not necessarily share my opinions). The primary theme was mechanism design, but we focused on network formation and ad auctions as examples. One reviewer wrote:
[ad placement] is a very active research area for corporate research labs at places such as Yahoo and Google. Given the vast resources that are being invested at these corporate labs (that have hired top economists and computer scientists) and that have direct access to logs documenting advertiser and user behavior, it is unclear how much of a contribution an academic team can make here.
One review might be forgivable. The panel summary listed the following as a weakness:
- There were also questions regarding how this will compete with much larger-scale multidisciplinary efforts (CS-economics) of similar kind in industry (Google, Yahoo!, MS, etc.).
Let's ignore that the PIs all have relationships with industry, that ad auctions was just an example (of pretty wide interest right now), and that (I had thought) working on problems of interest to industry is, generally, a good thing.
With this kind of thinking, there's no way a couple of graduate students from Stanford (or, more academically, a future well-known Cornell professor) should have been working on a silly thing like "search engine algorithms," since Altavista was already out there leading the way. (That's my #1 big example; fill in your own.)
Is "industry will do it better than you could" really a good reason not to pursue (or fund) a research topic? How many research areas would that really preclude? I'd assume we should also stop funding research in operating systems, compilers, and even computer security based on that comment, but oddly, I don't see a rush to cancel programs in those areas. Seriously, anonymous reviewer, if you actually meant that, congratulations, you've truly scared me about the future of NSF-sponsored research.
As an addendum, some comments from Harvard colleagues:
1. Where does the reviewer think the people who are going to go work for Google/Yahoo/Microsoft will be coming from?
2. This was the kind of thinking that led Harvard (a CS powerhouse post-WW II) to decide to drop computer science decades ago. IBM was doing it, no need to have a department doing research in the area. It took a while to recover from that decision....
3. That kind of comment is common for chip/architecture research. "Isn't Intel doing that already? How can you possibly contribute?" [I have increased empathy for architecture people.]
13 comments:
One review might be forgivable. The panel summary listed the following as a weakness:
- There were also questions regarding how this will compete with much larger-scale multidisciplinary efforts (CS-economics) of similar kind in industry (Google, Yahoo!, MS, etc.).
I agree that the reason given was foolish. However, I wouldn't be too angry at the entire panel for including that bullet. I am sure you know that a lot of group think goes on in the panel discussions, and with many proposals to review, they tend to give the opinion of the loudest panelist and move on.
With very high probability what happened was that one reviewer torpedoed you. In a highly competitive situation such as this, one bad review is enough to kill it, even if it's a bad reason.... For all I know, the reviewer had a hangover from drinking too much the night before and your proposal was the first discussed.... It really is that arbitrary, at least when it's not political.
While ultimately I am on your side, it is good to be reminded every once in a while that we should emphasize the theoretical and academic foundations of our work, instead of daydreaming for 15 pages about how incredibly practical and immediately useful our work is.
(In other words, I'm sorry you got burnt, but it's nice to hear that proposals can also err on the side of being too applied.)
- There were also questions regarding how this will compete with much larger-scale multidisciplinary efforts (CS-economics) of similar kind in industry (Google, Yahoo!, MS, etc.).
The majority of research in industry is short-term oriented**, i.e. the kind of thing that can be added to the product in the next release. All one has to do is aim beyond the next quarter/next release; that's how little it usually takes to one-up research in industry.
**There are often sound commercial reasons for this.
Full disclosure: I work for Google Research, so I may be treading on thin ice. I think I've also irritated you with past comments, so I'll apologize in advance.
There are really two comments here - one from a reviewer and one from the committee. I kind of agree with the one from the reviewer, since access to data on ad placement can be invaluable in formulating and validating theoretical models. It's not clear how relevant that is to your proposal, since I didn't see it and I don't know how you described the relationship between ad placement and mechanism design. There is still room to do a great deal of interesting research without access to real data, and simulations under reasonable conditions might prove invaluable (and still fall within the guidelines of the CDI program). Part of the goal of CDI is to further data-intensive research.
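(To make "simulations under reasonable conditions" concrete, here is a minimal sketch of the kind of synthetic experiment I have in mind: a textbook generalized second-price ad auction run on made-up bidders. The advertiser names, valuations, and click-through rates are all illustrative assumptions, not anyone's actual system.)

import random

# A textbook generalized second-price (GSP) ad auction on synthetic data.
# Real systems use undisclosed quality scores, reserve prices, etc.

def gsp_auction(bids, slot_ctrs):
    # Rank advertisers by bid, highest first.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for slot in range(min(len(slot_ctrs), len(ranked))):
        adv = ranked[slot][0]
        # GSP price per click: the next-highest bid (0 if there is none).
        price = ranked[slot + 1][1] if slot + 1 < len(ranked) else 0.0
        outcome.append((adv, slot, price))
    return outcome

random.seed(0)
# Synthetic per-click valuations; assume truthful bidding as a baseline
# (GSP is not truthful, so a real study would model strategic bidding).
bids = {"adv%d" % i: round(random.uniform(0.1, 2.0), 2) for i in range(5)}
slot_ctrs = [0.30, 0.15, 0.05]  # assumed click-through rate of each slot

for adv, slot, price in gsp_auction(bids, slot_ctrs):
    print("%s wins slot %d, pays $%.2f/click, expected revenue $%.3f"
          % (adv, slot, price, price * slot_ctrs[slot]))

Even a toy like this can compare, say, GSP and VCG revenue across valuation distributions -- exactly the kind of question that needs no proprietary logs.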
On the other hand, the comments by the panel sound colossally stupid. Whether an NSF proposal competes with industry should be completely irrelevant. The question is whether funding the research results in good science, and whether it is likely to achieve a good result for society (the people whom the NSF represents in funding research). Whether industry will compete in this space is outside the scope of the panel's decision process, because industry operates under different value systems. Another commenter mentioned this as well.
There is another issue that wasn't really mentioned in the comments, and that is the role that real-world data plays in theoretical research. Consider physics, for example. There are two major kinds of research: theoretical physics and experimental physics. Experimental physicists generally require access to huge capital expenditures for their experiments, but as a result are able to validate hypotheses in ways that theoretical physicists can only speculate about. Their access to data and experimental equipment allows them to perform a style of research that can produce really interesting science. On the other hand, their equipment also places a constraint on the kind of research they can do, because they are limited by the capabilities of their equipment and the precision and extent of their data. Theoretical physicists are unconstrained by these matters.
The NSF directorate for physics should ideally maintain some kind of balance of funding for these two kinds of science (I have no idea if they do).
Computer science has similar kinds of issues, and funding for computer science can be divided in some fashion between experimental research and theoretical research. There are some kinds of research where the absence of data can strongly hinder the quality of the work that can be done. Some examples that spring to mind are information retrieval, databases, and storage systems. Without access to a representative stream of queries, a representative corpus of documents, and a sufficient supply of good training data, information retrieval research is severely limited. Without access to representative data about what real-world access patterns, queries, and workloads look like, research on database and storage systems is similarly limited. That doesn't mean that there isn't good research to be done, but the validation of a lot of research can be improved by access to data, and that is part of what CDI is about.
It would be healthy if we could have some community discussion about the allocation of resources to these two kinds of research. Each of us has our own preference for the kind of work that we want to do, but I would hope that we can also respect and support other modes of research.
I think we can agree that the comment by the panel was inappropriate, and reflects a poor decision process on your proposal.
I think you are being too dramatic and generalizing too far. They didn't fund your proposal, but the reasons they gave shouldn't be taken as NSF policy statements. Rather you should take it as concrete advice as to what to address in your next proposal. This is a real issue and it seems you didn't address it with sufficient specificity. From your very broad comments here, I have to agree. It sounds like they wanted specific details on the relationship between your proposed work and similar work in industry. They did not want comments like, "Where does the reviewer think the people who are going to go work for Google/Yahoo/Microsoft will be coming from?"
I think panels either love proposals, like proposals very much but can't fund them so need an excuse, or think you're wasting their time. You are clearly in the second category. Accept it and move on.
I don't buy that not funding this proposal means that we aren't going to see the next Google. (If you really think this, then your proposal clearly didn't make the case for it. :)) Speaking of search, anyway, if Google hadn't been there, someone else would have. Simple ideas that have immediate and large industrial impact are going to be found by the industry. This doesn't mean that the federal government doesn't have a role in speeding things along, but I am sympathetic to the argument that we should concentrate our funds on the harder problems that will have an impact only in the longer term.
It sounds to me also like the panel summary statement, "There were also questions regarding..." is just a reference to the one review. That one review came up during the discussion and it therefore made it into the summary report. From the way it was phrased, it doesn't sound like this was the main reason the panel rejected the proposal.
First, let me play devil's advocate for a bit:
Whether an NSF proposal competes with industry should be completely irrelevant.
I wouldn't go so far as to say that. Any competition seems equally relevant: if someone else is doing a better job, then that's a valid reason not to get NSF funding. Even if you are at the top, if your field is getting plenty of attention/funding while an equally important area is being neglected, then it is reasonable to take that into account.
One can also make a philosophical argument that NSF grants should not compete with industry. Why should we have an NSF at all? The reason, as I understand it, is to fund research that is of benefit to society but where that benefit is too long-term or diffuse for the research project to raise private funds. If Google/Yahoo/MS are actively investing in this area, then that's an argument against NSF funding: if they need your contribution, they'll happily fund your work, and if they don't need it, why should the government fund it?
I realize this argument is not fully compelling (and that it presumably doesn't apply to your CDI application). However, I've heard versions of the argument from people I've been on panels with. Whenever you submit a proposal that tangentially competes with industry, it's important to keep this argument in mind and try to structure things to address it (for example, by explaining explicitly that your work is more theoretical, or broader, or more long-term, or whatever differentiates it from the industrial research).
That one review came up during the discussion and it therefore made it into the summary report. From the way it was phrased, it doesn't sound like this was the main reason the panel rejected the proposal.
This is 100% right, and you can see it from the wording. Writing panel summaries is tricky, especially if things aren't unanimous. You can be sure that if there had been widespread agreement on this point, the summary would have been much stronger ("The panel was concerned..." rather than "There were also questions..."). I'm confident that what happened was that one or two people vocally complained, it ate up a lot of time in discussion, and then a mention was made in the summary to keep the complainers happy. But notice that the wording mentions the questions without endorsing them.
Still, one or two people can kill a proposal. :-(
Some of the replies reflect a circle-the-wagons attitude that is somewhat troubling.
The reviewer comment is not very good.
It is not surprising that reviewing is often the weakest link in the process, as it is done under time pressure and with little feedback built into it.
Most of us have at one time or another written a review that didn't properly reflect our state of mind. This might be due to PC time pressures, misguided politeness, or simply failure to describe our line of thought properly. Yet, if we don't have a way to call someone on a bad review, how would we ever learn?
Incidentally, this is one of the main reasons in favor of an "author feedback" round during the reviewing process: it provides pretty much the only opportunity to point out a non sequitur in a bad review.
While I am generally sympathetic to your point of view and your frustration, isn't there a more charitable interpretation of the review board's point of view? I mean, review boards do exercise their judgement, and the hypothesis worth considering could be, "We don't think you'll get very far without access to real data, and we don't think you'll get that data. There are plenty of examples of academics interested in auctions and game theory writing interesting papers based on faulty assumptions about how ad auctions work (e.g. it's not even completely clear how Google decides what ads to place where anymore)."
If you are doing work on ad auctions and mechanism design today, you don't know how those decisions are made. Yes, you could do purely theoretical work, but is that part of the best allocation of a nation's scarce resources? Maybe, but I'm just saying it's worth discussing.
My take on the blog comments:
I agree that this was just one review comment, and the way reviews (NSF reviews in particular) go, I shouldn't necessarily take it too seriously, or blame the entire panel, or throw myself off a bridge, etc.
Without getting into the rest of the proposal, I think there was adequate coverage in the proposal to suggest we'd be able to get real data, so I don't think the question of "How will you do the work if you can't get real data?" applies. (And we could argue, certainly, whether it even should.)
Nothing in the comments, though, has changed my opinion that this really was the stupidest comment I've ever received in an NSF review.
actually, that comment from the NSF makes a lot of sense. have you seen the kind of papers being written in computational economics/game theory these days by CS folks? many papers make lots of junk/unrealistic assumptions: "assuming this was known, but in the real world, we can approximate, blah blah blah...". the Google/Yahoo/Microsoft folks know what the real-world problems are since they need to make revenue. they have real data to evaluate research ideas. theory faculty should stick to their hidey hole or try something else if they can't get sponsored.
It is unfortunate that this proposal was rejected by the NSF panel based on such irrelevant comments. I hope the proposed ideas do not appear later in a reviewer's student's thesis.
To state the obvious from experience: great companies hire great minds, yes, but many of the greatest minds are found in academia, sometimes with a better environment in which to make a unique, key contribution. Industry collaborates with academia for these reasons. The reviewer forgets about this individuality -- the big difference it makes to have another really bright mind or research group working on some aspect of the problem using their unique expertise.