How competitive are security research conferences? Several people have been tracking this information. Mihai Christodorescu has a nice chart of acceptance and submission rates over time. The most recent data point we have is the 2009 Usenix Security Symposium, which accepted 26 of 176 submissions (a 14.8% acceptance ratio, consistent with recent years). Acceptance rates like that, at top security conferences, are now pretty much the norm.
With its deadline one week ago, ACM CCS 2009 got 317 submissions this year (up from 274 last year, and approx. 300 the year before) and ESORICS 2009, with a submission deadline last Friday night, got 222 submissions (up from about 170 last year).
Think about that: right now there are over 500 research manuscripts in the field of computer security fighting it out, and maybe 15-20% of those will get accepted. (And that’s not counting research in cryptography, or the security-relevant papers that regularly appear in the literature on operating systems, programming languages, networking, and other fields.) Ten years ago, when I first began as an assistant professor, there would be half as many papers submitted. At the time, I grumbled that we had too many security conferences and that the quality of the proceedings suffered. Well, that problem seems mostly resolved, except rather than having half as many conferences, we now have a research community that’s apparently twice as large. I suppose that’s a good thing, although there are several structural problems that we, the academic security community, really need to address.
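The back-of-the-envelope arithmetic behind that claim can be sketched as follows (a small illustration only; the 15–20% band is the rough range discussed above, not a reported figure for these two conferences):

```python
# Combined CCS 2009 and ESORICS 2009 submission counts from above,
# and how many papers a 15-20% acceptance rate would admit.
ccs_submissions = 317
esorics_submissions = 222
total = ccs_submissions + esorics_submissions  # over 500 manuscripts in flight

low_rate, high_rate = 0.15, 0.20
accepted_range = (round(total * low_rate), round(total * high_rate))
print(total, accepted_range)  # 539 submissions, roughly 81 to 108 acceptances
```

In other words, well over 400 of those manuscripts will bounce and come back around, which is what drives the resubmission churn discussed below.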
- What are we supposed to do with the papers that are rejected, resubmitted, rejected again, and so on? Clearly, some of this work has value and never gets seen. Should we make greater use of the arXiv.org pre-print service? There’s a crypto and computer security section, but it’s not heavily used. Alternatively, we could join in on the IACR Cryptology ePrint Archive or create our own.
- Should we try to make the conference reviewing systems more integrated across conferences, such that PC comments from one conference show up in a subsequent conference, and the subsequent PC can see both drafts of the paper? This would make conference reviewing somewhat more like journal reviewing, providing a measure of consistency from one conference to the next.
- Low acceptance ratios don’t necessarily achieve higher quality proceedings. There’s a distinct problem that occurs when a conference has a huge PC and only three of them review any given paper. Great papers still get in and garbage papers are still rejected, but the outcomes for papers “on the bubble” become more volatile, depending on whether those papers get the right reviewers. Asking PC members to do more reviews is just going to lower the quality of the reviews or discourage people from accepting positions on PCs. Adding additional PC members could help, but it also can be unwieldy to manage a large PC, and there will be even more volatility.
- Do we need another major annual computer security conference? Should more workshops be willing to take conference-length submissions? Or should our conferences raise their acceptance rates up to something like 25%, even if that means compressed presentations and the end of printed proceedings? How much “good” work is out there, if only there was a venue in which to print it?
About the only one of these ideas I don’t like is adding another top-level security conference. Otherwise, we could well do all of the above, and that would be a good thing. I’m particularly curious whether arbitrarily increasing the acceptance rates would resolve some of the volatility issues on the bubble. I think I’d rather that our conferences err on the side of taking the occasional bad/broken/flawed paper rather than rejecting the occasional good-but-misunderstood paper.
Maybe we just need to harness the power of our graduate students. When you give a grad student a paper to review, they treat it like a treasure and write a detailed review, even if they may not be the greatest expert in the field. Conversely, when you give an overworked professor a paper to review, they blast through it, because they don’t have the time to spend a full day on any given paper. Well, it’s not like our grad students have anything better to be doing. But does the additional time they can spend per paper make up for the relative lack of experience and perspective? Can they make good accept-or-reject judgements for papers on the bubble?
For additional thoughts on this topic, check out Matt Welsh’s thoughts on scaling systems conferences. He argues that there’s a real disparity between the top programs / labs and everybody else and that it’s worthwhile to take steps to fix this. (I’ll argue that security conferences don’t seem to have this particular problem.) He also points out what I think is the deeper problem, which is that hotshot grad students must get themselves a long list of publications to have a crack at a decent faculty job. This was emphatically not the case ten years ago.
See also, Birman and Schneider’s CACM article (behind a paywall, unless your university has a site license). They argue that the focus on short, incremental results is harming our field’s ability to have impact. They suggest improving the standing of journals in the tenure game and they suggest disincentivizing people from submitting junk / preliminary papers by creating something of a short-cut reject that gets little or no feedback and also, by virtue of the conferences not being blind-review, creates the possibility that a rejected paper could harm the submitter’s reputation.
I’ve only been seriously doing the research game for about a year now (that’s not including work experience and classes), and I have to say I am not currently a fan. I’ve submitted papers to some of the conferences you mentioned above. I’m doing my part to keep acceptance rates low by providing more material for the rejection pile. 😉 My advisor assures me that the work I am doing is good and it will get published, but I’m having my doubts.
The biggest frustration that I’ve seen comes from the reviewer’s comments. In all of my reviews, I get helpful information from at most one of the reviewers (I am very thankful when I do get that). With every single rejection, though, I am firmly convinced that at least one of the reviewers did not read my paper. I know this because they have a comment like, “You didn’t address insert-random-attack-here,” despite the fact that the attack in question WAS addressed in the security analysis section. For each of my papers, I can point out multiple criticisms in the reviews that are factually wrong. I don’t know if that’s the overworked professor not having enough time, or if it’s the grad student not understanding the paper. Alas, I can’t do anything about it, and it just becomes yet another rejection.
I like to think that I’m pretty good at my particular aspect of security. I have a good grasp of the major issues. When I talk to people about incorporating these ideas into real systems, we generally have a good exchange of ideas. I’m currently looking at applying for a patent for one of my ideas with the hope of transferring the technology to a company that will actually build the system. Sure, my papers haven’t been perfect, but they haven’t been bad, either. Maybe I’m close to breaking through and making real progress. At this point, though, I just feel like I’m banging my head against a wall.
I guess that’s the problem of just being part of the supply.
In my experience, on both sides of these issues, errors in the reviews aren’t necessarily what kills a paper. Rather, what happens is that you’ll have a paper that one of the reviewers wants dead, while there’s no corresponding champion who wants to rescue it. The reviews, of course, fail to capture the dynamic of the PC discussions on your papers.
Now, I’ve seen some of the arguments go on, get tabled, resume later, go on for a while, get tabled, and so forth.
None of this helps you, of course. What you should consider, for better or for worse, is that when a reviewer “doesn’t get it”, you should translate that as some fact or feature of your paper that was underpresented, underemphasized, or whatever. Basically, blame yourself for the reviewer’s faulty brain and look at that as some feature that needs better exposition, better organization, and/or better figures.
Do you know why the discussions of the anonymous reviewers regarding each rejected paper are not made available to the authors? In many cases these discussions would be more valuable in improving a paper for a subsequent resubmission than the reviews themselves.
Also, there are conferences at which the authors of rejected papers get only the reviews but not the grades the reviewers gave the paper (neither the overall grade nor the grades for specific aspects). What would be the reasons for not telling the authors whether their paper totally sucked or was a borderline case? Maybe one of the reasons so many papers are resubmitted is that some authors have no clue how far they were from having their paper accepted.
What you should consider, for better or for worse, is that when a reviewer “doesn’t get it”, you should translate that as some fact or feature of your paper that was underpresented, underemphasized, or whatever.
This is the approach that I’ve been trying to follow. I do feel that my papers are getting better every time I submit a new version (and the comments reflect that). Furthermore, the erroneous criticisms have actually been helpful on occasion. In a couple of cases, I’ve turned them into new paper ideas. But that’s small comfort when I get another rejection notice.
It’s just frustrating being on this side of things. I am confident that my ideas are novel and interesting enough to be shared, and I’ve spent a lot of time developing them further. Yet I have no results to show for it.
Papers could be ranked, and those that would normally be rejected could be placed in a workshop (with or without proceedings). Maybe an opt-in could be allowed, since some researchers might think they are too prestigious for a “compensation prize.”
A workshop without proceedings can allow presenting the results and still encourage publication in a journal.
Claim: Author’s reputation and their institution’s reputation provide a non-negligible advantage to acceptance.
For an author whose reputation provides a non-negligible advantage to acceptance, her identity is probably easy to determine based on the prior work cited in the paper.
..for members of political parties. After all, the public knows (probably) how they voted. 🙂
for the same reason.
There is a fundamental tension here. On the one hand, “hotshot grad students must get themselves a long list of publications to have a crack at a decent faculty job.”
But on the other it has been suggested that we consider “disincentivizing people from submitting junk / preliminary papers by creating something of a short-cut reject that gets little or no feedback and also, by virtue of the conferences not being blind-review, creates the possibility that a rejected paper could harm the submitter’s reputation.”
Speaking as a PhD student, I’ve felt incredible pressure to produce as many publications as possible whilst doing my PhD to maximise my chances of getting employed at the end of it. However, I readily admit this has led me in some cases to submit work that, in hindsight, was not ready to be submitted.
I would love to get people’s opinions, from both young grads and seasoned academics, on whether it’s better to finish a PhD or postdoc position having produced only a single exceptional publication during that time or to have produced 4 or 5 less exceptional ones, assuming you want to pursue an academic research career.
What advice would you offer to young graduates in this position?
I too have recently become fascinated with the concept of “getting made.” Also a PhD student, I would say that, as best I can tell, success comes down to your school’s reputation, time, luck, and the 10,000-hours rule.
I finished my PhD in 1998. At the time, CS departments were expanding and the original Internet boom was alive and well, drawing away talented CS PhDs with the allure of money. The net effect was a lower-than-normal supply of fresh PhDs and a higher-than-normal demand for them.
Today, all of these trends have reversed, and that’s causing this scramble for students to appear on the top of the pile. More supply and less demand gives those with the demand (your prospective employers) the ability to be more selective about the supply (you, the PhD student).
Of course, in 1998, the academic security community was much, much smaller than it is now. I’m very happy to have a much larger community, but we need to make sure we don’t scare good people off just because their papers aren’t well-considered by three arbitrary PC members at any given time.
In my experience the place to find computer science papers is CiteSeer, not arXiv.org.
For the purposes of this piece, I don’t really care what site ends up hosting such a service, so long as it’s sufficiently scalable, reliable, and so forth. CiteSeer, though, is a regular annoyance to me because it generates really awful BibTeX entries. (Dear research community: please proofread your bibliographies!)
The big question is whether we might adopt such a thing, wholesale, as a community, as a mechanism to disseminate work absent conference acceptance of a paper.
If we wanted to be really different, we could have all security papers submitted first to the online archive with a flag for which conference(s) might want to have a look. At that point, if a conference picks up the paper and the paper gets revised, then the old paper gets supplanted with the new one. However, no matter how much a paper is rejected, it’s still out there, still getting cited (if it’s relevant), and so forth.