June 25, 2019

Identifying Trends that Drive Technology

I’m trying to compile a list of major technological and societal trends that influence U.S. computing research. Here’s my initial list. Please post your own suggestions!

  • Ubiquitous connectivity, and thus true mobility
  • Massive computational capability available to everyone, through the cloud
  • Exponentially increasing data volumes – from ubiquitous sensors, from higher-volume sensors (digital imagers everywhere!), and from the creation of all information in digital form – have led to a torrent of data that must be transferred, stored, and mined: “data to knowledge to action”
  • Social computing – the way people interact has been transformed, and so has the data we have from and about people
  • All transactions (from purchasing to banking to voting to health) are online, creating the need for dramatic improvements in privacy and security
  • Cybercrime
  • The end of single-processor performance increases, and thus the need for parallelism to increase performance in operating systems and productivity applications, not just high-end applications; power constraints are a related driver
  • Asymmetric threats, need for surveillance, reconnaissance
  • Globalization – of innovation, of consumption, of workforce
  • Pressing national and global challenges: climate change, education, energy / sustainability, health care (these have replaced the Cold War as drivers)

What’s on your list? Please post below!

[cross-posted from CCC Blog]

Acceptance rates at security conferences

How competitive are security research conferences? Several people have been tracking this information. Mihai Christodorescu has a nice chart of acceptance and submission rates over time. The most recent data point we have is the 2009 USENIX Security Symposium, which accepted 26 of 176 submissions (a 14.8% acceptance ratio, consistent with recent years). Acceptance rates like that are now pretty much the norm at top security conferences.

With its deadline one week ago, ACM CCS 2009 got 317 submissions this year (up from 274 last year, and approx. 300 the year before) and ESORICS 2009, with a submission deadline last Friday night, got 222 submissions (up from about 170 last year).
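
For concreteness, here’s a quick back-of-the-envelope check on those numbers in Python (the 15-20% acceptance range is the rough estimate discussed in the next paragraph, not an official figure):

    # Back-of-the-envelope arithmetic on the figures quoted above.
    usenix_rate = 26 / 176                      # 2009 USENIX Security: ~14.8%
    pending = 317 + 222                         # CCS 2009 + ESORICS 2009 submissions
    low, high = 0.15 * pending, 0.20 * pending  # expected acceptances: ~81 to ~108
    print(f"USENIX: {usenix_rate:.1%}; {pending} pending; ~{low:.0f}-{high:.0f} to be accepted")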

Think about that: right now there are over 500 research manuscripts in the field of computer security fighting it out, and maybe 15-20% of those will get accepted. (And that’s not counting research in cryptography, or the security-relevant papers that regularly appear in the literature on operating systems, programming languages, networking, and other fields.) Ten years ago, when I first began as an assistant professor, only half as many papers were submitted. At the time, I grumbled that we had too many security conferences and that the quality of the proceedings suffered. Well, that problem seems mostly resolved, except that rather than having half as many conferences, we now have a research community that’s apparently twice as large. I suppose that’s a good thing, although there are several structural problems that we, the academic security community, really need to address.

  • What are we supposed to do with the papers that are rejected, resubmitted, rejected again, and so on? Clearly, some of this work has value and never gets seen. Should we make greater use of the arXiv.org pre-print service? There’s a crypto and computer security section, but it’s not heavily used. Alternatively, we could join in on the IACR Cryptology ePrint Archive or create our own.
  • Should we try to make the conference reviewing systems more integrated across conferences, such that PC comments from one conference show up in a subsequent conference, and the subsequent PC can see both drafts of the paper? This would make conference reviewing somewhat more like journal reviewing, providing a measure of consistency from one conference to the next.
  • Low acceptance ratios don’t necessarily yield higher-quality proceedings. There’s a distinct problem that occurs when a conference has a huge PC and only three of its members review any given paper. Great papers still get in and garbage papers are still rejected, but the outcomes for papers “on the bubble” become more volatile, depending on whether those papers get the right reviewers (the toy simulation after this list illustrates the effect). Asking PC members to do more reviews will just lower the quality of the reviews or discourage people from accepting positions on PCs. Adding PC members could help, but a large PC is unwieldy to manage, and there will be even more volatility.
  • Do we need another major annual computer security conference? Should more workshops be willing to take conference-length submissions? Or should our conferences raise their acceptance rates to something like 25%, even if that means compressed presentations and the end of printed proceedings? How much “good” work is out there, if only there were a venue in which to print it?
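
To make the volatility point concrete, here is a toy Monte Carlo sketch in Python. All of the parameters (noise level, threshold, quality scores) are made-up assumptions for illustration, not a model of any real program committee: each paper has a true quality score, each review adds independent noise, and a paper is accepted when its mean review clears a threshold.

    import numpy as np

    rng = np.random.default_rng(0)

    def accept_prob(quality, n_reviews, noise=1.0, threshold=0.0, trials=100_000):
        """Probability of acceptance when each of n_reviews reviewers sees
        the paper's true quality plus independent Gaussian noise."""
        scores = quality + noise * rng.standard_normal((trials, n_reviews))
        return np.mean(scores.mean(axis=1) > threshold)

    for q in (-1.0, 0.1, 1.0):   # clearly weak, on the bubble, clearly strong
        print(f"quality {q:+.1f}: accepted "
              f"{accept_prob(q, 3):.2f} of the time with 3 reviews, "
              f"{accept_prob(q, 5):.2f} with 5")

Papers far from the threshold come out the same way almost every time, but the paper just above the bar is accepted only slightly more often than a coin flip, and adding reviewers sharpens that decision only slowly.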

About the only one of these ideas I don’t like is adding another top-level security conference. Otherwise, we could well do all of the above, and that would be a good thing. I’m particularly curious whether arbitrarily increasing the acceptance rates would resolve some of the volatility for papers on the bubble. I’d rather our conferences err on the side of taking the occasional bad/broken/flawed paper than reject the occasional good-but-misunderstood paper.

Maybe we just need to harness the power of our graduate students. When you give a grad student a paper to review, they treat it like a treasure and write a detailed review, even if they may not be the greatest expert in the field. Conversely, when you give an overworked professor a paper to review, they blast through it, because they don’t have the time to spend a full day on any given paper. Well, it’s not like our grad students have anything better to be doing. But does the additional time they can spend per paper make up for their relative lack of experience and perspective? Can they make good accept-or-reject judgments for papers on the bubble?

For additional thoughts on this topic, check out Matt Welsh’s thoughts on scaling systems conferences. He argues that there’s a real disparity between the top programs / labs and everybody else and that it’s worthwhile to take steps to fix this. (I’ll argue that security conferences don’t seem to have this particular problem.) He also points out what I think is the deeper problem, which is that hotshot grad students must get themselves a long list of publications to have a crack at a decent faculty job. This was emphatically not the case ten years ago.

See also Birman and Schneider’s CACM article (behind a paywall, unless your university has a site license). They argue that the focus on short, incremental results is harming our field’s ability to have impact. They suggest improving the standing of journals in the tenure game, and they suggest disincentivizing people from submitting junk or preliminary papers by creating something of a short-cut reject that gets little or no feedback; because the review would not be blind, a rejected paper could also harm the submitter’s reputation.

Fingerprinting Blank Paper Using Commodity Scanners

Today Will Clarkson, Tim Weyrich, Adam Finkelstein, Nadia Heninger, Alex Halderman, and I released a paper, Fingerprinting Blank Paper Using Commodity Scanners. The paper will appear in May 2009 in the Proceedings of the IEEE Symposium on Security and Privacy.

Here’s the paper’s abstract:

This paper presents a novel technique for authenticating physical documents based on random, naturally occurring imperfections in paper texture. We introduce a new method for measuring the three-dimensional surface of a page using only a commodity scanner and without modifying the document in any way. From this physical feature, we generate a concise fingerprint that uniquely identifies the document. Our technique is secure against counterfeiting and robust to harsh handling; it can be used even before any content is printed on a page. It has a wide range of applications, including detecting forged currency and tickets, authenticating passports, and halting counterfeit goods. Document identification could also be applied maliciously to de-anonymize printed surveys and to compromise the secrecy of paper ballots.

Viewed under a microscope, an ordinary piece of paper looks like this:

The microscope clearly shows individual wood fibers, laid down in a pattern that is unique to this piece of paper.

If you scan a piece of paper on an ordinary desktop scanner, it just looks white. But pick a small area of the paper, digitally enhance the contrast and expand the image, and you see something like this:

The light and dark areas you see are due to two factors: inherent color variation in the paper, and partial shadows cast by fibers on the paper’s surface. If you rotate the paper and scan it again, the inherent color at each point will be the same, but the shadows will be different because the scanner’s light source will strike the paper from a different angle. These differences allow us to map out the tiny hills and valleys on the surface of the paper.
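
As a rough sketch of the underlying idea (not our actual pipeline), classic Lambertian photometric stereo recovers per-pixel reflectance and surface orientation from several images taken under known lighting. The setup below – four aligned scans with known light directions – and the function names are illustrative assumptions:

    import numpy as np

    # Photometric-stereo sketch: given k aligned grayscale scans of the same
    # patch, each lit from a known direction (e.g., the page rotated so the
    # scanner's lamp strikes it from different angles), solve the Lambertian
    # model I = L @ (albedo * normal) per pixel by least squares.
    def estimate_surface(scans, light_dirs):
        """scans: (k, h, w) image stack; light_dirs: (k, 3) unit vectors."""
        k, h, w = scans.shape
        I = scans.reshape(k, -1)                   # (k, h*w) intensities
        L = np.asarray(light_dirs, dtype=float)    # (k, 3) lighting matrix
        g, *_ = np.linalg.lstsq(L, I, rcond=None)  # (3, h*w): albedo * normal
        albedo = np.linalg.norm(g, axis=0)         # per-pixel reflectance
        normals = g / np.maximum(albedo, 1e-8)     # unit surface normals
        return albedo.reshape(h, w), normals.reshape(3, h, w)

Integrating the recovered normals then yields a height map like the visualization shown below.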

Here is a visualization of surface shape from one of our experiments:

This part of the paper had the word “sum” printed on it. You can clearly see the raised areas where toner was applied to the paper to make the letters. Around the letters you can see the background texture of the paper.

Computing the surface texture is only one part of the job. From the texture, you want to compute a concise, secure “fingerprint” that can survive ordinary wear and tear on the paper, such as crumpling, scribbling or printing, and moisture. You also want to understand how secure the technology will be in various applications. Our full paper addresses these issues too. The bottom-line result is a sort of unique fingerprint for each piece of paper, which can be determined using an ordinary desktop scanner.
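
As a loose sketch of what such a fingerprint might look like (the grid size, thresholding rule, and matching metric here are illustrative assumptions, not the construction in our paper), one could quantize the texture map into bits and compare two scans by Hamming distance:

    import numpy as np

    def fingerprint(texture, grid=(16, 16)):
        """Reduce a 2-D texture/height map to one bit per grid cell:
        is the cell's mean value above the sheet-wide median?"""
        gh, gw = grid
        h, w = texture.shape
        ch, cw = h - h % gh, w - w % gw            # crop to a multiple of the grid
        cells = texture[:ch, :cw].reshape(gh, ch // gh, gw, cw // gw)
        means = cells.mean(axis=(1, 3))            # (gh, gw) per-cell averages
        return (means > np.median(means)).ravel()  # 256-bit boolean fingerprint

    def hamming_fraction(fp_a, fp_b):
        """Fraction of mismatched bits; a low value suggests the same sheet."""
        return float(np.mean(fp_a != fp_b))

A deployable scheme needs more than this – in particular, tolerance for the bit flips caused by crumpling or rescanning, and protection of the fingerprint itself – which is what the full paper works through.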

For more information, see the project website or our research paper.