
Archives for September 2013

On the NSA's capabilities

Last Thursday brought significant new revelations about the capabilities of the National Security Agency. While the articles in the New York Times, ProPublica, and The Guardian steered clear of technical specifics, several broad themes emerged.

  • NSA has the capacity to read significant amounts of encrypted Internet traffic.
  • NSA has some amount of cooperation from vendors to weaken cryptographic aspects of their products.
  • NSA has a stable of exploits designed to break into specifically targeted computers (“tailored access”, in NSA parlance).
  • NSA shares this technology with its counterparts in some closely allied countries, apparently the “Five Eyes” group (the United States, Canada, the United Kingdom, Australia, and New Zealand).

The NSA appears to be taking a holistic approach toward its interception technologies.

Acxiom Opens (Some) Consumer Data; What Should You Do?

Yesterday Acxiom, a large data broker, rolled out its data-transparency site, aboutthedata.com. The site lets you view some of the data Acxiom has about you, including demographics, family status, financials, commercial history, and shopping preferences.

The site also lets you correct any errors in the data. It looks like you can modify the data arbitrarily, but the Terms of Use require that any modifications be truthful.

Several people have asked how they should approach the site. Should they look? Should they correct errors? My thoughts are below.

Ethical dilemmas faced by software engineers: A roundup of responses

Two weeks ago I asked for real-life examples of ethical dilemmas in software engineering. Many of you sent responses by email, on Twitter, and in the comments. Thank you for taking the time! Here is a quick summary (in no particular order).

Aaron Massey has written a very thoughtful post in response. I encourage you to give it a read. Let me highlight one point in particular that I found very insightful:

Worse, ethics and failure tend to be lumped together, at least in software engineering. When I’ve asked questions similar to Arvind’s in the past, I’ve found that important or noticeable failures are common, but these are not always the most useful for learning ethics. Consider the Therac-25 failure, in which several deaths occurred because of a software engineering failure. While this is a serious failure, I’m not sure it’s fair to say that this is a great example of an ethical dilemma. The developers of the software weren’t tempted to introduce the bug; it was simply an accident of construction. Had they known about this beforehand, it’s likely they would have fixed it. Similar arguments can be made for things like the failed launch of the Ariane-5 or the Mars Climate Orbiter, which are also commonly mentioned. I suppose these are reasonable examples of the need to at least not be totally ambivalent about engineering projects, but they aren’t great examples of ethical dilemmas.

Next, a person who wishes to remain anonymous writes by email:

Here’s one that happened to me […] It was the website for a major clothing brand targeted at one gender. They were running a competition in which entrants could win one of five very cool prizes (think iPhone or Xbox). At the end of the competition, management asked us to randomly extract five winners from the database. So we wrote a little script to pull five random entries; it returned three of one gender and two of the other.

We sent the names up the chain but then head office came back and asked us to redraw as they didn’t want any winners from the non-target gender. We refused based on the Anti-Discrimination Act here in my home state.

Alex Stamos points to the slides and audio of his DEF CON talk on the ethics of the white-hat industry, and notes that all of the examples at the end are real.

On a historical note, Steve Bellovin points to the history of the Computer Professionals for Social Responsibility. I’d never heard of the organization; it appears that it was founded in 1983, had been relatively inactive in recent years, and was dissolved a few months ago.

Augie Fackler writes:

designing a security algorithm without any peer review: this comes to mind because of a recent Black Hat presentation about Z-Wave hardware showing that the algorithm might be sound, but there are defects in door lock implementations that can render the security worthless. The ethical dilemma comes in because it’s clearly an under-tested security system that vendors are pushing for physical security.

Henry Corrigan-Gibbs points to his paper with Bryan Ford via Twitter:

We draw an ethical analogy between Internet freedom efforts and humanitarian aid work. This parallel motivates a number of ethical questions relating to anonymity and censorship-circumvention research.

James Grimmelmann points to several papers: Danielle Citron’s Technological Due Process, which I think is a very important paper; Bias in Computer Systems by Friedman and Nissenbaum; and his own The Google Dilemma. I haven’t read the latter two yet. He also links to a Gamasutra essay on free-to-play games, which is coincidentally something I’ve been investigating in the context of my recent series on price discrimination.

Several other interesting Twitter responses: spam/mass mailing, weapons tech, Internet filtering, Facebook Beacon.

And finally, many great responses in the comments; one frequent theme was vulnerabilities/crypto/hacking/malware.

Apologies if I missed anything. Feel free to send me more! If this list keeps growing, it might be productive to set up a wiki.