
The ease of applying for a home loan

I’m currently in the process of purchasing a new house. I called up a well-known national bank and said I wanted a mortgage. In the space of 30 minutes, I was pre-approved, had my rates locked in, and so forth. Pretty much the only identifying information I had to provide was the employer, salary, and Social Security number for myself and my wife, plus some basic stats on our investment portfolio. Interestingly, the agent said that for people in my situation (sterling credit, making a down payment of more than 20% out of our own pocket), the bank believes I’m highly unlikely ever to default on the loan. As a result, they don’t need me to go to the trouble of documenting my income or assets beyond what I told them over the phone. They’ll take my word for it.

(In an earlier post, I discussed how my name and Social Security number were stolen from the state of Ohio, where they had been kept. Ohio gave me a free subscription to Debix, which claims to intercept requests to read my credit report and call my cell phone to ask my permission. Why not? I signed up. Well, my cell phone never buzzed with any sort of call from Debix. Their service, whatever it does, had no effect here.)

Obviously, there’s a lot more to finalizing a loan and completing the purchase of a home than there is to getting approved for a loan and locking a rate. Nonetheless, it’s striking how little personal information I had to divulge to get this far into the game. Could somebody who knew my Social Security number use this mechanism to borrow money against my good credit and run away to a Caribbean island with the proceeds? I would have to hope that there’s some kind of mechanism further down the pipeline to catch such fraud, but it’s not too hard to imagine ways to game this system, given what I’ve observed so far.

Needless to say, once this home purchase is complete, I’ll be freezing my credit report. Let’s just hope the freezing mechanism is more useful than Debix’s notification system.

(Sidebar: an $18 charge appeared on my credit card last month from a car rental agency that I’ve never used, which claimed to have a “swipe” of my credit card. I challenged it, so now the anti-fraud division is allegedly attempting to recover the signed charge slip from the agency. The mortgage agent mentioned above saw a note about this in my credit report and asked me if I had “challenged my bank”. I explained the circumstances and all was well. Still, it’s interesting that the “challenge”, as it apparently appears in my credit report, gives no indication of what’s being challenged or how significant it might be. Again, the agent basically took my word for it.)

E-Voting Ballots Not Secret; Vendors Don't See Problem

Two Ohio researchers have discovered that some of the state’s e-voting machines put a timestamp on each ballot, which severely erodes the secrecy of ballots. The researchers, James Moyer and Jim Cropcho, used the state’s open records law to get access to ballot records, according to Declan McCullagh’s story at news.com. The pair say they have reconstructed the individual ballots for a county tax referendum in Delaware County, Ohio.

Timestamped ballots are a problem because polling-place procedures often record the time or sequence of voters’ arrivals. For example, at my polling place in New Jersey, each voter is given a sequence number which is recorded next to the voter’s name in the poll book records and is recorded in notebooks by Republican and Democratic poll watchers. If I’m the 74th voter using the machine today, and the recorded ballots on that machine are timestamped or kept in order, then anyone with access to the records can figure out how I voted. That, of course, violates the secret ballot and opens the door to coercion and vote-buying.
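
To make the attack concrete, here’s a minimal sketch in Python. The record formats are hypothetical – the point is only that sorting timestamped ballots recovers the cast order, which lines up with the sign-in sequence:

```python
# Hypothetical linking attack: an attacker holds the poll book's sign-in
# order and the machine's timestamped ballot records. (These formats are
# illustrative, not any vendor's actual format.)

# Sign-in sequence from the poll book; poll watchers copy this too.
sign_ins = ["Alice", "Bob", "Carol", "Dave"]  # 1st, 2nd, 3rd, 4th to vote

# Ballot records as stored by the machine: (timestamp, ballot contents).
ballots = [
    ("2006-11-07T09:42:11", "YES on referendum"),
    ("2006-11-07T08:01:03", "NO on referendum"),
    ("2006-11-07T10:15:47", "NO on referendum"),
    ("2006-11-07T08:30:59", "YES on referendum"),
]

# Sorting by timestamp recovers the order the votes were cast, which
# matches the sign-in order. Zip the two lists and secrecy is gone.
for voter, (when, vote) in zip(sign_ins, sorted(ballots)):
    print(f"{voter} voted: {vote}")
```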

Most e-voting systems that have been examined get this wrong. In the recent California top-to-bottom review, researchers found that the Diebold system stores the ballots in the order they were cast and with timestamps (report pp. 49-50), and the Hart (report p. 59) and Sequoia (report p. 64) systems “randomize” stored ballots in an easily reversible fashion. Add in the newly discovered ES&S system, and the vendors are 0-for-4 in protecting ballot secrecy.
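
The reports don’t spell out each vendor’s scheme, but here’s a toy illustration of why a shuffle is worthless if its seed is guessable (say, derived from a counter or the machine’s clock). Anyone who can guess the seed can replay the permutation and invert it:

```python
import random

ballots = ["YES", "NO", "NO", "YES", "NO"]  # in the order they were cast

# Hypothetical weak scheme: "randomize" storage order with a guessable seed.
seed = 42  # imagine this is derived from a counter or the machine's clock
stored = ballots[:]
random.Random(seed).shuffle(stored)

# The attack: guess the seed, replay the shuffle on index labels, invert it.
idx = list(range(len(stored)))
random.Random(seed).shuffle(idx)  # idx[i] = cast-order position of stored[i]
recovered = [None] * len(stored)
for i, original_pos in enumerate(idx):
    recovered[original_pos] = stored[i]

assert recovered == ballots  # cast order fully recovered
```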

You’d expect the vendors to hurry up and fix these problems, but instead they’re just shrugging them off.

An ES&S spokeswoman at the Fleishman-Hillard public relations firm downplayed concerns about vote linking. “It’s very difficult to make a direct correlation between the order of the sign-in and the timestamp in the unit,” said Jill Friedman-Wilson.

This is baloney. If you know the order of sign-ins, and you can put the ballots in order by timestamp, you’ll be able to connect them most of the time. You might make occasional mistakes, but that won’t reassure voters who want secrecy.

You know things are bad when questions about a technical matter like security are answered by a public-relations firm. Companies that respond constructively to security problems see them not merely as a PR problem but as a technology problem with PR implications. The constructive response in these situations is to say, “We take all security issues seriously and we’re investigating this report.”

Diebold, amazingly, claims that they don’t timestamp ballots – even though they do:

Other suppliers of electronic voting machines say they do not include time stamps in their products that provide voter-verified paper audit trails…. A spokesman for Diebold Election Systems (now Premier Election Solutions) said they don’t, for security and privacy reasons: “We’re very sensitive to the integrity of the process.”

You have to wonder why e-voting vendors are so much worse at responding to security flaw reports than makers of other products. Most software vendors will admit problems when they’re real, will work constructively with the problems’ discoverers, and will issue patches promptly. Companies might try PR bluster once or twice, but they learn that bluster doesn’t work and they’re just driving away customers. The e-voting companies seem to make the same mistakes over and over.

Sony-BMG Sues Maker of Bad DRM

Major record company Sony-BMG has sued the company that made some of the dangerous DRM (anti-copying) software that shipped on Sony-BMG compact discs back in 2005, according to an Antony Bruno story in Billboard.

Longtime Freedom to Tinker readers will remember that back in 2005 Sony-BMG shipped CDs that opened security holes and invaded privacy when inserted into Windows PCs. The CDs contained anti-copying software from two companies, SunnComm and First4Internet. The companies’ attempts to fix the problems only made things worse. Sony-BMG ultimately had to recall some of the discs, and faced civil suits and government investigations that were ultimately settled. The whole episode must have cost Sony-BMG many millions of dollars. (Alex Halderman and I wrote an academic paper about it.)

One of the most interesting questions about this debacle is who deserved the blame. SunnComm and First4Internet made the dangerous products, but Sony-BMG licensed them and distributed them to the public. It’s tempting to blame the vendors, but the fact that Sony-BMG shipped two separate dangerous products has to be part of the calculus too. There’s plenty of blame to go around.

As it turned out, Sony-BMG took most of the public heat and shouldered most of the financial responsibility. That was pretty much inevitable considering that Sony-BMG had the deepest pockets, was the entity that consumers knew, and had by far the most valuable brand name. The lawsuit looks like an attempt by Sony-BMG to recoup some of its losses.

The suit will frustrate SunnComm’s latest attempt to run from its past. SunnComm had renamed itself Amergence Group and was trying to build a new corporate image as some kind of venture capitalist or start-up incubator. (This isn’t SunnComm’s first change of direction – the company started out as a booking agency for Elvis impersonators. No, I’m not making that up.) The suit and the ensuing publicity won’t help the company’s image any.

The suit itself will be interesting, if it goes ahead. We have long wondered exactly what Sony knew and when, and how the decision to deploy the dangerous technology was made. Discovery in the lawsuit would drag all of that into the open, though it will probably stay behind closed doors unless the case goes to trial. Sadly for the curious public, a settlement seems likely: SunnComm/Amergence almost certainly lacks the funds to fight this suit, or to pay the $12 million Sony-BMG is asking for.

Email Protected by 4th Amendment, Court Says

The Sixth Circuit Court of Appeals ruled yesterday, in Warshak v. U.S., that people have a reasonable expectation of privacy in their email, so that the government needs a search warrant or similar process to access it. The Court’s decision was swayed by amicus briefs submitted by EFF and a group of law professors.

When Alice sends an email to Bob, the email will be stored, for a while at least, on an email server run by Bob’s email provider. Depending on how Bob uses email, the message may sit on the server just until Bob’s computer picks up mail (which happens every few minutes when Bob is online), or Bob may store his long-term email archive on the server. Either way, the server, which is typically run by Bob’s ISP, will have a copy of the email and will have the ability to access its contents.
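
Those two usage patterns correspond roughly to the POP and IMAP protocols. Here’s a minimal sketch using Python’s standard library; the host, credentials, and the save_locally helper are placeholders:

```python
import imaplib
import poplib

def save_locally(raw_message: bytes) -> None:
    pass  # hypothetical helper: write the message to local storage

# POP-style use: download messages and delete the server's copies,
# so mail sits on the server only until the next pickup.
pop = poplib.POP3_SSL("mail.example.com")
pop.user("bob")
pop.pass_("hunter2")
count, _size = pop.stat()
for i in range(1, count + 1):
    _resp, lines, _octets = pop.retr(i)   # fetch message i
    save_locally(b"\r\n".join(lines))
    pop.dele(i)                           # remove the server's copy
pop.quit()

# IMAP-style use: the archive lives on the server indefinitely,
# so the provider holds (and could read) years of mail.
imap = imaplib.IMAP4_SSL("mail.example.com")
imap.login("bob", "hunter2")
imap.select("INBOX", readonly=True)
_typ, data = imap.search(None, "ALL")
print(f"{len(data[0].split())} messages stored on the server")
imap.logout()
```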

The key question in Warshak was whether, notwithstanding the ISP’s ability to read his mail, Bob still has a reasonable expectation of privacy in the email. This matters because certain Fourth Amendment protections apply where there is a reasonable expectation of privacy. The government had used a certain kind of order authorized by the Stored Communications Act to compel Warshak’s ISP to turn over Warshak’s email without notifying Warshak. Warshak argued that that was improper and the government should have been required to get a search warrant.

The key to the Court’s ruling is an analogy, offered by the amici, between email and phone calls. The phone company has the ability to listen to your calls, but courts ruled long ago that there is a reasonable expectation of privacy in the content of phone calls, so that the government cannot eavesdrop on the content of calls without a warrant. The Court accepted that email is like a phone call, for privacy purposes at least, and the ruling essentially followed from this analogy.

This is not a general ruling that warrants are required to access electronic records held by third parties. The Court’s reasoning depended on the particular attributes of email, and even on the way these particular ISPs handled email. If the ISP’s employees regularly looked at customer email in the ordinary course of business, or if there was a written agreement giving the ISP broad latitude to look at email, the Court might have found differently. Warshak had a reasonable expectation of privacy in his email, but you might not. (Randy Picker has an interesting commentary on Warshak in relation to online records held by third parties.)

Interestingly, the Court drew a line between inspection of email by computer programs, such as virus or spam checkers, and inspection by a person. The Court found that automated analysis of email did not erode the reasonable expectation of privacy, but routine manual inspection of email would erode it.

Pragmatically, a ruling like this is only possible because email has become a routine part of life for so many people. The analogy to phone calls, and the unquestioned assumption that people value the privacy of email, are both easy for judges who have gotten used to the idea of email. Ten years ago this could not have happened. Ten years from now it will seem obvious.

Orin Kerr, who is an expert in this area of the law, thinks this ruling is at higher-than-usual risk of being invalidated on appeal. That may be the case. But it seems to me that the long-term trend is toward treating email like phone calls, because that is how people think of it. The government may win this battle on appeal, but it’s likely to lose this point in the long run.

Apple's File Labeling: An Effective Anticopying Tool?

Recently it was revealed that Apple’s new DRM-free iTunes tracks come with the buyer’s name encoded in their headers. Randy Picker suggested that this might be designed to deter copying – if you redistribute a file you bought, your name would be all over it. It would be easy for Apple, or a copyright owner, to identify the culprit. Or so the theory goes.

Fred von Lohmann responded, suggesting that Apple should have encrypted the information, to protect privacy while still allowing Apple to identify the original buyer if necessary. Randy responded that there was a benefit to letting third parties do enforcement.

More interesting than the lack of encryption is the question of integrity checks on the data. Without an integrity check, it would be pretty easy to change the name in a file. Fred predicts that somebody will make a tool for changing the name to “Steve Jobs” or something. Worse yet, it would be easy to change the data in a file to frame an innocent person – which would make the name information pretty much useless for enforcement.

If you’re not a crypto person, you may not realize that the tools for keeping information secret are different from the tools for detecting tampering – in the lingo, there are different tools for ensuring confidentiality and for ensuring integrity.
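
Here’s a toy demonstration of the integrity side, using an HMAC in Python – my illustration of the general technique, not Apple’s actual scheme. The tag does nothing to hide the name, which sits in the clear, but any edit to the name invalidates it:

```python
import hashlib
import hmac

key = b"key known only to the verifier"  # hypothetical
name = b"Purchaser: Jane Doe"

# Integrity tool: a MAC over the name. Anyone can still read the name;
# confidentiality would be a separate step (encrypting it).
tag = hmac.new(key, name, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(name, tag))                      # True: untouched header checks out
print(verify(b"Purchaser: Steve Jobs", tag))  # False: edited name is detected
```

You can apply either tool, or both, or neither – which is why encrypting the name and integrity-checking it are separate design decisions.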

[UPDATE (June 7): I originally wrote that Apple had apparently not put integrity checks in the files. That now appears to be wrong, so I have rewritten this post a bit.]

Apple apparently used crypto to protect the integrity of the data. Done right, this would let Apple detect whether the name information in a file was accurate. (You might worry that somebody could transplant the name header from one file to another, but proper crypto will detect that.) Whether to use this kind of integrity check is a separate question from whether to encrypt the information – you can do either, or both, or neither.

From a security standpoint, the best way to guarantee integrity in this case is to digitally sign the name data, using a signing key known only to Apple. A separate key is used for verifying that the data hasn’t been modified; Apple could choose to publish this verification key if they wanted to let third parties verify the name information in files.
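
Here’s a sketch of that approach using an Ed25519 signature (via the third-party cryptography package). Everything here is my assumption, not Apple’s actual design; note how signing the name together with a hash of the audio data also defeats the header-transplant trick mentioned above:

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The seller's side: a signing key known only to the seller.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()  # could be published for third parties

def sign_label(name: bytes, audio: bytes) -> bytes:
    # Bind the name to this particular file's contents, so the header
    # can't simply be transplanted onto another track.
    message = name + hashlib.sha256(audio).digest()
    return signing_key.sign(message)

def label_is_valid(name: bytes, audio: bytes, sig: bytes) -> bool:
    message = name + hashlib.sha256(audio).digest()
    try:
        verify_key.verify(sig, message)
        return True
    except InvalidSignature:
        return False

audio = b"...track data..."
sig = sign_label(b"Jane Doe", audio)
print(label_is_valid(b"Jane Doe", audio, sig))     # True: genuine label
print(label_is_valid(b"Steve Jobs", audio, sig))   # False: name edited
print(label_is_valid(b"Jane Doe", b"other", sig))  # False: transplanted header
```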

But there’s another problem – and a pretty big one. All a digital signature can do is verify that a file is the same one that was sold to a particular customer. If a file is swiped from a customer’s machine and then distributed, you’ll know where the file came from but you won’t know who is at fault. This scenario is very plausible, given that as many as 10% of the machines on the Net contain bot software that could easily be directed to swipe iTunes files.

Which brings us to the usual problem with systems that try to label files and punish people whose labels appear on infringing files. If these people are punished severely, the result will be unfair and no prudent person will buy and keep the labeled files. If punishments are mild, then users might be willing to distribute their own files and claim innocence if they’re caught. It’s unlikely that we could reliably tell the difference between a scofflaw user and one victimized by malware, so there seems to be no escape from this problem.