February 22, 2017

Engineering around social media border searches

The latest news is that the U.S. Department of Homeland Security is considering requiring prospective visitors to disclose their “online presence” when passing through a border checkpoint. That means immigration officials could demand users’ passwords to Facebook and other such services, which an agent might then inspect, right there, at the border crossing. This raises a variety of concerns, from its chilling effect on freedom of speech to its being an unreasonable search or seizure, never mind whether an airport border agent has the necessary training to make such judgments, much less the time to do so while hundreds of people wait in line to get through.

Rather than conduct a serious legal analysis, however, I want to talk about technical countermeasures. What might Facebook or other such services do to help defend their users as they pass a border crossing?

Fake accounts. It’s certainly feasible today to create multiple accounts for yourself, giving up the password to a fake account rather than your real account. Most users would find this unnecessarily cumbersome, and the last thing Facebook or anybody else wants is to have a bunch of fake accounts running around. It’s already a concern when somebody tries to borrow a real person’s identity to create a fake account and “friend” their actual friends.

Duress passwords. Years ago, my home alarm system had the option to have two separate PINs. One of them would disable the alarm as normal. The other would sound a silent alarm, summoning the police immediately while making it seem like I disabled the alarm. Let’s say Facebook supported something similar. You enter the duress password, then Facebook locks out your account or switches to your fake account, as above.
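A duress-password scheme like this can be sketched in a few lines. This is a hypothetical illustration, not anything Facebook actually offers: the `Account` class, the `"decoy"`/`"real"` outcomes, and the fixed salt are all my own inventions for the sketch (a real system would use a random per-account salt and a slow password hash such as bcrypt or Argon2).

```python
import hmac
import hashlib

def _hash(password: str, salt: bytes) -> bytes:
    # PBKDF2 stands in for a production-grade slow hash (bcrypt/scrypt/argon2).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class Account:
    def __init__(self, real_password: str, duress_password: str):
        self.salt = b"per-account-random-salt"  # would be random per account
        self.real_hash = _hash(real_password, self.salt)
        self.duress_hash = _hash(duress_password, self.salt)
        self.locked = False

    def login(self, password: str) -> str:
        candidate = _hash(password, self.salt)
        if hmac.compare_digest(candidate, self.duress_hash):
            self.locked = True   # silently lock the real account
            return "decoy"       # present the fake/sanitized profile instead
        if not self.locked and hmac.compare_digest(candidate, self.real_hash):
            return "real"
        return "denied"
```

The key property, mirroring the silent alarm: from the border agent's side, entering the duress password looks like a successful login, while the real account quietly becomes inaccessible.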

Temporary lockouts. If you know you’re about to go through a border crossing, you could give a duress password, as above, or you could arrange an account lockout in advance. You might, for example, designate ten trusted friends, where any five must declare that the lockout is over. Absent those declarations, your account would remain locked, and no amount of coercion could force you to grant access to it, because you would have no way to do so.
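The five-of-ten arrangement is a threshold scheme. Here is a minimal sketch of the bookkeeping, assuming a hypothetical `TemporaryLockout` object tracked server-side (a cryptographic version could use Shamir secret sharing so that even the service cannot unlock early, but plain server-side state suffices to illustrate the idea):

```python
class TemporaryLockout:
    """Account stays locked until `threshold` of the designated trustees approve."""

    def __init__(self, trustees, threshold):
        self.trustees = set(trustees)
        self.threshold = threshold
        self.approvals = set()
        self.locked = True

    def approve_unlock(self, trustee) -> bool:
        """Record one trustee's all-clear; return whether the account is still locked."""
        if trustee in self.trustees:          # ignore approvals from strangers
            self.approvals.add(trustee)
            if len(self.approvals) >= self.threshold:
                self.locked = False
        return self.locked
```

Until five distinct trusted friends have called `approve_unlock`, there is nothing the account owner can say or type at the border that opens the account.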

Temporary sanitization. Absent any action from Facebook, the best advice today for somebody about to go through a border crossing is to sanitize their account before going through. That means trying to second-guess what border agents are looking for and deleting it in advance. Facebook might assist this by providing search features that let users temporarily drop friends, temporarily hide comments or posts containing certain keywords, and so on. As with the temporary lockouts, temporary sanitization would need a restoration process that could be delegated to trusted friends. Once you give the all-clear, everything comes back again.
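The essential point is that sanitization must be reversible: posts are stashed, not destroyed. A toy sketch of such a keyword filter (the function names and the list-of-strings model of posts are my simplifications, not a real Facebook API):

```python
def sanitize(posts, keywords):
    """Partition posts into (visible, hidden) by case-insensitive keyword match.

    Hidden posts are returned, not deleted, so they can be restored later.
    """
    visible, hidden = [], []
    for post in posts:
        text = post.lower()
        if any(k.lower() in text for k in keywords):
            hidden.append(post)
        else:
            visible.append(post)
    return visible, hidden

def restore(visible, hidden):
    """Reverse the sanitization once trusted friends give the all-clear."""
    return visible + hidden
```

In a real deployment the “hidden” set would live server-side, gated by the same trusted-friend threshold as the lockout scheme above.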

User defense in bulk. Every time a user, going through a border crossing, exercises a duress password, that’s an unambiguous signal to Facebook. Even absent such signals, Facebook would observe highly unusual login behavior coming from those specific browsers and IP addresses. Facebook could simply deny access to its services from government IP address blocks. While it’s entirely possible for the government to circumvent this, whether using Tor or whatever else, there’s no reason that Facebook needs to be complicit in the process.
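Blocking logins from known government address space is mechanically simple. A sketch using Python's standard `ipaddress` module follows; the CIDR blocks shown are documentation-reserved example ranges, not actual government allocations, which a real deployment would pull from address-registry data:

```python
import ipaddress

# Hypothetical blocklist: these are RFC 5737 example ranges standing in for
# real government IP allocations, which would come from registry data.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client address falls inside any blocked network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)
```

As the text notes, this is trivially circumvented via Tor or any proxy; the point is only that the service need not cooperate from its own front door.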

So is there a reasonable alternative?

While it’s technically feasible for the government to require that Facebook give it full “backdoor” access to each and every account so it can render threat judgments in advance, this would constitute the most unreasonable search and seizure in the history of that phrase. Furthermore, if and when it became common knowledge that such unreasonable seizures were commonplace, that would be the end of the company. Facebook users have an expectation of privacy and will switch to other services if Facebook cannot protect them.

Wouldn’t it be nice if there were some less invasive way to support the government’s desire for “extreme vetting”? Can we protect ordinary users’ privacy while still enabling the government to intercept people who intend harm to our country? We certainly must assume that an actual, bona fide terrorist will have no trouble creating a completely clean online persona to use while crossing a border. They can invent wholesome friends with healthy children sharing silly videos of cute kittens. While we don’t know much about our existing vetting strategies for distinguishing tourists from terrorists, we have to assume that the process involves the accumulation of signals, human intelligence, and other painstaking efforts by professional investigators to protect our country from harm. It’s entirely possible that they’re already doing a good job.

Regulatory Questions Abound as Mobile Payments Clamor for Position in Apps

People frequently associate mobile payments with “tap and pay” — walking into a store, flashing your smartphone, and then walking out with stuff. But in-store sales really aren’t the focus of companies working on mobile payment issues. That’s because payment in stores generally isn’t a problem in need of a fix. Swiping a payment card at a terminal is quick and painless. Even dipping a chip-card is getting faster. And thanks to regulation, consumers generally don’t have to consider data security tradeoffs when choosing between different ways to pay.

In contrast, buying things while browsing the Internet on our phones — in apps or via browsers — is a miserable process. It’s kind of amazing that we haven’t fixed the basic process of buying things over our phones, given how dependent we are on them for Internet browsing. The average iPhone user unlocks his phone 80 times a day. And the average American smartphone user spends five hours a day browsing on his phone. Yet shopping cart conversion rates are abysmal over mobile phones. Estimates vary, but one recent study found that when consumers use their phones to shop online, they purchase the items in their shopping carts only 1.53% of the time. Imagine being a store owner where over 98% of the people in your lines just wander off because they’re too frustrated with the process of giving you money. Analysts generally attribute the difference to the difficulty consumers have completing lengthy checkout forms, which require that they input payment credentials, billing addresses, shipping addresses, and other information into a tiny screen with their thumbs. For me, one checkout process took about 130 thumb taps.

Last holiday season, a diverse group of companies rushed to fill that gap. In June, PayPal enabled “One Touch,” which allows consumers to stay logged into their PayPal account on specific devices and, accordingly, buy stuff with one touch. That same month, Apple announced that it would be expanding Apple Pay so that consumers can use their thumbprints to purchase things in apps, as well as on the Safari browser (even when they’re surfing on a desktop). Apple also integrated payments into iMessage, making payments as casual as chatting. Not to be outdone, Facebook announced in September that it had partnered with what TechCrunch describes as “all the major players” in the payments industry to enable credit card and debit payments for Messenger’s 1 billion users.

Amazon’s Echo bypasses phones entirely by allowing you to pay for things by speaking into the air. Apple followed up with its own voice-activated payments on Siri.

And oh by the way, Google Payments already gives you the option of storing and autofilling payment card credentials if you’re browsing the Internet using the Chrome browser. Safari does too.

Of course, big banks aren’t giving up without a fight. JPMorgan Chase launched its own mobile wallet for in-app purchases, barely in time for Black Friday. Once a consumer downloads the app and creates a login, his pre-existing Chase cards are “automatically” enrolled in the wallet. According to Chase, that touches one out of every two American households.

All of these offerings are pretty much interchangeable to consumers: they’re made to be very convenient, “frictionless” ways to pay. From a design perspective, the goal is a nearly invisible payments layer, because the aim is to minimize any disruption of the consumer’s interaction with the merchant’s website. It’s gotten to the point where some consumers are complaining that they don’t know how to slow the payments process down.

On the one hand, all of these options are great for consumers. But on the other hand, there may be all kinds of differences under the hood of these payment tools that consumers won’t be able to see. A payment tool may gather, use, or share consumer data differently than consumers expect. Providers may have different standards for protecting consumer data from hackers and thieves. Or, in extreme cases, they may do things that are patently illegal — for example, creating phantom accounts for consumers and then billing them for those accounts. (Heck, in some cases, the apps may even be from imposter companies.) Until very recently, consumers haven’t had to think about these potential differences, because they’ve been living in a payments world dominated by plastic cards offered by highly regulated banks.

Take supervisory examinations, as an example. Banks are generally examined for compliance with consumer protection requirements. This means that regulators send specialized examiners to banks’ places of business to speak with employees and review their records to make sure they’re following the law. Examiners will review email and phone exchanges to understand whether consumers are given the proper disclosures. They’ll review consumer complaints to ensure that consumers are treated fairly. Because JPMorgan Chase is a bank, it’s subject to examinations. So when Chase Pay hits the market, it will have had its tires kicked (or at least can have its tires kicked) by the government. This is a good thing for consumers, and arguably also for the bank. But it’s also a business cost — compliance and preparing for examinations require a significant investment of money and, perhaps more importantly, delay the bank’s ability to get a product to market. (Notably, despite being announced in 2015 and Chase’s position as the leading wholly-owned payment provider for merchants, Chase Pay is still only accepted at two major retailers.)

New payment-focused fintech companies are subject to a wide variety of other regulations, but generally don’t have regulators coming on-site to examine their operations for consumer protection concerns. There are odd exceptions, but it’s far from a level playing field. For instance, companies that are very large players in the market for sending payments from the U.S. to other countries (“remittances”) are subject to examination. So if a company is a “larger participant” in the remittances market and also offers retail consumer payments on smartphones, the latter could be swept up in an examination of the former. Elsewhere, companies that have contractual relationships with credit card issuers may be considered “service providers” to banks. At least one commentator, for instance, has opined that Apple’s service provider relationship with credit card-issuing banks makes Apple Pay subject to consumer protection examinations for unfair, deceptive, and abusive acts and practices. But many of the new payment services being offered to consumers won’t require the companies to have pre-existing contracts with consumers’ payment card issuers. How do browser extensions fit into the patchwork quilt of consumer protection examinations? Are messaging apps that allow for payment connectivity “third party service providers” from an examination perspective? How do you even examine for consumer disclosures when a payment is made over a speaker?

There are many more unanswered questions. For instance, what responsibility — from a regulatory perspective — do app stores have for protecting consumers from imposter payment apps?

Is the lack of a level playing field fair to banks? More importantly, is it fair to consumers?

Do the old divisions that treat these companies differently still make sense?


Concerned about Internet of Things Security?

There is no shortage of warnings about the need to improve security for the Internet of Things.

Certainly these messages must be raising concerns in organizations that are working on Internet of Things projects.

But it doesn’t seem so.

In our recent research at MIT Sloan Management Review, we found that only 34% of respondents felt that they needed to improve their IoT data security. If you are trying to decide whether the glass is half full or half empty, this glass looks about two-thirds empty to me.

The research included responses from 1,480 executives, managers, and IT professionals working in a wide variety of industries. It focused on the perspective of organizations, not security professionals, and tried to understand their challenges and opportunities associated with the Internet of Things.

One optimistic interpretation of these results is that the other 66% are not concerned about IoT data security because they have heeded the warnings and have already taken steps to reduce security risks. But we also asked respondents how effective their organizations were at securing IoT data. Figure 1 shows the relationship between concern for IoT data security and the organization’s perceived data security effectiveness. The reported need to improve IoT security changed little with perceived effectiveness.

Figure 1: Concern for IoT Security and IoT Security Effectiveness

An alternative, more pessimistic interpretation is that organizations do need to improve IoT security, but that it is simply not an important concern for them. Instead, in order to take advantage of IoT, respondents felt more need to improve their overall analytics capability (58%), analytics talent (52%), IoT-specific talent (49%), executive team’s understanding (46%), ability to communicate with customers (45%), and relationships with other groups who understand IoT (40%). In fact, the need to improve data security (34%) and sensor-data security (27%) were selected less often than any other option we offered. And respondents could select as many options as they felt applied to their organization, at no cost.

Our respondents had a variety of experience with IoT projects. It could be that those who are not active may not yet be aware of potential security issues. Given that most organizations are not yet active with IoT projects, our results could be driven by those inactive organizations. Figure 2 examines organizational concern for IoT data security as organizations gain experience with IoT. Concern is higher among organizations active with IoT, with some drop-off as they gain further experience. But it seems that inactive organizations are not solely responsible for the low overall need to improve IoT data security.

Figure 2: Concern for IoT Security and IoT Experience

While IoT security is inherently important, it may be even more salient when combined with another key result from our research—business value from the Internet of Things is related to the amount of data sharing between customers, suppliers, and even competitors. As organizations find value in sharing data with other organizations, they are likely to increase connections with other organizations, leading to increased potential for negative externalities.

Unfortunately, the low perceived need to improve IoT data security, coupled with increased IoT deployments and interconnections between organizations, seems likely to lead to more headlines reporting IoT security failures, not fewer.