November 13, 2024

Accountable Algorithms: An Example

I wrote yesterday about accountable algorithms. When I say that a public algorithm is “accountable” I mean that the output produced by a particular execution of the algorithm can be verified as correct after the fact by a skeptical member of the public. Today I want to work through an example.

Consider a simplified airport security algorithm where each person who goes through a security checkpoint is selected for enhanced search with 5% probability. We want to make sure that a person who is selected can verify that they really were picked by a fair random process, and not because of bias or some other off-the-books reason.

Of course, we don’t want to tell people beforehand whether they are going to be searched. For security reasons, we want the subject to be surprised by the search, finding out about the search only when they arrive at the checkpoint. The verification will happen afterward, perhaps the next day–so we are not preventing biased or incorrect choices, only detecting them.

To solve this problem, we need to use two cryptographic operations: a commitment and a selection function. I’ll give an intuitive explanation of what they do, but I won’t describe the underlying math–I assume the cryptographers among you will be able to figure that out for yourselves.

A commitment is like a sealed envelope. A party can “commit” to a value, which is like writing that value on a piece of paper, sealing the paper in an envelope, and placing the envelope in public view where nobody can tamper with it. Later, the party who sealed the envelope can “open” it, which reveals the value that was initially sealed. The key property of a commitment is that the value, once sealed, cannot be changed.
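
To make the envelope metaphor concrete, here is a minimal sketch of one standard way to build a commitment, as a salted hash. (This is my illustration of the idea, not necessarily the construction a real deployment would use.)

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Seal `value` in an 'envelope': publish the digest now,
    keep the nonce secret until it is time to open."""
    nonce = secrets.token_bytes(32)  # random salt; hides the value
    envelope = hashlib.sha256(nonce + value).digest()
    return envelope, nonce

def open_commitment(envelope: bytes, nonce: bytes, value: bytes) -> bool:
    """Anyone can check that the revealed value matches the sealed one."""
    return hashlib.sha256(nonce + value).digest() == envelope
```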

A selection function is a way of picking items. We’ll write it as S(k,n), where “k” is a key and “n” is a name. Given a particular key and name, the selection function says “yes” or “no”. The function’s behavior, intuitively, is rather like a random 5% choice among names–if you pick a key k at random, then S(k,n) will say “yes” for a “random-looking” 5% of names. Think of the selection function as knowing a great many random-looking methods for picking names, and the key as choosing which of those many methods to use on a particular day. The selection function is public and known to everybody (but the key might be secret).
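
One natural way to realize such a selection function is with a keyed hash. The sketch below is my own illustration (the name S and the 5% rate come from the post; the HMAC construction is an assumption): for any fixed key, it says “yes” for a random-looking 5% of names.

```python
import hashlib
import hmac

def S(k: bytes, n: str, rate: float = 0.05) -> bool:
    """Keyed pseudorandom selection: for a random key k, answers
    'yes' for a random-looking `rate` fraction of names n."""
    digest = hmac.new(k, n.encode("utf-8"), hashlib.sha256).digest()
    # Interpret the first 8 bytes as a uniform number in [0, 2**64).
    return int.from_bytes(digest[:8], "big") < rate * 2**64
```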

Now we can create our accountable selection method. First thing in the morning, before the security checkpoint opens, the TSA picks a random value R and commits it. Now the TSA knows R but the public doesn’t. Immediately thereafter, TSA officials roll dice, in public view, to generate another random value Q. Now the TSA adds R+Q and makes that sum the key K for the day.
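
In code, the morning setup might look like the following sketch, reusing the commit function from above. (Treating R, Q, and K as 256-bit numbers, and taking the “sum” modulo 2^256, are my simplifications.)

```python
import secrets

# TSA's secret value, chosen and committed before the dice are rolled.
R = secrets.randbelow(2**256)
R_envelope, R_nonce = commit(R.to_bytes(32, "big"))  # publish R_envelope

# ... TSA officials now roll dice in public view ...
Q = 453126  # e.g., six public dice read as a string of digits

# The day's key, known only to the TSA until the commitment is opened.
K = ((R + Q) % 2**256).to_bytes(32, "big")
```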

Now, when you arrive at the checkpoint, you announce your name N, and the TSA uses the selection function to compute S(K, N). The TSA announces the result, and if it’s “yes,” then you get searched. You can’t anticipate whether you’ll be searched, because that depends on the key K, which depends on the TSA’s secret value R, which you don’t know.

At the end of the day, the TSA opens its commitment to R. Now you can verify that the TSA followed the algorithm correctly in deciding whether to search you. You can add the now-public R to the already-public Q, to get the day’s (previously) secret key K. You can then evaluate the selection function S(K,N) with your name N–replicating the computation that the TSA did in deciding whether to search you. If the result you get matches the result the TSA announced earlier, then you know that the TSA did their job correctly. If it doesn’t match, you know the TSA cheated–and when you announce that they cheated, anybody can verify that your accusation is correct.
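
Continuing the sketches above, the next-day verification is just a replay of the same computation, using only public values:

```python
# TSA reveals R and R_nonce; anyone can check them against the commitment.
assert open_commitment(R_envelope, R_nonce, R.to_bytes(32, "big"))

# Reconstruct the day's key from the two now-public values ...
K = ((R + Q) % 2**256).to_bytes(32, "big")

# ... and replay the selection decision for your own name.
searched = S(K, "Alice Traveler")
# `searched` must match the yes/no answer the TSA announced at the
# checkpoint; a mismatch is publicly checkable proof of cheating.
```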

This method prevents the TSA from creating a non-random result. The reason the TSA cannot do this is that the key K is based on the result of the die-rolling, which is definitely random. And the TSA cannot have chosen its secret value R in a way that neutralized the effect of the random die rolls, because the TSA had to commit to its choice of R before the dice were rolled. So citizens know that if they were chosen, it was because of randomness and not any TSA bias.

This is just one example of an algorithm that can be made accountable. But it turns out that a very broad class of algorithms can be made accountable. For example, other randomized algorithms can be made accountable by applying the same basic method I used here, and algorithms with secret inputs can be made accountable by using other cryptographic methods, which I might describe in a later post.

The bottom line is that many algorithms of public interest can be made accountable. The cost of doing so is not zero–any change has a cost, and accountable versions of an algorithm are sometimes less efficient than an unaccountable version–but it’s useful to know that accountability is possible.

Comments

  1. Wendy M. Grossman says

    Of course, the cost of *not* doing something like this is also not zero. There is a definite societal cost in the loss of confidence and trust in these systems, especially as they become more widespread.

    Ed: I’ve sent you email to ask if you can talk to me for the project I’m working on, for which this is highly relevant.

    wg

  2. Yes, there are many ways to randomize selections and make them accountable. However, I ask you: backing up, do any of you really think that randomized searches are statistically sound? Doesn’t the base rate fallacy enter into the discussion somewhere? Pretty much, it’s like taking a hunter out to the woods, blindfolding him, instructing him to stand up every once in a while and fire off a round or two, and expecting him to bag a deer.

    Oh, you say… it’s the deterrent effect that we’re really after. Okay – let’s explore that. Game theory says that when one attack avenue is sealed, every other available avenue becomes relatively more valuable. So tell me: if there is such a huge terrorist threat, and if the deterrent effect of airline security is so fantastically effective, where within the last 11 years have there been comparable attacks? Have any security lines been blown up? Any hotels, buses, trains, subways…? The list goes on, and the answer is no. I therefore must conclude that we overestimated the threat and commensurately overreacted. My conclusion is that this discussion, while mathematically accurate in its approaches, is simply trying to solve a problem that is not much of one.

  3. Did anyone here already come up with salting with a user-generated number? I.e., you let the person being searched (or not) pick a number. Even if it is not entirely random, you can never say that the TSA tricked you into entering four instead of three, now can you? You did that yourself. Then you only have to commit to whether you search someone when they press four and not when they press three, and you are done. What am I missing? This problem has already been solved in the “pick a number” game, but you do have to commit to a number and change your commitment per person.

  4. The random selection is pretty simple, and so is verifying that you are on a list without learning anything else about the list. But you would also want to not know when you were added to the list, and preferably you would want to not know whether you were selected randomly or from the list. How would you do that?

  5. Love the analytical explanation of accountable algorithms. I can see good use for those in some scenarios. What I can’t see is why you would want to randomly search 5% of persons at an airport and call that “security” by any stretch of the imagination.

    Let me take it from the computer-coding perspective. Suppose you have an anti-virus scanner that scans code prior to allowing it to execute. The scanner is set to scan a random 5% of code prior to execution, but lets the other 95% of code execute automatically without any additional safeguards.

    This would be a very poor anti-virus, as it would only catch an occasional virus, while leaving you with a 95% chance, every time code is executed, of getting infected. Even if you added the “no fly” and “always search” style lists (never execute and always scan) to such a scanner, you would still have a tremendous amount of unknown code that is allowed to execute without hindrance.

    A true security system doesn’t let 95% go unsuspected… and yet, in this country it simply would not be allowed to suspect and search every single person (something about “unreasonable searches” comes to mind; it’s already bad enough without supposing that an “enhanced” search [pat down, strip search, etc.] were required for everyone).

    This is why it doesn’t matter what the TSA does to search a select few, random or not, accountable or not; they will NEVER be effective at stopping terrorists. They will only be effective at degrading citizens’ rights.

    With that said, I realize your post was about accountable algorithms and not about security in airports, and as such I enjoyed the read yet again.

    • For your analogy to be correct, your anti-virus should have a cost, e.g., it takes x minutes to run the inspection. Furthermore, an anti-virus never has 100% confidence that it has selected a virus. It has false positives and negatives for the ones it wrongly selects or overlooks.

      Security is all about managing risks! Accountability at least makes such processes trustworthy.

      • “Security is all about managing risks!” which is my point entirely. The process of randomly searching someone (trustworthy or not) does not manage risk very effectively–in fact it is very ineffective (as has been shown by the many people who have gotten past such security with weapons). My point in the analogy with the virus scanner is that no scanner out there uses such a random scanning technique. Rather, all scanners do at minimum two things: they check based on virus signatures (known risks), and usually they monitor portions of computer software for risky code (behavior). They never do it randomly; rather, they watch behavior.

        Now, as for the x minutes to run the inspection… well, I simplified, but that is true of any virus scanner too. In fact, all virus scanners I have ever used take up so much processor time (x milliseconds) and memory (limited resources) that I have noted a huge decrease in system performance from having even one virus scanner running. And yes, I too have seen too many false positives and negatives to put any credence in a virus scanner. Hence I don’t run anti-virus on my computers; I use other techniques. But I certainly don’t randomly go about business either.

        Thus, my analogy was simply taking an idea usually understood in tech circles (anti-virus) and applying it to a comparable idea in airport security. For the TSA to have any effective measures, they must run processes that are based on “behavior” and NOT on randomness. But basing it on “behavior” is called profiling and draws the outcry, and so everyone opts for this insecure “randomness” for a feeling of security.

  6. Clive Robinson says

    Ed,

    There is a problem with TSA queues that means they cannot use this or many other processes.

    The problem arises from the constraint that the TSA cannot “enhanced search” every passenger, due to resource limitations. This gives rise to a secondary constraint of getting X people through the security process to meet a fixed cut-off time by which the flight must leave.

    The TSA have to tell people well in advance to leave Y amount of time to get through security before their flight cut-off time. And Y is an estimate based on the number of passengers Z that the TSA can deal with at the checkpoint in a given time.

    Now, what happens in reality is that the TSA knows what X is well in advance, but not the composition of those (N) who get a normal search and those (E) who get an enhanced search, which is problematic. So what the TSA do is make the enhanced-search group actually consist of two subgroups: those who are mandatory (Em), being on the watch list, and those who are discretionary (Ed).

    Shortly before the flight the TSA should know how many Em people they have; thus Ed people are selected to make up the difference, filling the (say) 5% of Z that their targets say should be “subject to enhanced screening”.

    Thus your chance of being randomly selected for enhanced screening is constantly changing, depending on the number of Em people there are. But nobody knows where those Em people are in the queue waiting to get through security. What is known is that only a small maximum number of people can be undergoing an enhanced search at any one point in time, based on the resources available at the checkpoint.

    So… from a terrorist’s point of view, the trick is to “stuff the E queue”. That is, put the required number of Em people in the general queue immediately in front of the attacker they want in the N queue.

    The game the TSA has to play is how to manage the resources such that E-queue stuffing is not viable, while keeping the resource expenditure low.

    This means that the simple, obvious systems suggested above, of “the number you rolled matches today’s number”, cannot work unless Em is a very, very small fraction of the population.

    And in turn your system, if implemented incorrectly, will fail. That is, it should only be used to “pick candidates” for enhanced searching, not to mandate who will be enhanced searched.

    The problem of terrorists or others “gaming the system” by such tricks as E-queue stuffing is something the TSA has to worry about, and they appear to have failed to address it correctly, in that quite a few “frequent flyers” have their own systems for avoiding enhanced searches, simply by observing the process prior to joining the general queue(s)…

  7. Doesn’t this ignore the fact that active profiling (based on behavior if nothing else) actually works, and the similar evidence that random inspection doesn’t? Are we so concerned about hurting feelings that we’re ready to sacrifice people in a truly random fashion?

  8. How about simplifying this greatly? Take the same envelope. Write in it a number from 1 to 6 (or whatever the maximum number between random inspections is). All of the TSA staffers are privy to this number. A traveler rolls the die in plain view as he passes through the checkpoint. If he hits the number in the envelope, the TSA says “okay, spread ’em”. At the end of the day, the TSA opens the envelope and publishes a notarized photo online with the checkpoint number and the random number of the day. No? You might get into D&D 20-sided dice, etc., but it’s the same concept.

    • Your method works for the exact problem I described. But I’m looking to address a much broader class of problems, which can’t all be addressed by the method you describe.

      Also, I’d like the system to have exactly the same user experience as the current one for those users who don’t care about this stuff–so that only the TSA, and those people who want to do the verification step, have to do anything different from today.

  9. Doesn’t this work only if the TSA has limited computing power? Otherwise, for the number of people going through any particular gate in a day it should be plausible to post-compute R and S.

    • But the TSA does have limited computing power. If you can set any limit at all–say, a million million million times the power of all of the computers in the world, running since the beginning of the universe–then there is a way to choose the selection function such that the kind of brute force attack you describe is infeasible.

  10. Megan Squire says

    How is this an improvement over just having a number announced each day (“Today the number is 4”) and if the traveler rolls a 4, they get searched?

    Walk me through the “name” thing in this example. I’m not following that. Maybe that’s where I’m failing to track your method. I get the crypto part, that’s not the issue.

    • Well, I have to say that is elegantly simple. Just have someone roll a die, and if the number for the day comes up, they get searched. Then the person knows it wasn’t biased. Just have it rolled behind a window, where the people getting searched can’t touch it and swap it out for a loaded die by sleight of hand. No fancy electronic chooser. You can use the same approach that Vegas uses for roulette tables: they swap out the wheel every so often, and they use cameras to record results to see if there is a statistical bias and exchange the wheel if there is.

  11. Matthew Wilkes says

    The problem here is that you can start finding combinations of names that are unlikely to both get searched on a given day. You can effectively ignore the die roll and build a table for different values of R, and then either watch people go through the checkpoint to refine your guess, or just pick two people to go through whose chance of both being stopped is less than some amount.

    • This isn’t a problem if you choose the selection function S carefully. If you use the right kind of cryptographic function, there will be no feasible way to spot patterns or to usefully narrow down the possible choices of R.

  12. I like this. It also allows us to refer to the TSA as a TSR. And if you understand the reference, you get a +1 on your next search roll.

  13. I wonder how this idea could apply to open source, publicly verifiable electronic voting?

    I believe that electronic voting will be the only way voting will be carried out in the future.

    The degree to which largely undetectable electronic vote switching is going on is unknown, although in some cases (GovTech Solutions – Ohio 2004*) it has become quite obvious how massive the problems are that our democracy faces with the corruptible, ad-hoc patchwork of electronic voting systems in use.

    I have often thought that there has to be a way to create open source, publicly verifiable, individually private electronic voting. Do you have any insights you could share? Anyone?

    * http://www.freepress.org/departments/display/19/2011/4239

    • Electronic voting is a great example of a public algorithm that should be accountable. E-voting was what got me thinking about the accountability problem in the first place. (But it’s not the simplest example to explain.)

  14. Am I missing something? A terrorist can defeat this by sending someone through with the name John Smith to see if he gets searched, and if he doesn’t, then sending another person with the same name carrying something to smuggle through… It seems that you have to choose N such that it can’t be changed by the person coming in, yet is still something they can verify at the end of the day to see if they were randomly selected.

    • A “name” is an abstract entity. It need not refer to your common legal name. It could be a hash of that plus your driver’s license number, for example.

    • N can be a checksum of Fname, Lname, and DOB. If two appear, search them both. Or use the name with a rolling index number or timestamp added to N.

      But why should all searches be random? If you throw away intuition, even $13-per-hour intuition, you decrease security. A person’s judgment after a year or two of looking at people is better than a random search. If the selection process proves to be skewed toward selecting a certain demographic, and that percentage of selection is higher than that demographic’s representation in terrorism relative to the general population, then cull that person from the selection process, or only allow their decision to count up to demographic representation…

      • Yes, but what I said still holds true. Since N is a known entity, you can verify it later. Then all the perpetrators have to do is find an N that isn’t searched and send that N through again with whatever prohibited item they don’t want found. The same holds true for non-random searches. It is pretty well known that random searches are most effective, and the fact that one is using N to decide if a search takes place just made the search non-random.

        • Joe Kavanagh says

          Unless how N is created is also secret until the end of the day (perhaps by switching up things like the first 4 digits of the ID #, flight confirmation code, seat on flight, last 4 digits of the ID #, letter(s) in a specific position of the name (e.g., 2nd letter of last name), exact ticket price, etc.). Also, the exact time could be used as part of the algorithm, so an answer that was true at 9:05:15 isn’t true at 9:05:20, etc.

      • “A person’s judgment after a year or two of looking at people is better than a random search.”

        You can’t learn a lot from looking at thousands and thousands of negative examples and no positive ones.

      • “But why should all searches be random? If you throw away intuition, even $13-per-hour intuition, you decrease security. A person’s judgment after a year or two of looking at people is better than a random search.”

        Only if you can show evidence that “intuition” produces results better than chance. Given the very small number of true positives, that’s highly unlikely. See Bruce Schneier’s response to Sam Harris (http://www.samharris.org/blog/item/the-trouble-with-profiling) on the base rate fallacy and why profiling is counterproductive.

    • It’s also vulnerable to the same person passing through twice in the same day. This problem could be solved by salting N with a timestamp, so that the probability changes with each passing second, but then verifying it would require recording the specific time at which the algorithm decided to search a person.

    • Thanks for this thread. I did simplify things by saying “name” but of course as you guys point out, you need to use something that is unique and not under the control of the user.

      With regard to passing through the system multiple times, that’s another detail that needs to be taken care of. You could incorporate the current time into the system, or, if possible, you could always search somebody who has already been through the checkpoint earlier in the same day.

      • If it’s just one search per day, it’s a flaw.

        Go through multiple times. If searched, go around, pick up your bomb and walk through.

        • No, other way around. Your first trip through the checkpoint, it’s random whether you’re searched or not. All times after the first, you are guaranteed to be searched. This also catches anyone who tries to copy your N.

          Of course, then you need some way of proving that someone with your N really did go through earlier, or else the TSA can search anyone they want by claiming that their N is a repeat…

          Timestamp probably isn’t a good solution either, because the TSA could probably influence your timestamp by at least a few seconds to change your result.

  15. This is an important step for public processes.

    In San Francisco I’ve struggled to get the school district’s school-assignment lottery results published — it’s very similar to what you describe. A long political process went into tilting the odds for various reasons (basically a lot of people trying to undo Brown v. Board of Education for their particular neighborhoods), and little effort went into auditing the results. The resulting rules are complex, errors are frequent and semi-hidden, and the detailed results are never published. Nobody has any confidence in the system.

    An auditable, anonymized results dataset should be a requirement of any algorithm which affects individual citizens. We seem to have sunshine laws for some public information, but no requirement to open up personal-public data like this.

  16. A much easier solution: when I landed in Mexico, there was a simple box attached to nothing but a power cord. When you press a button on the top, a sign lights up stating either that you may pass or that you have to go to an inspection. Now, assuming there is no wifi trickery involved, or a particularly vengeful person of disadvantaged height, this gives you a random decision and has the added benefit of putting more “control” in the hands of the end user.

    • The box-and-light approach would seem to work for this case, if you assume that the box works as it is supposed to. But what I’m after is a set of methods that works for a much broader class of problems.

      • Get one of those pop-up dice bubbles used in board games to automatically roll the dice, and mount it on a table in front of a guard. Put in a D20 with one face painted red; everyone who goes by pushes down on the bubble once under the eye of the guard. Bonus points for including a mechanism to prevent ‘double tapping.’ (Use a clear tube on a central pivot?)

        Designers often forget to consider the requirements of implementation when designing an algorithm. People should take responsibility for their own security; this means implementing ideas so as to maximize their comprehensibility to everyone involved.

        This does make it more difficult to find solutions that fit broad classes of problems. Still, the point of design is to shift time and cognitive load onto the designer and off of the user, so it’s a challenge worth surmounting.

    • Or a solution that seems to be in use at other airports around the world, at least at Arlanda (ARN): when you go through the metal detector, it can randomly pick you for a search. Since it knows when a person is passing, it just tosses a die and signals that you are picked.

      (But the Arlanda extra search is just a patting down and a sweep with a hand-held metal detector. It takes 30 seconds.)

  17. This is crazy. The point of the TSA has been ritual humiliation and obedience training. The best way to do that is to have low-IQ, uneducated minorities have the power to humiliate and inconvenience successful people. That’s what the TSA is for. It has nothing to do with security.

  18. This would be slightly less confusing if the symbol S did not refer both to a random die roll and a public selection function.

    • Good point. I modified the post so the die roll result is called Q. Now there is only one thing called S.

  19. So, now the challenging question: How do we make a selection function with a secret ‘always select’ list accountable, without nasty information disclosure flaws?

    • There are two kinds of information disclosure you might worry about. First, the output of the algorithm will inherently reveal information–there is nothing you can do about that. Second, you want to be able to apply a secret searchlist of people who always get searched, without having to reveal the full contents of the searchlist. This second problem has a solution, but it’s a bit too intricate to explain here in a comment. I might write another post about it.

      (If you’re a crypto geek, I can give you a summary: the TSA commits to the root hash of a Merkle tree containing the names of people on the searchlist. Then they can establish after the fact that a particular person was or wasn’t on the search list, by revealing a single path in the Merkle tree. Alternatively, there is a method that uses Yao’s circuit garbling trick.)
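
      For the crypto geeks, here is a bare-bones sketch of the Merkle-tree half of that summary (my illustration; a real design would also pad and salt the tree so that proofs don’t leak the list’s size or other entries):

      ```python
      import hashlib

      def h(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def merkle_root(leaves):
          """Root hash over the hashed leaves; this is what gets committed."""
          level = [h(x) for x in leaves]
          while len(level) > 1:
              if len(level) % 2:
                  level.append(level[-1])  # duplicate last node if odd
              level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
          return level[0]

      def merkle_path(leaves, index):
          """Sibling hashes proving leaves[index] sits under the root."""
          level = [h(x) for x in leaves]
          path = []
          while len(level) > 1:
              if len(level) % 2:
                  level.append(level[-1])
              path.append((level[index ^ 1], index % 2))  # (sibling, am-I-right-child)
              level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
              index //= 2
          return path

      def verify_path(leaf: bytes, path, root: bytes) -> bool:
          node = h(leaf)
          for sibling, node_is_right in path:
              node = h(sibling + node) if node_is_right else h(node + sibling)
          return node == root

      # Morning: TSA commits to merkle_root(searchlist). Later it can prove
      # that one name was on the list by revealing a single merkle_path,
      # without disclosing any other entry.
      ```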

    • There’s no guarantee that S or R have been chosen randomly.

      Incidentally, also look up “blind substring search”, which may be relevant here.

      • R doesn’t have to be chosen randomly. Q is chosen by publicly rolling dice, so we can assume it really is random.

        (Note that what you call S is now called Q in the text–I renamed it to avoid confusion.)