March 29, 2024

We need a personal digital advocate

I recently looked up a specialized medical network. For weeks following the search, I was bombarded with ads for the network and other related services: the Internet clearly thought I was in the market for a new doctor. The funny thing is that I was looking this up for someone else, so all this information, pushed on me across browsers and across devices, was not really relevant. I wish I could muster such relentlessness and consistency for things that really matter to me!

This is but one example of the huge imbalance between the power of the algorithms that track our economic interactions online and the power of individual consumers to have a say in the information collected about them. So here we are, being offered products that might not be in our best interest, based on our past search histories. Even worse, as individuals we have no way of knowing whether economic opportunities are being advertised equitably. This is true in small things, such as price discrimination on shoes, and in important things, such as job searches. Does your internet reality reflect a lower-wage world because you are known to the internet to be female?

Current rules and regulations that attempt to protect our best interests online are woefully lacking, which is understandable: they were never designed for the digital world. The problem is not just the difficulty of documenting (and proving) bad behavior such as discrimination or dark patterns, but also the task of allocating responsibility – untangling the stack of intertwined ad technologies and entities responsible for that behavior. The most viable proposals tend to revolve around holding companies such as Facebook or Google accountable through regulation. This approach is important but is limited by several factors: it does not work well across corporate and national boundaries, it exposes companies to a significant conflict of interest when implementing such regulations, and it does nothing to address the growing imbalance between the user and the data centers behind the phone screen.

What we would like to propose is a radically different approach to righting the balance of power between algorithms and individual users: the Personal Digital Advocate. The broad idea is to give consumers (both as individuals and as a group) an algorithm that is answerable only to them and that has computing power and access to information equal to what companies currently possess. Here is a sample of the benefits such an advocate could provide:

  1. You can’t possibly know when a company is quietly charging you more because your purchase history indicates that you are not going to comparison-shop. The advocate will be able to detect this by comparing the price you are quoted with the prices offered to other people for the same product over the past several months (see the first sketch after this list).
  2. The advocate will be able to detect instances of gender- and race-based discrimination in job searches by noticing that you are not being shown the full range of jobs for which you qualify (see the second sketch after this list).
  3. Instead of incomplete, messy, often wrong data about you being bought and sold on the internet behind your back and outside of your control (which is the default today), you will be able to use your digital advocate to freely offer certain information about yourself to companies in a way that benefits both you and the company. For instance, suppose you are shopping for a minivan. You have looked at all kinds of brands, but you know that you will only buy a Toyota or a Honda. This is information you might not mind sharing if there were a way to do so: Kia dealerships would stop wasting their money and your time by advertising to you, and the ads you do get might actually become more relevant.
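
To make the first point concrete, here is a minimal sketch of the kind of price check an advocate might run. It is purely illustrative: the peer prices, the `flag_possible_overcharge` helper, and the 5% tolerance are all assumptions, and a real advocate would need far more careful statistics.

```python
# Hypothetical sketch: flag a quoted price that sits well above what
# other consumers were recently offered for the same product.
from statistics import median

def flag_possible_overcharge(my_price, peer_prices, tolerance=0.05):
    """Return (flagged, benchmark): flagged is True when my_price
    exceeds the median peer price by more than `tolerance`."""
    benchmark = median(peer_prices)
    return my_price > benchmark * (1 + tolerance), benchmark

# Made-up prices quoted to other users for the same item.
peer_prices = [39.99, 41.50, 40.25, 39.99, 42.00, 40.75]
flagged, benchmark = flag_possible_overcharge(46.99, peer_prices)
if flagged:
    print(f"Quoted $46.99; others typically paid about ${benchmark:.2f}.")
```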
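
The second point admits a similarly simple sketch: the advocate compares the listings shown to you with the pool shown to comparably qualified users. The listing IDs and the `coverage_gap` helper are, again, hypothetical.

```python
# Hypothetical sketch: measure how much of the peer job pool you saw.
def coverage_gap(shown_to_me, shown_to_peers):
    """Return (missing, coverage): the listings peers saw but you did
    not, and the fraction of the peer pool that was shown to you."""
    missing = shown_to_peers - shown_to_me
    coverage = len(shown_to_me & shown_to_peers) / len(shown_to_peers)
    return missing, coverage

mine = {"job-101", "job-102", "job-105"}
peers = {"job-101", "job-102", "job-103", "job-104", "job-105"}
missing, coverage = coverage_gap(mine, peers)
print(f"You saw {coverage:.0%} of the peer pool; missing: {sorted(missing)}")
```

A persistent gap correlated with a protected attribute would be a red flag worth investigating, not proof of discrimination by itself.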

For a digital advocate to become viable, two major policy changes will need to take place – one in the legal and one in the technical domain:

Legal: In the absence of a legal framework, it will always be more profitable for a digital advocate to sell the consumer out (e.g., it is easy to see how it could start steering people toward certain products for a commission). Fortunately, legal frameworks that prevent this already exist in other arenas. One good example is the lawyer/client relationship. It might otherwise be very profitable for a law firm to betray a client and use his information against him (e.g., by leaking his willingness to pay in a real estate deal and then collecting a commission), but any lawyer who does that will immediately be disbarred, or worse. There needs to be a “bar” of sorts for the digital advocate.

Technical: A technological framework will need to be developed that allows the advocate to access all the information it needs, when it needs it. “Digital rights” laws such as GDPR and CCPA will need to incorporate a digital access mandate – allowing the end-user to nominate a bot to uphold her rights (such as the right to refuse cookies without having to go through a dark pattern, or the ability to download one’s data in a timely manner).
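
No such delegation mechanism exists today, but a minimal sketch can illustrate the shape it might take: the user signs a grant naming an advocate and the rights it may exercise, and a service checks that grant before honoring the advocate’s request. Every name, scope, and format below is a hypothetical illustration, not a real API.

```python
# Hypothetical sketch of a "digital access mandate" delegation grant.
import hashlib
import hmac
import json
import time

def issue_delegation(user_secret: bytes, advocate_id: str, scopes: list) -> dict:
    """User-side: sign a grant allowing `advocate_id` to exercise the
    listed rights (e.g., refuse cookies, export data) for 30 days."""
    grant = {
        "advocate": advocate_id,
        "scopes": scopes,  # e.g., ["refuse-cookies", "export-data"]
        "expires": int(time.time()) + 30 * 24 * 3600,
    }
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["signature"] = hmac.new(user_secret, payload, hashlib.sha256).hexdigest()
    return grant

def verify_delegation(user_secret: bytes, grant: dict, scope: str) -> bool:
    """Service-side: check the signature, the expiry, and the scope."""
    unsigned = {k: v for k, v in grant.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(user_secret, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(grant.get("signature", ""), expected)
            and grant["expires"] > time.time()
            and scope in grant["scopes"])

secret = b"user-held-key"  # stand-in for a proper user-held key
grant = issue_delegation(secret, "advocate-bot-7", ["refuse-cookies", "export-data"])
print(verify_delegation(secret, grant, "export-data"))  # True
```

In practice the grant would be signed with the user’s private key so that services could verify it without holding any user secret; the shared-secret HMAC above merely keeps the sketch within the standard library.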

Regulations always tend to fall behind advances in technology; this has been true across industries and throughout history. For instance, there was a notable lag between the time when medications started being mass-produced and the emergence of the FDA. Our ancestors who lived in that gap probably consumed “medications” ranging from the harmlessly ineffective to the outright dangerous. The algorithms that govern our online lives (which merge more and more with our regular lives) change more quickly than those of any other industry and, moreover, are able to adapt automatically to regulations. Regulations, which have trouble keeping up with progress in general, will especially struggle against such an adaptive opponent. Thus, the only sustainable way to protect ourselves online is to create an algorithm that will protect us and will be able to develop at the same rate as the ones that can wittingly or unwittingly harm us.

Mark Braverman is a professor of computer science at Princeton University, and is part of the theory group. His research focuses on algorithms and computational complexity theory, as well as building connections to other disciplines, including information theory, mathematical analysis, and mechanism design.

A longer version of this post is available in the form of an essay here.

Comments

  1. I think you’re heading toward what Doc Searls used to call ‘Consumer Rights Management’ and now calls ‘Customer Commons’. You should definitely check out the essays on his blog, and https://customercommons.org.