Nash and the Noise: Using Distributed Ledgers to Make Less Invasive Technology

Aaron Day
Feb 24, 2021 · 11 min read


This essay was originally published at the end of 2017. It’s recently been updated with the assistance of Carl Alviani.

Expecting a human to deal with all the interruptions, stresses and invasions posed by networked machines is a bit like asking a goat to get through a salt-lick the size of the Eiger. To get around this problem, machines need to deal with these interruptions and stresses on our behalf. This is not a trivial project.

For years, we’ve struggled to create rulesets that can automate day-to-day activities, avoid common errors and pitfalls, and deliver enough benefit that people actually use and appreciate them. But these rulesets were all based on trust in a machine or mediator, trust that we increasingly realize is misplaced. That’s why the human-machine relationship needs to change: from implicit trust in machines that monitor humans with open-ended risk, to explicit trust in machines that monitor other machines with bounded risk.

Consider the case of a networked personal assistant in your home. What we have right now is a device with a limited and strictly bounded upside, consisting of delivered (not just advertised) benefits…and a theoretically unlimited downside. With most networked consumer products, we have no way of knowing whether or not we have control of the device’s sensors (e.g. microphone) or its output capability (e.g. audio hardware, displays, blinky lights). Even if the delivered benefits are known, the results of unmonitored, uncontrolled sensing and mining of our data are not.

A more desirable state would be a tradeoff between a known and bounded amount of risk in exchange for delivered benefits. Crucially, the technology that facilitates this consensus should not have its fate tied to any of the actors it interacts with.

If networked devices are the scrum, a consensus framework is the referee. The referee might make mistakes sometimes, but it makes decisions on behalf of humans, not third parties competing with each other for human attention.

So, who legitimises the referee, and what rules are used?

Let’s use a Nash equilibrium. A Nash equilibrium is a set of strategies for a group of players in which no player can benefit by changing their own strategy, given what the other players are doing. To put it another way, it’s like a law that no one would break even if there were no chance of being caught, because it’s ultimately in everyone’s best interest to respect it.
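To make that concrete, here is a minimal sketch (not from the original essay, with made-up payoffs): a toy “notification game” between two devices, in which staying quiet within the agreed bounds turns out to be the only pair of strategies neither device wants to deviate from.

```python
from itertools import product

# Toy payoffs, (device_a, device_b): if both interrupt at once they drown each
# other out and annoy the user; if both stay quiet within the agreed bounds,
# both keep the user's goodwill. These numbers are illustrative only.
PAYOFFS = {
    ("quiet", "quiet"): (3, 3),
    ("quiet", "interrupt"): (1, 2),
    ("interrupt", "quiet"): (2, 1),
    ("interrupt", "interrupt"): (0, 0),
}
STRATEGIES = ("quiet", "interrupt")

def is_nash_equilibrium(profile):
    """True if neither device can improve its own payoff by deviating alone."""
    a, b = profile
    payoff_a, payoff_b = PAYOFFS[profile]
    best_a = max(PAYOFFS[(alt, b)][0] for alt in STRATEGIES)
    best_b = max(PAYOFFS[(a, alt)][1] for alt in STRATEGIES)
    return payoff_a >= best_a and payoff_b >= best_b

for profile in product(STRATEGIES, repeat=2):
    status = "equilibrium" if is_nash_equilibrium(profile) else "not an equilibrium"
    print(profile, "->", status)
# Only ("quiet", "quiet") survives: neither device gains by interrupting alone.
```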

There’s no system like this for the devices that currently operate in our environments. Instead, we depend on cooperative notification or intrusion that happens to benefit humans: we permit machines to listen to us, or have a user-designated agent control time-gates or states within localised IoT ecosystems. This leads to a lot of networked things that make noise, or surveil us in our environment. The only way to make these networked things behave is to require cooperative action.

A distributed ledger that could confirm a Nash equilibrium is the best solution to this problem, if it brings little or no transaction cost.

For example: Actor A in a group of N actors dials in its audio notifications and sound control apparatus to benefit its user at that moment. It does so in a way that doesn’t conflict with other actors. This could mean, for instance, that all the stuff in your living room agrees to make noise within parameters that are user-beneficial, rather than single-agent beneficial. In a similar way, your home assistant only interrupts you when it senses that it would benefit you to get a notification. It acts on your behalf, not its creator’s.
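As a rough illustration (the device names, categories and thresholds here are all hypothetical, not from any real product), the referee logic might look like this: each actor submits a proposed notification, and only proposals that respect the bounds the user has set for the room get through.

```python
# A sketch of a local "referee" that accepts a device's proposed notification
# only if it stays within user-set bounds. Hypothetical names throughout.
from dataclasses import dataclass
from datetime import time

@dataclass
class Proposal:
    device_id: str
    category: str        # e.g. "emergency", "reminder", "promotion"
    volume_db: float
    start: time

@dataclass
class RoomPolicy:
    max_volume_db: float = 55.0
    quiet_hours: tuple = (time(21, 0), time(7, 0))   # 21:00 to 07:00
    blocked_categories: frozenset = frozenset({"promotion"})

def referee(proposal: Proposal, policy: RoomPolicy) -> bool:
    """Return True only if the proposal respects the user's bounds."""
    if proposal.category == "emergency":
        return True                                   # emergencies always pass
    if proposal.category in policy.blocked_categories:
        return False
    if proposal.volume_db > policy.max_volume_db:
        return False
    start_quiet, end_quiet = policy.quiet_hours
    in_quiet_hours = proposal.start >= start_quiet or proposal.start < end_quiet
    return not in_quiet_hours

# A daytime reminder within volume limits passes; a late-night promotion does not.
print(referee(Proposal("thermostat", "reminder", 40.0, time(14, 0)), RoomPolicy()))  # True
print(referee(Proposal("speaker", "promotion", 30.0, time(22, 30)), RoomPolicy()))   # False
```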

This potentially offers an interesting new source of value. At the fuzzy edges between networks, attention markets would emerge, along with user-specified “interruption gates.” At these points, devices could pay for attention, and advertisers would compete for it at times and in ways that you (or your agent) have allowed.

Iota is a DLT designed for machine-to-machine communication with very low transaction costs, so using it to facilitate these kinds of negotiations makes sense. With it, you could design very specific cases for when, how and to whom you share your data, and for what price.

I’m guessing that these markets could exist in all states. It doesn’t matter if they are inside an equilibrium or in an overlapping or undefined space — there are always opportunities for actors to negotiate the exchange of value. As long as a notification stays within the limits placed on it, it’s possible for another notification to bargain for the ability to be played first, or leverage the playout capability of the system that it's bargaining with. The user doesn’t care who goes first, as long as the devices aren’t stepping out of the bounds placed on them.
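Here is a minimal sketch of that bargaining, again with made-up names and amounts: a sealed-bid auction for the next playout slot, where any notification that steps outside the user’s bounds is excluded no matter how much it offers.

```python
# A hypothetical attention auction: devices bid micro-payments (in some
# ledger-settled unit) for the right to play first. Out-of-bounds
# notifications never compete, so the ordering never matters to the user.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    device_id: str
    notification_id: str
    amount: int          # micro-payment offered, in the smallest ledger unit
    within_bounds: bool  # already checked against the user's interruption gate

def next_to_play(bids: List[Bid]) -> Optional[Bid]:
    """Pick the highest bid among notifications that respect the user's limits."""
    eligible = [b for b in bids if b.within_bounds]
    if not eligible:
        return None  # nothing qualifies, so the room stays quiet
    return max(eligible, key=lambda b: b.amount)

bids = [
    Bid("thermostat", "filter-reminder", amount=3, within_bounds=True),
    Bid("doorbell", "parcel-arrived", amount=7, within_bounds=True),
    Bid("speaker", "flash-sale", amount=50, within_bounds=False),
]
print(next_to_play(bids))  # the doorbell wins; the out-of-bounds flash sale never competes
```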

Why this is important

All of this matters because none of us wants to be the goat that’s compelled to lick its way through the Eiger. We already have far too many demands placed on our attention, most of them machine-generated, and it’s getting worse. And none of us wants to spend hours making endless decisions about what to trust or not trust; what to pay attention to and what to ignore.

Because, as The Cluetrain Manifesto says: We die.

You have a limited amount of time in your existence, but machines have more. Lots more. In fact, as they get faster, they effectively get more time to play with. More cycles means time gets sliced into smaller chunks. For the machine, time dilates.

Consider Srigi’s tweet about latency values in programming. Although he’s talking about timings familiar mainly to programmers, it shows that machines operate in a dynamic range of time completely outside the envelope of human existence.

We humans don’t have enough time in our existence to deal with machines operating at speeds many orders of magnitude faster than our perception. Not at their time scale, anyway. This is why we need other machines to do it for us.
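To make the scale gap concrete, here is a rough back-of-the-envelope rescaling of machine time so that one nanosecond becomes one human second. The latency figures are the commonly cited approximations, not taken from Srigi’s tweet.

```python
# Commonly cited approximations ("latency numbers every programmer should
# know"); the rescaling below is only meant to show the orders of magnitude.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Main memory reference": 100,
    "Read 4K randomly from SSD": 150_000,
    "Disk seek": 10_000_000,
    "Packet round trip, California to Europe and back": 150_000_000,
}

# Rescale machine time so that one nanosecond becomes one human second.
for name, ns in LATENCIES_NS.items():
    human_seconds = ns
    if human_seconds >= 86_400:
        print(f"{name}: roughly {human_seconds / 86_400:,.0f} human days")
    else:
        print(f"{name}: roughly {human_seconds:,.1f} human seconds")
```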

To wit:

If everything is noisy, everybody loses.

and

If many things are untrustworthy, more lose than win.

I believe the current system is already on a bad path, and it’s getting worse. As systems compete for your attention or trick you into giving up your data, the result is a lose-lose scenario for every human involved.

This is what led me to the Nash equilibrium, and ultimately, to Iota. Why use a Nash equilibrium as a framework for controlling interruptions? Most of the communications we receive from our technology, including banner ads, radio, home assistants, and phone notifications, are designed to get noticed over everything else, through volume or repetition.

In my mind, the functionality we’re talking about is the polar opposite of that.

I shared this idea with Tom Munnecke and Bob Frankston in 2016, and Tom suggested that all interruptions should start from zero and be added only with a human’s permission. Flip the whole thing on its head, he said, and surrender to the benefit of the group and ultimately the user.

Then Bob offered another approach: have the devices provide APIs, and let a moderator manage the overall information. The advantage of this approach is that humans get rich information and context, rather than out-of-context fragments from devices vying for attention.
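A minimal sketch of that moderator idea, with hypothetical device APIs: each device exposes a small status report, and a single moderator folds them into one context-rich digest instead of letting every device interrupt on its own.

```python
# Hypothetical device APIs feeding a single moderator/digest.
from typing import Callable, Dict

DeviceAPI = Callable[[], dict]  # each device returns a small status report

def moderator(devices: Dict[str, DeviceAPI]) -> str:
    """Poll every device API and fold the results into a single digest."""
    reports = {name: api() for name, api in devices.items()}
    urgent = [f"{n}: {r['message']}" for n, r in reports.items() if r.get("urgent")]
    routine = [f"{n}: {r['message']}" for n, r in reports.items() if not r.get("urgent")]
    lines = ["Needs attention:"] + (urgent or ["(nothing)"])
    lines += ["Everything else:"] + routine
    return "\n".join(lines)

# Two made-up devices:
devices = {
    "smoke_detector": lambda: {"urgent": True, "message": "battery below 10%"},
    "dishwasher": lambda: {"urgent": False, "message": "cycle finished"},
}
print(moderator(devices))
```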

Sharing features or components could also be part of this kind of marketplace. For example, sensor groups in proximity to each other could coordinate their efforts, increasing accuracy in exchange for access to each other’s resources. The ticket to join would be an agreement to submit to the control of a Nash equilibrium; if you misbehave, you get smacked down.

Why DLT?

DLT offers the possibility of integrating marketplaces with consensus-controlled networked devices. The ledger’s role is to ensure that a device’s attributes and reputation are tamper-proof.

Iota and others have prototypes — and even some working versions — of this in the wild. If their technology can support this, I’m guessing it could also broker mutually beneficial value across device networks. David Sønstebø sums this up nicely.

From: https://blog.iota.org/iota-development-roadmap-74741f37ed01

In order for IoT to securely mature into its full potential, we have to fundamentally change how we think about machines/devices. Rather than perceiving them as lifeless amalgams of metal and plastic with a specific purpose, we need to shift toward considering each device as its own identity with different attributes.

For instance, a sensor should not only have its unique identifier but also accompanying it attributes such as: who manufactured it, when it was deployed, what is the expected lifetime cycle, who owns it now, what kind of sensor data is it gathering and at what granularity, does it sell the data and if so for how much?

When each device has its own ID, one can also establish reputation systems that are vital for anomaly and intrusion detection. By observing whether a device is acting in accordance with its ID or not, the latter which can be indicative of malware being spread, the neighbouring devices can quarantine it.
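To make that concrete, here is a sketch of the kind of identity record the quote describes. The field names and checks are my own placeholders, not Iota’s data model.

```python
# A hypothetical on-ledger identity record plus a simple "out of character"
# check that neighbouring devices could use to decide whether to quarantine.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceIdentity:
    device_id: str
    manufacturer: str
    deployed: str            # ISO date
    expected_lifetime_years: int
    owner: str
    data_kind: str           # e.g. "temperature"
    granularity_s: int       # declared sampling interval, in seconds
    sells_data: bool
    price_per_reading: int   # smallest ledger unit; 0 if not for sale

def out_of_character(identity: DeviceIdentity, observed_kind: str,
                     observed_interval_s: int) -> bool:
    """Flag behaviour that doesn't match the declared identity (possible malware)."""
    return (observed_kind != identity.data_kind
            or observed_interval_s < identity.granularity_s // 2)

sensor = DeviceIdentity("temp-042", "Acme", "2021-02-01", 5, "aaron",
                        "temperature", granularity_s=60, sells_data=True,
                        price_per_reading=1)
# A temperature sensor suddenly streaming audio every second looks wrong:
print(out_of_character(sensor, "audio", 1))  # True: neighbours may quarantine it
```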

There are a wide range of applications for this approach:

  • We can limit when something can interrupt based on conditions the user has set for their environment. Instead of dialling in each device individually, the devices submit to a consensus algorithm whose default is silence. We could decide that an emergency class of notifications could break this rule, but a Lands’ End flash-sale certainly could not.
  • By registering individual audio components within a system, we can track and control their validity and health. If something goes wrong, the device is prohibited from taking part in the network. The parameters used could be a single value, a range of values, or a function that approaches a limit. Component ratings such as harmonic distortion, maximum sample rate, or even user reviews and histories of trust (or betrayal) could be referenced using DLT (see the sketch after this list).
  • Shared playout fields offer unique opportunities for real-time sonification and audible notifications. Imagine being able to use an entire suite of playout hardware to reproduce directional and indicative sounds. When playout fields or “audible actors” interact, the result can be more than just interference and addition. These fields could interact on a logical, interdependent level.

Certainly, there will be problems with phase coherence if different devices have clocks of varying accuracy, but this could also be a criterion for participation. For audio user interfaces, the most important thing will be playing the right sound at the right time, in a way that doesn’t yank the listener out of their flow.
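Here is the sketch promised in the second bullet above: a participation check against ledger-referenced component ratings, where each criterion can be a single value, a range, or a function. Every name and threshold is a made-up placeholder.

```python
# Hypothetical participation criteria for joining a shared playout network.
from typing import Callable, Dict, Tuple, Union

Criterion = Union[float, Tuple[float, float], Callable[[float], bool]]

REQUIREMENTS: Dict[str, Criterion] = {
    "max_sample_rate_hz": 44_100.0,          # single value: a minimum to meet
    "harmonic_distortion_pct": (0.0, 1.0),   # range of acceptable values
    "clock_drift_ppm": lambda d: d < 50.0,   # function approaching a limit
    "trust_score": lambda t: t > 0.8,        # history of trust (or betrayal)
}

def meets(value: float, criterion: Criterion) -> bool:
    if callable(criterion):
        return criterion(value)
    if isinstance(criterion, tuple):
        low, high = criterion
        return low <= value <= high
    return value >= criterion

def may_participate(ledger_record: Dict[str, float]) -> bool:
    """Admit a component to the shared playout network only if every rating passes."""
    return all(meets(ledger_record[key], c) for key, c in REQUIREMENTS.items())

speaker = {"max_sample_rate_hz": 48_000.0, "harmonic_distortion_pct": 0.3,
           "clock_drift_ppm": 20.0, "trust_score": 0.92}
print(may_participate(speaker))  # True: this component may join
```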

Problems and questions

Is DLT really the solution?

It’s the best thing I’ve seen so far. This isn’t to say there won’t be something better in the future, but current DLT is ideally designed for these kinds of machine-to-machine contexts. The technology is still in development, and if it doesn’t take off, someone else will probably develop something that does the same job. Regardless of the specific technology, a solution that leverages a Nash equilibrium to keep these kinds of lose-lose scenarios (too much noise, too little trust) from developing, and facilitates micro-brokerage of personal data, is likely to be effective and robust.

So you are condoning a networked-everything techno hellscape?

No, not at all. I’m actually pretty low-tech in real life, and if I ever build a house it will be a Faraday cage with an oak rack for my tinfoil Stetson. For now, I accept that I must rely on networked machines in my life. But I don’t accept that we must suffer from the stresses and risks outlined above. The solution I’m proposing is something I would use if I could build it.

In addition to the above framework, let’s equip products and homes with an easily accessible switch that can air-gap all data connections, sensors and noise (light, sound, display). I want to increase the certainty that something is “off” by orders of magnitude from what’s currently offered.

The switches themselves could be big enough to symbolise the relevance or risk of what they enable — who doesn’t love knife switches, after all? At dinnertime, the switch that connects the house to outside networks is open, and in full view. No sensor data from the home is exposed to the world; it’s just the family at the table. Your devices (also equipped with small dip-switches for The Big Off) are free to support you locally, but can’t talk to anyone outside the home.

I’d like more control over the endless causal threads that work against my unconscious self, creating what I call “ambient stress”.

What does this mean for UX in general?

I believe that in the not-too-distant future, user experience design as we know it will be radically transformed. As in “black swan” transformed.

Instead of creating sensible interactions by hand for systems that are becoming exponentially complex, the designer becomes more of a curator, assigning dynamic rulesets instead of handling every single decision gate. We already have many good UX design rules and sound design rules; eventually these will all need to be implemented in real time.

Eventually the UX designer will specify desired states, but not necessarily the exact method of achieving them, in much the same way that Napoleon transformed warfare by specifying the strategic outcomes he wanted without dictating how they were achieved.

In this paradigm, UX design is about setting up probabilistic thresholds and micro-markets for outcomes within an acceptable range. This allows the designer to spend more time thinking about ways to provide delight and wonder — more ways for humans to enjoy being humans, without worrying about so many causal complexities.
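As a sketch of what “specifying desired states” might look like (the keys and ranges here are hypothetical), a designer could declare acceptable outcome ranges and leave the method of hitting them to the runtime:

```python
# Hypothetical declarative ruleset: desired states as acceptable ranges.
DESIRED_STATES = {
    "ambient_noise_db": {"target": 35, "acceptable_range": (25, 45)},
    "interruptions_per_hour": {"target": 1, "acceptable_range": (0, 3)},
    "notification_latency_s": {"target": 5, "acceptable_range": (0, 30)},
}

def within_spec(measured: dict) -> dict:
    """Report which desired states the running system currently satisfies."""
    return {
        key: spec["acceptable_range"][0] <= measured[key] <= spec["acceptable_range"][1]
        for key, spec in DESIRED_STATES.items()
    }

print(within_spec({"ambient_noise_db": 38, "interruptions_per_hour": 2,
                   "notification_latency_s": 12}))
# -> every key True: the system is inside the designer's acceptable range
```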

What if this has horrible side effects?

Nobody is ever good at seeing the second-order effects of their own ideas. Is there a black hat or digital kudzu scenario that I’m not seeing? You tell me. I’m putting this out in public so that someone can pulverise or, hopefully, improve it.

Oh, and without Nelson, I could have never finished this.

Nelson

Thanks to: Kellyn Bardeen, Phil Quitslund, Tom Mandel, Navin Ramachandran, alysha naples and Toby B. for suggestions and corrections. Finally, if it hadn’t been for an impromptu brainstorming session about networked assisted hearing with Sheldon Renan at Jerry Michalski’s retreat in 2011, this idea wouldn’t have started to brew.

If you want to read more of what I’ve written, please take a look at “Designing Products With Sound”, the book I co-authored with Amber Case, available from O’Reilly.
